paper_id
stringlengths
12
48
title
stringlengths
12
155
url
stringlengths
39
46
abstract
stringlengths
389
2.11k
ocr_markdown
stringlengths
18.1k
576k
sai-b-etal-2023-indicmt
{I}ndic{MT} Eval: A Dataset to Meta-Evaluate Machine Translation Metrics for {I}ndian Languages
https://aclanthology.org/2023.acl-long.795
The rapid growth of machine translation (MT) systems necessitates meta-evaluations of evaluation metrics to enable selection of those that best reflect MT quality. Unfortunately, most meta-evaluation studies focus on European languages, the observations for which may not always apply to other languages. Indian languages, having over a billion speakers, are linguistically different from them, and to date, there are no such systematic studies focused solely on English to Indian language MT. This paper fills this gap through a Multidimensional Quality Metric (MQM) dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems. We evaluate 16 metrics and show that, pre-trained metrics like COMET have the highest correlations with annotator scores as opposed to n-gram metrics like BLEU. We further leverage our MQM annotations to develop an Indic-COMET metric and show that it outperforms COMET counterparts in both human scores correlations and robustness scores in Indian languages. Additionally, we show that the Indic-COMET can outperform COMET on some unseen Indian languages. We hope that our dataset and analysis will facilitate further research in Indic MT evaluation.
# Indicmt Eval: A Dataset To Meta-Evaluate Machine Translation Metrics For Indian Languages Ananya B. Sai1 Tanay Dixit1 Vignesh Nagarajan2 **Anoop Kunchukuttan**2,4 Pratyush Kumar2,4 Mitesh M. Khapra1,2 **Raj Dabre**3 Indian Institute of Technology Madras1 AI4Bharat2 National Institute of Information and Communications Technology3 Microsoft4 ananya@cse.iitm.ac.in dixittanay@gmail.com vignesh.vn.nagarajan@gmail.com ankunchu@microsoft.com pratykumar@microsoft.com miteshk@cse.iitm.ac.in raj.dabre@nict.go.jp ## Abstract The rapid growth of machine translation (MT) systems necessitates meta-evaluations of evaluation metrics to enable selection of those that best reflect MT quality. Unfortunately, most meta-evaluation studies focus on European languages, the observations for which may not always apply to other languages. Indian languages, having over a billion speakers, are linguistically different from them, and to date, there are no such systematic studies focused solely on English to Indian language MT. This paper fills this gap through a Multidimensional Quality Metric (MQM) dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems. We evaluate 16 metrics and show that, pre-trained metrics like COMET have the highest correlations with annotator scores as opposed to n-gram metrics like BLEU. We further leverage our MQM annotations to develop an Indic-COMET metric and show that it outperforms COMET counterparts in both human scores correlations and robustness scores in Indian languages. Additionally, we show that the Indic-COMET can outperform COMET on some unseen Indian languages. We hope that our dataset and analysis will facilitate further research in Indic MT evaluation. ## 1 Introduction Natural language generation (NLG) has seen rapid progress in the past few years due to advancements in the field of large language models (LLMs) (Lewis et al., 2020; Liu et al., 2020; Dabre et al., 2022; Scao et al., 2022). Although initial research had focused on high-resource languages, recently the focus has shifted to middle-resource and lowresource languages. In the context of machine translation (MT), there is increasing interest in building massively multilingual models supporting numerous translation directions. For example, Costa-jussà et al. (2022) release a model which supports around 200 languages (40K directions). While this is commendable, to make MT truly inclusive, it is important that various design choices in the MT life-cycle are evaluated for low-resource languages and not simply transferred and adapted from English. One such important choice is of the correct evaluation metric to be used for evaluating MT systems. A recent survey by Sai et al. (2022) has shown that over the last decade, many evaluation metrics have been proposed for MT. In parallel, several works (Callison-Burch et al., 2006; Sai et al., 2021; Mathur et al., 2020a; Tan et al., 2015; Fabbri et al., 2021) have shown the inadequacy of popular metrics, such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004). However, many languages are not represented in these works and most of the focus is on European languages. On the other hand, there is a growing body of work on machine translation focused on language groups such as Indian (Dabre et al., 2022), Indonesian (Cahyawijaya et al., 2021), and African (Reid et al., 2021). However, these works rely on English centric metrics due to lack of sufficient studies with tried and tested recommendations for their evaluation of the languages. While techniques like MQM (Multidimensional Quality Metric) are being used for collecting better quality human-evaluation data for English and a select few other languages (Freitag et al., 2021a), such multidimensional evaluations and analyses are not available for several language groups. We narrow our focus to evaluation of one of these language groups, namely Indian languages which have more than a billion speakers worldwide. Indian languages are morphologically rich, especially Dravidian languages, which exhibit agglutination. Furthermore, they have relatively freeword order (Murthy et al., 2019; Kunchukuttan and 14210 Bhattacharyya, 2020) as compared to European languages which means that frequently used metrics such as BLEU may not always be reliable. This calls for an independent focused study on the evaluation of metrics for Indic languages in order to understand whether these conclusions drawn hold true for the Indian languages. In this paper, we aim to bridge this gap by focusing on the evaluation of MT (from English) into 5 Indian languages from 2 different families and make significant contributions towards designing MT evaluation metrics for these languages. Our main contribution is in the form of the MQM dataset for Indian languages created by taking outputs generated by 7 popular MT systems and asking human annotators to judge the quality of the translations using the MQM style guidelines (Lommel et al., 2014). With the help of language experts who are experienced in translation, we generate an MQM dataset consisting of 7000 annotated sentences, 1400 per language. We use the aforementioned dataset to establish correlations between the annotator scores and existing automatic metrics scores belonging to the following classes: (i) n-gram and character based such as BLEU, METEOR, chrF++, (ii) embeddings based such as Vector Extrema, BERTScore, (iii) pre-trained metrics like BLEURT-20, COMET. We observe that pre-trained metrics show the highest correlations with the annotator scores, with the COMET metric performing the best (§5.1). Additionally, we also observe that the metrics are not capable of capturing the fluency-based errors for Indian languages (§5.4). Finally, we use our data to train an Indic-COMET metric which not only shows stronger correlations with human judgement on Indian languages, but is also more robust to perturbations (§6). We hope that our dataset and metric, which are publicly available1, will help spur research in this field. ## 2 Related Work Meta-evaluation studies: Evaluation metrics have been under intense scrutiny in order to establish their reliability. Mathur et al. (2020b) discuss that studying evaluation metrics needs to be a meticulous task by showing many potential issues and oversights that could lead to wrong conclusions. Other works focus on extending the resources for meta-evaluations (Sai et al., 2021; Karpinska et al., 1https://github.com/AI4Bharat/IndicMT-Eval 2022) and different genres (van der Wees et al., 2018). While most of these works focus on English, there are works that evaluate the efficacy of metrics on other languages such as German, Chinese, Spanish, etc. (Rivera-Trigueros and Olvera-Lobo, 2021; Freitag et al., 2021b). On the other hand, we focus on Indian languages, which have not received much attention. Collecting human annotations: Metaevaluation studies rely heavily on human-annotated translations of various systems. Since humans are better at providing relative ranking (i.e., comparing the qualities of 2 or more items) rather than providing absolute scores to quantify the quality of an item, WMT15-17 collected Relative Rankings (Bojar et al., 2015, 2016a, 2017). However, since they require a quadratic number of ratings, Direct Assessment (DA) scores, which are *quality* assessment scores over each output on a scale of 0-100, are easier and faster to collect (Kocmi et al., 2021). More recently, the Multidimensional Quality Metric (MQM) approach for collecting human judgments was adopted for Machine Translation by Freitag et al. (2021b). They obtained annotations from professional raters with MQM training, which Clark et al. (2021) recommend. On a related note, Klubicka et al. (2018) conduct human studies for Croatian, whereas Fabbri et al. (2021) followed systematic approaches to collect and provide multidimensional scores for other tasks such as summarization. ## 3 Indic-Mt Eval Dataset Following Freitag et al. (2021b), we collect MQM annotations for 5 Indian languages, i.e., Tamil (ta), Gujarati (gu), Hindi (hi), Marathi (mr), and Malayalam (ml). We sample 200 sentences from the FLORES-101 dataset (Goyal et al., 2022) and obtain the translation outputs from 7 machine translation systems (§3.1) for each of the 5 Indian languages. ## 3.1 Mt Systems Considered We use state-of-the-art models to obtain translation outputs in the 5 languages. These include EnglishXX translation outputs obtained from open-sourced pre-trained models like mBART (Liu et al., 2020), mT5 (Xue et al., 2021), IndicTrans (Ramesh et al., 2022), cvit (Philip et al., 2019), NLLB (Costajussà et al., 2022), as well as outputs obtained2 2Collected in February 2022 ![2_image_0.png](2_image_0.png) from Microsoft Azure Cognitive Services API3and Google translation API4(additional details in Appendix B.1). Note that for Gujarati, we find all mBART outputs to be unintelligible and filled with a mixture of characters from several languages. We hence re-allocate the budget corresponding to these sentences for Gujarati to collect annotations on the references instead. Similar to the findings of Clark et al. (2021), we observe that the references are not always perfect, and these sentences also have errors. However, we find that these errors are often of lower severity. ## 3.2 Methodology We adopt the MQM-framework (Lommel et al., 2014) for collecting human annotations on the data at the segment level. In general, a segment may contain one or more sentences. Bilingual language experts, proficient in English and a native language, were employed as annotators for the task of identifying and marking errors in each segment. As shown in Figure 2, the source segment in English and the translated segment are presented to the annotators, along with provisions to mark up to 5 errors of various categories and severity (§3.4). If there are more than five errors, the annotators are asked to identify only the five most severe ones. 3Bing API 4Google API In cases where there are more than five severe errors, or if it is not possible to reliably identify distinct errors because the translation is unrelated to the source, then the translation is marked as nontranslation, a special category error spanning the entire segment. Depending on the quality of the translation and the errors identified, the annotators were also asked to provide a score out of 25 for each translation after marking all the errors (if any) for that translation. More detailed guidelines are presented in Appendix A. ## 3.3 Quality Assurance We first performed pilot studies on collecting data via crowd-sourced annotators who are native speakers of the languages we use in this study. In a pilot which directly asked for the final scores, similar to DA scores used in WMT in a few years (Bojar et al., 2016c,b), we found the scores to be highly subjective, similar to Clark et al. (2021). We also found that displaying the reference translations, which are not always perfect, was biasing the annotators to ignore some errors. Another pilot task involved MQM instead of DA scores in the same crowd-sourced setting. However, we found it difficult to achieve consistency in annotations with crowd-sourced raters. We tried the following strategies to improve the quality (i) We provided the same set of segments to 3 annotators per language and then organized a discussion among them to resolve disagreements. The idea was to eventually converge to a consistent marking scheme after a few initial sets of different markings, (Nema and Khapra, 2018). (ii) We collected annotations from 3 annotators per language and provided all the 3 annotations to a different fourth annotator to aggregate them. However, neither yielded fruitful results in terms of agreement with MQM annotations. Finally, we employed language experts who have ![3_image_0.png](3_image_0.png) experience in translation tasks and observe that we were able to achieve better consistency among annotators. We use the first 50 sentences (sampled randomly from various models and of various lengths) as a pilot to help the annotator get an idea of the variety and kind of translations in the dataset. Note that MQM-style annotations use a formula to automatically compute scores for each segment based on the errors identified. The score, s, for each segment with a set of identified errors, E, is given by s = 25 −Pi∈E wi ∗ ei, where wiis the penalty associated with the severity of the error and eiis the penalty associated with the type of error. Appendix A provides more details on the penalties used for the different error types and severities, following Lommel et al. (2014). In addition to the formula-based score, we also ask the annotator to provide an overall score after marking the errors for that segment. We then verify the correlations between the formula-based scores and the scores provided by the annotator and found them to be highly correlated (i.e., > 0.7 Kendall-tau correlation) for all languages. In order to compute the Inter Annotator agreement score, we sample 200 segments for each language and compute the Kendall-tau correlation between the scores given by two annotators. For all the languages, we observe high correlation scores of 0.61, 0.57, 0.55, 0.538, and 0.52 for Malayalam, Gujarati, Tamil, Hindi, and Marathi respectively. ## 3.4 Analysis Distribution of Error types: Following the MQM guidelines and prior work on MQM (Freitag et al., 2021b), we have 13 categories of errors, including 4 sub-categories under fluency and 5 under accuracy, style error, source error, non-translation (a special case to mark segments that are extremely poor translations or have more than 5 high severity errors) and an *'other'* category for any error types that are not accounted for in the list of error types. The error types are listed in Appendix A. On all languages except Tamil, we found *'Accuracy Mistranslation'* to have the highest error count among all error types. More generally, on average, the machine translation models today primarily err on accuracy-based errors and make fewer fluencybased mistakes as seen in Figure 1. Severity of Errors: We plot all the fluency and accuracy errors graded by error severities for all languages in Figure 1. As depicted, there are 5 error severity types: *Very High, High, Medium, Low*, and *Very Low*. For all the Indo-Aryan languages (gu, hi and mr), the majority of the errors observed are accuracy-based errors. For Tamil, a Dravidian language, we find the accuracy errors and fluency errors in almost equal proportions. We find Malayalam, another Dravidian language, to have more accuracy errors than fluency errors, with majority medium-severity errors, as shown in Figure 1. MT systems: Figure 3 shows the total number of errors per model (inclusive of all severities) for each language. We find that the recent MT models (NLLB, IndicTrans) have fewer errors compared to the relatively older models (CVIT). Table 9 in Appendix provides a more detailed picture which also inherently takes into account the severities of the errors. It shows the average score of each system computed as the mean of the human scores obtained on all the outputs from that system. We find that IndicTrans model, which focuses on Indian languages, has the highest scores on Hindi, Malayalam and Tamil. NLLB is the best performing model on Marathi and Bing API for Gujarati. Considering the average performance across all languages, the best performing models in descending order are IndicTrans, NLLB, Google API, Bing API, mT5, CVIT, and mBART. ## 4 Experimental Setup In this section, we discuss the various evaluation metrics under consideration (§4.1) and evaluating strategies (§4.2) followed. ## 4.1 Evaluation Metrics Used For Mt We consider the most popular metrics being used in Barrault et al. (2021, 2020) along with other | gu | hi | mr | ml | ta | Average | | | | | | | | |--------------|-------|-------|-------|-------|-----------|-------|-------|-------|--------|--------|-------|-------| | Metric | ρ | τ | ρ | τ | ρ | τ | ρ | τ | ρ | τ | ρ | τ | | BLEU 1 | 0.364 | 0.255 | 0.266 | 0.187 | 0.228 | 0.148 | 0.393 | 0.331 | 0.316 | 0.213 | 0.314 | 0.227 | | BLEU 2 | 0.329 | 0.247 | 0.280 | 0.192 | 0.190 | 0.135 | 0.331 | 0.302 | 0.291 | 0.205 | 0.284 | 0.216 | | BLEU 3 | 0.294 | 0.234 | 0.265 | 0.186 | 0.134 | 0.119 | 0.250 | 0.271 | 0.227 | 0.182 | 0.234 | 0.198 | | BLEU 4 | 0.235 | 0.215 | 0.245 | 0.171 | 0.091 | 0.103 | 0.180 | 0.246 | 0.171 | 0.168 | 0.184 | 0.181 | | SacreBLEU | 0.293 | 0.239 | 0.255 | 0.168 | 0.164 | 0.132 | 0.274 | 0.298 | 0.244 | 0.189 | 0.246 | 0.205 | | ROUGE-L | 0.350 | 0.251 | 0.295 | 0.204 | 0.206 | 0.132 | 0.376 | 0.322 | 0.308 | 0.206 | 0.307 | 0.223 | | chrF++ | 0.408 | 0.287 | 0.299 | 0.205 | 0.260 | 0.170 | 0.411 | 0.338 | 0.361 | 0.250 | 0.348 | 0.250 | | TER | 0.304 | 0.237 | 0.263 | 0.196 | 0.203 | 0.135 | 0.343 | 0.307 | 0.272 | 0.199 | 0.277 | 0.215 | | EA | 0.331 | 0.181 | 0.086 | 0.066 | 0.143 | 0.054 | 0.397 | 0.301 | 0.203 | 0.149 | 0.232 | 0.150 | | VE | 0.380 | 0.265 | 0.274 | 0.183 | 0.234 | 0.153 | 0.412 | 0.331 | 0.337 | 0.227 | 0.327 | 0.232 | | GM | 0.394 | 0.266 | 0.234 | 0.162 | 0.241 | 0.147 | 0.426 | 0.338 | 0.382 | 0.264 | 0.335 | 0.235 | | LASER embs | 0.094 | 0.156 | 0.135 | 0.123 | 0.159 | 0.069 | 0.357 | 0.295 | 0.126 | 0.099 | 0.174 | 0.148 | | LabSE embs | 0.504 | 0.319 | 0.149 | 0.185 | 0.319 | 0.204 | 0.416 | 0.337 | 0.339 | 0.286 | 0.345 | 0.266 | | mBERT | 0.448 | 0.297 | 0.337 | 0.231 | 0.301 | 0.194 | 0.462 | 0.367 | 0.413 | 0.281 | 0.392 | 0.274 | | distilmBERT | 0.431 | 0.289 | 0.316 | 0.220 | 0.281 | 0.181 | 0.465 | 0.371 | 0.415 | 0.278 | 0.382 | 0.268 | | IndicBERT | 0.456 | 0.308 | 0.346 | 0.235 | 0.281 | 0.182 | 0.440 | 0.357 | 0.402 | 0.282 | 0.385 | 0.273 | | MuRIL | 0.465 | 0.322 | 0.353 | 0.243 | 0.292 | 0.184 | 0.449 | 0.369 | 0.410 | 0.290 | 0.394 | 0.282 | | PRISM | 0.114 | 0.024 | 0.178 | 0.124 | 0.131 | 0.084 | 0.089 | 0.064 | -0.040 | -0.040 | 0.094 | 0.051 | | BLEURT-20 | 0.509 | 0.371 | 0.296 | 0.300 | 0.409 | 0.286 | 0.496 | 0.390 | 0.491 | 0.374 | 0.440 | 0.344 | | COMET-QE-DA | 0.417 | 0.324 | 0.535 | 0.404 | 0.551 | 0.430 | 0.386 | 0.341 | 0.531 | 0.391 | 0.414 | 0.378 | | COMET-QE-MQM | 0.387 | 0.309 | 0.590 | 0.403 | 0.577 | 0.392 | 0.438 | 0.392 | 0.571 | 0.399 | 0.513 | 0.379 | | COMET-DA | 0.557 | 0.403 | 0.581 | 0.390 | 0.426 | 0.306 | 0.531 | 0.419 | 0.529 | 0.412 | 0.525 | 0.386 | | COMET-MQM | 0.465 | 0.360 | 0.529 | 0.370 | 0.686 | 0.459 | 0.508 | 0.392 | 0.597 | 0.432 | 0.557 | 0.402 | variants to suit the languages under consideration. In total, we study 16 metrics belonging to different classes (Sai et al., 2022) of either word overlapbased, embedding-based, or trained metrics. - In the word overlap-based category, we consider (i) BLEU (Papineni et al., 2002), (ii) SacreBLEU (Post, 2018), (iii) ROUGE (Lin, 2004), (iv) chrF++ (Popovic, 2017), (v) TER (Snover et al., 2006). - For the embedding-based metrics, we use (i) Vector Extrema (VE) (Forgues and Pineau, 2014), (ii) Greedy Matching (GM) (Rus and Lintean, 2012), (iii) Embedding Averaging (EA) (Landauer and Dumais, 1997), (iv) LabSE (Feng et al., 2020) & (v) LASER (Artetxe and Schwenk, 2019) embeddings and (vi) BERTScore (Zhang et al., 2020). - For computing BERTScore, in addition to the official implementation, which uses mBERT, we also consider other variants that use BERT models trained on Indian languages, namely IndicBERT (Kakwani et al., 2020) and MuRIL (Khanuja et al., 2021). - The end-to-end trained metrics we consider are (i) PRISM (Thompson and Post, 2020), (ii) BLEURT (Sellam et al., 2020) and (iii) ## Comet Variants (Rei Et Al., 2020). 4.2 Meta Evaluation For evaluating the evaluation metrics we measure how well the metrics correlate with human judgments on two granularities i.e.: segment-level and system-level. We use Pearson correlation (ρ) which measures the linear correlation between two sets of data and Kendall's Tau (τ ) to measure the ordinal association between two quantities. ## 5 Results And Discussions In this section, we present the segment-level correlations in §5.1 and system-level correlations in §5.2, followed by analyzing metrics in §5.3, §5.4. ## 5.1 Segment-Level Evaluation The correlation between MQM-based scores and metric scores, measured using Pearson and Kendalltau correlations on 1400 segments per language as shown in Table 1. We observe that out of the overlap-based metrics, chrF++ has the highest correlation across all languages, but overall overlapbased metrics are the worst performing which is in line with the findings of Kocmi et al. (2022). Among the embedding-based metrics, LabSE embeddings yields better correlations than any of the | gu | hi | mr | ml | ta | | | | | | | |--------------|---------|--------|---------|--------|---------|--------|---------|--------|--------|--------| | Metric | ρ | τ | ρ | τ | ρ | τ | ρ | τ | ρ | τ | | BLEU 1 | 0.927∗ | 0.600 | 0.684 | 0.429 | 0.949∗ | 0.143 | 0.913∗ | 0.619 | 0.698 | 0.429 | | BLEU 2 | 0.922∗ | 0.600 | 0.697 | 0.524 | 0.922∗ | 0.143 | 0.885∗ | 0.619 | 0.714 | 0.619 | | BLEU 3 | 0.930∗ | 0.600 | 0.687 | 0.524 | 0.891∗ | 0.143 | 0.829∗ | 0.619 | 0.674 | 0.619 | | BLEU 4 | 0.914∗ | 0.600 | 0.651 | 0.429 | 0.793∗ | 0.143 | 0.772∗ | 0.619 | 0.598 | 0.524 | | SacreBLEU | 0.926∗ | 0.600 | 0.648 | 0.429 | 0.912∗ | 0.143 | 0.849∗ | 0.619 | 0.656 | 0.619 | | ROUGE-L | 0.928∗ | 0.600 | 0.741 | 0.524 | 0.949∗ | 0.143 | 0.909∗ | 0.619 | 0.697 | 0.524 | | chrF++ | 0.923∗ | 0.600 | 0.67 | 0.429 | 0.9∗ | 0.429 | 0.895∗ | 0.524 | 0.756∗ | 0.619 | | TER | -0.931∗ | -0.600 | -0.757∗ | -0.524 | -0.977∗ | -0.143 | -0.911∗ | -0.619 | -0.696 | -0.619 | | EA | 0.927∗ | 0.600 | 0.547 | 0.411 | 0.968∗ | 0.238 | 0.919∗ | 0.586 | 0.739 | 0.429 | | VE | 0.952∗ | 0.733 | 0.654 | 0.524 | 0.967∗ | 0.143 | 0.958∗ | 0.619 | 0.766∗ | 0.524 | | GM | 0.942∗ | 0.733 | 0.636 | 0.524 | 0.977∗ | 0.143 | 0.949∗ | 0.619 | 0.777∗ | 0.524 | | LASER | 0.273 | 0.067 | 0.372 | 0.143 | 0.797∗ | 0.048 | 0.873∗ | 0.429 | 0.67 | 0.333 | | LabSE | 0.931∗ | 0.600 | 0.253 | 0.048 | 0.968∗ | 0.238 | 0.823∗ | 0.333 | 0.725 | 0.429 | | mBERT | 0.947∗ | 0.600 | 0.705 | 0.524 | 0.978∗ | 0.143 | 0.940∗ | 0.683 | 0.798∗ | 0.524 | | distilmBERT | 0.945∗ | 0.600 | 0.629 | 0.429 | 0.976∗ | 0.143 | 0.946∗ | 0.683∗ | 0.825∗ | 0.524 | | IndicBERT | 0.949∗ | 0.733 | 0.747 | 0.524 | 0.971∗ | 0.143 | 0.938∗ | 0.619 | 0.758∗ | 0.524 | | MuRIL | 0.957∗ | 0.733 | 0.742 | 0.524 | 0.976∗ | 0.143 | 0.926∗ | 0.619 | 0.777∗ | 0.524 | | PRISM | 0.810 | 0.467 | 0.583 | 0.238 | 0.979∗ | 0.238 | 0.877∗ | 0.619 | 0.611 | 0.238 | | BLEURT-20 | 0.978∗ | 1.000∗ | 0.582 | 0.714∗ | 0.993∗ | 0.619 | 0.952∗ | 0.39 | 0.927∗ | 0.905∗ | | COMET-QE-DA | 0.852∗ | 0.866∗ | 0.878∗ | 0.714∗ | 0.854∗ | 0.714∗ | 0.986∗ | 0.809∗ | 0.911∗ | 0.714∗ | | COMET-QE-MQM | 0.657 | 0.733 | 0.831∗ | 0.809∗ | 0.971∗ | 0.619 | 0.798∗ | 0.428 | 0.892∗ | 0.714∗ | | COMET-DA | 0.986∗ | 1.000∗ | 0.970∗ | 1.000∗ | 0.994∗ | 0.781∗ | 0.936∗ | 0.333 | 0.868∗ | 0.619∗ | | COMET-MQM | 0.932∗ | 0.733 | 0.759∗ | 0.809∗ | 0.991∗ | 0.904∗ | 0.953∗ | 0.523 | 0.892 | 0.714 | other embedding-based approaches. The correlations improve further when we use BERTscore with embeddings obtained from different multilingual models. The results in this case are mixed, with MuRIL showing the best correlations on average. Overall, we observe that neural-networkbased, end-to-end trained metrics with exposure to Indian languages are the best-performing metrics on average. The trained metric PRISM, which has been trained on 39 languages, out of which the only Indian language is Bengali, performs very poorly on all the 5 Indian languages in our study, partially owing to the minimal Bengali data used for training. On the other hand, BLEURT-20, a metric finetuned on ratings from the WMT Metrics Shared Task and synthetic data from the WMT corpus, has fairly good correlations on all languages except Hindi. COMET-metric variants have the highest overall correlations for all the languages. ## 5.2 System-Level Evaluation Table 2 shows the Pearson and Kendall-tau correlations at the system-level following Louis and Nenkova (2013). Since Kendall-tau is based on pairwise score comparisons, it reflects a common ranking use case and is more reliable for systemlevel correlations. The metric rankings remain consistent across both granularities, with more vari- ![5_image_0.png](5_image_0.png) ability observed in the segment-level task. Similar to the segment-level correlations, trained metrics show the highest correlations across all languages. COMET shows the highest correlations, followed by BLEURT-20. Although on the segment level COMET-QE was not at par with the COMET reference-based metrics, for system ranking the reference-free COMET-QE metrics show high correlations and are well suited for ranking system pairs. Although Kocmi et al. (2021) already observed this for other languages, with the help of our dataset and experiments we are able to pro- ![6_image_0.png](6_image_0.png) vide empirical evidence to confirm this for Indian languages. ## 5.3 Spread Of Metric Scores While the correlations of metrics are good, we still find that the range of metric scores is skewed. That is, most of the metrics do not utilise their entire scoring range, and often provide scores in a narrow range. This skew in the spread hinders the interpretability of the scores provided by the metric. For example, SacreBLEU has a scoring range between 0 to 100. However, the scores are almost always in the lower half of the scoring range as seen in Figure 4 containing the spread of normalised scores of each metric5. This is not a case of an issue with the data being always poor as we can see in Figure 4 that the human scores for Tamil show a spread through-out the scale. On the other hand, the embedding-based metrics, which use cosine similarity, have a theoretical maximum of 1 and minimum of 0; however, the scores are concentrated at the higher end of the scale, rendering the individual scores uninterpretable despite decent correlations. ## 5.4 Correlations Conditioned On Error Type Mathur et al. (2020b); Sai et al. (2021) show that correlations do not convey the true picture and it is important to perform in-depth analysis to understand the true ability of the metrics. Hence we perform the following experiment to examine the performance of metrics on the two primary error categories in the MQM framework, i.e, fluency and accuracy. We select those annotated segments that contain only a single error type in order to clearly separate the two error types. This gives us two MQM data subsets, one containing only fluency errors and the other only accuracy errors. Since the dataset size could be different, we control for the size by sampling an equal number of segments from both sets. Figure 5 contains the correlation values for the various metrics. Splitting the dataset based on the error types shows a more nuanced picture. The majority of the metrics show a higher correlation with human scores when only accuracy errors are annotated. This implies that the metrics are able to capture the accuracy errors well but fail on fluency-based errors. We hope that future works on designing better evaluation metrics for Indian languages focus more on developing metrics that can capture fluency-based errors. ## 6 Indic Comet Having analyzed various metrics, we fine-tune the best performing metric - COMET - using our MQM dataset (§6.1) and show that the new finetuned metric not only outperforms the COMET metric on the majority of the languages but also is more robust to perturbations (§6.2). Additionally, we also test the zero-shot evaluation ability of the Indic-COMET metric in §6.3. ## 6.1 Training We build our metric with the architecture of COMET (Rei et al., 2020). We use the Estimator model, which uses XLM-RoBERTa (Conneau et al., | Metrics | gu | hi | mr | ml | ta | Avg. | | | | | | | |---------------|-------|-------|-------|-------|-------|--------|-------|-------|-------|-------|-------|-------| | ρ | τ | ρ | τ | ρ | τ | ρ | τ | ρ | τ | ρ | τ | | | COMET-DA | 0.487 | 0.359 | 0.380 | 0.319 | 0.422 | 0.302 | 0.529 | 0.421 | 0.525 | 0.410 | 0.469 | 0.362 | | COMET-MQM | 0.422 | 0.346 | 0.528 | 0.370 | 0.455 | 0.314 | 0.493 | 0.380 | 0.588 | 0.429 | 0.497 | 0.367 | | IndicCOMETXLM | 0.437 | 0.353 | 0.609 | 0.397 | 0.413 | 0.311 | 0.559 | 0.418 | 0.585 | 0.426 | 0.521 | 0.381 | | IndicCOMETDA | 0.431 | 0.339 | 0.554 | 0.384 | 0.436 | 0.310 | 0.526 | 0.410 | 0.587 | 0.433 | 0.507 | 0.375 | | IndicCOMETMQM | 0.446 | 0.360 | 0.616 | 0.419 | 0.463 | 0.331 | 0.566 | 0.416 | 0.597 | 0.441 | 0.537 | 0.393 | 2019) backbone to encode the source, hypothesis, and reference. We use the same training process and hyper-parameters as COMET for a fair comparison (additional details in Appendix B.2). Following Rei et al. (2021), we experiment with initializing the model with different checkpoints, namely, XLM-R, COMET-DA, and COMET-MQM, and fine-tune it on our MQM dataset. | Metrics | gu | hi | mr | ml | ta | |---------------|-------|-------|-------|-------|-------| | COMETDA | 0.359 | 0.319 | 0.302 | 0.421 | 0.410 | | COMETMQM | 0.346 | 0.370 | 0.314 | 0.380 | 0.429 | | IndicCOMETMQM | 0.355 | 0.395 | 0.322 | 0.394 | 0.430 | ## 6.2 Evaluation Table 3 compares the correlation values of our fine-tuned Indic-COMET with the best-performing COMET baselines. Since no other evaluation datasets for Indian languages are available, we use our own MQM dataset for both training and testing. Hence to perform a throughout evaluation we perform a 3-fold cross-evaluation by splitting the dataset into 3 independent training and testing datasets and report the mean correlation values across the 5 languages in consideration. We observe that Indic-COMET fine-tuned from the COMET-MQM checkpoint shows higher correlations across all languages, compared to the other variants on average. Indic-COMET-MQM outperforms both the COMET baselines on 3 out of the 5 languages and shows higher correlations than COMET-MQM across all languages. The most notable gains are in Hindi. Inspired by recent works on meta-evaluation (Kocmi et al., 2022; Sai et al., 2021), we also analyze the robustness of metrics on challenge sets. We make use of the challenge set created by Amrhein et al. (2022) since it contains Indian languages. We use the subset of the dataset that only contains Indian languages and follow Amrhein et al. (2022) to report performance with Kendall's tau-like correlations. Indic-COMETMQM has a correlation score of 0.306 and is more robust than the COMET counterpart which has a score of 0.272. Overall, we observe that fine-tuning the COMET metric on our MQM dataset not only improves correlations with human scores but also increases the robustness to perturbations. ## 6.3 Zero-Shot Evaluation Since we evaluate only 5 Indian languages, out of the 22 official languages (and over a hundred major languages that are spoken in the country6), we investigate whether the metric has the potential to perform better in other Indian languages as well. In order to test this ability, we finetune on only 4 languages and test on the unseen one. We use the same evaluation setup as discussed in §6.2. Table 4 contains the comparison between the best performing Indic-COMET variant i.e.: Indic-COMETMQM and COMET baselines. We observe that IndicCOMET still outperforms both the COMET baselines on the majority of languages even though it is not trained on the specific Indian languages. It also shows higher correlations than COMET-MQM across all languages. This suggests that collecting annotations for some Indian languages is key for progress in Indic evaluation as it can benefit other low-resource languages too. ## 7 Conclusion We present a large-scale MQM dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems, for evaluating machine translation metrics for Indian languages. With the help of this dataset, we show that the 6https://www.britannica.com/topic/Indian-languages current pre-trained metrics outperform the overlapbased metrics (§5.1) in terms of correlations with the human scores. Additionally, we also perform an in-depth study (§5.4) to identify the drawbacks of the current metrics. We then use our dataset to train an Indic specific COMET metric that outperforms existing metrics in terms of both correlations and robustness scores (§6.2). We hope that our dataset and analysis will help promote further research in Indic MT evaluation. ## 8 Acknowledgements We thank the Ministry of Electronics and Information Technology (MeitY), Government of India, for setting up the ambitious Digital India Bhashini Mission with the goal of advancing Indian language technology. The annotators and language experts who worked on this project were supported by the generous grant given by Digital India Bhashini Mission to IIT Madras to serve as the Data Management Unit for the mission. We are indebted to Shri Nandan Nilekani and Shrimati Rohini Nilekani for supporting our work through generous grants from EkStep Foundation and Nilekani Philanthropies. These grants were used for (i) supporting many of the students, research associates and developers who worked on this project, (ii) fulfilling many of our compute needs and (iii) recruiting project managers to oversee the massive pan-India data collection activity undertaken as a part of this work. We thank the Centre for Development and Advancement of Computing, Pune (CDAC Pune) for access to its Param Siddhi super-computer which was used for mining bitext pairs at scale. We thank the Google India Ph.D. Fellowship Program and the Prime Minister's Fellowship Scheme for Doctoral Research for supporting Ananya Sai. ## 9 Limitations The approach to collect our dataset is expensive and laborious. This along with the dependence on expert annotators makes the transfer of such an approach challenging for other low-resource languages. We however, find this a necessary endeavor to develop initial resources that can help provide a starting point to extend access to more languages and iteratively improve research, technologies and services across languages. ## 10 Ethical Considerations For the human annotations on the dataset, the language experts were paid a competitive monthly salary to help with the task. The salary was determined based on the skill set and experience of the expert and adhered to the norms of the government of our country. The dataset has no harmful content. The annotations are collected on a publicly available dataset and will be released publicly for future use. All the datasets created as part of this work will be released under a CC-0 license7and all the code and models will be release under an MIT license8. ## References Chantal Amrhein, Nikita Moghe, and Liane Guillou. 2022. Aces: Translation accuracy challenge sets for evaluating machine translation metrics. *arXiv* preprint arXiv:2210.15615. Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597–610. Loïc Barrault, Magdalena Biesialska, Ondˇrej Bojar, Marta R. Costa-jussà, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubešic, Christof ´ Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In *Proceedings of* the Fifth Conference on Machine Translation, pages 1–55, Online. Association for Computational Linguistics. Loic Barrault, Ondrej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Tom Kocmi, Andre Martins, Makoto Morishita, and Christof Monz, editors. 2021. *Proceedings of the Sixth Conference on* Machine Translation. Association for Computational Linguistics, Online. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 7https://creativecommons.org/ publicdomain/zero/1.0 8https://opensource.org/licenses/MIT 2017. Findings of the 2017 conference on machine translation (WMT17). In *Proceedings of the Second* Conference on Machine Translation, pages 169–214, Copenhagen, Denmark. Association for Computational Linguistics. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016a. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131–198, Berlin, Germany. Association for Computational Linguistics. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, Lisbon, Portugal. Association for Computational Linguistics. Ondrej Bojar, Christian Federmann, Barry Haddow, Philipp Koehn, Matt Post, and Lucia Specia. 2016b. Ten years of wmt evaluation campaigns: Lessons learnt. *Translation Evaluation: From Fragmented* Tools and Data Sets to an Integrated Ecosystem, page 27. Ondrej Bojar, Yvette Graham, Amir Kamran, and Milos Stanojevic. 2016c. Results of the WMT16 metrics shared task. In Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, August 11-12, Berlin, Germany, pages 199–231. The Association for Computer Linguistics. Samuel Cahyawijaya, Genta Indra Winata, Bryan Wilie, Karissa Vincentio, Xiaohong Li, Adhiguna Kuncoro, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Khodra, Ayu Purwarianti, and Pascale Fung. 2021. IndoNLG: Benchmark and resources for evaluating Indonesian natural language generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 8875–8898, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of Bleu in machine translation research. In *11th Conference of* the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282–7296, Online. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. Raj Dabre, Himani Shrotriya, Anoop Kunchukuttan, Ratish Puduppully, Mitesh Khapra, and Pratyush Kumar. 2022. IndicBART: A pre-trained model for indic natural language generation. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 1849–1863, Dublin, Ireland. Association for Computational Linguistics. Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391–409. Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Languageagnostic bert sentence embedding. arXiv preprint arXiv:2007.01852. Gabriel Forgues and Joelle Pineau. 2014. Bootstrapping dialog systems with word embeddings. In *NeurIPS,* modern machine learning and natural language processing workshop. Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021a. Experts, errors, and context: A large-scale study of human evaluation for machine translation. *Transactions of the Association for Computational Linguistics*, 9:1460–1474. Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021b. Experts, errors, and context: A large-scale study of human evaluation for machine translation. *Transactions of the Association for Computational Linguistics*, 9:1460–1474. Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The Flores-101 evaluation benchmark for low-resource and multilingual machine translation. *Transactions of the Association for* Computational Linguistics, 10:522–538. Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages. In *Findings of EMNLP*. Marzena Karpinska, Nishant Raj, Katherine Thai, Yixiao Song, Ankita Gupta, and Mohit Iyyer. 2022. DEMETR: diagnosing evaluation metrics for translation. *CoRR*, abs/2210.13746. Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Kumar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, et al. 2021. Muril: Multilingual representations for indian languages. arXiv preprint arXiv:2103.10730. Filip Klubicka, Antonio Toral, and Víctor M. SánchezCartagena. 2018. Quantitative fine-grained human evaluation of machine translation systems: a case study on english to croatian. *Mach. Transl.*, 32(3):195–215. Tom Kocmi, Rachel Bawden, Ondřej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Rebecca Knowles, Philipp Koehn, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Michal Novák, Martin Popel, Maja Popović, and Mariya Shmatova. 2022. Findings of the 2022 conference on machine translation (wmt22). In *Proceedings of the Seventh Conference on Machine Translation*, pages 1–45, Abu Dhabi. Association for Computational Linguistics. Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In *Proceedings of the Sixth* Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics. Anoop Kunchukuttan and Pushpak Bhattacharyya. 2020. Utilizing language relatedness to improve machine translation: A case study on languages of the indian subcontinent. *arXiv preprint arXiv:2003.08925*. Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. *arXiv* preprint arXiv:1910.09700. Thomas K Landauer and Susan T Dumais. 1997. A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. *Psychological review*, 104(2):211. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Arle Lommel, Hans Uszkoreit, and Aljoscha Burchardt. 2014. Multidimensional quality metrics (mqm): A framework for declaring and describing translation quality metrics. Revista Tradumàtica: tecnologies de la traducció, (12):455–463. Annie Louis and Ani Nenkova. 2013. Automatically Assessing Machine Summary Content Without a Gold Standard. *Computational Linguistics*, 39(2):267– 300. Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020a. Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4984–4997, Online. Association for Computational Linguistics. Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020b. Tangled up in BLEU: reevaluating the evaluation of automatic machine translation evaluation metrics. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,* ACL 2020, Online, July 5-10, 2020, pages 4984–4997. Association for Computational Linguistics. Rudra Murthy, Anoop Kunchukuttan, and Pushpak Bhattacharyya. 2019. Addressing word-order divergence in multilingual neural machine translation for extremely low resource languages. In *Proceedings of* the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3868–3873, Minneapolis, Minnesota. Association for Computational Linguistics. Preksha Nema and Mitesh M. Khapra. 2018. Towards a better metric for evaluating question generation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3950–3959. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Jerin Philip, Shashank Siripragada, Upendra Kumar, Vinay Namboodiri, and C V Jawahar. 2019. CVIT's submissions to WAT-2019. In *Proceedings of the* 6th Workshop on Asian Translation, pages 131–136, Hong Kong, China. Association for Computational Linguistics. Maja Popovic. 2017. chrf++: words helping character n-grams. In Proceedings of the Second Conference on Machine Translation, WMT 2017, Copenhagen, Denmark, September 7-8, 2017, pages 612–618. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, Raghavan AK, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Mahalakshmi J, Divyanshu Kakwani, Navneet Kumar, Aswin Pradeep, Srihari Nagaraj, Kumar Deepak, Vivek Raghavan, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2022. Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages. Transactions of the Association for Computational Linguistics, 10:145–162. Ricardo Rei, Ana C Farinha, Chrysoula Zerva, Daan van Stigt, Craig Stewart, Pedro Ramos, Taisiya Glushkova, André F. T. Martins, and Alon Lavie. 2021. Are references really needed? unbabel-IST 2021 submission for the metrics shared task. In *Proceedings of the Sixth Conference on Machine Translation*, pages 1030–1040, Online. Association for Computational Linguistics. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Machel Reid, Junjie Hu, Graham Neubig, and Yutaka Matsuo. 2021. AfroMT: Pretraining strategies and reproducible benchmarks for translation of 8 African languages. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 1306–1320, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Irene Rivera-Trigueros and María-Dolores Olvera-Lobo. 2021. Building a corpus for corporate websites machine translation evaluation. a step by step methodological approach. In Proceedings of the Translation and Interpreting Technology Online Conference, pages 93–101, Held Online. INCOMA Ltd. Vasile Rus and Mihai C. Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In *BEA@NAACL-HLT*. Ananya B. Sai, Tanay Dixit, Dev Yashpal Sheth, Sreyas Mohan, and Mitesh M. Khapra. 2021. Perturbation CheckLists for evaluating NLG evaluation metrics. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7219–7234, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ananya B Sai, Akash Kumar Mohankumar, and Mitesh M Khapra. 2022. A survey of evaluation metrics used for nlg systems. ACM Computing Surveys (CSUR), 55(2):1–39. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´ Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. BLEURT: learning robust metrics for text generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,* ACL 2020, Online, July 5-10, 2020, pages 7881–7892. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In *In Proceedings of Association for Machine* Translation in the Americas, pages 223–231. Liling Tan, Jon Dehdari, and Josef van Genabith. 2015. An awkward disparity between BLEU / RIBES scores and human judgements in machine translation. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), pages 74–81, Kyoto, Japan. Workshop on Asian Translation. Brian Thompson and Matt Post. 2020. Automatic machine translation evaluation in many languages via zero-shot paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online. Association for Computational Linguistics. Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2018. Evaluation of machine translation performance across multiple genres and languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018. European Language Resources Association (ELRA). Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. ## A **Mqm Guidelines To Annotators, Error** Types & Severities The annotators assess translations at the segment level, where a segment may contain one or more sentences. Each translated segment is aligned with a corresponding source segment, and both the source and translated segments are displayed. Table 5 shows the error hierarchies for all the error types. Each category has severity levels ranging from very high to very low on a 5-point scale of (Very low, Low, Medium, High, and Very-high). Table 6 shows the descriptions of the end-points of the scale, as shown to the annotators. For computing scores for each segment based on the annotations, we use the following weights/penalties: very low: 1, low: 2, medium: 3, high: 4, very high: 5. Each of the sub-categories under Accuracy, Fluency, Terminology Inappropriate, Style have equal weightage since each of these are accompanied with a corresponding severity marking. Non-translation errors, by definition, elicit a score of 0. Sentences that are marked with a source error are discarded. The following guidelines were provided to the annotators: - Identify all errors within each translated segment, up to a maximum of five. If there are more than five errors, identify only the five most severe. - To identify an error, highlight the relevant span of text using text colors, and select a category/sub-category and severity level from the available options. (The span of text may be in the source segment if the error is a source error or an omission.) - When identifying errors, be as fine-grained as possible. For example, if a sentence contains two words that are each mistranslated, two separate mistranslation errors should be recorded. - If a single stretch of text contains multiple errors, (that is, if there are overlapping errors) one only needs to indicate the one that is most severe. If all have the same severity, choose the first matching category listed in the error typology (eg, Accuracy, then Fluency, then Terminology, etc). - There are two special error categories: Source error and Non-translation. Source errors should be annotated separately, highlighting the relevant span in the source segment. A sentence that has a source error need not be scored but the error in the source segment is to be highlighted. - If it is not possible to reliably identify distinct errors because the translation is too badly garbled or is unrelated to the source, then mark a single Non-translation error that spans the entire segment. There can be at most one Non-translation error per segment, which should span the entire segment. No other errors should be identified if Non-Translation is selected. - Depending on the quality of the translation and the errors identified, provide a score out of 25 for each translation. Indicate the score in the final score column, after marking all the errors (if any) for that translation. ## B Additional Details B.1 Mt Systems Considered For the mBART we use the Huggingface Transformers (Wolf et al., 2020) for generating the outputs for the various languages. Specifically, we use the facebook/mbart-large-50-many -to-many-mmt model. For mT5 we finetune the pre-trained mT*BASE* model for the translation task using all existing sources of parallel data provided by Ramesh et al. (2022). We finetune one model | Error Category | Explanation | | |---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------| | Accuracy | Addition | Translation includes information not present in the source. | | Omission | Translation is missing content from the source. | | | Mistranslation | Translation does not accurately represent the source. | | | Untranslated text | Source text has been left untranslated | | | Fluency | Spelling | Incorrect spelling or capitalization. | | Grammar | Problems with grammar, other than orthography. | | | Register | Wrong grammatical register (eg, inappropriately informal pronouns). | | | Character Encoding | Characters are garbled due to incorrect encoding. Example: Sink ->$ink | | | Terminology Inappropriate | Terminology is non-standard or does not fit context. | | | Style Awkward | The style of the text does not feel very apt. (Example: 1. The source sentence feels formal like in a newspaper, but the translation doesn't. 2. Sentences are correct, but simply too long, etc..) | | | Transliteration | If it transliterates instead of translating words/ phrases, where it should not. | | | Other | Any other issues. | | | Source Error | An error in the source. | | | Non Translation | Impossible to reliably characterize the 5 most severe errors. | | Table 5: Error Hierarchy with corresponding explanations provided to the annotators | Error Severity | Description | |------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Very High | Errors that may confuse or mislead the reader due to significant changes in meaning or because they appear in a visible or important part of the content. Errors that don't lead to loss of meaning and wouldn't confuse or mislead | | Very Low | the reader but would be noticed, would decrease stylistic quality, fluency, or clarity, or would make the content less appealing. | Table 6: Error Severity End-points Description for every language pair. For IndicTrans and CVIT, we use the models released by Ramesh et al. (2022) and Philip et al. (2019) respectively. ## B.2 Indic Comet Training All experiments were conducted using a private infrastructure, which has a carbon efficiency of 0.432 kgCO2eq/kWh. A cumulative of 10 hours of computation was performed on a single RTX A4000 GPU. Total emissions are estimated to be 0.6 kgCO2eq of which 0 percent were directly offset. Estimations were conducted using the MachineLearning Impact calculator presented in Lacoste et al. (2019). For training, we follow the same process as Rei et al. (2020). We load the pretrained encoder and initialize it with either XLM-Roberta, COMETDA or COME-MQM weights. During training, we divide the model parameters into two groups: the encoder parameters, that include the encoder model and the regressor parameters, that include the parameters from the top feed-forward network. We apply gradual unfreezing and discriminative learning rates, meaning that the encoder model is frozen | Hyperparameters | Value | |-----------------------|-------------| | batch size | 16 | | dropout | 0.1 | | encoder learning rate | 1.0e-05 | | encoder model | XLM-RoBERTa | | hidden sizes | 3072, 1536 | | layer | mix | | layerwise decay | 0.95 | | learning rate | 3.0e-05 | | no. of frozen epochs | 1 | | optimizer | AdamW | | pool | avg | Table 7: Hyper-parameters for training the various IndicCOMET model. The initialised model weights are the only difference between all variants; all variants share the same set of hyper-parameters. for one epoch while the feed-forward is optimized with a learning rate. After the first epoch, the entire model is fine-tuned with a different learning rate. Since we are fine-tuning on a small dataset, we make use of early stopping with a patience of 3. The best saved checkpoint is decided using the ![14_image_0.png](14_image_0.png) Metric Min Max Human 0 25 ![14_image_1.png](14_image_1.png) ![14_image_2.png](14_image_2.png) COMET -1.6 1.3 IndicBERT 0.46 1.0 Vector Extrema 0.3 1.0 GM 0.4 1.0 mBERT 0.56 1.0 MurIL 0.29 1.0 TER 0.0 361.1 chrF++ 1.7 100.0 sacreBLEU 0.0 100.0 ROUGE 0.0 100.0 BLEU1 0.0 100.0 overall Kendall-tau correlation on the test set. The training hyper-parameters used are given in Table 7. Since we have a total of 7000 annotated segments, we perform a 3 fold cross validation split (500 training and 2000 testing) and ensure that the English sentences in the test set are not present during training. We report the mean correlation values for each language. The variance was observed to be less than 0.02. A similar experiment setup was followed for the zero shot evaluation of IndicCOMET, where additionally training segments belonging to a particular language was dropped from the training dataset. ## C Additional Results Table 8 shows the maximum and minimum values of each metric on our dataset, across all languages. Note that while some of the metrics are bounded by a theoretical minimum and maximum, some others (especially the trained metrics) are not strictly restricted to a specific scoring range. It would be possible to see a lower minimum value or a higher maximum value on other datasets with such metrics. Figure 6 shows metric scores for different human score intervals (0-5, 6-10, 11-20, 21-25). This helps analyse whether the metric scores are roughly in the same buckets or same range as human-scores without focusing on the fine-grained ratings that might not always be of significance. From the plots, we observe that high-performing metrics such as BERTScores and COMET-DA correlate positively with the metric scores as the human scores increase. However, poor-performing metrics on Indic languages such as PRISM (due to lack of training data for Indic languages) do not have correlated metric v/s human spreads even at a coarse-level. Figure 7 depicts a scatter plot of metric scores on the y-axis against human scores on the x-axis. The scatter plots provide more insights than just ![15_image_0.png](15_image_0.png) the correlation values (Mathur et al., 2020b). We note that the metrics falter by producing some false high and false-low scores. However, the metrics produce a higher density of decently correlated scores to produce a net positive correlation trend in most cases. Table 9 shows the average scores per system considering the scores provided by the annotator on all the outputs from that system. We find that the best performing model changes across the 5 languages. For Hindi, Malayalam and Tamil, IndicTrans outputs are found to get higher scores on average. For Malayalam Bing API is a close-second and NLLB for Tamil. For Gujarati Bing-API is the best performing, with IndicTrans and NLLB performances being very close. In case of Marathi, NLLB outputs are better, followed by IndicTrans. Averaging further across all 5 languages, IndicTrans is found to be the highest scoring model. various metrics on the Fluency-only and Accuracyonly error subsets discussed in section 5.4. We observe that all the metrics on average correlate better with the human scores when only accuracy errors are annotated compared to having only fluency errors. Table 10 contains the correlation values for the | Average computed human scores for each system | | | | | | | | |-------------------------------------------------|------------|----------|------------|------------|--------|--------|--------| | lang | IndicTrans | Bing API | CVIT-IIITH | Google API | mBART | mT5 | NLLB | | gu | 22.639 | 23.179 | 19.034 | 21.686 | 0.000 | 20.067 | 22.490 | | hi | 20.120 | 14.405 | 14.962 | 19.484 | 15.703 | 18.012 | 18.445 | | mr | 18.484 | 17.934 | 17.586 | 15.750 | 5.773 | 14.441 | 18.618 | | ml | 22.676 | 22.617 | 17.844 | 21.955 | 17.355 | 20.169 | 21.515 | | ta | 17.978 | 16.516 | 11.933 | 16.651 | 13.522 | 15.994 | 17.578 | | avg | 20.379 | 18.930 | 16.272 | 19.105 | 10.471 | 17.737 | 19.729 | Table 9: Average human score per system gu hi mr ml ta Metric **Flu Acc Flu Acc Flu Acc Flu Acc Flu Acc** BLEU-1 0.138 0.268 0.067 0.151 0.162 0.215 0.212 0.388 0.145 0.371 BLEU-2 0.123 0.249 0.074 0.155 0.199 0.211 0.192 0.348 0.161 0.312 BLEU-3 0.126 0.242 0.077 0.159 0.202 0.203 0.18 0.313 0.162 0.275 BLEU-4 0.13 0.227 0.078 0.156 0.208 0.18 0.186 0.29 0.158 0.28 SacreBLEU 0.112 0.246 0.076 0.156 0.224 0.212 0.194 0.338 0.154 0.331 ROUGE-L 0.126 0.247 0.061 0.154 0.182 0.196 0.22 0.352 0.164 0.334 chrF++ 0.1 0.309 0.047 0.164 0.171 0.25 0.169 0.413 0.161 0.413 TER 0.127 0.232 0.072 0.154 0.18 0.209 0.237 0.341 0.15 0.317 EA 0.076 0.19 -0.004 0.091 0.135 0.171 0.184 0.363 0.069 0.362 VE 0.143 0.27 0.052 0.172 0.115 0.214 0.217 0.356 0.146 0.376 GM 0.13 0.265 0.038 0.142 0.18 0.219 0.214 0.383 0.187 0.42 LASER 0.102 0.171 -0.056 0.099 0.111 0.186 0.161 0.393 0.011 0.189 LabSE 0.086 0.342 -0.064 0.116 0.093 0.292 0.155 0.44 0.127 0.427 mBERT 0.099 0.313 0.068 0.209 0.168 0.278 0.23 0.434 0.159 0.435 distilmBERT 0.075 0.309 0.063 0.196 0.145 0.249 0.226 0.42 0.14 0.409 IndicBERT 0.111 0.31 0.063 0.184 0.18 0.276 0.217 0.425 0.158 0.437 MuRIL 0.093 0.331 0.063 0.203 0.165 0.283 0.229 0.436 0.18 0.461 PRISM 0.04 0.006 -0.051 0.078 0.078 0.133 0.001 0.115 -0.087 0.068 BLEURT-20 0.066 0.367 -0.016 0.194 0.155 0.341 0.232 0.451 0.193 0.457 COMET-DA 0.174 0.412 0.121 0.313 0.167 0.38 0.254 0.503 0.308 0.525 COMET-MQM 0.140 0.317 0.017 0.221 0.130 0.298 0.242 0.379 0.240 0.466 ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 9 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3,4,5,6 ✓ B1. Did you cite the creators of artifacts you used? 3,4,5,6 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 10 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 3,4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3,4, Appendix ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4, Appendix B.2 ## C ✓ **Did You Run Computational Experiments?** 6.1, Appendix ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 6.2 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix B D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 3 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix A ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 10 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix A ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? 10 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 3.2
zhu-etal-2023-weaker
Weaker Than You Think: A Critical Look at Weakly Supervised Learning
https://aclanthology.org/2023.acl-long.796
Weakly supervised learning is a popular approach for training machine learning models in low-resource settings. Instead of requesting high-quality yet costly human annotations, it allows training models with noisy annotations obtained from various weak sources. Recently, many sophisticated approaches have been proposed for robust training under label noise, reporting impressive results. In this paper, we revisit the setup of these approaches and find that the benefits brought by these approaches are significantly overestimated. Specifically, we find that the success of existing weakly supervised learning approaches heavily relies on the availability of clean validation samples which, as we show, can be leveraged much more efficiently by simply training on them. After using these clean labels in training, the advantages of using these sophisticated approaches are mostly wiped out. This remains true even when reducing the size of the available clean data to just five samples per class, making these approaches impractical. To understand the true value of weakly supervised learning, we thoroughly analyze diverse NLP datasets and tasks to ascertain when and why weakly supervised approaches work. Based on our findings, we provide recommendations for future research.
# Weaker Than You Think: A Critical Look At Weakly Supervised Learning Dawei Zhu1 Xiaoyu Shen2∗ Marius Mosbach1 Andreas Stephan3 **Dietrich Klakow**1 1Saarland University, Saarland Informatics Campus 2Amazon Alexa AI 3University of Vienna dzhu@lsv.uni-saarland.de ## Abstract Weakly supervised learning is a popular approach for training machine learning models in low-resource settings. Instead of requesting high-quality yet costly human annotations, it allows training models with noisy annotations obtained from various weak sources. Recently, many sophisticated approaches have been proposed for robust training under label noise, reporting impressive results. In this paper, we revisit the setup of these approaches and find that the benefits brought by these approaches are significantly overestimated. Specifically, we find that the success of existing weakly supervised learning approaches heavily relies on the availability of clean validation samples which, as we show, can be leveraged much more efficiently by simply training on them. After using these clean labels in training, the advantages of using these sophisticated approaches are mostly wiped out. This remains true even when reducing the size of the available clean data to just five samples per class, making these approaches impractical. To understand the true value of weakly supervised learning, we thoroughly analyze diverse NLP datasets and tasks to ascertain when and why weakly supervised approaches work. Based on our findings, we provide recommendations for future research.1 ## 1 Introduction Weakly supervised learning (WSL) is one of the most popular approaches for alleviating the annotation bottleneck in machine learning. Instead of collecting expensive clean annotations, it leverages weak labels from various weak labeling sources such as heuristic rules, knowledge bases or lowerquality crowdsourcing (Ratner et al., 2017). These weak labels are inexpensive to obtain, but are often noisy and inherit biases from their sources. Deep learning models trained on such noisy data without ∗Work done outside Amazon. 1Our code is available at: https://github.com/ uds-lsv/critical_wsl ![0_image_0.png](0_image_0.png) regularization can easily overfit to the noisy labels (Zhang et al., 2017; Tänzer et al., 2022). Many advanced WSL techniques have recently been proposed to combat the noise in weak labels, and significant progress has been reported. On certain datasets, they even manage to match the performance of fully-supervised models (Liang et al., 2020; Ren et al., 2020; Yu et al., 2021). In this paper, we take a close look at the claimed advances of these WSL approaches and find that the *benefits of using them are significantly overestimated*. Although they appear to require only weak labels during training, a substantial number of clean validation samples are used for various purposes such as early-stopping (Liang et al., 2020; Yu et al., 2021) and meta-learning (Ren et al., 2018; 14229 Shu et al., 2019; Zheng et al., 2021). We cast doubt on this practice: in real-world applications, these clean validation samples could have instead been used for training. To address our concern, we explore fine-tuning models directly on the validation splits of eight datasets provided by the WRENCH benchmark (Zhang et al., 2021b) and compare it to recent WSL algorithms. The results are shown in Figure 1. Interestingly, although all WSL models generalize better than the weak labels, **simply** fine-tuning on the validation splits outperforms all WSL methods in almost all cases, sometimes even by a large margin. This suggests that existing WSL approaches are not evaluated in a realistic setting and the claimed advances of these approaches may be overoptimistic. In order to determine the true benefits of WSL approaches in a realistic setting, we conduct extensive experiments to investigate the role of clean validation data in WSL. Our findings can be summarized as follows: - Without access to any clean validation samples, all WSL approaches analyzed in this work *fail to* work, performing similarly to or worse than the weak labels (§4). - Although increasing the amount of clean validation samples improves WSL performance (§5), these validation samples can be more efficiently leveraged by directly training on them, which can outperform WSL approaches when there are more than 10 samples per class for most datasets (§6). - Even when enabling WSL models to continue training on clean validation samples, they can barely beat an embarrassingly simple baseline which directly fine-tunes on weak labels followed by fine-tuning on clean samples. This stays true with as few as 5 samples per class (§7). - The knowledge encoded in pre-trained language models biases them to seek linguistic correlations rather than shallow rules from the weak labels; further fine-tuning the pretrained language models with contradicting examples helps reduce biases from weak labels (§8). Altogether, we show that existing WSL approaches significantly overestimate their benefits in a realistic setting. We suggest future work to (1) fully leverage the available clean samples instead of only using them for validation and (2) consider the simple baselines discussed in this work when comparing WSL approaches to better understand WSL's true benefits. ## 2 Related Work Weak supervision. Weak supervision is proposed to ease the annotation bottleneck in training machine learning models. It uses weak sources to automatically annotate the data, making it possible to obtain a large amount of annotated data at a low cost. A comprehensive survey is done in Zhang et al. (2022). Ratner et al. (2017) propose to label data programmatically using heuristics such as keywords, regular expressions or knowledge bases. One drawback of weak supervision is that its annotations are noisy, i.e., some annotations are incorrect. Training models on such noisy data may result in poor generalization (Zhang et al., 2017; Tänzer et al., 2022; Zhang et al., 2022). One option to counter the impact of wrongly labeled samples is to re-weight the impact of examples in loss computation (Ren et al., 2018; Shu et al., 2019; Zheng et al., 2021). Another line of research leverages the knowledge encoded in large language models (Ren et al., 2020; Stephan et al., 2022). Methods such as BOND (Liang et al., 2020), ASTRA (Karamanolakis et al., 2021) and COSINE (Yu et al., 2021) apply teacher-student frameworks to train noise-robust models. Zhu et al. (2023) show that teacher-student frameworks may still be fragile in challenging situations and propose incorporating meta-learning techniques in such cases. Multiple benchmarks are available to evaluate weak supervision systems, e.g., WRENCH (Zhang et al., 2021b), Skweak (Lison et al., 2021), and WALNUT (Zheng et al., 2022a). In this paper, we take representative datasets from WRENCH and reevaluate existing WSL approaches in more realistic settings. Realistic evaluation. Certain pitfalls have been identified when evaluating machine learning models developed for low-resource situations. Earlier work in semi-supervised learning (SSL) in computer vision, for example, often trains with a few hundred training examples while retaining thousands of validation samples for model selection (Tarvainen and Valpola, 2017; Miyato et al., 2018). Oliver et al. (2018) criticize this setting and provide specific guidance for realistic SSL evaluation. Recent work in SSL has been adapted to discard the validation set and use a fixed set of hyperparameters across datasets (Xie et al., 2020; Zhang et al., 2021a; Li et al., 2021). In NLP, it has been shown that certain (prompt-based) few-shot learning approaches are sensitive to prompt selection which requires separate validation samples (Perez et al., 2021). This defeats the purported goal of few-shot learning, which is to achieve high performance even when collecting additional data is prohibitive. Recent few-shot learning algorithms and benchmarks have adapted to a more realistic setting in which fine-grained model selection is either skipped (Gao et al., 2021; Alex et al., 2021; Bragg et al., 2021; Schick and Schütze, 2022; Lu et al., 2022) or the number of validation samples are strictly controlled (Bragg et al., 2021; Zheng et al., 2022b). To our knowledge, no similar work exists exploring the aforementioned problems in the context of weak supervision. This motivates our work. ## 3 Overall Setup Problem formulation. Formally, let X and Y be the feature and label space, respectively. In standard supervised learning, we have access to a training set D = {(xi, yi)} N i=1 sampled from a clean data distribution Dc of random variables (*X, Y* ) ∈ X × Y. In weak supervision, we are instead given a weakly labeled dataset Dw = {(xi, yˆi)} N i=1 sampled from a noisy distribution Dn, where yˆi represents labels obtained from weak labeling sources such as heuristic rules or crowd-sourcing.2 yˆiis noisy, i.e., it may be different from the ground-truth label yi. The goal of WSL algorithms is to *obtain a* model that generalizes well on Dtest ∼ Dc despite being trained on Dw ∼ Dn. In recent WSL work, a set of clean samples, Dv ∼ Dc, is also often included for model selection.3 Datasets. We experiment with eight datasets covering different NLP tasks in English. Concretely, we include four text classification datasets: (1) AGNews (Zhang et al., 2015), (2) IMDb (Maas et al., 2011), (3) Yelp (Zhang et al., 2015), (4) TREC (Li and Roth, 2002), two relation classification datasets: (5) SemEval (Hendrickx et al., 2010) and (6) ChemProt (Krallinger et al., 2017), and 2Majority voting can be used to resolve conflicting weak labels from different labeling sources. 3We refer to model selection as the process of finding the best set of hyperparameters via a validation set, including the optimal early-stopping time. Prior work has shown that earlystopping is crucial for learning with noisy labels (Arpit et al., 2017; Yu et al., 2021; Zhu et al., 2022; Tänzer et al., 2022). two Named-Entity Recognition (NER) datasets: (7) CoNLL-03 (Tjong Kim Sang and De Meulder, 2003) and (8) OntoNotes (Pradhan et al., 2013). The weak annotations are obtained from the WRENCH (Zhang et al., 2021b) benchmark. Table 1 summarizes the basic statistics of the datasets. | Dataset | Task | # Class | # Train | # Val | # Test | |---------------|-----------|-----------|-----------|---------|----------| | AGNews | Topic | 4 | 96K | 12K | 12K | | IMDb | Sentiment | 2 | 20K | 2.5K | 2.5K | | Yelp | Sentiment | 2 | 30K | 3.8K | 3.8K | | TREC | Question | 6 | 4,965 | 500 | 500 | | SemEval | Relation | 9 | 1,749 | 178 | 600 | | ChemProt | Relation | 10 | 13K | 1.6K | 1.6K | | CoNLL-03 | NER | 4 | 14K | 3.2K | 3.4K | | OntoNotes 5.0 | NER | 18 | 115K | 5K | 23K | Table 1: **Dataset statistics.** Additional details on datasets are provided in Appendix A. WSL baselines. We analyze popular WSL approaches including: (1) FTW represents the standard fine-tuning approach4(Howard and Ruder, 2018; Devlin et al., 2019). Ren et al. (2020), Zhang et al. (2021b) and Zheng et al. (2022a) show that a pre-trained language model (PLM) fine-tuned on a weakly labeled dataset often generalizes better than the weak labels synthesized by weak labeling sources. (2) L2R (Ren et al., 2018) uses metalearning to determine the optimal weights for each (noisy) training sample so that the model performs best on the (clean) validation set. Although this method was originally proposed to tackle artificial label noise, we find it performs on par with or better than recent weak supervision algorithms on a range of datasets. (3) MLC (Zheng et al., 2021) uses meta-learning as well, but instead of weighting the noisy labels, it uses the meta-model to correct them. The classifier is then trained on the corrected labels. (4) **BOND** (Liang et al., 2020) is a noise-aware self-training framework designed for learning with weak annotations. (5) **COSINE** (Yu et al., 2021) underpins self-training with contrastive regularization to improve noise robustness further and achieves state-of-the-art performance on the WRENCH (Zhang et al., 2021b) benchmark. To provide a fair comparison, we use RoBERTabase (Liu et al., 2019) as the common backbone PLM for all WSL approaches (re)implemented in this paper. 4We use the subscript "W" to emphasize that this finetuning is done on the weakly annotated data and to distinguish it from the fine-tuning experiments in Section 6 which are done on clean data. ![3_image_0.png](3_image_0.png) ## 4 Is Clean Data Necessary For Wsl? Recent best-performing WSL approaches rely on a clean validation set for model selection. Figure 1 reveals that they fail to outperform a simple model that is directly fine-tuned on the validation set. Therefore, a natural question to ask is: "*Will* WSL still work without accessing the clean validation set?". If the answer is yes, then we can truly reduce the burden of data annotation and the benefits of these WSL approaches would be undisputed. This section aims to answer this question. Setup. We compare three different validation choices for model selection using either (1) a clean validation set from Dv as in prior work, (2) weak labels from D˜v obtained by annotating the validation set via weak labeling sources (the same procedure used to construct training annotations), or (3) no validation data at all. In the last setting, we randomly select 5 sets of hyperparameters from our search space (see Appendix C). We run the WSL approaches introduced in Section 3 on all eight datasets with different validation choices and measure their test performance. Each experiment is repeated 5 times with different seeds. While one may expect a certain drop in performance when switching from Dv to D˜v, the absolute performance of a model does not determine the usefulness of a WSL method. We are more interested in whether a trained model generalizes better than the weak labels.5In realistic applications, it is only worth deploying trained models if they demonstrate clear advantages over the weak labels. Therefore, we report the relative performance gain of WSL approaches over the weak labels. Formally, let PW L, Pα denote the performance (accuracy, F1-score, etc.) achieved by the weak labels and a certain WSL method α, respectively. The the relative performance gain is defined as Gα = (Pα − PW L)/PW L. We consider a WSL approach to be *effective* and practically useful only if Gα > 0. Results. Figure 2 shows the relative performance gain for all considered WSL approaches. When model selection is performed on a clean validation set (green curve), all weak supervision baselines generalize better than the weak labels. Sophisticated methods like COSINE and L2R push the performance even further. This observation is consistent with previous findings (Zhang et al., 2021b; Zheng et al., 2022a). However, when using a weakly labeled validation set (yellow curve), all WSL baselines become *ineffective* and barely outperform the weak labels. More interestingly, models selected through the weakly labeled validation sets do not outperform models configured with random hyperparameters (purple curve). These results demonstrate that model selection on clean validation samples plays a vital role in the effectiveness of WSL methods. **Without clean validation** samples, existing WSL approaches do not work. ## 5 How Much Clean Data Does Wsl Need? Now that we know clean samples are necessary for WSL approaches to work, a follow-up question would be: "How many clean samples do we need?" Intuitively, we expect an improvement in performance as we increase the amount of clean data, but it is unclear how quickly this improvement starts to level off, i.e., we may find that a few dozen clean samples are enough for WSL approaches to perform model selection. The following section seeks to answer this question. 5Weak labeling sources are typically applied to the training data to synthesize a weakly annotated training set. However, it is also possible to synthesize the weak labels for the test set following the same procedure and measure their performance. In other words, weak labeling sources can be regarded as the most basic classification model, and the synthesized weak labels are its predictions. ![4_image_0.png](4_image_0.png) Setup. We apply individual WSL approaches and vary the size of clean data sub-sampled from the original validation split. For text and relation classification tasks, we draw an increasing number of clean samples N ∈ {5, 10, 15, 20, 30, 40, 50} per class when applicable.6In the case of NER, as a sentence may contain multiple labels from different classes, selecting exactly N samples per class at random is impractical. Hence, for NER we sample N ∈ {50, 100, 200, 300, 400, 500} sentences for validation. For each N, we run the same experiment 5 times. Note that the clean data is *used solely* for model selection in this set of experiments. Results. As shown in Figure 3, in most cases, a handful of validation samples already make WSL work better than the weak labels. We observe an increasing trend in performance with more validation samples, but typically this trend weakens with a moderate size of samples (~30 samples per class or ~200 sentences) and adding more samples provides little benefit. There are a few exceptions. For example, on IMDb all methods except L2R consistently perform better with more validation data. On CoNLL-03, on the other hand, most methods seem to be less sensitive to the number of samples. Overall, the results suggest that **a small amount** 6The validation set of SemEval is too small to support N > 20. Also, if a dataset is unbalanced, we randomly select N × C samples, where C denotes the number of classes. This is a realistic sampling procedure when performing data annotation. of clean validation samples may be sufficient for current WSL methods to achieve good performance. Using thousands of validation samples, like in the established benchmarks (Zhang et al., 2021b; Zheng et al., 2022a), is neither realistic nor necessary. ## 6 Is Wsl Useful With Less Clean Data? The previous sections have shown that current WSL approaches (1) do not improve over direct finetuning on the existing validation splits (Figure 1) and (2) require only a small amount of validation samples to be effective (Figure 3). This section investigates whether the conclusion from Figure 1 would change with less clean data, i.e., can WSL approaches outperform direct fine-tuning when less clean data is available? Setup. We follow the same procedure as in Section 5 to subsample the *cleanly annotated* validation sets and fine-tune models directly on the sampled data. In addition to the standard fine-tuning approach (Devlin et al., 2019), we also experiment with three parameter-efficient fine-tuning (PEFT) approaches as - in the few-shot setting - they have been shown to achieve comparable or even better performance than fine-tuning all parameters (Peters et al., 2019; Logan IV et al., 2022; Liu et al., 2022). In particular, we include adapters (Houlsby et al., 2019), LoRA (Hu et al., 2022), and BitFit (Zaken et al., 2022). ![5_image_0.png](5_image_0.png) We use one fixed set of hyperparameter configurations and train models for 6000 steps on each dataset.7 We report performance at the last step and compare it with WSL approaches which use the same amount of clean data for validation. Results. Figure 4 shows the performance difference between the fine-tuning baselines and COSINE, one of the best-performing WSL approaches, when varying the number of clean samples. It can be seen that in extremely low-resource cases (less than 5 clean samples per class), COSINE outperforms fine-tuning. However, fine-tuning approaches quickly take over when more clean samples are available. LoRA performs better than COSINE on three out of four text classification tasks with just 10 samples per class. AGNews is the only exception, where COSINE outperforms LoRA by about 1% when 20 samples per class are available, but adapters outperform COSINE in this case. Relation extraction has the same trend where 10–20 samples per class are often enough for fine-tuning approaches to catch up. For NER tasks, all finetuning approaches outperform COSINE with as 7The hyperparameters are randomly picked from the ranges mentioned in the original papers of corresponding methods and fixed across all experiments. *We did not cherrypick them based on the test performances*. In most cases the training loss converges within 300 steps. We intentionally extend training to show that we do not rely on extra data for early-stopping. We find that overfitting to the clean data does not hurt generalization. A similar observation is made in Mosbach et al. (2021). Detailed configurations are presented in Appendix D. few as 50 sentences on CoNLL-03. OntoNotes seems to be more challenging for fine-tuning and 400 sentences are required to overtake COSINE. Still, 400 sentences only account for 0.3% of the weakly labeled samples used for training COSINE. This indicates that models can benefit much more from training on a small set of clean data rather than on vast amounts of weakly labeled data. Note that the fine-tuning approaches we experiment with work out-of-the-box across NLP tasks. If one specific task is targeted, few-shot learning methods with manually designed prompts might perform even better.8 Hence, the performance shown here should be understood as a lower bound of what one can achieve by fine-tuning. Nevertheless, we can see that even considering the lower bound of fine-tuning-based methods, **the advantage of using WSL approaches vanishes when we have as** few as 10 clean samples per class. For many realworld applications, this annotation workload may be acceptable, limiting the applicability of WSL approaches. ## 7 Can Wsl Benefit From Fine-Tuning? The WSL approaches have only used clean samples for validation so far, which is shown to be inefficient compared to training directly on them. 8For example, Zhao et al. (2021) achieve an accuracy of 85.9% on AGNews using just 4 labeled samples in total. For comparison, COSINE needs 20 labeled samples for validation to reach 84.21%. ![6_image_0.png](6_image_0.png) We question whether enabling WSL methods to further fine-tune on these clean samples would improve their performance. In this section, we study a straightforward training approach that makes use of both clean and weak labels.9 Setup. Given both the weakly labeled training data and a small amount of clean data, we consider a simple two-phase training baseline. In the first phase, we apply WSL approaches on the weakly labeled training set, using the clean data for validation. In the second phase, we take the model trained on the weakly labeled data as a starting point and continue to train it on the clean data. We call this approach continuous fine-tuning (CFT). In our experiment, we apply CFT to the two bestperforming WSL approaches, COSINE and L2R, along with the most basic WSL baseline, FTW. We sample clean data in the same way as in Section 5. The training steps of the second phase are fixed at 6000. Each experiment is repeated 5 times with different seeds. Results. Figure 5 shows the model performance before and after applying CFT. It can be seen that 9In Appendix E we also explored other baselines that combine clean and weak data, but they perform considerably worse than the approach we consider in this section. CFT does indeed benefit WSL approaches in most cases even with very little clean data (Figure 5a). For L2R, however, the improvement is less obvious, and there is even a decrease on Yelp and OntoNotes. This could be because L2R uses the validation loss to reweight training samples, meaning that the value of the validation samples beyond that may only be minimal. When more clean samples are provided, CFT exhibits a greater performance gain (Figure 5b). It is also noticeable that CFT reduces the performance gap among all three WSL methods substantially. Even the simplest approach, FTW, is comparable to or beats L2R and COSINE in all tasks after applying CFT. Considering that COSINE and L2R consume far more computing resources, our findings suggest that **the net benefit** of using sophisticated WSL approaches may be significantly overestimated and impractical for real-world use cases. Finally, we find the advantage of performing WSL diminishes with the increase of clean samples even after considering the boost from CFT. When 50 clean samples per class (500 sentences for NER) are available, applying WSL+CFT only results in a performance boost of less than 1% on 6 out of 8 datasets, compared with the baseline which only fine-tunes on clean samples. Note that weak la- ![7_image_0.png](7_image_0.png) bels are no free lunch. Managing weak annotation resources necessitates experts who not only have linguistic expertise for annotation but also the ability to transform that knowledge into programs to automate annotations. This additional requirement naturally reduces the pool of eligible candidates and raises the cost. In this situation, annotating a certain amount of clean samples may be significantly faster and cheaper. Thus, we believe WSL has a long way to go before being truly helpful in realistic low-resource scenarios. ## 8 What Makes Ftw**+Cft Effective?** As seen in the previous section, combining FTW with CFT yields a strong baseline that more sophisticated WSL approaches can hardly surpass. This section examines factors that contribute to the effectiveness of this method. Specifically, we aim to answer two questions: (1) "How does FTW resist biases despite being trained only on weak labels?" and (2) "How does CFT further reduce bias introduced by weak labels?". Setup. To answer question (1), we modify the backbone PLM to see if its encoded knowledge plays an important role. We explore two additional PLMs that are pre-trained on less data: RoBERTasmall-1M and RoBERTa-base-10M, which are pre- ![7_image_1.png](7_image_1.png) trained on 1M and 10M words, respectively.10 We report model performance on both clean labels and weak labels to see which labels the model tends to fit. To answer question (2), we vary the agreement ratio in the clean samples to see how these clean labels help combat biases from weak labels. The agreement ratio is defined as the percentage of samples whose clean labels match the corresponding weak labels. Intuitively, if the clean label for a specific training example matches its weak label, then this example may not contribute additional information to help combat bias. A higher agreement ratio should therefore indicate fewer informative samples. Results. Figure 6 shows the performances for different PLMs. Pre-training on more data clearly helps to overcome biases from weak labels. When the pre-training corpus is small, the model tends to fit the noisy weak labels more quickly than the clean labels and struggles to outperform weak labels throughout the entire training process. With a large pre-training corpus, however, the model can make better predictions on clean labels than 10The original RoBERTa-base model is pre-trained on 100B words. The two less pre-trained models are obtained from (Warstadt et al., 2020). RoBERTa-base-10M retains the same architecture as RoBERTa-base, while RoBERTa-small-1M contains fewer parameters. weak labels in the early stages of training, even when it is only trained on weak labels. If we apply proper early-stopping before the model is eventually dragged toward weak labels, we can attain a model that generalizes significantly better than the weak labels. This indicates that pre-training provides the model with an inductive bias to seek more general linguistic correlations instead of superficial correlations from the weak labels, which aligns with previous findings in Warstadt et al. (2020). This turns out to be the key to why simple FTW works here. Figure 7 shows how the agreement ratio α in clean samples affects the performance. Performance declines substantially for α > 70%, showing that it is necessary to have contradictory samples in order to reap the full advantage of CFT. This is reasonable, given that having examples with clean labels that coincide with their weak labels may reinforce the unintended bias learned from the weakly labeled training set. The optimal agreement ratio lies around 50%. However, having α = 0 also yields decent performance for most datasets except TREC, suggesting contradictory samples play a more important role here and at least a minimum set of contradictory samples are required for CFT to be beneficial. ## 9 Conclusions And Recommendations Our extensive experiments provide strong evidence that recent WSL approaches heavily overestimate their performance and practicality. We demonstrated that they hinge on clean samples for model selection to reach the claimed performance, yet models that are simply trained on these clean samples are already better. When both clean and weak labels are available, a simple baseline (FTW+CFT) performs on par with or better than more sophisticated methods while requiring much less computation and effort for model selection. Inspired by prior work (Oliver et al., 2018; Perez et al., 2021), our recommendations for future WSL approaches are the following: - Report the model selection criteria for proposed methods and, especially, how much they rely on the presence of clean data. - Report how many cleanly annotated samples are required for a few-shot learning approach to reach the performance of a proposed WSL approach. If thousands of weakly annotated samples are comparable to a handful of clean samples - as we have seen in Section 6 - then WSL may not be the best choice for the given low-resource setting. - If a proposed WSL method requires extra clean data, such as for validation, then the simple FTW+CFT baseline should be included in evaluation to claim the real benefits gained by applying the method. We hope our findings and recommendations will spur more robust future work in WSL such that new methods are truly beneficial in realistic lowresource scenarios. ## Limitations We facilitate fair comparisons and realistic evaluations of recent WSL approaches. However, our study is not exhaustive and has the following limitations. First, it may be possible to perform model selection by utilizing prior knowledge about the dataset. For example, if the noise ratio (the proportion of incorrect labels in the training set) is known in advance, it can be used to determine (a subset of) hyperparameters (Han et al., 2018; Li et al., 2020). In this case, certain WSL approaches may still work without access to extra clean data. Second, in this paper we concentrate on tasks in English where strong PLMs are available. As we have shown in Section 6, training them on a small amount of data is sufficient for generalization. For low-resource languages where no PLMs are available, training may not be that effective, and WSL methods may achieve higher performance. Third, we experiment with datasets from the established WRENCH benchmark, where the weak labels are frequently assigned by simple rules like as regular expressions (see Appendix B for examples). However, in a broader context, weak supervision can have different forms. For example, Smith et al. (2022) generates weak labels through large language models. Zhou et al. (2022) use hyper-link information as weak labels for passage retrieval. We have not extended our research to more diverse types of weak labels. Despite the above limitations, however, we identify the pitfalls in the existing evaluation of current WSL methods and demonstrate simple yet strong baselines through comprehensive experiments on a wide range of tasks. ## Acknowledgments We thank Vagrant Gautam for thoughtful suggestions and insightful discussions. We would also like to thank our anonymous reviewers for their constructive feedback. This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 232722074 – SFB 1102 and the EU Horizon 2020 projects ROXANNE under grant number 833635. ## References Neel Alex, Eli Lifland, Lewis Tunstall, Abhishek Thakur, Pegah Maham, C. Jess Riedel, Emmie Hine, Carolyn Ashurst, Paul Sedille, Alexis Carlier, Michael Noetel, and Andreas Stuhlmüller. 2021. RAFT: A real-world few-shot text classification benchmark. In *Proceedings of the Neural Information Processing Systems Track on Datasets and* Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual. Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron C. Courville, Yoshua Bengio, and Simon Lacoste-Julien. 2017. A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 233–242. PMLR. Jonathan Bragg, Arman Cohan, Kyle Lo, and Iz Beltagy. 2021. FLEX: unifying evaluation for few-shot NLP. In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information* Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 15787–15800. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3816–3830. Association for Computational Linguistics. Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor W. Tsang, and Masashi Sugiyama. 2018. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 8536–8546. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: Multiway classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33–38, Uppsala, Sweden. Association for Computational Linguistics. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In *Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long* Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2790–2799. PMLR. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 328–339. Association for Computational Linguistics. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Giannis Karamanolakis, Subhabrata Mukherjee, Guoqing Zheng, and Ahmed Hassan Awadallah. 2021. Self-training with weak supervision. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 845–863. Association for Computational Linguistics. Martin Krallinger, Obdulia Rabal, Saber A Akhondi, Martın Pérez Pérez, Jesús Santamaría, Gael Pérez Rodríguez, Georgios Tsatsaronis, Ander Intxaurrondo, José Antonio López, Umesh Nandal, et al. 2017. Overview of the BioCreative VI chemicalprotein interaction track. In *Proceedings of the* sixth BioCreative challenge evaluation workshop, volume 1, pages 141–146. Junnan Li, Richard Socher, and Steven C. H. Hoi. 2020. DivideMix: Learning with noisy labels as semisupervised learning. In 8th International Confer- ence on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Junnan Li, Caiming Xiong, and Steven C. H. Hoi. 2021. CoMatch: Semi-supervised learning with contrastive graph regularization. In *2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021,* Montreal, QC, Canada, October 10-17, 2021, pages 9455–9464. IEEE. Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics. Chen Liang, Yue Yu, Haoming Jiang, Siawpeng Er, Ruijia Wang, Tuo Zhao, and Chao Zhang. 2020. BOND: BERT-assisted open-domain named entity recognition with distant supervision. In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 1054–1064. ACM. Pierre Lison, Jeremy Barnes, and Aliaksandr Hubin. 2021. skweak: Weak supervision made easy for NLP. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 337–346, Online. Association for Computational Linguistics. Haokun Liu, Derek Tam, Muqeeth Mohammed, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. In Advances in Neural Information Processing Systems. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Robert Logan IV, Ivana Balazevic, Eric Wallace, Fabio Petroni, Sameer Singh, and Sebastian Riedel. 2022. Cutting down on prompts and parameters: Simple few-shot learning with language models. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2824–2835, Dublin, Ireland. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8086– 8098. Association for Computational Linguistics. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In *Proceedings of the 49th Annual Meeting of* the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT '11, page 142–150, USA. Association for Computational Linguistics. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semisupervised learning. *IEEE transactions on pattern* analysis and machine intelligence, 41(8):1979–1993. Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net. Avital Oliver, Augustus Odena, Colin Raffel, Ekin Dogus Cubuk, and Ian J. Goodfellow. 2018. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 3239– 3250. Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. In *Advances in Neural Information Processing Systems 34:* Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 11054–11070. Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. In *Proceedings of* the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7–14, Florence, Italy. Association for Computational Linguistics. Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun ´ Cho, and Iryna Gurevych. 2020. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46–54. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In *Proceedings of the Seventeenth Conference on Computational* Natural Language Learning, pages 143–152, Sofia, Bulgaria. Association for Computational Linguistics. Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Ré. 2017. Snorkel: Rapid training data creation with weak supervision. *Proc. VLDB Endow.*, 11(3):269–282. Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. 2018. Learning to reweight examples for robust deep learning. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 4331–4340. PMLR. Wendi Ren, Yinghao Li, Hanting Su, David Kartchner, Cassie Mitchell, and Chao Zhang. 2020. Denoising multi-source weak supervision for neural text classification. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3739–3754, Online. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2022. True few-shot learning with Prompts—A real-world perspective. Transactions of the Association for Computational Linguistics, 10:716–731. Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. 2019. Meta-weightnet: Learning an explicit mapping for sample weighting. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 1917–1928. Ryan Smith, Jason A. Fries, Braden Hancock, and Stephen H. Bach. 2022. Language models in the loop: Incorporating prompting into weak supervision. CoRR, abs/2205.02318. Andreas Stephan, Vasiliki Kougia, and Benjamin Roth. 2022. SepLL: Separating latent class labels from weak supervision noise. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 3918–3929, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Michael Tänzer, Sebastian Ruder, and Marek Rei. 2022. Memorisation versus generalisation in pre-trained language models. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7564–7578. Association for Computational Linguistics. Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In *5th International Conference on Learning* Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142– 147. Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, and Samuel R. Bowman. 2020. Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually). In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 217–235, Online. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Qizhe Xie, Zihang Dai, Eduard H. Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, and Chao Zhang. 2021. Fine-tuning pretrained language model with weak supervision: A contrastive-regularized self-training approach. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1063–1077. Association for Computational Linguistics. Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2022, Dublin, Ireland, May 2227, 2022, pages 1–9. Association for Computational Linguistics. Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. 2021a. FlexMatch: Boosting semisupervised learning with curriculum pseudo labeling. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 18408–18419. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Jieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, and Alexander Ratner. 2022. A survey on programmatic weak supervision. *CoRR*, abs/2202.05433. Jieyu Zhang, Yue Yu, NameError, Yujing Wang, Yaming Yang, Mao Yang, and Alexander Ratner. 2021b. WRENCH: A comprehensive benchmark for weak supervision. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 12697–12706. PMLR. Guoqing Zheng, Ahmed Hassan Awadallah, and Susan T. Dumais. 2021. Meta label correction for noisy label learning. In *Thirty-Fifth AAAI Conference* on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 11053–11061. AAAI Press. Guoqing Zheng, Giannis Karamanolakis, Kai Shu, and Ahmed Awadallah. 2022a. WALNUT: A benchmark on semi-weakly supervised learning for natural language understanding. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 873–899, Seattle, United States. Association for Computational Linguistics. Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Chonghua Liao, Li Jian, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, and Zhilin Yang. 2022b. FewNLU: Benchmarking state-of-the-art methods for few-shot natural language understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 501–516. Association for Computational Linguistics. Jiawei Zhou, Xiaoguang Li, Lifeng Shang, Lan Luo, Ke Zhan, Enrui Hu, Xinyu Zhang, Hao Jiang, Zhao Cao, Fan Yu, Xin Jiang, Qun Liu, and Lei Chen. 2022. Hyperlink-induced pre-training for passage retrieval in open-domain question answering. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7135–7146. Association for Computational Linguistics. Dawei Zhu, Michael A. Hedderich, Fangzhou Zhai, David Ifeoluwa Adelani, and Dietrich Klakow. 2022. Is BERT robust to label noise? A study on learning with noisy labels in text classification. In Proceedings of the Third Workshop on Insights from Negative Results in NLP, Insights@ACL 2022, Dublin, Ireland, May 26, 2022, pages 62–67. Association for Computational Linguistics. Dawei Zhu, Xiaoyu Shen, Michael A. Hedderich, and Dietrich Klakow. 2023. Meta self-refinement for robust learning with weak supervision. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2023, Dubrovnik, Croatia, May 2-6, 2023, pages 1043–1058. Association for Computational Linguistics. ## A Datasets In the following, we give a more comprehensive description of the datasets used. A subset of the commonly used WRENCH (Zhang et al., 2021b) benchmark is used, covering various aspects such as task type, coverage and dataset size. There is a total of four classification, two relation extraction and two sequence labeling datasets. See Table 2 for a detailed set of data statistics. AGNews (Zhang et al., 2015) is a topic classification dataset. The task is to classify news articles into four topics, namely world, sports, business and Sci-Fi/technology. Each labeling function is composed of multiple keywords to search for. The number of keywords differs from a few up to dozens. IMDb (Maas et al., 2011) is a dataset of movie reviews sampled from the IMDb website. The task is binary sentiment analysis. The labeling functions are composed of keyword searches and regular expressions. Yelp (Zhang et al., 2015) is another sentiment analysis dataset, containing crowd-sourced business reviews. The labeling functions are created using keywords and a lexicon-based sentiment analysis library. TREC (Li and Roth, 2002) is a question classification dataset, i.e., it asks what type of response is expected. The labels are abbreviation, description and abstract concepts, entities, human beings, locations or numeric values. The labeling functions are created using regular expressions and make a lot of use of question words such as "what", "where" or "who". SemEval (Hendrickx et al., 2010) is a relation classification dataset, using nine relation types. Examples for relation labels are cause-effect, entityorigin or message-topic. Labeling functions are created using entities within a regular expression. ChemProt (Krallinger et al., 2017) is another relation classification dataset, focusing on chemical research literature. It contains ten different types of relations, for example chemical-protein relations such as "biological properties upregulator". The labeling functions are created using rules. CoNLL-03 (Tjong Kim Sang and De Meulder, 2003) is a named entity recognition (NER) dataset, with labels for the entities "person", "location", "organization", and "miscellaneous". Labeling functions are built using previously trained keywords, regular expressions and NER models. OntoNotes 5.0 (Pradhan et al., 2013) is a another NER dataset, using more fine-grained entities as CoNLL-03. Here, a subset of the CoNLL weak labeling sources is combined with keyword and regular expression based weak labeling sources. ## B Labeling Functions Weak labeling sources are often abstracted as labeling functions and vary in aspects such as coverage, precision, or overlap (Ratner et al., 2017; Karamanolakis et al., 2021). To showcase how the weak labeling process works, a selection of examples of labeling functions is presented. More specifically, we provide examples of rules for the two classification datasets IMDb (Table 3) and TREC (Table 4), the relation classification dataset SemEval (Table 5) and the NER dataset CoNLL-03 (Table 6). ## C Overall Implementation Details This section summarizes the overall implementation details of WSL approaches used in our paper. Refer to Appendix D for hyperparameter configurations of PEFT approaches. We use the PyTorch framework11 to implement all approaches discussed in the paper. Hugging Face (Wolf et al., 2020) is used for downloading and training the RoBERTa-base model. AdapterHub (Pfeiffer et al., 2020) is used for implementing parameter-efficient fine-tuning. Hyperparameters In this paper, we implemented five WSL methods: FT (Devlin et al., 2019), L2R (Ren et al., 2018), MLC (Zheng et al., 2021), BOND (Liang et al., 2020), and COSINE (Yu et al., 2021). We report the search ranges of the hyperparameters in Table 7. We do not search for batch size as we find it has minor effects on the final performance. Instead, a batch size of 32 is used across experiments. Also, RoBERTa-base (Liu et al., 2019) is used as the backbone PLM and AdamW (Loshchilov and Hutter, 2019) is the optimizer used across all methods. Computing infrastructure and training cost We use Nvidia V100-32 GPUs for training deep learning models. All WSL approaches studied in 11https://pytorch.org/ | Avg. over labeling functions (LFs) | | | | | | | | | | | | | |--------------------------------------|---------------------------|----------|------|----------------|-----------|----------|-----------|--------|-------|---------|--------|--------| | Dataset | Task | #Classes | #LFs | %Ovr. Coverage | %Coverage | %Overlap | %Conflict | %Prec. | MV | #Train | #Dev | #Test | | AGNews | News Class. | 4 | 9 | 69.08 | 10.34 | 5.05 | 2.43 | 81.66 | 81.23 | 96,000 | 12,000 | 12,000 | | IMDb | Movie Sentiment Class. | 2 | 5 | 87.58 | 23.60 | 11.60 | 4.50 | 69.88 | 73.86 | 20,000 | 2,500 | 2,500 | | Yelp | Business Sentiment Class. | 2 | 8 | 82.78 | 18.34 | 13.58 | 4.94 | 73.05 | 73.31 | 30,400 | 3,800 | 3,800 | | TREC | Question Class. | 6 | 68 | 95.13 | 2.55 | 1.82 | 0.84 | 75.92 | 62.58 | 4,965 | 500 | 500 | | SemEval | Web Text Relation Class. | 9 | 164 | 100.00 | 0.77 | 0.32 | 0.14 | 97.69 | 77.33 | 1,749 | 200 | 692 | | ChemProt | Chemical Relation Class. | 10 | 26 | 85.62 | 5.93 | 4.40 | 3.95 | 46.65 | 55.12 | 12,861 | 1,607 | 1,607 | | CoNLL-03 | English News NER | 4 | 16 | 100 | 100 | 4.30 | 1.44 | 72.19 | 60.38 | 14,041 | 3250 | 3453 | | OntoNotes 5.0 | Multi-Domain NER | 18 | 17 | 100 | 100 | 1.55 | 0.54 | 54.84 | 58.92 | 115,812 | 5,000 | 22,897 | Table 3: Examples of two keyword based and two regular expression based rules for the IMDb dataset. | Label | Labeling Function | |---------|------------------------------------------------------------------------------------------------| | POS | beautiful, handsome, talented | | NEG | than this, than the film, than the movie | | POS | .*(highly|do|would|definitely|certainly|strongly|i|we).*(recommend|nominate).* | | POS | .*(high|timeless|priceless|HAS|great|real|instructive).*(value|quality|meaning|significance).* | this paper can fit into one single GPU. We report the training time of the WSL methods in Table 8. ## D Training With Clean Samples D.1 Methods And Implementation Details In Section 6, we apply four (parameter-efficient) fine-tuning approaches to train models on clean validation sets. Since we do not have extra data for model selection, we choose a fixed set of hyperparameters for all datasets. In the following we briefly introduce the fine-tuning approaches, together with their hyperparameter configurations. - Vanilla fine-tuning (Devlin et al., 2019; Liu et al., 2019) is the standard fine-tuning approaches for pre-trained language models. It works by adding a randomly initialized classifier on top of the pre-trained model and training it together with all other model parameters. We use a fixed learning rate of 2e−5in all experiments. - Adapter-based fine-tuning (Houlsby et al., 2019) adds additional feed-forward layers called adapters to each layer of the pre-trained language model. During fine-tuning, we only update the weights of these adapter layers and keep all other parameters *frozen* at their pretrained values. We use a fixed learning rate of 2e−5in all experiments. The reduction factor is set to 16. - BitFit (Zaken et al., 2022) updates only the bias parameters of every layer and keeps all other weights frozen. Despite its simplicity it has been demonstrated to achieve similar results to adapter-based fine-tuning. We use a fixed learning rate of 1e−4in all experiments. - LoRA (Hu et al., 2022) is a recently proposed adapter-based fine-tuning method which uses a low-rank bottleneck architecture in each of the newly added feed-forward networks. The motivation here is to perform a low rank update to the model during fine-tuning. We use a fixed learning rate of 2e−5in all experiments. The α value used in LoRa is fixed to 16. In all experiments, the batch size used in all finetuning approaches is 32. The optimizer is AdamW (Loshchilov and Hutter, 2019). ## D.2 Training On The Full Validation Sets In addition to training sets, the WRENCH (Zhang et al., 2021b) benchmark provides a validation set for each of its tasks. The validation sets are cleanly annotated and typically range in size from 5% to 25% of the weakly annotated training sets. Although such validation size is reasonable for fully supervised learning, we suspect that it is exorbitant in the sense that it provides a significantly better training signal for models than the weakly annotated training set. Thus we compare the performance of recent WSL approaches that access Label Labeling Function Table 4: Rules for the TREC dataset. For each label a representative labeling function is given. | ABBREVIATION | ( |^)(what|what)[^\w]* (\w+ ){0,1}(does|does)[^\w]* ([^\s]+ )*(stand for)[^\w]*( |$) | |----------------|----------------------------------------------------------------------------------------| | DESCRIPTION | ( |^)(explain|describe|how|how)[^\w]* (\w+ ){0,1}(can|can)[^\w]*( |$) | | ENTITY | ( |^)(which|what|what)[^\w]* ([^\s]+ )*(organization|trust|company|company)[^\w]*( |$) | | LOCATION | ( |^)(which|what|where|where)[^\w]* ([^\s]+ )*(situated|located|located)[^\w]*( |$) | | NUMERIC | ( |^)(by how|how|how)[^\w]* (\w+ ){0,1}(much|many|many)[^\w]*( |$) | Table 5: One labeling function for each label of the SemEval dataset. Here e1 and e2 are entities which are already available in the dataset. | Label | Labeling Function | |---------------------------|---------------------------------------| | Cause-Effect(e1,e2) | SUBJ-O caused OBJ-O | | Component-Whole(e1,e2) | SUBJ-O is a part of the OBJ-O | | Content-Container(e1,e2) | SUBJ-O was contained in a large OBJ-O | | Entity-Destination(e1,e2) | SUBJ-O into OBJ-O | | Entity-Origin(e1,e2) | SUBJ-O emerged from the OBJ-O | | Instrument-Agency(e2,e1) | SUBJ-O took the OBJ-O | | Member-Collection(e2,e1) | SUBJ-O of different OBJ-O | | Message-Topic(e1,e2) | SUBJ-O states that the OBJ-O | | Product-Producer(e1,e2) | SUBJ-O created by the OBJ-TITLE | Table 6: For each label, one labeling function of the CoNLL-03 dataset is displayed. | Label | Labeling Function | |---------------|-------------------------------------------------------------------------------| | PERSON | RegEx searching list one of 7559 first names, followed by an upper-cased word | | LOCATION | List of 15205 places | | ORGANIZATION | WTO, Starbucks, mcdonald, google, Baidu, IBM, Sony, Nikon | | MISCELLANEOUS | List of countries, languages, events and facilities | | Hyperparameter | Search Range | Hyperparameter | Search Range | |----------------------------------------------------|------------------------------|------------------|------------------| | Learning rate | 2e-5, 3e-5, 5e-5 | | | | Warm-up steps | 50, 100, 200 | Learning rate | 2e-5, 3e-5, 5e-5 | | Meta-learning rate | 1e-4, 2e-5, 1e-5 | | | | (a) FT (for both training on clean or weak labels) | (b) L2R | | | | Hyperparameter | Search Range | | | | Hyperparameter | Search Range | Learning rate | 2e-5, 3e-5, 5e-5 | | T1 | 5000 | | | | T2 | 5000 | | | | T3 | 50, 100, 300, 500 | | | | Confidence threshold | 0.1, 0.3, 0.5, 0.7, 0.8, 0.9 | | | | Learning rate | 2e-5, 3e-5, 5e-5 | | | | Meta-learning rate | 1e-4, 2e-5, 1e-5 | | | | hdim | 512, 768 | | | | (c) MLC | (d) BOND | | | | Hyperparameter | Search Range | | | | Learning rate | 2e-5, 3e-5, 5e-5 | | | | T1 | 5000 | | | | T2 | 5000 | | | | T3 | 50, 100, 300, 500 | | | | Distance measure | cosine | | | | Regularization factor | 0.05 0.1 0.2 | | | | Confidence threshold | 0.1, 0.3, 0.5, 0.7, 0.8, 0.9 | | | | (e) COSINE | | | | | AGNews | IMDb | Yelp | TREC | SemEval | ChemProt | CoNLL-03 | OntoNotes 5.0 | | |----------|--------|--------|--------|-----------|------------|------------|-----------------|-----| | FT | 0.2 | 0.2 | 0.2 | 0.1 | 0.1 | 0.2 | 0.2 | 0.5 | | L2R | 2.0 | 1.2 | 1.5 | 0.3 | 0.3 | 0.4 | 0.9 | 1.2 | | MLC | 1.2 | 0.8 | 1.2 | 0.3 | 0.2 | 0.5 | 1.2 | 1.0 | | BOND | 0.5 | 0.2 | 0.5 | 0.1 | 0.1 | 0.2 | 0.4 | 1.1 | | COSINE | 0.6 | 0.2 | 0.6 | 0.2 | 0.2 | 0.3 | 0.5 | 1.5 | Table 8: Running time in hours of each WSL method when trained on a weakly labeled training set. Since we also track the validation and test performance during training, the training time reported here actually overestimates the training time required for each method. both the training and validation sets with a model that is directly fine-tuned on the validation set. The following WSL methods are included in this experiment: L2R (Ren et al., 2018), MetaWN (Shu et al., 2019), BOND (Liang et al., 2020), Denoise (Ren et al., 2020), MLC (Zheng et al., 2021), and COSINE (Yu et al., 2021). Following prior work, we select the best set of hyperparameters via the validation set when applying the WSL methods. Also, early-stopping based on the validation performance is applied. In contrast, the direct fine-tuning baseline uses a fixed set of hyperparameters across all datasets, and no early-stopping is applied (same configuration as in Appendix D.1). We train this baseline for 6000 steps. In all cases, the training losses converged much earlier than 6000 steps, but we deliberately kept training for longer to show that the good performance achieved by this baseline is not due to any fine-grained configurations. As shown in Figure 1, this simple baseline outperforms all the WSL methods in all but one case. ## D.3 **Extended Comparison Of Training On Clean** Data And Validation For Wsl Approaches In Section 6, standard fine-tuning (FT) and multiple parameter-efficient fine-tuning (PEFT) are compared with the competitive WSL method COSINE. In this section, we provide additional plots which show the same comparison with the other WSL methods examined in this work, namely L2R, MLC, and BOND. We report average performance (Acc. and F1 in %) difference between (parameterefficient) fine-tuning methods and the specific WSL method for varying number of clean samples. The overall tendency is consistent with the results in Section 6: WSL methods perform well on a small amount of clean labeled data but PEFT outperforms WSL methods with an increasing amount of clean labeled data. ## E **Additional Baselines That Combine Weak** And Clean Data During Training Besides CFT we also explored two simple baselines that combine both the cleanly and weakly annotated data in training: 1. WCmix: it mixes the clean data into the weakly labeled training set. We then fine-tune a PLM on this combined dataset. 2. WC**batch**: in each batch, we mix the weakly and cleanly labeled data at a ratio of 50:50. This makes sure that the model can access clean samples in each batch. We compared these two baselines with CFT, the results are shown in Figure 9. It can be seen that when the same amount of data is accessed, CFT outperforms the two baselines in most cases, sometimes by a large margin. ## F Additional Plots On Cft With Different Numbers Of Clean Samples We show further plots of experiments in Section 7 with different numbers of clean samples in Figure 10. More specifically, it shows the results for selecting N ∈ {10, 20, 30, 40} clean samples per class from the clean validation set for classification and N ∈ {100, 200, 300, 400} for NER tasks. These results corroborate the analysis presented in Section 7. ## G Cft With Different Plms And Agreement Ratios We provide additional plots of the experiments mentioned in Section 8 on more datasets. Figure 11 shows the performance of CFT using different PLMs during training and Figure 12 shows the performance when the number of clean samples and the agreement ratio is varied. ![18_image_0.png](18_image_0.png) ![19_image_0.png](19_image_0.png) ![20_image_0.png](20_image_0.png) ![21_image_0.png](21_image_0.png) ![22_image_0.png](22_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section after conclusion, no section number A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** Sec 4-8 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Sec 4-8 + appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Sec 4-8 + appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sec 4-8 + appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Sec 4-8 + appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
li-etal-2023-prompt
Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases
https://aclanthology.org/2023.acl-long.797
As the representation capability of Pre-trained Language Models (PLMs) improve, there is growing concern that they will inherit social biases from unprocessed corpora. Most previous debiasing techniques used Counterfactual Data Augmentation (CDA) to balance the training corpus. However, CDA slightly modifies the original corpus, limiting the representation distance between different demographic groups to a narrow range. As a result, the debiasing model easily fits the differences between counterfactual pairs, which affects its debiasing performance with limited text resources. In this paper, we propose an adversarial training-inspired two-stage debiasing model using Contrastive learning with Continuous Prompt Augmentation (named CCPA) to mitigate social biases in PLMs{'} encoding. In the first stage, we propose a data augmentation method based on continuous prompt tuning to push farther the representation distance between sample pairs along different demographic groups. In the second stage, we utilize contrastive learning to pull closer the representation distance between the augmented sample pairs and then fine-tune PLMs{'} parameters to get debiased encoding. Our approach guides the model to achieve stronger debiasing performance by adding difficulty to the training process. Extensive experiments show that CCPA outperforms baselines in terms of debiasing performance. Meanwhile, experimental results on the GLUE benchmark show that CCPA retains the language modeling capability of PLMs.
## Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach To Mitigate Social Biases Yingji Li1, Mengnan Du2, Xin Wang3∗, Ying Wang1,4∗ 1College of Computer Science and Technology, Jilin University, Changchun, China 2Department of Data Science, New Jersey Institute of Technology, Newark, USA 3School of Artificial Intelligence, Jilin University, Changchun, China 4Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China yingji21@mails.jlu.edu.cn, mengnan.du@njit.edu, {xinwang,wangying2010}@jlu.edu.cn ## Abstract As the representation capability of Pre-trained Language Models (PLMs) improve, there is growing concern that they will inherit social biases from unprocessed corpora. Most previous debiasing techniques used Counterfactual Data Augmentation (CDA) to balance the training corpus. However, CDA slightly modifies the original corpus, limiting the representation distance between different demographic groups to a narrow range. As a result, the debiasing model easily fits the differences between counterfactual pairs, which affects its debiasing performance with limited text resources. In this paper, we propose an adversarial training-inspired two-stage debiasing model using Contrastive learning with Continuous Prompt Augmentation (named CCPA) to mitigate social biases in PLMs' encoding. In the first stage, we propose a data augmentation method based on continuous prompt tuning to push farther the representation distance between sample pairs along different demographic groups. In the second stage, we utilize contrastive learning to pull closer the representation distance between the augmented sample pairs and then fine-tune PLMs' parameters to get debiased encoding. Our approach guides the model to achieve stronger debiasing performance by adding difficulty to the training process. Extensive experiments show that CCPA outperforms baselines in terms of debiasing performance. Meanwhile, experimental results on the GLUE benchmark show that CCPA retains the language modeling capability of PLMs. ## 1 Introduction Pre-trained Language Models (PLMs) have demonstrated outstanding performance in recent years and have been widely used in natural language understanding tasks (Peters et al., 2018; Delobelle et al., 2022). However, the powerful language modeling ∗Corresponding author ![0_image_0.png](0_image_0.png) capability enables PLMs to learn good representations from large-scale training corpora while capturing human-like social biases. Recent studies have demonstrated that the representations encoded by PLMs learn social biases specific to demographic groups (e.g., gender, race, religion) and can be amplified to downstream tasks, leading to unfair outcomes and adverse social effects (Zhao et al., 2019; Webster et al., 2020). As a result, mitigating social biases in PLMs' encoding can improve the fairness of NLP systems significantly (Bolukbasi et al., 2016; Bender and Friedman, 2018). Most existing debiasing techniques first need to construct sample pairs using Counterfactual Data Augmentation (CDA) (Zmigrod et al., 2019; Wang et al., 2022) to balance the training corpora. The general approach of CDA is to replace the original corpus with attribute words (e.g., he/she, man/*woman*) specific to different demographic groups. For example, RCDA (Chen et al., 2021) uses a generator to generate a large number of antisense sentences and then uses a discriminator 14254 to evaluate the quality of the original and antisense samples jointly. FairFil (Cheng et al., 2021) obtains a pair of positive sample sentences by replacing the attribute words in the training corpora with the antonyms and then uses contrastive learning to train a filter for debiasing. Auto-Debias (Guo et al., 2022) uses pairs of attribute words as training corpora, amplifies the bias between sample pairs by searching biased prompt texts in the Wikipedia vocabulary, and then performs semantic alignment using Jensen-Shannon divergence. These methods aim to mitigate social biases between different demographic groups by narrowing the representation distance between sample pairs. However, CDA slightly modifies the original corpus, limiting the representation distance between different demographic groups to a narrow range. As a result, the debiasing model is easy to overfit the difference between counterfactual pairs, which affects its learning ability with limited text resources. As shown in Figure 1, it is difficult for PLMs to achieve the ideal debiasing performance for newly input samples with greater difficulty. In this work, we propose a two-stage debiasing method using Contrastive learning with Continuous Prompt Augmentation (named CCPA) to mitigate social biases in PLMs' encoding. Inspired by adversarial training, our approach improves the debiasing ability of PLMs by first amplifying and then attenuating the bias between different demographic groups. Specifically, we first use CDA to replace attribute words in the original training corpus to construct counterfactual pairs corresponding to different demographic groups. In the first stage, we augment the positive sample pairs with continuous prompt tuning to increase the distance between them to amplify the biases between different demographic groups. In the second stage, we utilize contrastive learning to pull the distance between the positive sample pairs to attenuate the biases between different demographic groups. CCPA increases the difficulty of model fitting by expanding the representation space between sample pairs. We believe that difficult learning experiences make the model more powerful, thus improving the debiasing ability of PLMs training in corpora with limited resources. Our main contributions are as follows: - We propose the CCPA debiasing framework that combines prompt tuning and contrastive learning to learn a debiased PLM representation. The PLM's parameters are fixed in the first stage, and a generator encoding continuous prompts is trained. In the second stage, the prompts are fixed, and the PLM's parameters are fine-tuned using contrastive learning. - We propose data augmentation using continuous prompts to achieve excellent debiasing performance using small training data rather than relying on a large external corpus. Given that continuous prompts may cause the representation distance between sample pairs to be too far apart, causing the semantic space to degrade, we propose constraining the prompt tuning using the Mahalanobis Distance to keep the semantic space as stable as possible. - We train CCPA on several real-world corpora and mitigate bias on the most common gender bias. The results on BERT and DistilBERT show that CCPA is superior to state-of-the-art models. In addition, we test the downstream tasks on the GLUE benchmark, and show that CCPA retains the language modeling capability while improving the PLMs' fairness. ## 2 Methodology In this section, we propose the Contrastive learning with Continuous Prompt Augmentation (CCPA) framework to mitigate the social bias in the encoding of PLMs specific to the most common gender bias. Our proposed CCPA consists of two stages: 1) Continuous Prompt Tuning and 2) FineTuning with Contrastive Learning. The framework of CCPA is shown in Figure 2. ## 2.1 Pre-Processing Based On Cda First, we pre-process the training corpus with imbalanced samples using Counterfactual Data Augmentation (CDA). Given a list of attribute words specific to gender bias,1for each attribute word (e.g., male/*female*), we match sentences containing an attribute word in the training corpus. The attribute word is then replaced with the opposite word in a different gender direction (e.g., *male* is replaced by *female*), leaving the other words unchanged. Then, we get the pre-processed training corpus S = {(s1, s′1 ),(s2, s′2 ), · · · ,(sN , s′N )} consists of N counterfactual pairs (si, s′i ) along different gender directions. 1We only consider the binary gender direction and use the same list of gender-specific attribute words as (Bolukbasi et al., 2016; Liang et al., 2020; Cheng et al., 2021). ![2_image_0.png](2_image_0.png) ## 2.2 Continuous Prompt Tuning Prompt-based learning is similar to giving instructions to the model task to guide the model learning knowledge more directly (Petroni et al., 2019). A lot of work utilize manually constructed prompts (Schick and Schütze, 2020, 2021) or automatically searched discrete prompts (Shin et al., 2020) to assist language models. However, manually constructed templates are heavily based on the designers' experience and automatically searched prompts are limited by the search space (Liu et al., 2021a). Instead of limiting the prompts to human interpretable natural language, the continuous prompts (Li and Liang, 2021; Zhong et al., 2021) guide directly within the embedding space of the model. Meanwhile, continuous prompts tune their parameters, removing the constraint of templates being parameterized by PLMs' parameters. Inspired by adversarial training, we believe that increasing the difficulty of the training process can guide the model in acquiring a stronger learning ability. To achieve this goal, we propose a data augmentation method based on continuous prompt tuning to further push the differences between counterfactual pairs. Data augmentation method based on continuous prompt tuning adds difficult information to the model by concatenating embeddings that amplify bias across different demographic groups over counterfactual pairs. Given a template T = {[p1], [p2], *· · ·* , [pm], s}, where s denotes a sentence, [pj ] is a virtual token represented as [*P ROMP T*] and m virtual tokens form a prompt sequence P. For each counterfactual pair (si, s′i ) ∈ S obtained by data preprocessing, we concatenate the same prompt sequence P at the head of each sentence (see Figure 2). The augmented sample pair is denoted by ( ˆsi, sˆi ′) and is fed into a PLM to obtain the sentence representation. Formally, let M denote a PLM whose encoder E(·) encodes an input sentence sˆi and outputs a sentence embedding zi = E( ˆsi). Similarly, z′i = E( ˆsi ′). In order to obtain continuous prompt embeddings, we train a generator G(·) to encode the prompt sequence P. Following P-Tuning (Liu et al., 2021b), we choose a bidirectional long-short-term memory network (LSTM), which consists of a two-layer multilayer perceptron (MLP) and a ReLU activation layer. The embedding hj of each virtual token [pj ] in the prompts sequence is encoded by G(·) as follows: $$\begin{array}{l}\mathbf{h}_{j}=G([\mathbf{h}_{j}:\mathbf{\hat{h}}_{j}])\\ =G([LSTM(\mathbf{h}_{1:j}):LSTM(\mathbf{h}_{j:m+1})]).\end{array}\tag{1}$$ Afterwards, we replace the continuous prompt embeddings {h1, h2, *· · ·* , hm} to the corresponding positions of the sentence embeddings zito obtain the sentence representations pairs (zi, z′i ). In this stage, our training objective is to push away the distance of representation (zi, z′i ) between sample pairs ( ˆsi, sˆi ′). Briefly, we take the Cosine Similarity between sentence representations as the loss function, defined as follows: $${\mathcal{L}}_{c o s}={\frac{\mathbf{z}\cdot\mathbf{z}^{\prime}}{\|\mathbf{z}\|\|\mathbf{z}^{\prime}\|}}={\frac{\sum_{i=1}^{n}\mathbf{z}_{i}\cdot\mathbf{z}_{i}^{\prime}}{\sqrt{\sum_{i=1}^{n}\mathbf{z}_{i}^{2}}{\sqrt{\sum_{i=1}^{n}\mathbf{z}_{i}^{\prime2}}}}},\ (2)$$ where z and z′ denote sentence representations with different sensitive attributes within a batch of size n, respectively. The representation distance between the sample pairs is enlarged with the gradient of similarity decreasing, thus amplifying the bias information between different genders. Considering that the sentence representation with high-dimensional linear distribution is not independently and equally distributed among the dimensions, only relying on Euclidean distance training may cause the sentence representation to deviate from the original distribution and thus destroy the semantic information. To constrain the change of sentence representation within the original distribution, Mahalanobis distance is taken as the regularization term of the loss function: $${\mathcal{L}}_{m a h a l}={\sqrt{(\mathbf{z}-\mathbf{S})^{\top}\Sigma^{-1}(\mathbf{z}-\mathbf{S})}},\qquad\qquad(3)$$ where z is the representation of a batch size of samples with concatenated prompt embeddings, S is the representation of the entire pre-processed training samples without concatenated prompt embeddings, and Σ is the covariance matrix of S. Mahalanobis distance is a correction of the Euclidean distance, which corrects the assumption that the Euclidean distance is independent and equally distributed among all dimensions. With the constraint of Mahalanobis distance, the augmented samples of each batch can vary within the distribution range of the original training data to maintain the semantics. The overall loss function of the continuous prompt tuning stage is defined as: $${\mathcal{L}}_{P T}={\mathcal{L}}_{c o s}+\alpha\times{\mathcal{L}}_{m a h a l},$$ where α is a hyperparameter that adjusts the weight of L*mahal*. In the gradient descent process of LP T , we only adjust the parameters of the generator G(·) and fix the PLMs' parameters to obtain the continuous prompt embeddings that further amplifies the bias between different sensitive attributes. Algorithm 1: Proposed CCPA framework. Input: Pre-processed training corpus S, PLM encoder E(·), Initial prompt generator G(·), Prompt template T, Hyperparameter *α, β, τ* . 1 **while** *stage 1* do 2 Apply T to ∀(si, s′i) ∈ S to obtain (ˆsi, sˆ ′i); 3 Obtain (zi, z ′i) = (E(ˆsi), E(ˆs ′i)); 4 Replace {h1, h2, *· · ·* , hm} encoded by G(·) in the corresponding position in (zi, z ′i); 5 Calculate Lcos and L*mahal* with {(zi, z ′i)} n i=1; 6 Update G(·)'s parameters following Equation 4; 7 end 8 **while** *stage 2* do 9 Mask ∀(si, s′i) randomly with a 15% probability; 10 Obtain {(zi, z ′i)} n i=1 using E(·) and G(·); 11 Calculate Lnce and Lmlm and update E(·)'s parameters following Equation 6. 12 end ## 2.3 Fine-Tuning With Contrastive Learning We then use contrastive learning to mitigate the social bias in PLMs' encoding for different demographic groups. Contrastive learning (Yang et al., 2019) is a task-agnostic self-supervision method that learns data features by minimizing contrastive loss to maximize the similarity of the representation vectors of positive sample pairs (Das et al., 2022). Specifically, we encourage as much consistency as possible among representations of different sensitive attributes by maximizing the similarity of the augmented counterfactual pairs. Noise Contrast Estimation (Gutmann and Hyvärinen, 2010) is usually used as a contrastive loss function, given an augmented sample pair of a batch {(ˆsi, sˆ′i )} n i=1, which is defined as follows: $${\mathcal{L}}_{n c e}={\frac{1}{n}}\sum_{i=1}^{n}\log{\frac{e^{s i m({\bf z}_{i},{\bf z}_{i}^{\prime})/\tau}}{{\frac{1}{n}}\sum_{j=1}^{n}e^{s i m({\bf z}_{i},{\bf z}_{j})/\tau}}},\quad(5)$$ $$(4)$$ where (zi, z′i ) = (E(ˆsi), E(ˆs′i )), τ is a temperature hyperparameter and sim(·, ·) denotes the similarity function usually using cosine similarity. During training, we only fine-tune the PLMs' parameters and fix the embedding of continuous prompts. By maximizing Lnce, differences in the encoding of PLM outputs specific to different demographic groups are eliminated, resulting in representations independent of sensitive attributes. Considering that the attenuation of biases towards encoding may affect PLMs' language modeling capability, we add a Masking Language Modeling (MLM) loss during the fine-tuning stage to aid PLM training (He et al., 2022). Following previous work (Devlin et al., 2019), we randomly mask tokens in training texts with a 15% probability.2 Our objective is to train the encoder to predict the masked tokens through contextual semantics, thereby preserving the language modeling capability of PLMs. The overall loss function in the fine-tuning stage is defined as follows: $${\mathcal{L}}_{F T}={\mathcal{L}}_{n c e}+\beta\times{\mathcal{L}}_{m l m},$$ LF T = Lnce + β × Lmlm, (6) where β is a hyperparameter that controls the weight of Lmlm. Our overall algorithm is given in Algorithm 1. ## 3 Experiments In this section, we conduct experiments to evaluate the performance of CCPA, in order to answer the following three research questions. Q1. How effective is CCPA in mitigating social biases in PLMs' encoding? Q2. How does each component affect CCPA? Q3. Will CCPA preserve the language modeling capability of PLMs? ## 3.1 Experimental Setup 3.1.1 Attribute Word List & Datasets Following (Bolukbasi et al., 2016; Liang et al., 2020; Cheng et al., 2021; He et al., 2022), our gender attribute word list is set to: {*MALE, FEMALE*}={(man, woman), (boy, girl), (he, she), (father, mother), (son, daughter), (guy, gal), (male, female), (his, her), (himself, herself), (John, Mary)}. Following (Liang et al., 2020; Cheng et al., 2021), we select five real-world datasets as the initial training corpus, which are Stanford Sentiment Treebank (Socher et al., 2013), POM (Park et al., 2014), WikiText-2 (Merity et al., 2017), Reddit (Völske et al., 2017) and MELD (Poria et al., 2019) respectively. We set the maximum sentence length to 100, and the pre-processed training corpus contained 10,510 sentences. ## 3.1.2 Baselines & Implementation Details We select seven recent task-agnostic debiasing models as baselines. CDA (Zmigrod et al., 2019), Dropout (Webster et al., 2020), **Sent-Debias** (Liang et al., 2020), **FairFil** (Cheng et al., 2021), INLP (Ravfogel et al., 2020) and **MABEL** (He 2In practice, the chosen masked token has an 80% chance of being masked, a 10% chance of being replaced with another word, and a 10% chance of remaining unchanged. $\mu$. et al., 2022) apply counterfactual data augmentation to sentence-level debiasing, where **FairFil** and MABEL adopt the contrastive learning framework training model. **Auto-Debias** (Guo et al., 2022) directly uses the attribute word list and the stereotype words list as the training corpus. We perform the main experiments on BERT (Devlin et al., 2019) and compare CCPA to all baseline models. We also test debiasing performance on DistilBERT (Sanh et al., 2019) and ELEATRA (Clark et al., 2020). All checkpoints use bert-base-uncased, *distilbert-base-uncased*, and *google/electra-base-generator* implemented by Huggingface Transformers library (Wolf et al., 2020). In the continuous prompt tuning stage, the learning rate is set to 1e−5, the batch size is set to 64 and α = 0.005. Following P-Tuning (Liu et al., 2021b), the virtual tokens template of continuous prompts is denoted as a triplet with the length of each element selected on {1, 2, 3}. In the fine-tuning stage, the learning rate is set to 1e−4. The batch size is set to 32, β = 1 and τ = 1. We report the average of the results of three runs over 20 epochs. To compare the baseline models more fairly, we apply the same attribute word lists and training datasets to CDA and Dropout as CCPA. The implementation codes for CDA, Dropout, Sent-Debias, and INLP are provided by (Meade et al., 2022), and the implementation codes for FairFil and AutoDebias are provided by the authors. For MABEL, we report the results from its original paper. ## 3.2 Evaluation Metrics We measure debiasing performance using the common three internal bias evaluation metrics and two external bias evaluation metrics. ## 3.2.1 Internal Bias Evaluation Metrics Sentence Encoder Association Test (SEAT) (May et al., 2019) uses sentence templates to evaluate the association between different sensitive attribute demographic and target concepts. Given the attribute word lists A and B, the target words lists X , Y. The results are presented by effect size, defined as: $$d={\frac{\mu(\{s(x,{\mathcal{A}},{\mathcal{B}})\})-\mu(\{s(y,{\mathcal{A}},{\mathcal{B}})\})}{\sigma(\{s(t,{\mathcal{X}},{\mathcal{Y}})\}_{t\in{\mathcal{A}}\cup{\mathcal{B}}})}},\,\,(7)$$ where x ∈ X and y ∈ Y, µ(·) is the mean function and σ(·) is the standard deviation. And s(w, A, B) 14258 Model Metric SEAT-6 SEAT-6b SEAT-7 SEAT-7b SEAT-8 SEAT-8b Avg. LM SS ICAT CrowS BERT 0.932 0.090 -0.124 0.937 0.783 0.858 0.621 84.17 60.28 66.86 57.86 +CDA 0.596 -0.103 -0.236 0.800 0.394 0.734 0.477 85.47 58.69 70.63 55.35 +Dropout 0.912 0.121 0.321 0.857 0.777 0.867 0.642 85.42 60.11 68.16 55.35 +Sent-Debias 0.336 -0.314 -0.624 **0.514** 0.391 0.436 0.436 **85.60** 59.05 70.11 42.14 +FairFil 0.683 -0.140 -0.616 0.839 **0.049** -0.501 0.471 48.78 **46.44** 45.31 62.89 +Auto-Debias 0.373 **-0.056** 0.745 1.175 0.856 0.823 0.671 81.76 57.33 69.78 52.83 +INLP 0.619 -0.226 0.326 0.591 0.430 0.549 0.457 82.69 58.09 69.31 50.94 +MABEL 0.664 0.167 0.479 0.647 0.465 0.570 0.499 84.80 56.92 73.07 **50.76** +CCPA (Ours) **0.181** -0.317 **0.104** 0.633 0.142 **0.115 0.249** 84.44 56.61 **73.28** 51.57 DistilBERT 1.380 0.446 -0.179 1.242 0.837 1.217 0.883 **84.75** 60.52 66.93 59.75 +CCPA (Ours) **0.409 -0.024 0.138 -0.029 -0.029 0.283 0.152** 81.91 56.47 71.30 **50.31** ELEATRA 0.820 0.036 1.180 1.007 0.782 0.958 0.797 **85.12** 58.15 71.24 52.83 +CCPA (Ours) **0.251 0.012 0.647 0.437 0.720 0.460 0.421** 84.63 52.97 79.61 **49.06** is the bias degree defined as: s(w, A, B) = µ(cos(w, a)) − µ(cos(*w, b*)). The gender-specific subsets of SEAT are 6, 6b, 7, 7b, 8, and 8b. We report the effect size of debiasing models on each subset and the average value of the absolute value of the six subsets, respectively. StereoSet (Nadeem et al., 2021) uses the fill-in-theblank template to investigate the stereotype association of PLM. The Language Modeling Score (LM) is the percentage of stereotype or anti-stereotype words selected by the model based on incomplete contextual sentences. The Stereotype Score (SS) is the percentage of models that choose stereotypes over anti-stereotypes. The Idealized Context Association Test (ICAT) is a comprehensive evaluation index of LM and SS. Crowdsourced Stereotype Pairs (CrowS-Pairs) (Nangia et al., 2020) is a dataset containing pairs of stereotype sentences and anti-stereotype sentences. We report the ratio of mask token probabilities assigned to stereotype sentences rather than anti-stereotype sentences, denoted using CrowS. ## 3.2.2 External Bias Evaluation Metrics Bias-in-Bios (De-Arteaga et al., 2019) is a biography dataset in which each sample is labeled with gender (male or female) and occupation (28 categories). We fine-tune the debiased model on the training set with the goal of predicting occupations. Overall Accuracy result is used to measure task precision, and individual Accuracy results for male and female are used to measure gender fairness. Furthermore, we report the gap between the true positive rates of the male prediction results and the female prediction results denotes as GAP*T P R*, as well as the root mean square of the true positive rates difference for each category denotes as GAPRMS. The closer their score is to 0, the better. They are defined as follows: $$G A P_{T P R}=|T P R_{M}-T P R_{F}|,$$ $$G A P_{R M S}={\sqrt{\frac{1}{|C|}}}\sum_{y\in C}(G A P_{T P R,y})^{2}.\quad(9)$$ $$({\boldsymbol{\delta}})$$ Bias-NLI (Dev et al., 2020) fills gender words and occupation words with stereotypes into sentence templates to form sentence pairs, and the training goal is to inference whether the sentence pair is neutral or not. It defines three metrics to reflect the fairness of the model: 1) Net Neutral (NN), the average probability of neutral labels across all sentence pairs; 2) Fraction Neutral (FN), the proportion of sentence pairs marked as neutral; 3) Threshold:τ (T:τ ), The fraction of samples with neutral probability above τ is reported. ## 3.3 Debiasing Performance Analysis 3.3.1 Internal Debiasing Results Table 1 shows the experimental results of three bias evaluation metrics for CCPA and baseline models on BERT, DistilBERT, and ELEATRA. We also report results for biased BERT, DistilBERT, and ELEATRA as references. The results show that CCPA achieves a better balance between PLMs' fairness and language modeling capability than the baseline models. For BERT, CCPA reduces the average effect size from 0.621 to 0.249, increases ICAT from 66.86 to ![6_image_0.png](6_image_0.png) 73.28, and reduces CrowS from 57.86 to 51.57. Our method has achieved optimal results in the three test subsets of SEAT 6, 7, 8b and the average effect size, and has also been greatly improved in the other test subsets. The results on StereoSet show that CCPA does not weaken BERT's language modeling ability but slightly improves it. Although LM and SS do not achieve optimal results, our comprehensive index ICAT is better than other models. Both FairFil and MABEL are biased by contrastive learning, but their overall performance is not ideal. Although FairFil is outstanding in terms of SS performance, it seriously damages BERT's language modeling ability, possibly because it only considers sentence-level representation and does not retain token-level encoding ability. MABEL achieves promising results on StereoSet and CrowS-Pairs, but its SEAT results must be improved. Regarding overall performance, CCPA outperforms other contrastive learning frameworks, demonstrating that our adversarial training inspired approach can improve the model's learning ability by increasing the complex information in the model. For DistilBERT, CCPA decreases the average effect size from 0.883 to 0.152 and improves ICAT from 66.93 to 71.30. Our model gets excellent experimental results on most test subsets of SEAT and reaches an almost ideal 50.31% result on CrowSPairs. LM score decreases, and we analyze that the semantic information of the original representation is affected by too much debiasing. For ELEATRA, which does not belong to the bert-series PLM, the debiasing effect of CCPA is equally significant, and the experimental results are fairer than the original ELEATRA on all three intrinsic metrics. In detail, CCPA reduced the average effect size from 0.797 to 0.421, increases ICAT by 8.37% without significantly decreasing LM score, and reduces CrowS score by 1.89%. We also perform a small qualitative study by vi- | Model | Acc. | Acc. | Acc. | GAP | GAP | |--------------|--------|--------|--------|-------|-------| | (All) | (M) | (F) | TPR | RMS | | | BERT | 84.14 | 84.69 | 83.50 | 1.189 | 0.144 | | +INLP | 70.50 | - | - | - | 0.067 | | +Sent-Debias | 83.56 | 84.10 | 82.92 | 1.180 | 0.144 | | +FairFil | 83.18 | 83.52 | 82.78 | 0.746 | 0.142 | | +MABEL | 84.85 | 84.92 | 84.34 | 0.599 | 0.132 | | +CCPA (Ours) | 85.65 | 85.41 | 85.95 | 0.544 | 0.121 | | Model | NN | FN | T:0.5 | T:0.7 | |--------------|-------|-------|---------|---------| | BERT | 0.799 | 0.879 | 0.874 | 0.798 | | +Sent-Debias | 0.793 | 0.911 | 0.897 | 0.788 | | +FairFil | 0.829 | 0.883 | 0.846 | 0.845 | | +MABEL | 0.900 | 0.977 | 0.974 | 0.935 | | +CCPA (Ours) | 0.883 | 0.932 | 0.929 | 0.878 | sualizing t-SNE plots of sentence embedding. As can be seen from Figure 3, in BERT, male attribute words are more inclined to target words in the technical field (such as career or *science*) in the embedded space, while female attribute words are more inclined to target words in the humanities (such as family or *poetry*). After using CCPA to debias, it is observed that gender-attribute words are pulled closer together and away from neutral words in the representational space. ## 3.3.2 External Debiasing Results We fine-tune the debiased BERT on two downstream tasks Bias-in-Bios and Bias-NLI to verify the effect of CCPA on external debiasing, and the results are shown in Tables 2 and 3. All our experimental setups are consistent with MABEL, and all the results reported in the table for the baseline models are from MABEL. On the Bias-in-Bios task as shown in Table 2, CCPA not only achieves the optimal results on task accuracy, but also performs the best on all gender fairness metrics except GAPRMS. Although INLP obtains the best score on the GAPRMS metric, its task accuracy is clearly impaired from the reported results. Compared to all baselines, CCPA achieves the best overall debiasing performance while preserving the model's prediction performance on downstream tasks. On the Bias-NLI task as shown in Table 3, CCPA BERT+ SEAT-6 SEAT-6b SEAT-7 SEAT-7b SEAT-8 SEAT-8b Avg. LM SS ICAT CrowS T(1,1,1) CCPA 0.181 -0.317 **0.104** 0.633 **0.142** 0.115 0.249 **84.44** 56.61 **73.28** 51.57 CCPA− -0.198 **-0.100** -0.162 **-0.309** 0.535 0.243 0.258 79.34 57.31 67.75 47.17 CCPA∗ **0.044** -0.295 -0.340 0.425 -0.400 **0.091** 0.266 78.94 **56.13** 69.26 **50.31** T(2,2,2) CCPA 0.126 -0.135 -0.379 0.144 0.416 0.056 0.209 **82.40 55.36 73.56** 51.57 CCPA− 0.264 **0.065 -0.372 0.127** 0.150 0.481 0.243 79.23 57.44 67.44 48.43 CCPA∗ **0.0918** -0.178 0.509 0.311 **0.144** 0.271 0.251 79.84 54.94 71.95 **50.94** T(3,3,3) CCPA **0.034 -0.006 0.193** 0.278 -0.189 0.346 0.174 82.62 54.80 74.68 **49.06** CCPA− 0.149 0.037 -0.826 **-0.106** -0.124 **-0.303** 0.258 79.18 59.63 63.92 43.40 CCPA∗ 0.119 -0.131 -0.334 0.225 **-0.098** -0.365 0.212 79.95 56.93 68.87 **50.94** NO*prompt* 0.325 0.186 0.342 0.535 0.144 0.553 0.347 81.32 54.82 73.48 60.38 NOprompt+*mask* 0.730 -0.012 -0.185 -0.530 0.927 -0.158 0.424 61.85 53.98 56.92 34.59 achieves sub-optimal results on all the metrics. It is worth stating that MABEL is a debiasing method trained on the NLI task, which we analyze as the main reason for its most outstanding performance. Even so, the strong debiasing effect shown by CCPA on task Bias-NLI is heartening. The results of the internal debiasing experiment and the external debiasing experiment show that our proposed CCPA has outstanding performance in mitigating gender bias in PLMs' encoding. CCPA has an efficient debiasing performance, which answers the first question (Q1) proposed at the beginning of this section. ## 3.4 Ablation Analysis We conduct ablation experiments on BERT to investigate how each component affects CCPA performance. The results are shown in Table 4. T(1,1,1) indicates that the continuous prompt template is a triplet with one virtual token for each element, i.e., the length of prompts is 3. By analogy, T(2,2,2) and T(3,3,3) represent prompt templates of lengths 6 and 9. The purpose of this setting is to make it easier to observe the effect of the prompts' length on the model. In the experimental group of each template, we compare three versions of CCPA: the original CCPA, the version without Lmlm represented as CCPA− and the version without L*mahal* represented as CCPA∗. In addition, we have experimented with both CCPA without prompts and CCPA without prompts and Lmlm. It is observed from the experimental results that the debiasing ability of CCPA increases with the rise of the template's length. This indicates that longer continuous prompt embeddings bring more difficult information to the model, thus increasing the debiasing effort. However, more extended templates can cause the original sentence semantics to be broken and thus weaken PLM's language modeling capability. In each experimental group, both CCPA− and CCPA∗show a decrease in the results of the three evaluation metrics compared to CCPA. This phenomenon verifies that both MLM-assisted loss and Mahalanobis distance constraint benefit CCPA. Overall, MLM has a greater influence, especially on SS and CrowS, which may be because random mask tokens train encoders to retain tokenlevel semantic information. In addition, the results of NO*prompt* verify that continuous prompts play an essential role in CCPA. NOprompt+*mask* tests the effect of finetuning PLMs based solely on contrastive learning. Unsurprisingly, the performance on all indexes could be better. The results of NO*prompt* and NOprompt+*mask* again reflect our method's effectiveness. The ablation studies answer our second question (Q2) by exploring the role played by each component of the CCPA. ## 3.5 Language Modeling Capability Analysis We perform experiments on nine natural language understanding tasks of the GLUE benchmark to verify the language modeling capability of CCPA on downstream tasks. In task-specific fine-tuning, we set the learning rate to 2e − 5 and the batch size to 32 for all models. As in Table 5, CCPA's performance in 9 tasks is comparable to that of the original BERT, and the average results are almost equivalent to BERT's. CCPA also shows similar performance on DistilBERT, indicating that our model is effective on other models besides BERT. Combined with the LM score in Table 1, the experiment shows that CCPA can debias without damaging the language modeling capability of PLMs, thus answering the third research question (Q3). Model CoLA MNLI MRPC QNLI QQP RTE SST STS-B WNLI Average BERT 56.78 84.76 **89.54** 91.51 88.06 64.62 **93.35** 88.24 **56.34** 79.24 +CDA 2.07 84.84 81.22 84.84 87.85 47.29 92.32 40.83 43.66 62.77 +Dropout 2.07 84.78 81.22 91.49 88.02 47.29 92.09 40.87 43.66 63.50 +Sent-Debias 55.72 **84.94** 88.81 91.54 87.88 63.90 93.12 88.23 **56.34** 78.94 +FairFil 55.72 84.85 88.33 **91.84** 87.43 64.98 93.12 88.55 50.7 78.39 +Auto-Debias 57.01 84.91 88.54 91.65 87.92 64.62 92.89 88.43 40.85 77.42 +INLP 56.50 84.78 89.23 91.38 87.94 **65.34** 92.66 88.73 54.93 79.05 +MABEL **57.80** 84.50 85.00 91.60 **88.10** 64.30 92.20 89.20 - **81.59** +CCPA 55.91 84.73 88.65 91.42 87.98 64.93 93.09 88.44 55.66 78.98 DistilBERT **47.93** 82.01 **88.47 88.61** 86 68 **58.84** 90.71 **86.26 56.34 76.21** +CCPA 46.73 **82.53** 86.99 87.76 **86.85** 56.26 **90.83** 85.89 55.93 75.53 ## 4 Related Work We divide debiasing methods into two categories based on the debiasing strategy: task-specific methods and task-agnostic methods. ## 4.1 Task-Specific Methods Task-specific methods adopt the strategy of debiasing in the fine-tuning stage of the downstream task, of which the downstream task is known (Han et al., 2021; Chi et al., 2022). One representative work is INLP (Ravfogel et al., 2020, 2022), which repeatedly trains a linear classifier that predicts the target concept, and then projects the representation into the null space of the classifier's weight matrix to remove the representation bias. Contrastive learning is proposed to mitigate bias in classifier training (Shen et al., 2021). It encourages instances sharing the same class labels to have similar representations while ensuring that protected attributes have different distributions. These methods use attribute words to label training data without CDA. However, they are biased towards specific downstream tasks and cannot be applied to other tasks in general. When training data change, task-specific methods are difficult to transfer to new tasks. ## 4.2 Task-Agnostic Methods Task-agnostic methods adopt the strategy of debiasing representation or processing unbalanced data before the downstream task, and they can be applied to any downstream task (Dev et al., 2020, 2021). Most of these methods apply counterfactual data augmentation to augment the unbalanced corpus and then debias the augmented text information. Counterfactual data augmentation (Lu et al., 2020) is a general approach to augment corpora through causal intervention and has since been widely used to mitigate social biases. Different variants of counterfactual data augmentation have been proposed, such as Sent-Debias (Liang et al., 2020), FairFil (Cheng et al., 2021), MABEL (He et al., 2022), to name a few examples. Task-agnostic methods primarily use the CDA to balance the training corpus by constructing counterfactual pairs specific to different demographic groups. However, simply applying CDA to the original corpus makes minor changes, constraining the representation space to a narrow range. This makes the model easily fit the differences between counterfactual pairs, weakening the debiasing ability. Unlike existing CDA methods, we train a generator that encodes continuous prompts before fine-tuning PLM. The goal is to widen the representation distance between different groups to increase the difficulty of the model-learning process. ## 5 Conclusions Inspired by adversarial training, we propose CCPA, a two-stage debiasing model that combines contrastive learning with continuous prompts. In the continuous prompt tuning stage, we train a generator encoding continuous prompt embeddings to increase the representative distance between counterfactual pairs. In the fine-tuning stage, we use contrastive learning to reduce the representation distance between the augmented sample pairs. By increasing the difficulty of the training process, CCPA enables PLMs to learn a stronger debiasing ability. Extensive experiments on BERT and DistilBERT show that CCPA effectively reduces social bias in PLM representation while retaining language modeling capability. ## Limitations In this work, we focus on debiasing the gender bias for PLMs. In the future, we will try to mitigate social biases other than gender, such as race and religion. In addition, we also plan to extend our debiasing method to more language models, such as Natural Language Generation (NLG) models. ## Ethics Statement This paper has been thoroughly reviewed for ethical considerations and has been found to be in compliance with all relevant ethical guidelines. The paper does not raise any ethical concerns and is a valuable contribution to the field. ## Acknowledgments We express gratitude to the anonymous reviewers for their hard work and kind comments. The work was supported in part by the National Natural Science Foundation of China (No.62272191, No.61976102), the Science and Technology Development Program of Jilin Province (No.20220201153GX), the Interdisciplinary and Integrated Innovation of JLU (No.JLUXKJC2020207), and the Graduate Innovation Fund of Jilin University (No.2022214). ## References Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Trans. Assoc. Comput. Linguistics, TACL, 6:587–604. Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In *Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems,NeurIPS*, pages 4349– 4357. Hao Chen, Rui Xia, and Jianfei Yu. 2021. Reinforced counterfactual data augmentation for dual sentiment classification. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, EMNLP, pages 269–278. Association for Computational Linguistics. Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, and Lawrence Carin. 2021. Fairfil: Contrastive neural debiasing method for pretrained text encoders. In Proceedings of the 9th International Conference on Learning Representations, ICLR. OpenReview.net. Jianfeng Chi, William Shand, Yaodong Yu, Kai-Wei Chang, Han Zhao, and Yuan Tian. 2022. Conditional supervised contrastive learning for fair text classification. *CoRR*, abs/2205.11485. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In Proceedings of the 8th International Conference on Learning Representations, ICLR. OpenReview.net. Sarkar Snigdha Sarathi Das, Arzoo Katiyar, Rebecca J. Passonneau, and Rui Zhang. 2022. Container: Fewshot named entity recognition via contrastive learning. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics, ACL, pages 6338–6353. Association for Computational Linguistics. Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, Jennifer T. Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Cem Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT, pages 120–128. ACM. Pieter Delobelle, Ewoenam Kwaku Tokpo, Toon Calders, and Bettina Berendt. 2022. Measuring fairness with biased rulers: A comparative study on bias metrics for pre-trained language models. In *Proceedings of the 2022 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL, pages 1693–1706. Association for Computational Linguistics. Sunipa Dev, Tao Li, Jeff M. Phillips, and Vivek Srikumar. 2020. On measuring and mitigating biased inferences of word embeddings. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, pages 7659–7666. AAAI Press. Sunipa Dev, Tao Li, Jeff M. Phillips, and Vivek Srikumar. 2021. Oscar: Orthogonal subspace correction and rectification of biases in word embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 5034–5050. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL, pages 4171–4186. Association for Computational Linguistics. Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Autodebias: Debiasing masked language models with automated biased prompts. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics, ACL, pages 1012–1023. Association for Computational Linguistics. Michael Gutmann and Aapo Hyvärinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In *Proceedings* of the 13th International Conference on Artificial Intelligence and Statistics, AISTATS, volume 9 of JMLR Proceedings, pages 297–304. JMLR.org. Xudong Han, Timothy Baldwin, and Trevor Cohn. 2021. Diverse adversaries for mitigating bias in training. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational* Linguistics: Main Volume, EACL, pages 2760–2765. Association for Computational Linguistics. Jacqueline He, Mengzhou Xia, Christiane Fellbaum, and Danqi Chen. 2022. MABEL: Attenuating gender bias using textual entailment data. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP, pages 4582–4597. Association for Computational Linguistics. Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and LouisPhilippe Morency. 2020. Towards debiasing sentence representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL, pages 5502–5515. Association for Computational Linguistics. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021a. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR, abs/2107.13586. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. GPT understands, too. *CoRR*, abs/2103.10385. Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2020. Gender bias in neural natural language processing. In *Logic, Language, and Security - Essays Dedicated to Andre* Scedrov on the Occasion of His 65th Birthday, volume 12300 of *Lecture Notes in Computer Science*, pages 189–202. Springer. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 622–628. Association for Computational Linguistics. Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy. 2022. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics, ACL, pages 1878–1898. Association for Computational Linguistics. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In *Proceedings of the 5th International Conference on Learning Representations, ICLR*. OpenReview.net. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. Stereoset: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP, pages 5356–5371. Association for Computational Linguistics. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1953–1967. Association for Computational Linguistics. Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, and Louis-Philippe Morency. 2014. Computational analysis of persuasiveness in social multimedia: A novel dataset and multimodal prediction approach. In *Proceedings of the 16th International Conference on Multimodal Interaction, ICMI*, pages 50–57. ACM. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL, pages 2227–2237. Association for Computational Linguistics. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, pages 2463–2473. Association for Computational Linguistics. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL, pages 527– 536. Association for Computational Linguistics. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL, pages 7237–7256. Association for Computational Linguistics. Shauli Ravfogel, Michael Twiton, Yoav Goldberg, and Ryan Cotterell. 2022. Linear adversarial concept erasure. In *Proceedings of the International Conference on Machine Learning, ICML*, volume 162 of Proceedings of Machine Learning Research, pages 18400–18421. PMLR. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. *CoRR*, abs/1910.01108. Timo Schick and Hinrich Schütze. 2020. Few-shot text generation with pattern-exploiting training. *CoRR*, abs/2012.11926. Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 255–269. Association for Computational Linguistics. Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, and Lea Frermann. 2021. Contrastive learning for fair representations. *CoRR*, abs/2109.10645. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 4222–4235. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1631–1642. ACL. Michael Völske, Martin Potthast, Shahbaz Syed, and Benno Stein. 2017. Tl;dr: Mining reddit to learn automatic summarization. In *Proceedings of* the Workshop on New Frontiers in Summarization, NFiS@EMNLP, pages 59–63. Association for Computational Linguistics. Yufei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, and Daxin Jiang. 2022. Promda: Prompt-based data augmentation for lowresource NLU tasks. In *Proceedings of the 60th Annual Meeting of the Association for Computational* Linguistics, ACL, pages 4242–4255. Association for Computational Linguistics. Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models. *CoRR*, abs/2010.06032. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP, pages 38–45. Association for Computational Linguistics. Zonghan Yang, Yong Cheng, Yang Liu, and Maosong Sun. 2019. Reducing word omission errors in neural machine translation: A contrastive learning approach. In *Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL*, pages 6191–6196. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL, pages 629–634. Association for Computational Linguistics. Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [MASK]: learning vs. learning to recall. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 5017–5033. Association for Computational Linguistics. Ran Zmigrod, S. J. Mielke, Hanna M. Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL, pages 1651–1661. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract; 5 Conclusions ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 3 Experiments ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Our model uses the standard pre-trained language model as the backbone network, and the number of parameters and consumption are equivalent to that of the pre-trained language model. This part is not the focus of our discussion and is not specifically mentioned. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3.1.2 Baselines & Implementation Detai ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 3.1.2 Baselines & Implementation Detai ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3.1.2 Baselines & Implementation Detai D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zou-etal-2023-towards
Towards Understanding Omission in Dialogue Summarization
https://aclanthology.org/2023.acl-long.798
Dialogue summarization aims to condense the lengthy dialogue into a concise summary, and has recently achieved significant progress. However, the result of existing methods is still far from satisfactory. Previous works indicated that omission is a major factor in affecting the quality of summarization, but few of them have further explored the omission problem, such as how omission affects summarization results and how to detect omission, which is critical for reducing omission and improving summarization quality. Moreover, analyzing and detecting omission relies on summarization datasets with omission labels (i.e., which dialogue utterances are omitted in the summarization), which are not available in the current literature. In this paper, we propose the OLDS dataset, which provides high-quality omission labels for dialogue summarization. By analyzing this dataset, we find that a large improvement in summarization quality can be achieved by providing ground-truth omission labels for the summarization model to recover omission information, which demonstrates the importance of omission detection for omission mitigation in dialogue summarization. Therefore, we formulate an omission detection task and demonstrate our proposed dataset can support the training and evaluation of this task well. We also call for research action on omission detection based on our proposed datasets. Our dataset and codes are publicly available.
# Towards Understanding Omission In Dialogue Summarization Yicheng Zou1∗, Kaitao Song2†, Xu Tan2**, Zhongkai Fu**2, Qi Zhang1, Dongsheng Li2, **Tao Gui**3† 1School of Computer Science, Fudan University, Shanghai, China 2Microsoft Research Asia, China 3Institute of Modern Languages and Linguistics, Fudan University, Shanghai, China {yczou18,qz,tgui}@fudan.edu.cn {kaitaosong,xuta,zhongfu,dongsheng.li}@microsoft.com ## Abstract Dialogue summarization aims to condense the lengthy dialogue into a concise summary, and has recently achieved significant progress. However, the result of existing methods is still far from satisfactory. Previous works indicated that omission is a major factor in affecting the quality of summarization, but few of them have further explored the omission problem, such as how omission affects summarization results and how to detect omission, which is critical for reducing omission and improving summarization quality. Moreover, analyzing and detecting omission relies on summarization datasets with omission labels (i.e., which dialogue utterances are omitted in the summarization), which are not available in the current literature. In this paper, we propose the OLDS dataset, which provides high-quality Omission Labels for Dialogue Summarization. By analyzing this dataset, we find that a large improvement in summarization quality can be achieved by providing ground-truth omission labels for the summarization model to recover omission information, which demonstrates the importance of omission detection for omission mitigation in dialogue summarization. Therefore, we formulate an omission detection task and demonstrate our proposed dataset can support the training and evaluation of this task well. We also call for research action on omission detection based on our proposed datasets. Our dataset and codes are publicly available 1. ## 1 Introduction With the exponential increase in the volume of conversational messages from daily life, there is a growing demand for dialogue summarization (Murray and Carenini, 2008; Gliwa et al., 2019; Chen and Yang, 2020; Zhong et al., 2021; Zou et al., ∗This work was done when the first author was an intern at Microsoft Research Asia. †Corresponding authors. 1https://github.com/microsoft/ MSummarizer/ 2021c), which compresses lengthy interactions into a more concise and structured piece of text while preserving the most important and relevant information. Recent years have witnessed significant progress in abstractive dialogue summarization, especially using large-scale pre-trained language models (Lewis et al., 2020; Raffel et al., 2020). Despite the advances in a high level of fluency and coherence, existing models are still prone to generate defective summaries (Krysci ´ nski et al. ´ , 2019; Maynez et al., 2020; Tang et al., 2022) that limit their practical usage. Previous works (Chen and Yang, 2020; Liu et al., 2021; Tang et al., 2022) have investigated the taxonomy of errors involved in output summaries, and human evaluations revealed that the majority of errors fall into the category of omission, which often leads to incomplete summaries where critical facts are lost. However, few of these works have further analyzed the omission problem, let alone addressing this problem. To reduce omission rate and improve summarization quality, a comprehensive analysis on omission problem (e.g., how omission affects summary results) and a precise omission detection (i.e., to locate which dialogue utterances are omitted in the summarization) is important. However, there are no omission related datasets in dialogue summarization literature to support such analysis and detection. Hence, in this work, we construct the OLDS dataset, which provides high-quality Omission Labels for Dialogue Summarization. Our dataset is built upon five existing benchmarks covering different domains. For each dialogue, we use different abstractive models to generate diverse candidates and propose a reference-based strategy to automatically label omissions for these candidates. The human evaluation indicates that our OLDS dataset presents a high quality of omission labels. Based on the curated OLDS dataset, we comprehensively investigate the omission problem in dialogue summarization from multiple aspects. **First**, 14268 we analyze the proportion of candidates with omission errors and the position distribution of omitted information in dialogues. The results reveal that omission is a severe problem that frequently occurs in dialogue summarization. **Second**, we measure the correlation between the omission rate and multiple reference-based metrics (e.g., ROUGE and BERTScore), discovering that omission is one of the decisive factors influencing the summary evaluation results. **Third**, we explore the potential performance improvement brought by utilizing the omission information in a post-editing manner. The analyses probe that candidate summaries could be effectively improved as long as the model is provided with the omitted dialogue utterances. Hence, how to accurately locate omission information in dialogue naturally becomes a critical question. To pave the way to omission mitigation and summary improvement, we formulate the task of omission detection, which aims to identify the omitted utterance given the whole dialogue utterances and the generated summary with potential omission. In addition, we present three different frameworks as baselines for the omission detection task, including pair-wise classification, sequence labeling, and pointer network extraction. Experimental analyses on the OLDS dataset reveal that omission detection, as a promising direction to assessment and improvement for dialogue summarization, poses significant values and challenges. The contributions of our paper are as follows: - We propose OLDS, a dataset with high-quality omission labels for dialogue summarization, to facilitate the research on the omission problem. - Based on OLDS, we systematically analyze the omission problem and demonstrate the significance of omission in dialogue summarization. - We introduce the omission detection task that paves the way to omission mitigation and summary improvement. We design 3 frameworks as baselines and conduct comprehensive analyses to provide possible directions for solving this task. ## 2 The Olds **Dataset** In this section, we first define what is omission. Then, we introduce OLDS, a dataset that contains Omission Labels for Dialogue Summarization that facilitates the analysis of omission problem and Dialogue: (01) **Adam:** Have you talked to May? (02) **Karen:** Yes, yesterday, why? (03) **Adam:** I just talked to her and I must admit I **worry** about her. (04) **Karen:** Me too, I suggested she should see a specialist, but she wasn't very happy about it. (05) **Adam:** No wonder... (06) **Karen:** I know, but I think this is serious. She's saying she's depressed, like everyone around, but in her case it may be true. (07) **Adam:** She was telling me she doesn't feel like doing anything, she's bored all the time, she never feels happy. It sounds like a real, typical depression. ...... ...... (12) **Adam:** Yes, but she doesn't want to see a specialist. Basically, she doesn't want to see anyone. (13) **Karen:** Hm... I don't know... How about I call someone for advice? So we could know what to do. (14) **Adam:** Sounds rational, do you know anyone you could call? Don't mention her name. (15) **Karen:** Of course I won't! I have a friend who's a **psychologist**, we can trust her. I'll let you know. (16) **Adam:** Thank you Karen! Reference summary: Adam and Karen are **worried** that May suffers from depression. Karen will call her friend who is a **psychologist** and ask for advice. Candidate summary: May is depressed. Karen suggested she should see a specialist, but she doesn't want to. Karen will call her friend for advice. Omission utterances (Labels): (03) (15) Table 1: An example of the OLDS dataset. The dialogue is from SAMSum and the candidate summary is generated from BARTlarge. The salient words are underlined, and the omission information is highlighted in red. the exploration of how to identify omission content. Finally, we conduct human assessment that demonstrates the high quality of OLDS. ## 2.1 The Definition Of Omission In summarization tasks, omission2is one of the most common factual errors in abstractive summaries. It usually refers to the missing content in the candidates, which is presented in the gold reference. The definition of omission content is flexible, which could refer to either the omitted keywords, text spans, or utterances. In dialogues, an utterance has a clearer boundary compared to text spans and can be viewed as a basic unit for identification and evaluation. Therefore, in this paper, we mainly focus on **utterance-level omission** and provide utterance-level labels. Table 1 shows an example of our OLDS dataset, which contains the original dialogue, reference summary, candidate summary, and omission labels. In this example, the candidate summary omits three key messages: the person "Adam", the attitude "worried" and the per-2In some previous works, omission is also called missing information (Chen and Yang, 2020; Tang et al., 2022). | Domain | Split | # of | Avg. Len. of Len. of | # of candidate summaries (Avg. ROUGE-1 score) | | | | | | | | | | | | | |-----------------------------|--------------|--------|------------------------|-----------------------------------------------------------------------|-----------------------------------------------------------------------|---------------------|-----|----------------------------------------|--------|---------------------------|--------|--------|-----|--------|-----|--------| | dialogs turns dialogs summ. | BARTL | BARTB | T5B | T5S | Transformer | PegasusL | | | | | | | | | | | | SAMSum | Train 14,732 | 11.2 | 124.1 | 23.4 | 25,424 (50.6) 12,687 (48.2) 29,473 (47.2) 32,959 (41.0) 46,777 (37.7) | 0 | (-) | | | | | | | | | | | Dev. | 818 | 10.8 | 121.2 | 23.4 | 1,636 (54.4) 1,636 (51.1) 1,636 (51.0) 1,636 (44.2) 1,636 | (39.2) 1,636 (51.1) | | | | | | | | | | | | Test | 819 | 11.3 | 126.6 | 23.1 | 1,638 (52.6) 1,638 (49.2) 1,638 (49.3) 1,638 (43.5) 1,638 | (37.9) 1,638 (50.4) | | | | | | | | | | | | Train 12,460 | 9.5 | 187.5 | 31.0 | 20,766 (46.1) 10,132 (44.1) 26,897 (44.7) 37,056 (39.5) 29,749 (39.1) | 0 | (-) | | | | | | | | | | | | DialogSum Dev. | 500 | 9.4 | 185.0 | 29.0 | 1,000 (49.5) 1,000 (46.8) 1,000 (46.2) 1,000 (40.3) 1,000 | (40.1) 1,000 (48.4) | | | | | | | | | | | | Test | 500 | 9.7 | 192.5 | 28.4 | 1,000 (46.9) 1,000 (44.3) 1,000 (44.7) 1,000 (39.1) 1,000 | (36.8) 1,000 (45.8) | | | | | | | | | | | | EmailSum | Train | 1,800 | 6.5 | 231.3 | 26.9 | 2,581 (33.2) | 730 | (33.8) 3,939 (33.0) 3,203 (29.7) 7,547 | (24.4) | 0 | (-) | | | | | | | Dev. | 249 | 6.5 | 227.2 | 26.2 | 498 | (37.8) | 498 | (36.6) | 498 | (36.1) | 498 | (34.0) | 498 | (24.8) | 498 | (35.9) | | Test | 500 | 6.5 | 243.0 | 28.2 | 1,000 (37.0) 1,000 (36.2) 1,000 (35.3) 1,000 (32.4) 1,000 | (25.7) 1,000 (35.2) | | | | | | | | | | | | QMSum | Train | 1,095 | 52.6 | 1,137.4 | 71.2 | 2,973 (38.3) | 624 | (37.2) 2,197 (31.7) 2,617 (29.9) 2,539 | (29.5) | 0 | (-) | | | | | | | Dev. | 237 | 57.7 | 1,145.4 | 71.4 | 474 | (36.0) | 474 | (33.9) | 474 | (33.0) | 474 | (28.1) | 474 | (29.5) | 474 | (24.9) | | Test | 244 | 55.6 | 1,152.2 | 63.9 | 488 | (37.4) | 488 | (35.0) | 488 | (33.8) | 488 | (29.4) | 488 | (29.1) | 488 | (24.6) | | Train | 879 | 10.5 | 244.0 | 48.2 | 678 | (47.3) | 649 | (47.2) | 919 | (43.2) 3,901 (30.7) 2,643 | (34.9) | 0 | (-) | | | | | TweetSumm Dev. | 110 | 10.2 | 226.1 | 48.4 | 220 | (52.6) | 220 | (50.0) | 220 | (48.7) | 220 | (34.8) | 220 | (35.4) | 220 | (49.0) | | Test | 110 | 10.6 | 258.2 | 47.8 | 220 | (48.4) | 220 | (46.9) | 220 | (44.4) | 220 | (32.6) | 220 | (36.1) | 220 | (45.4) | sona "psychologist", and thus the corresponding utterance-level omission labels are the 3rd and 15th utterances in the original dialogue. ## 2.2 Dataset Creation OLDS is a dataset that collects multiple candidates for dialogue summarization and provides their corresponding omission labels at the utterance level. This dataset first collects multiple public benchmarks, including SAMSum (Gliwa et al., 2019), DialogSum (Chen et al., 2021), EmailSum (Zhang et al., 2021), QMSum (Zhong et al., 2021) and TweetSumm (Feigenblat et al., 2021), to cover different dialogue domains. Then, in order to collect samples with omission contents, we still need to generate candidate summaries for each dialogue. To gain deeper insights into the omission problem induced by models with different capacities, we select 6 different model settings 3, including BARTlarge/base (Lewis et al., 2020), T5base/small (Raffel et al., 2020), Transformers (Vaswani et al., 2017), and Pegasuslarge (Zhang et al., 2020), to generate candidate summaries 4. Finally, based on the collected candidate summaries, we need to identify which salient information is omitted in these candidates. Therefore, we elaborately design a strategy to label omission automatically and the details are described in the next subsection. As a result, our OLDS is able to obtain 3We do not use extractive models because dialogue summaries are very abstractive. There is a huge gap in the format and style between summary sentences and dialogue utterances. 4Pegasuslarge is only used to generate summaries for dialogues in the validation/test sets. The purpose is to conduct the robustness evaluation on candidates from unseen sources. multiple candidates and their corresponding omission label for each dialogue. More details about the dataset creation can refer to Appendix A. ## 2.3 The Automatic Labeling Strategy It is generally a non-trivial task to identify the missing critical content in candidate summary. Fortunately, the existing datasets provide reference summaries as ground truths. We could locate the omitted information in dialogue by directly comparing candidates with references. Thus, we design a pipeline strategy for automatic omission labeling, which is composed of three steps: oracle extraction, omission identification, and redundancy removal. Appendix A.1 shows an example of the complete process of automatic omission labeling. Oracle Extraction The first step is to match summaries to the corresponding utterances in the dialogue. Following Nallapati et al. (2017), we use a greedy algorithm to select utterances from the dialogue that maximizes the Rouge score (Lin, 2004) with respect to the summary. We return this subset of utterances as oracle labels, representing their membership in the summary. We define the extracted oracle labels for reference summaries and candidate summaries as *Gold Oracle* and *Candidate Oracle*, denoted as G and C respectively. Omission Identification The goal of this step is to find out the omission set O. An intuitive solution is to calculate the complement of candidate oracle in gold oracle as G − C = {u|u ∈ *G, u /*∈ C}. Nevertheless, it is an imperfect solution because the utterances in C might still contain omitted words | Domain | Avg. Accept Num. (Rate) | kappa | |-----------|---------------------------|---------| | SAMSum | 182.3±3.6 (91.2%) | 0.689 | | DialogSum | 188.0±4.4 (94.0%) | 0.616 | | EmailSum | 192.3±2.5 (96.2%) | 0.633 | | QMSum | 197.0±1.0 (98.5%) | 0.549 | | TweetSumm | 194.0±1.7 (97.0%) | 0.656 | | Overall | 953.7±4.2 (95.4%) | 0.653 | or phrases. For instance, in Table 1, the 15th utterance with a phrase "I have a friend who's a psychologist" matches the key information "*friend*" in both reference and candidate, and this utterance would be included in both G and C. However, the keyword "*psychologist*" is actually omitted in the candidate, so the 15th utterance should be labeled as an omission. In other words, some utterances in the intersection of G and C may also be omissions. To further discover the potential omission utterances from G ∩ C = {u|u ∈ *G, u* ∈ C}, we empirically adopt a word-level comparison approach. Specifically, for each utterance u in G ∩ C, we further extract the overlapping words WuG / Wu C 5 between u and reference/candidate summary. If WuG ̸⊆ Wu C , we deem this corresponding utterance includes some key messages that are omitted in the candidate, and thus it should be labeled as an omission. During this process, we could obtain the omission words of utterance u, which is denoted as Wu = {w|w ∈ WuG*, w /*∈ Wu C}. Redundancy Removal After the omission identification, we can obtain the omission set O. However, some utterances in O can be redundant since they could share the identical missing content. For example, for utterance u1 and u2, their omission words Wu1 and Wu2 can be equal so that we can argue these two utterances share similar omission information. To reduce this redundancy, we only keep the utterance with the front position if multiple utterances have the same omission words. ## 2.4 Quality Assessment To assess the quality of the extracted omission labels for the OLDS dataset, we also conducted human evaluation to validate the correctness of the labeled utterances. We recruited three annotators 5We process words in a case-insensitive setting. We keep the original form of words but perform word stemming for comparison. Besides, stop words are removed. with NLP backgrounds and each annotator is required to answer the question whether the set of labeled omission utterances is Accept or *Reject*. The set should be marked as *Reject* as long as it misses any critical utterance (recall of labeled omissions), or includes any redundant or uninformative utterance (precision of labeled omissions). Otherwise, it should be marked as *Accept*. To this end, we randomly sampled 200 dialogue-candidate pairs from each domain for assessment. Table 3 reports the results of the human evaluation for quality assessment. The acceptance rate of human evaluation ranges between 91.2%-98.5%, which validates the effectiveness of our omission extraction strategy. Furthermore, in order to evaluate the reliability of this assessment, we measure the agreement between different annotators by reporting Fleiss' Kappa values (Fleiss et al., 1971) among the possible combinations of two annotators, as reported in Table 3. We find that the overall Kappa score is 0.653, which shows the substantial agreement between annotators. Overall, the results of human evaluation demonstrate that our omission extraction strategy is able to produce high-quality omission labels automatically. More details about human evaluation can refer to Appendix A.4. ## 2.5 Dataset Format And Statistics An example of our OLDS dataset is shown in Table 1, which contains the basic information, such as dialogue, reference, candidate, and omission labels. In the released version of OLDS, we further provide some auxiliary information. The detailed dataset format and a complete example can be seen in Appendix A.5. Table 2 shows the statistics of the OLDS dataset. We can see that the dialogues are from different domains, with different lengths and turns. Besides, the lengths of summaries also differ from each other, and the employed abstractive models are able to produce candidates with different qualities. We expect that our dataset could pave the way for analyzing the omission problem across different domains and diverse candidate summaries. ## 3 Understanding The Omission Problem In this section, we explore the omission problem in different aspects and analyze why we should pay attention to omission in dialogue summarization. ![4_image_0.png](4_image_0.png) Domains SAM. Dial. Email. QM. Tweet. RG-1 -0.563 -0.409 -0.445 -0.470 **-0.574** RG-2 -0.448 -0.342 -0.394 -0.480 -0.524 RG-L -0.510 -0.398 **-0.457** -0.494 -0.547 BLEU -0.332 -0.289 -0.231 -0.397 -0.467 BSB -0.549 -0.502 -0.418 -0.463 -0.485 BSL -0.562 **-0.504** -0.445 **-0.521** -0.546 BLEURT -0.567 -0.461 -0.292 -0.410 -0.525 ## 3.1 Distribution Of Omission Information To explain the importance of the omission problem, we answer the following two questions. Q1: How serious is the omission problem? For each abstractive model used in OLDS, we calculate the percentage of candidates which include omission information (i.e., the omission set O ̸= ∅). Generally, a lower percentage means the model's ability to identify the salient information in dialogue is more powerful. Figure 1 shows the statistical results of each model on different dialogue domains. We find that using pre-trained models always produces a lower ratio than the vanilla Transformer. Nevertheless, even using pre-trained models, we find it still reaches a high omission ratio of at least 70%. The omission phenomenon is worse in QMSum and TweetSumm, that almost 90% of their candidates have omission errors. From this perspective, we can conclude that omission is a general and grievous problem in dialogue summarization, and how to alleviate the omission problem is still intractable. Q2: How is the omission information distributed in the dialogue? To answer this question, we investigate the position distribution of omissions in dialogues. Just as shown in Figure 2, we observe ![4_image_1.png](4_image_1.png) that the omitted utterances are randomly distributed in each position of the dialogue, regardless of its length and domain. This position distribution also indicates that dialogues are unstructured, and how to identify the dispersed key information precisely is still difficult for current models. ## 3.2 **Correlation With Reference-Based Metrics** Since omission is defined by the difference between references and candidates, we thus investigate the correlation between the amount of omission content and a variety of reference-based metrics, to verify whether the omission rate of a candidate summary could affect these metrics. Here, we calculate the *omission rate* as follows: $ \text{OmissionRate}=\frac{\sum_{u\in O}|W^u|}{\sum_{u\in G}|W_G^u|},\qquad(1)$ u = ... ... where Wuand WuG denote the set of omitted words and the set of gold oracle words shared across u and the reference, respectively. It directly measures the amount of key information omitted by a summary, and a lower rate indicates the candidate is of higher quality. Table 4 demonstrates the Pearson correlations between omission rate and other reference-based metrics, including n-gram based metrics ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002), embedding-based metric BERTScore (Zhang et al., 2019), and learning-based metric BLEURT (Sellam et al., 2020). The results indicate that most of the reference-based metrics mod- ![5_image_0.png](5_image_0.png) erately correlate with the omission rate, among which BERTScoreLarge is the most stable metric that has a better correlation with the amount of omission content. By contrast, BLEU shows the least correlation because it is a precision-oriented metric. Empirical analyses indicate that the omission rate is strongly correlated with a wide range of evaluation metrics, and so how to mitigate the omission problem is one of the most important priorities to improve the quality of dialogue summaries. ## 3.3 Omission-Based Summary Refinement The above analyses demonstrate the importance of omission information. So we raise another question: what happens if we utilize the omissions to refine the summary quality? Hence, we adopt a postediting method to investigate the potential of using omissions. Specifically, we formulate summary refinement as a seq2seq task to predict the gold summary. Instead of inputting raw dialogue, we use the concatenation of candidate summary, omission utterances, and non-omission utterances as the input: "*Candidate* <sep> *Omission* <sep> *NonOmission*". By dividing dialogue utterances into the omission and non-omission groups, the model is able to distinguish omission information while perceiving the whole dialogue simultaneously. If the omission group is empty, it is identical to using candidate and raw dialogue for refinement, and we consider it as the baseline for comparison. We use BARTlarge and T5small as the backbone model, and the results are shown in Figure 3. The results show that performances are significantly enhanced by the refinement using omissions compared to that ![5_image_1.png](5_image_1.png) using raw dialogues. Notably, on some datasets like SAMSum and DialogSum, T5small with supplementary omission information even outperforms the raw BARTlarge, which indicates that omissionbased refinement is a promising direction for quality improvement in dialogue summarization. In addition, Figure 3 also shows an upper bound of performance boost by post-editing because we directly employ the gold omission utterances. However, in real situations, we may identify some incorrect omissions. To further explore the impact of wrong omissions on the post-editing results, we investigate three different perturbations by gradually injecting errors into the omission group: 1) we keep the precision as 1 and decrease the recall by moving utterances from the omission group to the non-omission group; 2) we keep the recall as 1 and decrease the precision by moving utterances from the non-omission group to the omission group; 3) we gradually exchange utterances in the two groups until they are swapped, and both the precision and recall decrease from 1 to 0. Figure 4 depicts the trend of performance degradation as the error rate increases. From the curves, we can find that the precision is relatively more important because the refinement model performs more robustly in the first type of perturbation and is sensitive to the addition of wrong omissions. ## 4 The Omission Detection Task Since candidate summaries could be effectively improved given the gold omission information, how to accurately detect omission utterances in dialogue naturally becomes a critical question. In this section, we formulate the omission detection task in a reference-agnostic setting. Formally, given a dialogue D = {u1, u2*, .., u*N } along with a candidate summary c, a detection model is required to extract | Model | SAMSum | DialogSum | EmailSum | QMSum | TweetSumm | | | | | | | | | | | | | | | |--------------------------|-------------------------|-------------------------|-------------------------|-------------------------|-------------------------|----|----|----|----|----|----|----|----|----|----|----|----|----|----| | P | R | F1 | WR | P | R | F1 | WR | P | R | F1 | WR | P | R | F1 | WR | P | R | F1 | WR | | Pair-wise Classification | | | | | | | | | | | | | | | | | | | | | BERT | 41.66 38.60 40.07 54.83 | 38.01 45.56 41.44 57.23 | 47.94 40.81 44.09 50.93 | 35.97 42.86 39.12 60.73 | 41.84 47.17 44.35 53.86 | | | | | | | | | | | | | | | | RoBERTa | 42.45 43.27 42.85 58.94 | 38.42 44.93 41.43 57.56 | 48.13 50.02 49.05 59.04 | 32.92 43.65 37.53 60.99 | 41.37 49.66 45.14 57.08 | | | | | | | | | | | | | | | | Sequence Labeling | | | | | | | | | | | | | | | | | | | | | BERT | 45.18 43.57 44.37 61.35 | 40.71 46.23 43.30 57.51 | 50.58 50.41 50.49 61.11 | 47.11 31.22 37.56 47.29 | 40.70 49.72 44.76 58.48 | | | | | | | | | | | | | | | | RoBERTa | 47.34 47.65 47.49 63.97 | 42.63 46.54 44.50 58.51 | 53.62 48.65 51.01 59.04 | 48.09 36.27 41.35 52.82 | 48.26 48.85 48.55 59.27 | | | | | | | | | | | | | | | | Pointer Network | | | | | | | | | | | | | | | | | | | | | BERT | 47.20 39.13 42.79 58.52 | 41.23 42.79 42.00 56.02 | 52.57 48.47 50.44 60.66 | 45.31 31.73 37.32 48.46 | 48.69 42.35 45.30 52.10 | | | | | | | | | | | | | | | | RoBERTa | 50.64 41.90 45.86 60.25 | 43.68 45.90 44.76 60.03 | 53.61 46.04 49.54 56.92 | 44.80 35.16 39.40 53.21 | 47.23 52.18 49.58 63.61 | | | | | | | | | | | | | | | a set of omission utterances O from D without knowing the reference summary. In this section, we introduce three typical frameworks as baselines and conduct evaluations to see how this task could benefit from them. ## 4.1 Model Settings To build a foundation for the omission detection task and explore what model architecture the task could benefit from, we investigate three frameworks as baselines, which have different input formats and structures. Their implementation and training details can be found in Appendix B.1. Pair-wise Classification A straightforward way is to model this task as an utterance-level classification problem. The input pattern for this paradigm is: <s> c </s> ui *</s>*, where <s> and *</s>* denote the classification token and separation token, respectively. c is the candidate summary and uiis the i-th utterance in the dialogue. The model would perform binary classification for the candidateutterance pair as y ∈ {0, 1}, where y = 1 represents that the utterance is identified as an omission. Sequence Labeling Inspired by BERTSum (Liu and Lapata, 2019) that formulates extractive summarization as a sequence labeling problem at the sentence level, we employ a similar strategy which assigns each utterance a label yi ∈ {0, 1} indicating whether the utterance is an omission. We append the candidate summary in front of the dialogue, as <s> c </s> <s> u1 </s> <s> u2 *</s>* ... <s> uN *</s>*. The last hidden layer of each <s> token will be used as utterance representations for classification. Pointer Network Pointer network is to select the omission utterance recurrently using glimpse operation (Vinyals et al., 2015) based on previous predictions. It is a widely-used strategy for sentence extraction in summarization (Chen and Bansal, 2018; Zhong et al., 2019; Zou et al., 2021b). Here, we use the same input format as in sequence labeling, and the pointer network outputs an extraction distribution based on the <s> representations. ## 4.2 Evaluation Metrics We use the standard Precision (P), Recall (R), and F1-score (F1) metrics on the utterance level to evaluate omission detection models. Furthermore, we calculate the percentage of gold omission words that are hit in the detected utterances to measure the word-level omission recall: $$W R={\frac{\mathrm{\#hit~omission~words}}{\mathrm{\#gold~omission~words}}}.\qquad(2)$$ \# means the counted number. The closer the wordlevel omission recall is to 1, the more the omission information is collected by the detection model. ## 4.3 Main Results Table 5 presents the experimental results on OLDS. All detection models are separately trained on the five domains. For each omission detection framework, we employ BERTbase and RoBERTabase as the backbone model to extract text features. Among these three frameworks, pair-wise classification performs the worst in most cases since it does not consider contextual information of dialogue. Meanwhile, sequence labeling is on par with the pointer network, which indicates that dialogue context is a crucial factor for models to detect the omitted content. However, although omission detection models only need to make a choice of whether the given utterance is an omission, the task is still very challenging. In Table 5, the best F1 score is around 50% in all five domains, while the recalled omission words in extracted utterances (WR) are around 60%. Besides, models in QMSum only achieve at most a F1-score of 41.35 and we guess it is due to the effect of longer dialogue in QMSum | Framework | Model | Overall | BARTLarge | BARTBase | T5Base | T5Small | Transformer | PegasusLarge | | | | | | |-------------------------------|-------------|-------------|-------------|-------------|-------------|-------------|---------------|----------------|-------|----|----|----|----| | F1 | WR | F1 | WR | F1 | WR | F1 | WR | F1 | WR | F1 | WR | F1 | WR | | Pair-wise classification BERT | 40.07 54.83 | 31.98 54.77 | 38.40 55.14 | 33.80 51.75 | 42.83 55.51 | 48.60 56.69 | 38.25 | 55.11 | | | | | | | RoBERTa | 42.85 58.94 | 35.52 58.32 | 41.28 57.86 | 38.75 58.03 | 44.68 59.80 | 51.43 61.29 | 39.39 | 58.36 | | | | | | | Sequence Labeling | BERT | 44.37 61.35 | 35.58 59.94 | 42.23 60.23 | 40.27 60.63 | 46.70 62.77 | 53.82 64.85 | 41.07 | 59.60 | | | | | | RoBERTa | 47.49 63.97 | 38.37 61.23 | 45.66 62.51 | 43.50 62.42 | 50.41 66.81 | 55.94 67.43 | 44.67 | 63.32 | | | | | | | Pointer Network | BERT | 42.79 58.52 | 35.75 58.53 | 39.82 57.65 | 38.86 57.83 | 44.91 59.32 | 52.04 60.55 | 39.57 | 57.22 | | | | | | RoBERTa | 45.86 60.25 | 37.12 58.27 | 43.54 58.81 | 40.88 58.01 | 48.06 62.50 | 54.78 63.90 | 44.03 | 60.03 | | | | | | ![7_image_0.png](7_image_0.png) (over 1K tokens in Table 2). Intuitively, summarizers produce the candidates that have picked the low-hanging fruit, and the remaining omission information is a tough nut to crack. In other words, there exists some salient information omitted by the summarizer that is still difficult for detection models to capture. ## 4.4 Analysis And Discussion To understand what factors may affect the performance of the detection model, we conduct the following explanatory experiments. Label Imbalance We first calculate the percentage of omission utterances against non-omission utterances in five domains to investigate whether the label imbalance problem exists in the datasets. Figure 5 shows that the proportion of positive labels is always smaller than 25%, which indicates that label imbalance is a common problem in omission datasets. Besides, we observe that the degree of label imbalance is consistent with the performance of detection models, according to the results in Table 5. For example, the models achieve nearly 50% F1-score in EmailSum and TweetSumm, which have a ratio of 25% and 23% omission utterances. However, in QMSum, the detection models only achieve a 40% F1-score as the omission proportion of this dataset is only 8%. Hence, how to alleviate label imbalance is critical for omission detection and we leave it as future work. Candidate Quality Furthermore, we evaluate the performance of detection models on the candidates produced by different abstractive summarizers to investigate whether the candidate quality may influence detection models. The results are shown in Table 6, and we find the result of omission detection is negatively correlated with the performance of summarizers. For instance, BARTL and PegasusL produce candidates with higher quality, yet the detection model has difficulty obtaining their omissions. On the contrary, Transformer produces relatively low-quality candidates, while the detection model could produce better results (i.e., 55.94% F1-score). It indicates that capturing the remaining omissions for high-quality candidates is difficult, and how to address this issue is also valuable. Cross-Domain Results In addition, we conduct the cross-domain evaluation to investigate domain gaps and the generalizability of detection models. From Table 7, we can conclude that there are obvious differences between these five domains. For example, the models trained on the other domains perform poorly when tested directly on QMSum. Among these five domains, the difference between SAMSum and DialogSum is relatively small due to their similar performances across domains. We also find that the model trained on the large dataset SAMSum has a better capability of generalizing to other domains, even achieving the best result on the small datasets DialogSum and EmailSum. ## 4.5 Future Research Opportunities From the results in Table 5, we could observe that omission detection is a challenging task. Hence, we summarize some research directions as follows: - One direction is to develop a more advanced model for omission detection. Based on the analysis of Section 3.3, we could focus on improving the precision of omission detection results because a high precision of detected omissions | Domains | SAM. | Dial. | Email. | QM. | Tweet. | |-----------|--------|---------|----------|-------|----------| | SAM. | 63.97 | 59.39 | 66.80 | 36.68 | 53.92 | | Dial. | 49.65 | 58.51 | 66.58 | 43.50 | 53.77 | | Email. | 37.78 | 30.69 | 59.04 | 20.98 | 24.13 | | QM. | 41.39 | 47.00 | 61.15 | 52.82 | 28.20 | | Tweet. | 44.92 | 48.46 | 57.07 | 14.74 | 59.27 | would bring benefit to the refinement model. An ideal detection model could serve as a modelbased metric for reference-free summary evaluation. Besides, we could use the detected omission to improve the results of summarization. - Another research direction is to develop a refinement model for summary improvement using the detected omissions. In this paper, we briefly touch on this by introducing a post-editing approach in Section 3.3. The approach is straightforward, and the whole summarization procedure becomes a summarize-then-refine pipeline. However, the results show that the model is sensitive to wrong omissions. Hence, how to design a robust refinement model is also noteworthy. ## 5 Related Work 5.1 Dialogue Summarization Dialogue summarization is a challenging and valuable task that has recently received much attention, where a variety of dialogue domains are investigated, such as mail threads (Rambow et al., 2004; Zhang et al., 2021), meetings (Chen and Yang, 2020; Zhong et al., 2021), customer service (Zou et al., 2021a,b; Feigenblat et al., 2021), medical conversations (Joshi et al., 2020; Song et al., 2020), and daily chats (Gliwa et al., 2019; Chen et al., 2021). Different from conventional documents, dialogues have several inherent characteristics that make the summarization task more challenging (Zou et al., 2021c; Feng et al., 2022), e.g., multi-party information, coreferences, topic drifting, etc. Recent works have explored the types of errors in generated dialogue summaries to develop robust models (Tang et al., 2022), and omission is assessed as the most dominant error type in candidate summaries, which is also supported by human evaluations in previous works (Chen and Yang, 2020; Liu and Chen, 2021; Liu et al., 2021). In this work, we comprehensively analyze the omission problem in dialogue summarization based on the curated benchmark and investigate the feasibility of omission detection for generated candidates. ## 5.2 Omission In Text Generation Tasks Omission is a common error in machine translation (MT) (Russell, 1999; Sharma, 2015; Yang et al., 2019) and automatic speech recognition (ASR) tasks (Weng et al., 2020), which usually denotes the missing source information in the generated sequences. Although both summarization and MT/ASR belong to generation tasks, the definitions of omission error are different among these tasks. In MT/ASR tasks, the tokens between source and target sequences are usually well aligned, which means each token in the target sequence can locate its corresponding content in the source sequence. Due to such characteristics, previous works (Tu et al., 2016) in MT/ASR tasks usually adopted coverage mechanisms to eliminate the influence of omission error. Nevertheless, the source sequences in summarization tasks usually include abundant redundant and useless information, especially in dialogue scenarios, which makes omission a more serious problem in summarization-like tasks. ## 6 Conclusion In this work, we systematically study the omission problem in dialogue summarization based on the curated OLDS dataset, which collects candidate summaries from multiple models and domains and provides high-quality omission labels for them. We discover that omission is a significant problem that directly affects the results of dialogue summarization, and the defective candidate summary could be largely improved by leveraging the omission information properly. We further introduce an omission detection task to identify omission content, which is a challenging and valuable task that paves the way to omission mitigation and summary improvement in dialogue summarization. ## 7 Limitations The omission problem is critical in dialogue summarization, but even if this problem is solved, we still cannot guarantee a candidate is appropriate because it might bring hallucination content that is not presented by the source dialogue. Previous works (Tang et al., 2022; Maynez et al., 2020) also concluded that factual inconsistency is a critical problem in dialogue summarization, and it is not easy to distinguish. How to mitigate the omission problem while avoiding the occurrence of new errors is not discussed in this paper, and we hope to address this issue in future work. ## Acknowledgments The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (No.62206057), Shanghai Rising-Star Program (23QA1400200), and Natural Science Foundation of Shanghai (23ZR1403500). ## References Jiaao Chen and Diyi Yang. 2020. Multi-view sequenceto-sequence models with conversational structure for abstractive dialogue summarization. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4106– 4118. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 675–686. Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang. 2021. Dialogsum: A real-life scenario dialogue summarization dataset. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 5062–5074. Guy Feigenblat, Chulaka Gunasekara, Benjamin Sznajder, Sachindra Joshi, David Konopnicki, and Ranit Aharonov. 2021. Tweetsumm-a dialog summarization dataset for customer service. In *Findings of the* Association for Computational Linguistics: EMNLP 2021, pages 245–260. Xiachong Feng, Xiaocheng Feng, and Bing Qin. 2022. A survey on dialogue summarization: Recent advances and new frontiers. In *Proceedings of the* Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 5453–5460. International Joint Conferences on Artificial Intelligence Organization. Survey Track. J.L. Fleiss et al. 1971. Measuring nominal scale agreement among many raters. *Psychological Bulletin*, 76(5):378–382. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. Samsum corpus: A humanannotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79. Anirudh Joshi, Namit Katariya, Xavier Amatriain, and Anitha Kannan. 2020. Dr. summarize: Global summarization of medical dialogue by exploiting local structures. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3755– 3763. Wojciech Krysci ´ nski, Nitish Shirish Keskar, Bryan Mc- ´ Cann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540–551. Vladimir I. Levenshtein. 1965. Binary codes capable of correcting deletions, insertions, and reversals. *Soviet* physics. Doklady, 10:707–710. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730–3740. Zhengyuan Liu and Nancy Chen. 2021. Controllable neural dialogue summarization with personal named entity planning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 92–106. Zhengyuan Liu, Ke Shi, and Nancy Chen. 2021. Coreference-aware dialogue summarization. In *Proceedings of the 22nd Annual Meeting of the Special* Interest Group on Discourse and Dialogue, pages 509–519. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919. Gabriel Murray and Giuseppe Carenini. 2008. Summarizing spoken and written conversations. In Proceedings of the 2008 conference on empirical methods in natural language processing, pages 773–782. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-first AAAI conference on artificial intelligence. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Owen Rambow, Lokesh Shrestha, John Chen, and Chirsty Lauridsen. 2004. Summarizing email threads. In *Proceedings of HLT-NAACL 2004: Short Papers*, pages 105–108. Graham Russell. 1999. Errors of omission in translation. In Proceedings of the 8th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages, University College, Chester. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. Bleurt: Learning robust metrics for text generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7881– 7892. Vipin Sharma. 2015. The relevance of addition, omission and deletion (aod) in translation. International Journal of Translation (IJT) (ISSN: 0940-9819), 27:1– 13. Yan Song, Yuanhe Tian, Nan Wang, and Fei Xia. 2020. Summarizing medical conversations via identifying important utterances. In *Proceedings of the 28th* International Conference on Computational Linguistics, pages 717–729. Xiangru Tang, Arjun Nair, Borui Wang, Bingyao Wang, Jai Desai, Aaron Wade, Haoran Li, Asli Celikyilmaz, Yashar Mehdad, and Dragomir Radev. 2022. CONFIT: Toward faithful dialogue summarization with linguistically-informed contrastive fine-tuning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5657–5668, Seattle, United States. Association for Computational Linguistics. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In *Proceedings of the 54th Annual Meeting of the Association for Computational* Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2015. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391. Yue Weng, Sai Sumanth Miryala, Chandra Khatri, Runze Wang, Huaixiu Zheng, Piero Molino, Mahdi Namazifar, Alexandros Papangelis, Hugh Williams, Franziska Bell, and Gökhan Tür. 2020. Joint contextual modeling for ASR correction and language understanding. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2020, Barcelona, Spain, May 4-8, 2020, pages 6349– 6353. IEEE. Zonghan Yang, Yong Cheng, Yang Liu, and Maosong Sun. 2019. Reducing word omission errors in neural machine translation: A contrastive learning approach. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 6191– 6196, Florence, Italy. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In *International Conference on Machine Learning*, pages 11328–11339. PMLR. Shiyue Zhang, Asli Celikyilmaz, Jianfeng Gao, and Mohit Bansal. 2021. Emailsum: Abstractive email thread summarization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6895–6909. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In *International Conference on Learning Representations*. Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuan-Jing Huang. 2019. Searching for effective neural extractive summarization: What works and what's next. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 1049–1058. Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. Qmsum: A new benchmark for query-based multi-domain meeting summarization. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 5905–5921. Yicheng Zou, Jun Lin, Lujun Zhao, Yangyang Kang, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, and Xiaozhong Liu. 2021a. Unsupervised summarization for chat logs with topic-oriented ranking and context-aware auto-encoders. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14674–14682. Yicheng Zou, Lujun Zhao, Yangyang Kang, Jun Lin, Minlong Peng, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, and Xiaozhong Liu. 2021b. Topic-oriented spoken dialogue summarization for customer service with saliency-aware topic modeling. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 35, pages 14665– 14673. Yicheng Zou, Bolin Zhu, Xingwu Hu, Tao Gui, and Qi Zhang. 2021c. Low-resource dialogue summarization with domain-agnostic multi-source pretraining. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 80–91. ## A Details Of The Olds **Dataset** A.1 **Example Of Automatic Omission Labeling** Figure 6 shows an example of the complete process of automatic omission labeling, which consists of three steps: oracle extraction, omission identification, and redundancy removal. For oracle extraction, we select utterances greedily from the dialogue to maximize the Rouge score with respect to the summary. We obtain this subset of utterances as oracle labels, representing their membership in the summary. In this example, we generate oracle labels for the reference as *Gold Oracles*, i.e., an utterance set of {0, 2, 5, 6, 9, 12, 13, 14, 16, 19}, and oracle labels for the candidate as Candidate Oracles, i.e., {0, 7, 12, 14, 16, 19}. In the process of omission identification, we traverse the utterances in Gold Oracles and extract WuG, which is a set of words containing the overlapping words between u and the reference. For instance, in the 14th utterance, "soon, Hector, Ashley" are the keywords appearing in the reference. Similarly, we extract Wu C that contains the overlapping words between u and the candidate summary, where u ∈ Gold Oracles. Then, by comparing WuG and Wu C , we could obtain the omission words Wu = {w|w ∈ WuG, w /∈ Wu C}. For any utterance u where Wu ̸= ∅, we label it as an omission utterance. In the example of Figure 6, the 14th utterance contains the keywords "soon, Ashley" which are omitted by the candidate, and it should be labeled as an omission. Finally, we conduct redundancy removal to discard redundant omission utterances. In Figure 6, the 2nd, 5th, and 19th utterances have redundant omission words Wu, which are the same as those in other omission utterances. Hence, we remove these utterances and the final omission labels are {0, 9, 14, 16}. ## A.2 Dialogue Domains We build the OLDS dataset upon five existing dialogue summarization datasets that cover different domains, which are described as follows: SAMSum It is the first high-quality online chat summarization corpus (Gliwa et al., 2019), which contains about 16k simulated conversations created by linguists with corresponding summaries. DialogSum It is a summarization dataset (Chen et al., 2021) with 13.5k real-life scenario dialogues, which are face-to-face spoken dialogues that cover a wide range of daily-life topics. EmailSum It is an email thread summarization dataset (Zhang et al., 2021) that consists of 2,549 email threads along with annotated summaries. The dataset has two types of summaries, short summary (<30 words) and long summary (<100 words). Here, we use the short version as references because they are more abstractive and challenging. QMSum It is a query-based multi-domain meeting summarization benchmark (Zhong et al., 2021) that contains 1,808 query-summary pairs over 232 meetings. We concatenate queries with their corresponding text spans as the input dialogues 6. TweetSumm It is a dataset focused on customer service conversations (Feigenblat et al., 2021), which contains 1,100 dialogues, each accompanied by 3 extractive and 3 abstractive summaries. We use the longest abstractive summary as the gold reference. ## A.3 Candidate Generation We use 6 abstractive models to generate candidates for the dialogues in OLDS, including BARTlarge/base, T5base/small, vanilla Transformer, and Pegasuslarge. Pegasuslarge is only used to generate candidates for dialogues in evaluation sets. To obtain the candidate summaries in training sets, we train the summarization models by adopting a 10-fold cross-validation approach, and each model generates 10 candidates for each dialogue in the validation fold via different configurations of beam search and sampling. As a result, we can obtain 50 candidates (5 models × 10 inferences) for each dialogue in the training set. To ensure the diversity of the generated candidates, we further calculate the average Levenshtein distance (Levenshtein, 1965) for each candidate and pick out 10 6We removed 232 query-summary pairs which summarize the whole meeting transcripts because their input lengths are significantly different from other pairs. As a result, the final number of pairs used in our dataset is 1,576. candidates with the largest scores. Specifically, we combine these candidates in pairs (a total of 50 × 50 = 2,500 pairs) and calculate the Levenshtein distance between them. Then, for each candidate, we average the distance results against the other 49 candidates to obtain the average Levenshtein distance. Finally, we rank these candidates based on the scores in descending order and pick out the top 10 candidates. As a result, we have 10 diverse candidates for each dialogue in the training sets. For the evaluation set of OLDS, we train the aforementioned 6 models on the training set of each domain to produce candidate summaries. Each summarization model produces 2 candidates, which are decoded by beam search (beam size = 5) and sampling, respectively. Hence, we totally have 12 candidates for each dialogue in evaluation sets. The training and inference process was conducted based on the official code of pre-trained language models 7. All experiments were conducted on one node with 4 32GB V100 GPUs. The learning rate is set to 5e-5 for pre-trained models and is set to 1e-4 for Transformer. The pre-trained models are fine-tuned with 3 epochs, while the vanilla Transformer is trained with 20 epochs. For SAMSum, the maximum source length and target length is 512 and 90, and for DialogSum, EmailSum, QMSum, and TweetSumm, this setting is 512/150, 1,024/65, 2,048/200, and 1,024/120, respectively. The other hyper-parameters are set by default. ## A.4 Details Of Quality Assessment Time Budget We recruited three annotators to conduct the quality assessment for OLDS. The total hits of judgment are 3000 (5 domains × 200 samples × 3 annotators). The annotating speed is 25 samples per hour and the workload is 120 hours (1000 / 25 * 3 = 120) in total. Instructions Each annotator was presented with a sample containing the dialogue, reference summary, candidate summary, gold oracles, candidate oracles, and the labeled omission utterances along with their corresponding omitted words. We instruct the annotators to make a binary choice whether the set of labeled omission utterances is Accept or *Reject*. Annotators should compare the candidate with the reference and find out omissions. Then, they should locate omissions in the original 7https://huggingface.co/docs/ transformers/tasks/summarization dialogue and record the corresponding utterances. Finally, they should compare the automatically labeled utterances with the recorded ones and make a judgment. The set of labeled omission utterances should be marked as *Reject* as long as it misses any critical utterance, or includes any redundant or uninformative utterance. Otherwise, it should be marked as *Accept*. To ensure that each choice is justified, we additionally asked annotators to perform corrections and renew the corresponding omitted words if the choice is *Reject*. Thus, we could verify why the labeled omission is marked as *Reject*. ## A.5 Data Format To facilitate the community to explore the effect of possible elements on the omission problem, in the released version of OLDS, we additionally provide some auxiliary information. Specifically, apart from the basic information of dialogue, reference summary, candidate summary, and omission labels, we further provide the intermediate information during labeling, including Gold Oracles, Candidate Oracles, omission words, and the source model and decoding strategy for each candidate summary, e.g., *'bart_base, beam'*, which represents that the candidate is generated by BART*base* using beam search. A complete example is shown in Table 9. ## A.6 More Results Of Candidate Summaries Table 8 shows the evaluation results of candidate summaries in OLDS assessed by various reference-based metrics. Here, we employ n-gram based metrics ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002), embedding-based metric BERTScore (Zhang et al., 2019), and learningbased metric BLEURT (Sellam et al., 2020) to evaluate the candidate summaries. ## B Omission Detection Models B.1 Implementation Details We use BERTbase and RoBERTabase as the backbone pre-trained encoder for the three frameworks. All the experiments were conducted on one node with a single A100 80GB GPU. For all three frameworks, the learning rate is set to 5e-5 and the training epoch is set to 5. The batch size was set to 128 for pair-wise classification and was set to 16 for sequence labeling and pointer network. We saved checkpoints after each epoch. The best performing checkpoint on the validation set was evaluated on the test set to report the final results. Pair-wise Classification For the framework of pair-wise classification, we use the official code of classification with pre-trained language models 8. The input format is <s> c </s> ui *</s>*, where <s> and *</s>* are classification token and separation token, respectively. c and ui represent the candidate and the i-th utterance in the dialogue. Sequence Labeling We use the same implementation as the extractive summarization model proposed by Liu and Lapata (2019). The only difference is that we append the candidate summary in front of the dialogue, denoted as <s> c *</s> <s>* u1 </s> <s> u2 *</s>* ... <s> uN *</s>*. The <s> token before the candidate summary is not involved in the calculation. For SAMSum, we truncate each input into a maximum length of 512, while for DialogSum, EmailSum, QMSum, and TweetSumm, this setting is 512, 1,024, 2,048, and 1,024. Pointer Network The autoregressive decoder of our pointer network is implemented by a Transformer decoder, which is proposed by Zou et al. (2021b) and was previously used for extractive summarization. Here, we also append the candidate summary in front of the dialogue, which has the same input format as in sequence labeling. The <s> token before the candidate summary is not involved in the calculation. We also set the same maximum length as in sequence labeling for input sequences in different domains. ![14_image_0.png](14_image_0.png) | Model | Metric | SAMSum | DialogSum | EmailSum | QMSum | TweetSumm | | | | | | | | | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|-----------------------------|------------|------------------------------|------------------------|------|-------------|-------------|------|------|-------|------|------| | Train | Dev. | Test | Train | Dev. | Test | Train | Dev. | Test | Train | Dev. | Test | Train | Dev. | Test | | OmitR. | 32.81 31.45 31.93 30.66 28.30 28.48 51.09 41.10 42.52 68.91 74.71 73.82 62.96 54.29 55.91 | | | | | | | | | | | | | | | RG-1 | 43.47 48.47 47.17 41.98 45.22 42.93 28.89 34.22 33.61 32.85 30.90 31.54 35.74 45.10 42.32 | | | | | | | | | | | | | | | RG-2 | 17.97 23.81 22.21 16.21 19.59 16.81 6.43 10.33 9.52 10.48 9.88 11.11 13.77 22.27 19.10 | | | | | | | | | | | | | | | RG-L | 33.84 39.77 38.60 32.63 36.88 33.84 21.89 27.18 26.55 21.31 21.15 22.13 27.16 36.93 33.94 | | | | | | | | | | | | | | | BLEU | 10.66 14.70 13.60 16.05 20.24 16.94 3.29 | 5.69 | 5.50 | 7.22 | 6.15 | 7.63 | 9.98 | 15.14 13.67 | | | | | | | | BSB | 90.56 91.60 91.45 91.02 91.84 91.41 88.06 89.20 89.07 86.33 86.30 86.60 86.96 89.32 88.82 | | | | | | | | | | | | | | | BSL | 90.51 91.64 91.48 90.19 91.15 90.74 87.54 88.70 88.61 85.71 85.87 86.21 86.35 88.66 88.16 | | | | | | | | | | | | | | | BLEURT 48.32 52.39 52.17 48.12 51.06 50.04 47.47 49.31 49.06 39.00 39.53 40.70 45.73 52.20 50.93 | | | | | | | | | | | | | | | | ALL | OmitR. | 21.60 21.48 22.75 24.75 22.38 23.28 33.65 30.52 32.86 60.24 61.40 61.42 42.30 42.32 45.24 | | | | | | | | | | | | | | RG-1 | 50.62 54.38 52.57 46.08 49.55 46.91 33.16 37.86 36.98 38.25 35.94 37.35 47.25 52.58 48.53 | | | | | | | | | | | | | | | RG-2 | 25.29 29.91 28.00 21.18 25.11 21.32 9.13 | 13.0 11.60 13.74 12.74 14.85 22.19 28.72 23.90 | | | | | | | | | | | | | | RG-L | 39.98 45.10 43.78 36.50 41.94 37.97 24.72 30.04 29.06 23.83 23.84 25.58 36.47 43.89 39.42 | | | | | | | | | | | | | | | BLEU | 14.34 19.55 17.88 17.85 23.23 18.83 4.02 | 6.84 | 6.28 | 9.44 | 7.94 11.19 15.39 20.11 17.68 | | | | | | | | | | | BSB | 91.86 92.63 92.42 91.41 92.47 92.05 88.07 89.86 89.64 87.52 87.75 88.13 89.48 90.67 89.92 | | | | | | | | | | | | | | | BSL | 91.89 92.71 92.50 90.79 91.96 91.54 87.61 89.37 89.18 87.03 87.31 87.77 88.82 90.07 89.29 | | | | | | | | | | | | | | | BLEURT 56.54 58.83 57.95 53.99 56.38 55.17 49.14 50.43 50.08 42.06 41.58 43.09 54.15 57.65 55.30 | | | | | | | | | | | | | | | | BARTL | OmitR. | 27.21 28.67 30.56 27.95 26.57 27.47 37.01 35.97 37.49 62.74 69.64 69.46 41.05 45.48 46.37 | | | | | | | | | | | | | | RG-1 | 48.20 51.12 49.20 44.14 46.76 44.33 33.77 36.63 36.16 37.21 33.87 34.93 47.26 49.99 46.97 | | | | | | | | | | | | | | | RG-2 | 22.70 27.04 24.41 18.71 21.95 18.47 9.34 12.01 11.13 13.71 11.39 13.09 22.13 26.34 22.80 | | | | | | | | | | | | | | | RG-L | 38.58 42.73 40.73 34.99 38.89 35.11 26.23 29.20 28.95 24.44 22.79 24.24 36.21 41.61 38.39 | | | | | | | | | | | | | | | BLEU | 14.32 16.17 14.35 18.15 20.66 17.67 4.99 | 6.85 | 6.16 | 9.90 | 5.71 | 7.89 16.02 17.60 16.34 | | | | | | | | | | BSB | 91.42 92.21 91.92 91.63 92.21 91.67 89.17 89.66 89.58 87.62 87.55 87.98 88.83 90.38 89.71 | | | | | | | | | | | | | | | BSL | 91.47 92.26 91.97 90.99 91.60 91.15 88.73 89.22 89.18 87.06 87.09 87.55 88.10 89.74 89.07 | | | | | | | | | | | | | | | BLEURT 53.87 55.55 54.80 52.06 53.59 52.28 48.80 50.11 49.67 41.69 39.85 41.37 53.31 56.18 54.31 | | | | | | | | | | | | | | | | BARTB | OmitR. | 21.95 25.52 26.82 26.46 25.90 24.80 28.67 30.08 32.39 63.77 68.14 68.09 43.89 44.14 45.51 | | | | | | | | | | | | | | RG-1 | 47.18 50.94 49.29 44.68 46.21 44.74 33.03 36.17 35.36 31.67 33.05 33.79 43.16 48.83 44.38 | | | | | | | | | | | | | | | RG-2 | 21.70 26.41 24.17 19.36 20.48 18.23 9.58 11.77 10.83 10.58 11.40 13.10 20.60 26.21 21.10 | | | | | | | | | | | | | | | RG-L | 36.98 42.27 40.49 35.26 37.49 35.26 25.65 28.86 28.34 21.58 23.11 24.29 34.10 41.11 36.08 | | | | | | | | | | | | | | | BLEU | 12.00 16.77 15.40 18.55 20.59 17.58 4.23 | 5.75 | 5.75 | 7.31 | 7.49 10.11 14.56 18.50 15.47 | | | | | | | | | | | BSB | 91.22 92.00 91.79 91.68 92.03 91.68 88.72 89.44 89.30 86.16 86.97 87.26 88.51 90.32 89.32 | | | | | | | | | | | | | | | BSL | 91.22 92.06 91.83 90.88 91.38 91.03 88.23 88.89 88.82 85.80 86.51 86.91 87.91 89.71 88.77 | | | | | | | | | | | | | | | BLEURT 53.37 55.24 54.68 51.87 52.63 52.06 49.70 50.17 50.05 41.17 40.64 42.06 50.99 55.61 52.91 | | | | | | | | | | | | | | | | T5B | OmitR. | 33.16 35.84 34.81 30.34 31.46 30.58 35.91 38.30 39.59 66.26 76.05 73.71 53.21 53.31 53.63 | | | | | | | | | | | | | | RG-1 | 41.03 44.17 43.53 39.46 40.26 39.09 29.71 33.96 32.39 29.94 28.07 29.45 30.69 34.78 32.62 | | | | | | | | | | | | | | | RG-2 | 16.76 20.60 19.37 14.83 15.17 13.97 7.35 | 9.86 | 9.08 | 9.28 | 8.19 | 9.59 11.93 16.00 13.61 | | | | | | | | | | RG-L | 32.29 36.30 35.48 31.00 32.23 30.67 22.84 27.09 25.80 20.14 19.13 20.37 23.40 28.02 25.29 | | | | | | | | | | | | | | | BLEU | 9.40 12.47 11.55 14.27 15.79 13.88 3.37 | 4.99 | 5.24 | 5.99 | 5.76 | 6.80 | 8.35 | 11.59 10.15 | | | | | | | | BSB | 90.21 90.90 90.81 90.74 91.04 90.78 87.94 88.81 88.74 85.33 85.29 85.74 85.42 86.67 86.58 | | | | | | | | | | | | | | | BSL | 90.24 90.96 90.87 89.95 90.30 90.05 87.42 88.28 88.22 84.79 84.82 85.22 85.15 86.28 86.19 | | | | | | | | | | | | | | | BLEURT 47.66 48.87 49.25 48.09 48.34 48.12 47.86 48.94 48.09 36.92 36.94 37.03 42.49 44.10 44.49 | | | | | | | | | | | | | | | | T5S | OmitR. | 47.02 48.43 48.30 39.91 36.96 38.89 76.55 76.29 74.91 87.76 99.19 94.72 94.66 95.46 95.16 | | | | | | | | | | | | | | RG-1 | 37.70 39.14 37.94 39.09 40.13 36.76 24.44 24.79 25.68 29.52 29.49 29.11 34.84 35.44 36.03 | | | | | | | | | | | | | | | RG-2 | 11.21 12.53 11.64 10.75 11.57 9.07 | 3.18 | 3.25 | 3.51 | 7.03 | 7.12 | 6.98 | 9.90 | 10.45 11.39 | | | | | | | Transformer RG-L | 28.31 29.86 29.11 28.77 30.41 27.21 18.14 18.54 18.74 18.56 18.92 18.78 25.67 26.43 27.30 | | | | | | | | | | | | | | | BLEU | 5.44 | 5.87 | 5.86 12.14 13.77 10.92 1.15 | 1.27 | 1.36 | 2.71 | 2.39 | 2.36 | 4.35 | 4.21 | 5.03 | | | | | BSB | 89.46 89.77 89.67 90.28 90.76 90.22 87.64 87.89 87.88 85.83 85.93 86.00 87.57 87.87 88.01 | | | | | | | | | | | | | | | BSL | 89.25 89.66 89.54 89.16 89.77 89.23 87.10 87.31 87.34 84.71 84.95 85.08 86.51 86.84 86.88 | | | | | | | | | | | | | | | BLEURT 42.62 43.33 43.51 42.13 43.81 42.03 45.84 45.59 46.19 37.96 35.82 36.52 45.02 46.15 46.27 OmitR. - 28.75 28.33 - 26.56 25.86 - 35.46 37.86 - 73.85 75.53 - 45.04 49.51 RG-1 - 51.09 50.42 - 48.32 45.80 - 35.90 35.19 - 24.92 24.63 - 49.01 45.34 RG-2 - 26.44 25.66 - 23.27 19.74 - 12.04 10.97 - 8.41 9.05 - 25.79 21.91 RG-L - 42.44 41.96 - 40.32 36.93 - 29.33 28.41 - 19.09 19.52 - 40.35 37.07 BLEU - 16.24 15.44 - 23.84 20.04 - 6.85 6.33 - 3.90 4.49 - 17.68 16.41 BSB - 92.10 92.10 - 92.51 92.04 - 89.52 89.29 - 84.31 84.46 - 89.98 89.40 BSL - 92.21 92.16 - 91.90 91.45 - 89.13 88.94 - 84.55 84.76 - 89.35 88.78 BLEURT - 55.92 56.16 - 54.52 53.49 - 50.62 50.31 - 44.18 45.88 - 55.27 53.73 | | | | | | | | | | | | | | | | PegasusL | | | | | | | | | | | | | | | Dialogue: (0) @AzureSupport Hi guys we have signed up a trial for log analytics while we test setting up a custom log import. The issue I have already is trying to add a custom log import configuration, nothing is being added to the list, but no error messages? I have tried a hundred times. (1) @242694 Could you please post here: -link- and send us the link so we can have an engineer on that team assist (2) @AzureSupport Done. (3) @242694 Could you please send us the link to the created post so we can alert the team? Thanks! (4) @AzureSupport -link- (5) @242694 Thank you! We have alerted the team and they should respond to your post shortly. (6) @AzureSupport Thanks! (7) @AzureSupport Hey guys no word yet, got a client waiting please. (8) @242694 We're sorry about that. We've reached out again and will make sure that they reply to the forum post ASAP. (9) @AzureSupport -emoji- (10) @AzureSupport No word yet (11) @242694 We're sorry about this. We'll reach out to the team to ask where they are on this. (12) @242694 A forum engineer has replied to your post. Please have a look and reply to the thread if you need further assistance. Reference summary: The customer says that he has signed up a trail for log analytics but he is having an issue in adding a custom log import configuration. The agent asks to post the information regarding the isuue via provided link and to send them the link and says A forum engineer has replied to the post, then asks to reply to the thread for further assistance. Candidate summaries: [1] **Source: bart-large, Strategy: beam** Customer is complaining that he is trying to add a custom log import configuration, nothing is being added to the list, but no error messages. Agent updated that they have alerted the team and they should respond to their post shortly. [2] **Source: bart-large, Strategy: sample** Customer is complaining that he is trying to add a custom log import configuration, nothing is being added to the list, but no error messages. Agent updated that they have alerted the team and they should respond to their post shortly. [3] **Source: bart-base, Strategy: beam** The customer says that he has signed up a trial for log analytics while he test setting up a custom log import. The agent asks to send the link to the created post so that they can alert the team. [4] **Source: bart-base, Strategy: sample** Customer is complaining that they have signed up a trial for log analytics while they test setting up a custom log import. Agent updated that they will reach out to the team to ask where they are on this issue. [5] **Source: t5-base, Strategy: beam** Customer is complaining that they have signed up a trial for log analytics while they test setting up a custom log import. Agent updates that they have reached out again and will make sure that they reply to the forum post ASAP. [6] **Source: t5-base, Strategy: sample** Customer is complaining that they have signed up a trial for log analytics while they test setting up a custom log import. Agent updates that they have reached out again and will make sure that they reply to the forum post ASAP. [7] **Source: t5-small, Strategy: beam** AzureSupport Hi guys we have signed up a trial for log analytics while we test setting up a custom log import. The issue I have already is trying to add a custom log import configuration, nothing is being added to the list, but no error messages? I have tried a hundred times. [8] **Source: t5-small, Strategy: sample** The issue I have already is trying to add a custom log import configuration, nothing is being added to the list, but no error messages? I have tried a hundred times. Could you please post here: -link- and send us the link to the created post so we can alert the team? [9] **Source: transformer, Strategy: beam** The customer says that he is unable to find the product he can't work with his phone. The agent asks whether the customer is using and asks whether he will be able to send the issue and asks to assist further on the issue. [10] **Source: transformer, Strategy: sample** Customer is complaining that he is unable to know about the delay of the product. Agent updates that they are unable to reach out for further assistance and requests to DM the issue. [11] **Source: pegasus, Strategy: beam** The issue I have already is trying to add a custom log import configuration, nothing is being added to the list, but no error messages. The issue I have already is trying to add a custom log import configuration, nothing is being added to the list, but no error messages. [12] **Source: pegasus, Strategy: sample** Customer is complaining that they have signed up a trial for log analytics while they are testing setting up a custom log import. Agent updates that they have alerted the team and they should respond to their post shortly and adds that they have reached out again and will make sure that they reply to the forum post ASAP. Gold Oracles: (0) (1) (3) (8) (11) (12) Candidate Oracles: [1]: (0) (1) (5) (8) [4]: (0) (1) (5) (8) (11) [7]: (0) [10]: (0) (3) (8) (11) (12) [2]: (0) (1) (5) (8) [5]: (0) (1) (5) (8) (11) [8]: (0) (1) (3) [11]: (0) (3) (7) (12) [3]: (0) (1) (3) (8) (11) (12) [6]: (0) (1) (5) (8) (11) [9]: (0) (1) (3) (4) (8) (11) (12) [12]: (0) (1) (5) (8) (11) (12) Omission utterances (Labels): [1]: (0) (1) (12) [4]: (0) (1) (12) [7]: (1) (12) [10]: (0) (1) (12) [2]: (0) (1) (12) [5]: (0) (1) (12) [8]: (0) (12) [11]: (0) (1) (12) [3]: (0) (12) [6]: (0) (1) (12) [9]: (0) (1) (12) [12]: (0) (1) (12) Omission Words: [1]: (0) issue, analytics, signed (1) engineer, send, link (12) engineer, forum, replied, assistance, thread, reply [2]: (0) issue, analytics, signed (1) engineer, send, link (12) engineer, forum, replied, assistance, thread, reply [3]: (0) issue, configuration (12) engineer, forum, replied, assistance, thread, reply [4]: (0) configuration (1) engineer, post, send, link (12) engineer, forum, replied, assistance, thread, post, reply [5]: (0) issue, configuration (1) engineer, send, link (12) engineer, replied, assistance, thread [6]: (0) issue, configuration (1) engineer, send, link (12) engineer, replied, assistance, thread [7]: (1) engineer, post, send, link (12) engineer, forum, replied, assistance, thread, post, reply [8]: (0) analytics, signed (12) engineer, forum, replied, assistance, thread, reply [9]: (0) analytics, custom, import, signed, log, configuration (1) engineer, post, link (12) engineer, forum, replied, assistance, thread, post, reply [10]: (0) analytics, custom, import, signed, log, configuration (1) engineer, post, send, link (12) engineer, forum, replied, thread, post, reply [11]: (0) analytics, signed (1) engineer, post, send, link (12) engineer, forum, replied, assistance, thread, post, reply [12]: (0) issue, configuration (1) engineer, send, link (12) engineer, replied, assistance, thread Table 9: A complete example in the OLDS dataset, which is sampled from the test set of TweetSumm domain. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✗ A2. Did you discuss any potential risks of your work? We do not find any potential risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And 5 ✓ B1. Did you cite the creators of artifacts you used? Section 3 and 5 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? All the used artifacts are publicly available. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? All the artifacts are publicly available and were checked in previous works. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 ## C ✓ **Did You Run Computational Experiments?** Section 3 And 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3 and 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In Appendix D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 2 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? In Appendix ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? In Appendix ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 2 and Appendix ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? All the used data are publicly available. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? In Appendix
li-etal-2023-python
Python Code Generation by Asking Clarification Questions
https://aclanthology.org/2023.acl-long.799
Code generation from text requires understanding the user{'}s intent from a natural languagedescription and generating an executable code snippet that satisfies this intent. While recent pretrained language models demonstrate remarkable performance for this task, these models fail when the given natural language description is under-specified. In this work, we introduce a novel and more realistic setup for this task. We hypothesize that the under-specification of a natural language description can be resolved by asking clarification questions. Therefore, we collect and introduce a new dataset named CodeClarQA containing pairs of natural language descriptions and code with created synthetic clarification questions and answers. The empirical results of our evaluation of pretrained language model performance on code generation show that clarifications result in more precisely generated code, as shown by the substantial improvement of model performance in all evaluation metrics. Alongside this, our task and dataset introduce new challenges to the community, including when and what clarification questions should be asked. Our code and dataset are available on GitHub.
# Python Code Generation By Asking Clarification Questions Haau-Sing Li1**, Mohsen Mesgar**2∗ , André F. T. Martins3,4,5**, Iryna Gurevych**1 1Ubiquitous Knowledge Processing Lab (UKP Lab) Department of Computer Science and Hessian Center for AI (hessian.AI), TU Darmstadt 2Bosch Center for Artificial Intelligence, Renningen, Germany 3Instituto Superior Técnico and LUMLIS (Lisbon ELLIS Unit) 4Instituto de Telecomunicações, Lisbon, Portugal 5Unbabel hli@ukp.tu-darmstadt.de ## Abstract Code generation from text requires understanding the user's intent from a natural language description and generating an executable code snippet that satisfies this intent. While recent pretrained language models demonstrate remarkable performance for this task, these models fail when the given natural language description is under-specified. In this work, we introduce a novel and more realistic setup for this task. We hypothesize that the underspecification of a natural language description can be resolved by asking clarification questions. Therefore, we collect and introduce a new dataset named **CodeClarQA** containing pairs of natural language descriptions and code with created synthetic clarification questions and answers. The empirical results of our evaluation of pretrained language model performance on code generation show that clarifications result in more precisely generated code, as shown by the substantial improvement of model performance in all evaluation metrics. Alongside this, our task and dataset introduce new challenges to the community, including when and what clarification questions should be asked. Our code and dataset are available on GitHub.1 ## 1 Introduction Text-to-code generation aims to understand a user's intention represented by a natural language description (NLD) to generate a code that satisfies the user's intention. Models for this task are a crucial component of digital pair-programmers, which assist data scientists (Agashe et al., 2019; Liu et al., 2021), software developers (Chen et al., 2021; Xu et al., 2022), and computer programming educators (Li et al., 2022). Recent work addresses this task using pretrained language models (PLMs) fine-tuned on large-scale ∗ Work done while being a postdoc at UKP Lab. 1https://github.com/UKPLab/codeclarqa ![0_image_0.png](0_image_0.png) code data in general-purpose programming languages, such as Python and Java (Chen et al., 2021; Li et al., 2022; Nijkamp et al., 2022; Chowdhery et al., 2022; Xu et al., 2022; Lahiri et al., 2022). Although these models are successful, they fail to resolve the case of an NLD that lacks enough specifications. Figure 1a depicts an example of under-specified NLD (shown in yellow). The problem of missing specifications in NLDs not only widely occurs in real-world use cases (Lahiri et al., 2022; Chaurasia and Mooney, 2017) but is also important for training text-to-code generation models. Although important, alleviating the underspecification of NLDs is challenging for two rea14287 ![1_image_0.png](1_image_0.png) sons. First, missing specifications can happen at various levels, including individual operations, argument values of the operations, and sub-tasks consisting of several operations decomposed from the entire source code file as the task. Second, it is not obvious how to identify if an NLD carries information about specifications at any level mentioned above or not. In this paper, we introduce interactivity into textto-code generation for specifications on the level of individual operation calls in Python. We hypothesize that by gathering more specifications using these interactions, we can alleviate the incompleteness of NLD's specifications and thus generate a more precise code (Figure 1a). To train and evaluate such models, we introduce the CodeClarQA dataset collected through a novel method to synthetically generate clarification questions and answers (CQAs) for an NLD-Code pair. To map the operations to natural language, we retrieve the API documentation. If there is a low similarity between the NLD and the operation's documentation, we identify the operation as a missing specification and generate a clarification question (CQ). The answers to CQs are selected from the given code. Furthermore, we propose a pipeline to demonstrate the use case of our dataset for developing NLD-Code generation models. Our pipeline consists of three modules - a clarification need predictor, a CQ generator, and a code generator. For each module, we introduce models that can serve as baselines. To evaluate the quality of our dataset, we conduct a human evaluation. We also evaluate the models we proposed for each component of our pipeline. Our empirical results show that by conditioning PLM-based code generation models on CQAs in our dataset, the performance of these models increases, indicating the correctness of our hypothesis and our collected dataset. Alongside, our experimental results show that advanced PLMs (e.g., RoBERTa, BART, T5, and CodeT5) struggle to achieve high performance under the interactive code generation pipeline. This important observation demonstrates the difficulty of our dataset for the recent PLMs, introducing new challenges for these models. ## 2 Creating The Codeclarqa Dataset We aim to make the process of code generation interactive such that by asking CQs about a given NLD, we resolve NLD under-specification before generating any code. To do so, we design a novel method to synthetically collect CQAs for a given NLD-Code pair, leading to the new dataset, which we call CodeClarQA. Figure 2 shows a general view of our data creation method. ## 2.1 Identifying Key Operations Key operations correspond to sub-tasks decomposed from the code snippet as the task. For instance, the code in Figure 1 can be decomposed into three sub-tasks: call the logistic model, use grid search to fit different logistic models, and save the best model. The corresponding key operations are *sklearn.LogisticRegression*, sklearn.GridSearchCV, and *joblib.dump*. Ideally, an NLD should provide enough information about key operations in a code. If the NLD does not do so, it lacks sufficient specifications. Thus, our first step for generating CQAs for a given NLD-code pair is to identify the key operations required to generate the code from NLD. To identify key operations, we represent the code as a graph. Data flow graphs are an effective structural representation of code semantics in a code. Given this fact, we use the graph defined by the GraphGen4Code toolkit (Abdelaziz et al., 2021), the state-of-the-art toolkit for generating a graph from a source code, including API-related operations and the data flow. This makes it easy for us to identify key operations. Figure 1a shows the graph representation. Non-leaf nodes represent key operations. Edges indicate the data flow of the operations. For each NLD-Code pair, we parse the code to generate a graph. Given a graph, we identify nodes with the following properties as key operations: **(i) For operations that are one object/function and its** methods/fields, we treat the object/function as a key operation. This is coherent with one hypothesis behind the design of GraphGen4Code, where an object/function is first initiated before fields/methods thereof are applied. For instance, sklearn.GridSearchCV is the key operation among all operations related to it, as other operations apply a method (*.fit*) or read a field (*.best_estimator_*) of it (Figure 1b). **(ii) For a multiple-operation** line of code, the last operation on the data flow path is a key operation. For instance, sklearn.GridSearchCV and *numpy.linspace* are in the same line. *sklearn.GridSearchCV* is a key operation since *sklearn.GridSearchCV* is the line's highest-level operation (Figure 1b). See Appendix A for details of the procedure of identifying key operations. ## 2.2 Is A Key Operation Missing In Nld? Given a set of key operations required to generate a code, we should identify if the given NLD provides any information about these operations. To do so, for each key operation, we propose to align the schema of textual documentation of a key operation with the schema of a given NLD. A schema (Majumder et al., 2021) is defined as a set of important elements of a document. Every schema element is either in the form of (verb, key-phrase, *relation*) or (*key-phrase*), where *key-phrase* is extracted using YAKE (Campos et al., 2020), and *verb* and *relation* are obtained by searching through the closest verb and its dependency relation using the dependency tree (Qi et al., 2020). An example of (verb, key-phrase, *relation*) is (transforms, final estimator, obl), and an example of (*key-phrase*) is (pipeline). For each key operation required to generate a code, we compute similarity scores for all schema element tuples using elements from the NLD and the documentation. For each pair of schema elements, we use a pretrained text encoder (Reimers and Gurevych, 2019) to compute similarity scores between these phrases as key information. Note that we combine *verb* and *key-phrase* if the schema element is in the triplet form before computing the similarity score. Eventually, we identify a key operation is missing in the NLD if the highest similarity score of all schema element pairs is lower than the threshold t. Each key operation is then labeled as aligned or *missing*. We perform a grid search to find the best t on a validation set, maximizing the F1 score. See Appendix B for an example. ## 2.3 Generating Cqas For Missing Key Operations We formulate CQs as multiple-choice questions and yes/no questions. The former needs an answer with yes/no following a choice of an API call. The latter requires only an answer with yes/no. Multiple-choice. We collect all extracted key operations from the dataset, mentioned or missing, that contain 1023 different API sub-modules, methods, and fields. We then extract the last tokens from each operation name, filter out all stop words, and keep operations that share the same last token of their names. For instance, *sklearn.partial_fit* and sklearn.fit share the same last token as fit. Note that we hypothesize that for operations with the same name but from a different library, e.g., *keras.fit* and sklearn.fit, they refer to the same operation. We generate multiple-choice questions for these key operations if they are missing. To do so, we use the template *Do you want to call anything related to* LAST_TOKEN*? If yes, which one?*. Yes/No. For operations that do not belong to multiple-choice questions, we generate a yes/no question using the template *Do you want to call* OPERATION_NAME *documented as* DOC?. For instance, a CQ about *numpy.logspace* is generated as "Do you want to call *numpy.logspace* documented as *Return numbers spaced evenly on a log scale?*" ## 2.4 Dataset We use NLD-Code pairs in the notebookCDG dataset (Liu et al., 2021) to create CQAs because of the high code quality ensured by votes of the Jupyter Notebooks and the high NLD quality ensured by the preprocessing method based on the study of markdown documentation (Wang et al., 2021a). We first identify key operations (§2.1) and label them as either aligned or *missing* (§2.2). Finally, we select NLD-Code pairs with at most five Total Train Dev Test | # NLD-Code Samples | 19368 | 17431 | 968 | 969 | |----------------------|---------|---------|-------|-------| | Avg. NLD Length | 12.43 | 12.45 | 12.22 | 12.30 | | Avg. Code Length | 44.40 | 44.52 | 44.52 | 42.25 | | # Samples w/ CQAs | 12339 | 11098 | 630 | 611 | | # Samples w/o CQAs | 7029 | 6333 | 338 | 358 | | # CQAs | 17506 | 15711 | 923 | 872 | | # Multiple-Choice Qs | 8952 | 8008 | 474 | 470 | | # Yes/No Qs | 8554 | 7703 | 449 | 402 | | # Operations | 817 | 749 | 227 | 215 | | # Packages | 89 | 82 | 33 | 28 | missing key operations, duplicate missing key operations, and create CQAs (§2.3). Table 1 shows dataset statistics. ## 3 Pipeline For Cq-Driven Code Generation Our system generates precise code by asking CQs before generating. To do so, it uses an interactive code generation pipeline that includes three modules: (i) a clarification need predictor, (ii) a CQ ranker, and (iii) a code generator. Given an NLD, the clarification need predictor predicts the need to ask CQs, with labels *Need* and No Need. If there is a need for asking CQs, the CQ ranker selects n CQs. We set n as five to push these models to choose CQs with the most information gains. Given the NLD, CQs and corresponding answers, the code generator generates a code. ## 4 Experiments Having proposed our dataset (§2) and a pipeline for interactive code generation (§3), we next evaluate the quality of the dataset creation method by focusing on §2.2 and results of use our dataset to evaluate recent PLM-based models for each pipeline module for interactive code generation, before assessing the quality of the pipeline. The dataset evaluation analyzes the effectiveness of identifying key operations, while experiments on the pipeline aim to validate our hypothesis that interactiveness helps code generation and evaluate task difficulty. ## 4.1 Dataset Evaluation To evaluate our dataset creation method, we randomly split our dataset into train/validation/test sets. We asked two Ph.D. students in computer science to annotate each NLD-Code pair in the validation and test sets. The annotation for each NLDCode pair is a binary label, indicating if the NLD misses any key operation from the code. These annotations let us (i) study the properties of our dataset and (ii) evaluate the quality of our method for finding missing key operations using different text encoders. See Appendix §D for more details. Setting. The validation and test set consist of 100 NLD-Code pairs respectively. The Fleiss Kappa is 0.74 (0.83 for the validation and 0.66 for the test set). We randomly chose one annotator's annotation as reference labels. See Appendix §E for more analysis on annotation results. ## 4.2 Clarification Need Predictor In order to label when CQs were needed, we learned a binary classifier. This classifier predicts, for an NLD, whether it needs further clarification. The classifier was trained on the NLD-Code pairs in the training portion of the **CodeClarQA** dataset. Setting. We fine-tune baseline pretrained transformer classifiers, including BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and the encoder of BART (Lewis et al., 2020). To include models trained on NLD data, we also fine-tune the encoder of PLBART (Ahmad et al., 2021). Models are fine-tuned on the training set with NLDs as the input. We fine-tune each model for 10 epochs with learning rate 5×10−5and pick the best-performing model on accuracy. We compare the models on the test set using accuracy, precision, recall, and F1. ## 4.3 Cq Ranker Given an NLD, a CQ ranker should recommend potential key operations by asking CQs. We formulate this as a ranking task, where we select a subset of CQs from a universal set of CQs. We use all created CQs using our method mentioned in §2 as the universal set. Setting. We follow Aliannejadi et al. (2021) and fine-tune cross-encoders on all NLD-CQ pairs and experiment with models used in §4.2. Given an NLD-CQ pair, each model is trained to do binary classification. At inference time, all CQs in the universal set are paired with a given NLD and ranked by model score. Given an NLD, positive samples CQs created in the dataset. To create negative samples, we experiment with random negative sampling and BM25 (Robertson et al., 1995). The number of negative samples selected is the average number of positive samples. Each model is trained for 10 epochs with learning rate 5 × 10−5. | Model | Acc | P | R | F1 | |------------------------|-------|-------|-------|-------| | SentenceT5large(0.83) | 87.40 | 71.43 | 80.65 | 75.76 | | SentenceT5xl(0.82) | 87.40 | 68.57 | 82.76 | 75.00 | | GTRlarge(0.70) | 88.19 | 71.43 | 83.33 | 76.92 | | GTRxl(0.69) | 88.98 | 68.57 | 88.89 | 77.42 | | MiniLML12-all-v1(0.47) | 87.40 | 74.29 | 78.79 | 76.47 | | MiniLML12-all-v2(0.49) | 87.40 | 73.53 | 78.12 | 77.61 | | DistilRoBERTa(0.49) | 88.19 | 74.29 | 83.33 | 76.92 | | RoBERTalarge-all(0.46) | 89.76 | 80.00 | 82.35 | 81.16 | | MPNetbase-all-v1(0.40) | 84.25 | 80.00 | 68.29 | 73.68 | | MPNetbase-all-v2(0.42) | 86.61 | 82.86 | 72.5 | 77.33 | | MPNetbaseqa-dot(0.62) | 89.76 | 82.86 | 80.56 | 81.69 | | MPNetbaseqa-cos(0.41) | 90.55 | 82.86 | 82.86 | 82.86 | We evaluate model performance with the test set on R@*k, k* ∈ {1, 3, 5, 10}. ## 4.4 Code Generator The key hypothesis of our work is that interactive code generation systems outperform noninteractive ones. In this experiment, we conduct a proof-of-concept experiment to validate this hypothesis, assuming a perfect interactive system with perfectly asked CQs and answers. We finetune models with and without oracle CQAs from our dataset. Note that for both yes/no and multiplechoice questions, we have only positive answers in our dataset. Setting. We experiment with models mentioned by Zhou et al. (2022) for fine-tuning, including GPT-Neo-{125M, 1.3B} (Black et al., 2021), T5 (Raffel et al., 2020), and CodeT5 (Wang et al., 2021b). We include CodeParrot-{110M,1.5B} (Tunstall et al., 2022). Note that for CodeParrot110M, we use the model fine-tuned on text-to-code generation.2 Moreover, we finetune PLBART-base (Ahmad et al., 2021). We train each model for 40 epochs with learning rate 5 × 10−5. Each experiment takes up to 6 hours on a single A100 GPU. We evaluate models on BLEU score (Papineni et al., 2002), CodeBLEU score (Ren et al., 2020), and Exact Match (EM). Note that we don't include stateof-the-art execution-based metrics (Huang et al., 2022; Yin et al., 2022), since it requires us to include code context into the dataset, which leverages the difficulty of dataset construction. As we don't | Error | Freq | ER (%) | | | |-------------------|---------|----------|------|------| | Dev | Test | Dev | Test | | | Taxonomy (FP) | 3 (.33) | 3 (.50) | 7.32 | 8.57 | | Element Pair (FP) | 3 (.33) | 3 (.50) | 7.32 | 8.57 | | Argument (FN) | 4 (.57) | 4 (.67) | 4.08 | 4.35 | include code context into the dataset, code predictions are more likely to fail on e.g. variable naming, which affects the execution results but does not necessarily lead to poor code quality. ## 4.5 End-To-End Pipeline Evaluation To assess the performance of the entire pipeline (§3), we use the best-performing models for each module. We pass an NLD to the clarification need predictor. Given a positive prediction, we pass the NLD to the CQ ranker. For each NLD, we select the top-k (k ∈ {1, 3, 5}) ranked CQs by the CQ ranker. We compare them to CQs created using our approach and select overlapping CQs. Finally, we concatenate the NLD and all selected CQs with corresponding answers and feed them to the code generator. ## 5 Results 5.1 Dataset Evaluation We first evaluate the effect of different text encoders on the performance of our method for identifying missing operations. Table 2 shows the results. We achieve the best performance using MPNetbaseqa-cos text encoder. We then use our annotations to analyze the predictions of this model. Table 3 shows the results of this analysis in terms of False Positive (FP) and False Negative (FN) errors. For the sake of brevity, we report the full list in Appendix §D. The "Taxonomy" and "Element Pair" error types take up to 7.32% and 8.57% of all operations predicted as *aligned* in the validation/test sets, respectively. The rare case of FP predictions suggests that our approach to generating CQAs effectively creates CQAs for missing key operations. The *Taxonomy* error relates the differences related to the taxonomy of terms that could not be identified, taking up to about 8.57%. The *Element Pair* error relates to | Type | Category | Example NLD: We've addressed a lot of the issues holding us back when using a linear model... | |--------|--------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | FP | Taxonomy | Code Line: LCV = LassoCV() Doc: Lasso CV: Lasso linear model with iterative fitting along a regularization path. NLD: ...we concatenate the two sets while remembering the index so we can split it later again. | | FP | Element Pair | Code Line: train_features = train.drop(['SalePrice'], axis=1) Doc: drop: Make new Index with passed list of labels deleted. NLD: Transforming some numerical variables. | | FN | Argument | Code Line: all_data['MSSubClass'] = all_data['MSSubClass'].apply(str) Doc: apply: Apply a function along an axis of the Data Frame. | Table 4: Examples of predictions in identifying missing key operations. We provide true positive (TP), false positive (FP), and false negative (FN) examples. **Category** refers to the assigned category of prediction by human evaluation. Key operations and schema element pairs with the highest similarity scores are highlighted. | Model | Acc | Precision | Recall | F1 | |-------------|-------|-------------|----------|-------| | RoBERTabase | 64.94 | 65.91 | 95.29 | 77.43 | | BARTbase | 70.33 | 74.78 | 79.95 | 77.24 | | PLBARTbase | 71.05 | 75.86 | 79.34 | 77.56 | | BERTbase | 71.49 | 75.72 | 80.73 | 78.13 | Table 5: Results of clarification need prediction. All numbers are averaged across four runs. the cases where non-relevant schema elements are aligned, taking up to about 8.57%. The *Argument* error represents the alignment between arguments, taking up only 4.08%/4.35% of all negative predictions from the validation/test set. Table 4 shows examples of these errors. For the taxonomy error, our method identifies a schema element match of *linear models* but fails to predict the difference between a *lasso linear model* and a *linear model* in the taxonomy of machine learning terms. This finding shows a potential direction of future work, in which *aligned* operations might require clarification to be distinguished from operations with similar names. The example of Argument error reflects the case where a complete semantics of the operation needs both the documentation and the argument values. As we proposed to compare documentation and the NLD, we miss out on arguments that can complement the semantics of the operation. The corresponding example shows that the operation *.apply* 's semantics is incomplete without the argument str. This is reflected in the design of our method, as we use API documentation which reflects the semantics of the API call, while argument values are not documented. The *Element Pair* error example shows that (make, index, obj) from the documentation's schema is aligned with (index) from NLD's schema. | R@k(%) | | | | | | |-------------|-------|-------|-------|-------|-------| | Model | NS | 1 | 3 | 5 | 10 | | BM25 | 0.43 | 0.79 | 0.79 | 1.22 | | | RoBERTabase | BM25 | 0.09 | 0.26 | 0.73 | 1.61 | | Rand. | 5.3 | 12.98 | 18.45 | 27.02 | | | BERTbase | BM25 | 5.21 | 13.17 | 17.47 | 24.39 | | Rand. | 4.57 | 12.41 | 17.82 | 26.88 | | | PLBARTbase | BM25 | 4.41 | 10.84 | 15.85 | 23.19 | | Rand. | 10.64 | 17.91 | 22.24 | 30.15 | | | BARTbase | BM25 | 4.88 | 10.98 | 14.66 | 21.44 | | Rand | 13.62 | 21.34 | 26.19 | 34.88 | | In contrast, the key operation from the documentation should be either drop or *deleted*. ## 5.2 Clarification Need Predictor Evaluation Table 5 summarizes the results of different classifiers. Most tested models obtain relatively high performances except for RoBERTabase, which overfits the imbalanced data where 63.71% samples have positive labels, as shown by the high recall but low precision. Moreover, BERTbase has the best performance on both accuracy and F1 score. ## 5.3 Cq Ranker Evaluation We report the results of our experiments on CQ generation in Table 6. The results confirm that our design of selecting CQs is reasonable, with the best-performing model showing similar results to the "Question Relevance" task designed by Aliannejadi et al. (2021). However, we hypothesize that our task is more challenging, as the lexical overlap between the NLD and the correctly selected CQs is low due to our design of dataset creation | Method | BLEU | CodeBLEU | EM(%) | |---------------------|--------|------------|---------| | T5base | 7.88 | 14.65 | 0.88 | | T5base+CQAs | 12.43 | 19.04 | 2.09 | | GPT-Neo125M | 11.89 | 24.75 | 0.00 | | GPT-Neo125M+CQAs | 15.63 | 26.97 | 0.00 | | GPT-Neo1.3B | 13.95 | 26.57 | 0.00 | | GPT-Neo1.3B+CQAs | 19.64 | 31.05 | 0.00 | | CodeParrot110M | 12.61 | 26.42 | 0.10 | | CodeParrot110M+CQAs | 17.97 | 31.01 | 0.00 | | CodeParrot1.5B | 12.04 | 26.02 | 0.10 | | CodeParrot1.5B+CQAs | 17.77 | 30.74 | 0.00 | | PLBARTbase | 24.63 | 28.04 | 12.00 | | PLBARTbase+CQAs | 38.91 | 38.54 | 18.03 | | CodeT5base | 27.03 | 32.66 | 10.84 | | CodeT5base+CQAs | 39.13 | 38.99 | 13.93 | which looks for key operations with documentation that has no keyword matches to the NLD. This requires the model to utilize the under-specified NLD and infer the topic of the task and the user's intent before providing suggestions by asking CQs. Our hypothesis is strongly supported by the low recall of the BM25 ranker, which ranks CQs based on their lexical similarities with NLD. Moreover, we find that models trained with the BM25 negative sampler always perform lower than the ones trained with the random sampler, which also supports our hypothesis because the BM25 negative sample is expected not to select CQs that have high lexical overlap with the NLD as negative samples, while they have a higher chance of asking key operations that are "mentioned". ## 5.4 Code Generator Evaluation We train recent models using only the NLDCode pairs or with NLD, Code, and CQAs in the CodeClarQA dataset. The experimental setup aims to test our hypothesis that interactiveness helps code generation by running code generation models with "perfect" clarifications. Note that this only serves as proof of concept, as CQAs contain operation names in the target source code, leading to data leakage because the names of the API calls exist in the CQs. Table 7 shows that all models fine-tuned with CQs have substantially better performance, with the largest gap of 14.28 in BLEU, 10.5 in CodeBLEU, and 6.03 in EM reached by PLBARTbase, which supports our hypothesis that interactions help code generation. Moreover, all models pre- | Model | k | BLEU | CodeBLEU | EM(%) | | | | |--------------|-------|--------|------------|---------|-------|------|------| | ✗ | ✓ | ✗ | ✓ | ✗ | ✓ | | | | 1 | 19.51 | 19.82 | 22.43 | 22.16 | 5.24 | 6.94 | | | PLBARTbase 3 | 15.57 | 21.33 | 22.69 | 23.51 | 4.15 | 7.53 | | | 5 | 13.20 | 22.07 | 21.77 | 24.07 | 3.95 | 7.92 | | | 1 | 19.14 | 24.04 | 24.14 | 25.56 | 5.37 | 7.15 | | | CodeT5base | 3 | 14.58 | 25.45 | 25.28 | 26.63 | 4.36 | 7.51 | | 5 | 13.03 | 26.27 | 25.13 | 27.24 | 4.33 | 7.82 | | trained on code data have better performances, with CodeT5base and PLBARTbase as the bestperforming models we tested. ## 5.5 Pipeline Evaluation We use BERTbase clarification need predictor, BARTbase CQ ranker with random negative sampling, and PLBARTbase trained with *CQAs*. Given the question ranker's predictions, we select CQAs from the test sample with CQ included in the top-k (k ∈ {1, 3, 5}) list yielded by the CQ ranker. Besides concatenating selected CQs to NLDs, we also concatenate CQs without selecting them, treating them as "unanswered clarifications". We report the results of pipeline evaluation in Table 8. We find that model performances on all evaluation metrics substantially increased with more highly-ranked CQs being included and "answered" by comparing highly-ranked CQs and the CQAs in the dataset. Moreover, we also find the opposite trend for "un-answered clarifications" where models perform worse with more highly-ranked CQs included (but not "answered"). This aligns with the challenge of asking CQs mentioned in §5.3. Last but not least, we compare the pipeline inference results in Table 8 to the results in Table 7. Notably, our pipeline underperforms models trained on data with only NLDs and code. This is expected, as we use code generators that are fine-tuned on all CQAs, and the results of ranking CQs suggest that the task of asking CQs is challenging (§5.3). ## 6 Analysis Intuitively, asking CQs helps code generation because it provides more specifications, thus aligning model generations to desired and better-quality outputs. To test if this hypothesis stands under the context of our proposed task and pipeline, we analyze model generations quantitatively and qualitatively. Recall of identified missing key operations. Table 9 shows the recall of missing key operations from predictions. We find that training with clarifications includes substantially more missing key operations, while the pipeline still does not outperform models trained on data with only NLDs and code, similar to Table 8. Furthermore, we report Pearson correlation between the recall of missing key operations and code generation results (See Table 10), finding high and positive correlations which support our hypothesis that asking CQs helps code generation through clarified key operations. Case study. We examine predictions and provide an example in Table 11. We find that training with oracle CQAs leads to predictions close to the ground truth, especially on operations, with only differences at argument-level specifications, which is expected as we focus on clarifications on operations. However, the task is challenging as the top 5 ranked CQs do not include CQs in the reference CQAs, leading to the pipeline prediction including a call of confusion matrix but missing *AdaBoostClassifier* and *cross_val_predict*. ## 7 Related Work CQ generation. Aliannejadi et al. (2019, 2021) define CQs based on facets/aspects of the text input's topic, guiding annotators to write CQs based on the facet information. Eberhart and McMillan (2022) ask CQs for query refinement based on facets/aspects from existing NLDs in a dataset. Our work is distinguished from the above works as our method does not require a predefined collection of facets/aspects of the text inputs. The advantage of our method is that we collect NLDs as specifications from code. More generally, two main focuses of work on CQ generation are (i) disambiguation of terms (Xu et al., 2019; Guo et al., 2021) and (ii) providing more information (Rao and Daumé III, 2018; Guo et al., 2021; Majumder et al., 2021; Nakano et al., 2022). With the goal of disambiguation of terms, Xu et al. (2019) utilize the knowledge base to create CQs that disambiguate different entities that share the same entity names. Guo et al. (2021) included CQs of coreference resolution that disambiguate pronouns. Rao and Daumé III (2018); Guo et al. (2021) define CQs to gather information missing from textual input. Majumder et al. (2021) ask CQs on missing information from the item description but existing in similar items, defined as missing schema. Nakano et al. (2022) construct pseudo-CQs by eliminating a part of a sentence and transforming it into a CQ and a corresponding answer. Our work adopts the definition of CQs as asking for new information and is distinguished from these works by defining a new type of information as key operations for a code, which are challenging to be defined and identified if they are included in the original text query. Text-to-Code generation. Text-to-code generation was first defined through learning on the parallel corpus of NLD-Code pairs (Allamanis et al., 2015; Miceli Barone and Sennrich, 2017; Yin et al., 2018). To study programming in practice with dependency between different code snippets, Iyer et al. (2018) introduced a more challenging task that studies generation based on NLD and programming context. Agashe et al. (2019) address the task of generating code cells on Jupyter Notebook given previous markdown and code cells. | Recall | | | | | |------------|-------|-------|-------|-------| | Model | micro | macro | | | | ✗ | ✓ | ✗ | ✓ | | | 31.08 | 32.89 | | | | | +top 1 | 14.39 | 25.14 | 15.50 | 23.69 | | +top 3 | 18.69 | 30.79 | 19.33 | 29.00 | | +top 5 | 17.31 | 32.65 | 18.20 | 30.79 | | +CQAs | 92.72 | 92.23 | | | | PLBARTbase | 37.44 | 39.17 | | | | +top 1 | 15.45 | 28.27 | 17.07 | 27.51 | | +top 3 | 17.72 | 33.62 | 18.90 | 32.47 | | +top 5 | 17.32 | 35.67 | 18.60 | 34.39 | | +CQAs | 92.71 | 92.63 | | | | CodeT5base | | | | | | ρ | | | | | |------------|--------|--------|----------|--------| | Model | Recall | BLEU | CodeBLEU | EM(%) | | PLBARTbase | micro | 0.929∗ | 0.949∗ | 0.915∗ | | macro | 0.932∗ | 0.962∗ | 0.923∗ | | | CodeT5base | micro | 0.918∗ | 0.938∗ | 0.910∗ | | macro | 0.909∗ | 0.949∗ | 0.911∗ | | | NLD: Confusion Matrix for the Best Model. CQ1: Do you want to call anything related to 'model/algorithm'? If yes, which one? A1: "Yes, I want to call 'sklearn.AdaBoostClassifier' Reference CQAs CQ2: Do you want to call anything related to 'predict'? If yes, which one? A2: Yes, I want to call 'sklearn.cross_val_predict' ada = AdaBoostClassifier(n_estimators=200, random_state=0, learning_rate=0.05) result = cross_val_predict(ada, X, Y, cv=10) Ground Truth sns.heatmap(confusion_matrix(Y, result), cmap='winter', annot=True, fmt='2.0f') plt.show() y_pred = model.predict(X_test) y_pred_classes = np.argmax(y_pred, axis=1) y_true = np.argmax(y_test, axis=1) import scikitplot as skplt skplt.metrics.plot_confusion_matrix(y_true, y_pred_classes, title='Confusion Matrix for Best Model') plt.show() CodeT5base CodeT5base+top 5 print(confusion_matrix(y_test, gbc.predict(X_test))[1]) print(classification_report(y_test, gbc.predict(X_test))[1]) ada = AdaBoostClassifier() result = cross_val_predict(ada, X, Y, cv=10) CodeT5base+CQAs sns.heatmap(confusion_matrix(Y, result), cmap='winter', annot=True, fmt='2.0f') plt.show() | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Table 11: Example of predictions CodeT5base without asking CQs, with pipeline predictions, and with oracle CQAs. Missing operations and schema element pairs with the highest similarity scores are highlighted. Note that *top 5* ranked CQs do not include CQs in reference CQAs. Our work also sources NL-Code pairs collected from Jupyter Notebooks (Liu et al., 2021). We do not consider dependency between different code/markdown cells when creating CQA, because including previous cells will change the necessity of asking some CQs and make our CQA creation method less controllable. Recent research also focuses on generating code utilizing API knowledge or existing source code. Xu et al. (2020) augment data with samples created by documentation. Parvez et al. (2021) retrieve samples from the training set or an archived code database. Zhou et al. (2022) use retrievalaugmented generation approach by retrieving documentation from source code API usage. In contrast, we design the task of retrieving CQs and consider interactivity between the model and the user. ## 8 Conclusion And Future Work In this paper, we introduced a new challenge of asking clarification questions for code generation for Python, along with a method to generate a dataset to create clarification questions and answers that do not require human annotations over the whole dataset. We release our collected dataset CodeClarQA, which consists of clarification questions and answers on API usage. We further proposed a pipeline system implemented by recent text and code encoders to evaluate model performances on this challenge. Our experimental results confirm that clarification questions and answers are strong information-gathering methods for better generation of code while deciding when to ask clarification questions and what questions to ask remains challenging. Future works include improving clarification questions for higher user engagement and question diversity; studying the lack of user intent completeness beyond the level of operations, e.g., lack of user intent completeness in arguments; and introducing conversational relations between clarification questions. ## Limitations Our method primarily focuses on operation-level specifications, while there are real-world use cases with other specifications. Moreover, our method of creating CQAs can only be scaled to all Python codes that involve heavy API usage. However, if a similar code knowledge graph generator of another language is developed, our method can also be scaled to the corresponding language. Our method is also limited in identifying specifications missing from the NLD, suggesting potential future work to create CQs about specifications "mentioned but not specified enough" in the NLD. ## Ethical Concerns One concern about the data is the issue of copyright. Liu et al. (2021) have checked the data policy of all 20 Kaggle competitions, in which none has copyright issues. Furthermore, they have contacted Kaggle's administrator and have made sure that the dataset collection procedure did not violate the platform's policy. We also check the license of open-source APIs when collecting documentation and make sure that there is no concern about copyright issues. Another concern about the data is that it might include privacy data. Again, we think that our data has a minimum risk of leakage of data with privacy concerns since we only collect data from the 20 Kaggle competitions where there is no concern of privacy data. The API documentation also has the minimum risk of containing data with privacy concerns. ## Acknowledgements We thank Xuye Liu and Dakuo Wang for providing the original dataset and source code for dataset preprocessing. We thank Nico Daheim, Ben Peters, Mert Tiftikci, Kexin Wang, Imbesat Hassan Rizvi, Dominic Petrak for their valuable feedback and suggestions on a draft of this paper. We thank the anonymous reviewers for their detailed and insightful comments. This work has been funded by the LOEWE Distinguished Chair "Ubiquitous Knowledge Processing" (LOEWE initiative, Hesse, Germany), by EU's Horizon Europe Research and Innovation Actions (UTTER, contract 101070631), and by the Fundação para a Ciência e Tecnologia through contract UIDB/50008/2020. ## References Ibrahim Abdelaziz, Julian Dolby, Jamie McCusker, and Kavitha Srinivas. 2021. A toolkit for generating code knowledge graphs. In Proceedings of the 11th on Knowledge Capture Conference, K-CAP '21, page 137–144, New York, NY, USA. Association for Computing Machinery. Ibrahim Abdelaziz, Julian Dolby, Jamie McCusker, and Kavitha Srinivas. 2022. Can machines read coding manuals yet? - a benchmark for building better language models for code understanding. In *Proceedings of the AAAI Conference on Artificial Intelligence* (AAAI 2022). Rajas Agashe, Srinivasan Iyer, and Luke Zettlemoyer. 2019. JuICe: A large scale distantly supervised dataset for open domain context-based code generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5436–5446, Hong Kong, China. Association for Computational Linguistics. Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2655–2668, Online. Association for Computational Linguistics. Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeff Dalton, and Mikhail Burtsev. 2021. Building and evaluating open-domain dialogue corpora with clarifying questions. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 4473–4484, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, and W. Bruce Croft. 2019. Asking clarifying questions in open-domain information-seeking conversations. In *Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval*, SIGIR'19, page 475–484, New York, NY, USA. Association for Computing Machinery. Miltos Allamanis, Daniel Tarlow, Andrew Gordon, and Yi Wei. 2015. Bimodal modelling of source code and natural language. In *Proceedings of the 32nd International Conference on Machine Learning*, volume 37 of *Proceedings of Machine Learning Research*, pages 2123–2132, Lille, France. PMLR. Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with MeshTensorflow. If you use this software, please cite it using these metadata. Ricardo Campos, Vítor Mangaravite, Arian Pasquali, Alípio Jorge, Célia Nunes, and Adam Jatowt. 2020. Yake! keyword extraction from single documents using multiple local features. *Inf. Sci.*, 509(C):257–289. Shobhit Chaurasia and Raymond J. Mooney. 2017. Dialog for language to code. In *Proceedings of the* Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 175–180, Taipei, Taiwan. Asian Federation of Natural Language Processing. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, Igor Babuschkin, S. Arun Balaji, Shantanu Jain, Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew M. Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. *arXiv* preprint arXiv:2107.03374. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Zachary Eberhart and Collin McMillan. 2022. Generating clarifying questions for query refinement in source code search. *2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)*, pages 140–151. Meiqi Guo, Mingda Zhang, Siva Reddy, and Malihe Alikhani. 2021. Abg-coQA: Clarifying ambiguity in conversational question answering. In *3rd Conference on Automated Knowledge Base Construction*. Junjie Huang, Chenglong Wang, Jipeng Zhang, Cong Yan, Haotian Cui, Jeevana Priya Inala, Colin Clement, and Nan Duan. 2022. Execution-based evaluation for data science code generation models. In Proceedings of the Fourth Workshop on Data Science with Humanin-the-Loop (Language Advances), pages 28–36, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1643–1652, Brussels, Belgium. Association for Computational Linguistics. Shuvendu K. Lahiri, Aaditya Naik, Georgios Sakkas, Piali Choudhury, Curtis von Veh, Madanlal Musuvathi, Jeevana Priya Inala, Chenglong Wang, and Jianfeng Gao. 2022. Interactive code generation via test-driven user-intent formalization. *arXiv preprint* arXiv:2208.05950. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, PoSen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. 2022. Competition-level code generation with alphacode. *arXiv preprint arXiv:2203.07814*. Xuye Liu, Dakuo Wang, April Wang, Yufang Hou, and Lingfei Wu. 2021. HAConvGNN: Hierarchical attention based convolutional graph neural network for code documentation generation in Jupyter notebooks. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 4473–4485, Punta Cana, Dominican Republic. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Bodhisattwa Prasad Majumder, Sudha Rao, Michel Galley, and Julian McAuley. 2021. Ask what's missing and what's useful: Improving clarification question generation using global knowledge. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4300–4312, Online. Association for Computational Linguistics. Antonio Valerio Miceli Barone and Rico Sennrich. 2017. A parallel corpus of python functions and documentation strings for automated code documentation and code generation. In *Proceedings of the Eighth International Joint Conference on Natural Language* Processing (Volume 2: Short Papers), pages 314– 319, Taipei, Taiwan. Asian Federation of Natural Language Processing. Yuya Nakano, Seiya Kawano, Koichiro Yoshino, Katsuhito Sudoh, and Satoshi Nakamura. 2022. Pseudo ambiguous and clarifying questions based on sentence structures toward clarifying question answering system. In *Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering*, pages 31–40, Dublin, Ireland. Association for Computational Linguistics. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. Codegen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Md Rizwan Parvez, Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Retrieval augmented code generation and summarization. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2719–2734, Punta Cana, Dominican Republic. Association for Computational Linguistics. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations, pages 101–108, Online. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Sudha Rao and Hal Daumé III. 2018. Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2737–2746, Melbourne, Australia. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, M. Zhou, Ambrosio Blanco, and Shuai Ma. 2020. Codebleu: a method for automatic evaluation of code synthesis. arXiv preprint arXiv:2009.1029. Stephen Robertson, S. Walker, S. Jones, M. M. HancockBeaulieu, and M. Gatford. 1995. Okapi at trec-3. In *Overview of the Third Text REtrieval Conference* (TREC-3), pages 109–126. Gaithersburg, MD: NIST. L. Tunstall, L. von Werra, and T. Wolf. 2022. Natural Language Processing with Transformers: Building Language Applications with Hugging Face. O'Reilly Media. April Yi Wang, Dakuo Wang, Jaimie Drozdal, Xuye Liu, Soya Park, Steve Oney, and Christopher Brooks. 2021a. What makes a well-documented notebook? a case study of data scientists' documentation practices in kaggle. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, CHI EA '21, New York, NY, USA. Association for Computing Machinery. Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021b. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 8696–8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Frank F. Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. 2022. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, MAPS 2022, page 1–10, New York, NY, USA. Association for Computing Machinery. Frank F. Xu, Zhengbao Jiang, Pengcheng Yin, Bogdan Vasilescu, and Graham Neubig. 2020. Incorporating external knowledge through pre-training for natural language to code generation. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 6045–6052, Online. Association for Computational Linguistics. Jingjing Xu, Yuechen Wang, Duyu Tang, Nan Duan, Pengcheng Yang, Qi Zeng, Ming Zhou, and Xu Sun. 2019. Asking clarification questions in knowledgebased question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1618–1629, Hong Kong, China. Association for Computational Linguistics. Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from stack overflow. In *International Conference on Mining Software Repositories*, MSR, pages 476–486. ACM. Pengcheng Yin, Wen-Ding Li, Kefan Xiao, Abhishek Rao, Yeming Wen, Kensen Shi, Joshua Howland, Paige Bailey, Michele Catasta, Henryk Michalewski, Alex Polozov, and Charles Sutton. 2022. Natural language to code generation in interactive data science notebooks. *arXiv preprint arXiv:2212.09248*. Shuyan Zhou, Uri Alon, Frank F Xu, Zhengbao Jiang, and Graham Neubig. 2022. Docprompting: Generating code by retrieving the docs. arXiv preprint arXiv:2207.05987. ## Appendix A Procedure Of Identifying Key Operation. We present our procedure for identifying key operations in Algorithm 1 as a detailed description of §2.1. Given an NLD-Code pair and all source codes from its corresponding notebook, our method first extracts operations for the entire notebook and selects operations corresponding to the code from the NLD-Code pair. We then identify key operations by keeping (i) operations from the same API submodule that have the shortest data flow path and (ii) operations that correspond to the last operation within the same line. Note that we also filter out operations that (i) are print functions, (ii) are numerical operations, and (iii) have no corresponding documentation. ## B Preliminary Experiments On Identifying Missing Key Operations We also considered code/documentation-trained models for computing similarities preliminarily. We experimented with RFLHU-BERTOverflow (Abdelaziz et al., 2022), which is trained on documentation-StackOverflowPosts pairs and performs similarly to the publicly unavailable RFLHUCodeBERT in Abdelaziz et al. (2022). We obtained 75.59, 57.14, 55.56, and 56.34 in accuracy, precision, recall, and F1. This is substantially lower than all the results from Table 2. ## C Example Of Identifying If An Key Operation Is Missing We present an example of identifying if a key operation is missing figure 3. Given the key operations we have extracted (Figure 1b), we identify if a key operation is missing by comparing all its schema elements with schema elements of the NLD. ## D Examples Of Error Types We analyzed predictions of MPNetbaseqa-cos text encoder using our annotations. Table 12 shows examples of all types of FP and FN predictions we categorize. We also present in Table 13 the statistics of all FP and FN predictions. ## E Annotation We asked two Ph.D. students to annotate 200 NLDCode pairs, respectively. It takes a volunteer about Algorithm 1 Procedure of Extracting Key Operations ![13_image_0.png](13_image_0.png) Logistic Regression. [logistic regression, logistic, regression] ![13_image_1.png](13_image_1.png) 2 hours to annotate. We show the guide in figure 4 and an example of annotation figure 5. ## Discrepancy Of Annotation Between Development and test set. We noticed the discrepancy of Fleiss Kappa between the development and test set. We then asked annotators to provide reasons for different annotations. As a result, subjectivity is the main reason for differences between annotations. An example is shown in figure 5, where fitting the model is not directly mentioned yet can be inferred from the NLD. We also find that the test set contains more examples like this one, leading to a discrepancy of Fleiss Kappa between the development and the test set. We accept this difference as subjectivity is part of deciding *whether an operation is* mentioned. ## F Examples Of Codeclarqa Dataset We present examples from our dataset in Table 14. | Type of Error | Example | Explanation | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| | NLD: We've addressed a lot of the issues holding us back when using a linear model... Code Line: LCV = LassoCV() Doc: Lasso CV: Lasso linear model with iterative fitting along a regularization path. | Lasso linear model should be distinguished from linear model. | | | Taxonomy (FP) | NLD: ...we concatenate the two sets while remembering the index so we can split it later again. | | | Element Pair (FP) | Code Line: train_features = train.drop(['SalePrice'], axis=1) Doc: drop: Make new Index with passed list of labels deleted. | Method identify drop as non-missing only by seeing index in both the NLD and the documentation. | | NLD: Categorical Features. Let us look at the missing values in categorical features in detail. Code Line: categorical_features.isnull().sum().sort_values(ascending=False) Doc: sort values: Sort the Categorical by category value returning a new Only isnull, sum, sort_values together refer to look at the missing values in categorical features. | | | | Multiple Operations (FP) | NLD: The variable importances from the boosted model on the reduced dataset. Code Line: sns.set_style('darkgrid') Doc: set style: Set the parameters that control the general style of the plots. | Method yields wrong prediction (positive) by comparing dataset and (set, plots, obj). | | Model (FP) | NLD: Transforming some numerical variables. Code Line: all_data['MSSubClass'] = | | | Argument (FN) | apply(str) corresponds to the | | | all_data['MSSubClass'].apply(str) | NLD, not apply itself. | | | Doc: apply: Apply a function along an axis of the Data Frame. NLD: The ' Age', ' Cabin', ' Embarked', ' Fare' columns have missing values. Schema: [embarked, missing, columns, age, cabin, fare] Code Line: full['Embarked'].fillna('S', inplace=True) Doc: fillna: Fill NA/ NaN values using the specified method. Schema: [(fill, fillna, nsubj), (fill, method, obj)] | | | | Element Missing (FN) | Method fails to extract NA and NaN and compare them to missing. | | | Paraphrase (FN) | NLD: Train again for all data and submit. Code Line: rfc .fit(X_train_all, y_train_all) Documentation: fit: Fit the calibrated model. | Model cannot yield high similarity scores between train and fit. | | NLD: GBDT: . Code Line: gbdt = GradientBoostingClassifier(...) | | | | Abbreviation (FN) | Documentation: Gradient Boosting Classifier: Gradient Boosting for classification. | Model cannot yield high similarity scores between gbdt and Gradient Boosting Classifier. | Table 12: Examples of all types of human evaluated errors in the human-annotated validation and test sets. We provide true positive (TP), false positive (FP), and false negative (FN) examples. Category refers to the assigned category of prediction by human evaluation. Key operations and schema element pairs with the highest similarity scores are highlighted. ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) ![15_image_2.png](15_image_2.png) | Error Type | Freq | ER (%) | | | |--------------------------|---------|----------|------|------| | Dev | Test | Dev | Test | | | Taxonomy (FP) | 3 (.33) | 3 (.50) | 7.32 | 8.57 | | Element Pair (FP) | 3 (.33) | 3 (.50) | 7.32 | 8.57 | | Multiple Operations (FP) | 2 (.22) | 0 (.00) | 4.87 | 0.00 | | Model (FP) | 1 (.11) | 0 (.00) | 2.43 | 0.00 | | Argument (FN) | 4 (.57) | 4 (.67) | 4.08 | 4.35 | | Element Missing (FN) | 1 (.14) | 1 (.17) | 1.02 | 1.09 | | Paraphrase (FN) | 1 (.14) | 1 (.17) | 1.02 | 1.09 | | Abbreviation (FN) | 1 (.14) | 0 (.00) | 1.02 | 0.00 | NLD: So, 18 categorical features and 10 numerical features to clean. We start with the numerical features, first thing to do is have a look at them to learn more about their distribution and decide how to clean them. > 2.2 Numerical features. CQ1: Do you want to call 'pandas.head' documented as 'Return the first 'n' rows.'?" A1: Yes. CQ2: Do you want to call 'pandas.fillna' documented as 'Fill NA/ NaN values using the specified method.'? A2: Yes. Code: NAnum.head() c['MasVnrArea'] = c.MasVnrArea.fillna(0) c['LotFrontage'] = c.LotFrontage.fillna(c.LotFrontage.median()) c['GarageYrBlt'] = c['GarageYrBlt'].fillna(1980) NLD: There are many libraries out there that support one-hot encoding but the simplest one is using method. This function is named this way because it creates dummy/indicator variables. No CQAs. Code: Train_Master = pd.get_dummies(Train_Master, columns=['Sex', 'Pclass', 'Embarked'], drop_first=True) Train_Master.drop(['PassengerId', 'Name', 'Ticket'], axis=1, inplace=True) test_ids = Test_Master.loc[:, 'PassengerId'] Test_Master = pd.get_dummies(Test_Master, columns=['Sex', 'Embarked', 'Pclass'], drop_first=True) Test_Master.drop(['PassengerId', 'Name', 'Ticket'], axis=1, inplace=True) Train_Master.head() NLD: Need to look at the y_log relationship since that is what we will be predicting in the model. CQ1: Do you want to call anything related to 'plot'? If yes, which one? A1: Yes, I want to call 'matplotlib.plot'. CQ2: Do you want to call 'matplotlib.scatter' documented as 'A scatter plot of *y* vs. *x* with varying marker size and/or color.'? A2: Yes. Code: NAnum.head() y = np.exp(11.1618915) * np.exp(0.000570762509 * x_data) plt.plot(x_data, np.log(y_data), 'o') plt.scatter(x_data, np.log(y), c='red') NLD: Ensembling is a way to increase performance of a model by combining several simple models to create a single powerful model. I will use voting method in this kernal. CQ1: Do you want to call anything related to 'model/algorithm'? If yes, which one? A1: Yes, I want to call 'sklearn.RandomForestClassifier'. CQ2: Do you want to call anything related to 'model/algorithm'? If yes, which one? A2: Yes, I want to call 'sklearn.LogisticRegression'. CQ3: Do you want to call anything related to 'model/algorithm'? If yes, which one? A3: Yes, I want to call 'sklearn.DecisionTreeClassifier'. CQ4: Do you want to call anything related to 'model/algorithm'? If yes, which one? A4: Yes, I want to call 'sklearn.GaussianNB'. CQ5: Do you want to call anything related to 'score'? If yes, which one? A5: Yes, I want to call 'sklearn.cross_val_score'. Code: from sklearn.ensemble import VotingClassifier estimators = [('RFor', RandomForestClassifier(n_estimators=100, random_state=0)), ('LR', LogisticRegression(C=0.05, solver='liblinear')), ('DT', DecisionTreeClassifier()), ('NB', GaussianNB())] ensemble = VotingClassifier(estimators=estimators, voting='soft') ensemble.fit(train_X, train_Y.values.ravel()) print('The accuracy for ensembled model is:', ensemble.score(test_X, test_Y)) cross = cross_val_score(ensemble, X, Y, cv=10, scoring='accuracy') print('The cross validated score is', cross.mean()) Table 14: Examples of the CodeClarQA dataset. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? No section number, "limitations" section (page 8 and 9) ✓ A2. Did you discuss any potential risks of your work? Section 5.1 ✓ A3. Do the abstract and introduction summarize the paper's main claims? No section number, "abstract" (page 1) and "introduction" (page 1,2) ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2.1, 2.4 ✓ B1. Did you cite the creators of artifacts you used? Section 2.1, 2.4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 2.1, 2.4, "Ethical Concerns" section (page 9) ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 2.1, 2.4 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No section number, "Ethical Concerns" section (page 9) ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 2.1, 2.4, "Ethical Concerns" section (page 9) ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 2.4 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 4.1 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix D ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 4.1 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 4.1 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. The only requirement for annotators is that they are experts in coding python. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? There is no need to do it. The only requirement is that they are experts in coding python.
ju-etal-2023-compare
A Compare-and-contrast Multistage Pipeline for Uncovering Financial Signals in Financial Reports
https://aclanthology.org/2023.acl-long.800
In this paper, we address the challenge of discovering financial signals in narrative financial reports. As these documents are often lengthy and tend to blend routine information with new information, it is challenging for professionals to discern critical financial signals. To this end, we leverage the inherent nature of the year-to-year structure of reports to define a novel signal-highlighting task; more importantly, we propose a compare-and-contrast multistage pipeline that recognizes different relationships between the reports and locates relevant rationales for these relationships. We also create and publicly release a human-annotated dataset for our task. Our experiments on the dataset validate the effectiveness of our pipeline, and we provide detailed analyses and ablation studies to support our findings.
## A Compare-And-Contrast Multistage Pipeline For Uncovering Financial Signals In Financial Reports Jia-Huei Ju1, Yu-Shiang Huang1,2, Cheng-Wei Lin1**, Che Lin**2,3,4, and **Chuan-Ju Wang**1,2 1Research Center for Information Technology Innovation, Academia Sinica, 2Graduate Program of Data Science, National Taiwan University and Academia Sinica, 3Graduate Institute of Communication Engineering, National Taiwan University, 4Department of Electrical Engineering, National Taiwan University, {jhjoo, yushuang, lcw.1997}@citi.sinica.edu.tw, chelin@ntu.edu.tw, cjwang@citi.sinica.edu.tw ## Abstract In this paper, we address the challenge of discovering financial signals in narrative financial reports. As these documents are often lengthy and tend to blend routine information with new information, it is challenging for professionals to discern critical financial signals. To this end, we leverage the inherent nature of the year-to-year structure of reports to define a novel signal-highlighting task; more importantly, we propose a compare-andcontrast multistage pipeline that recognizes different relationships between the reports and locates relevant rationales for these relationships. We also create and publicly release a humanannotated dataset for our task. Our experiments on the dataset validate the effectiveness of our pipeline, and we provide detailed analyses and ablation studies to support our findings. ## 1 Introduction With the rapid growth of information, many tasks in the field of natural language processing (NLP) involve streamlining information comprehension. One such task is summarization, which selects a subset of sentences or generates new content that best represents the given document (Hermann et al., 2015; See et al., 2017; Cohan et al., 2018). This task helps humans save time and effort by identifying important information in a text. In the finance context, comprehending regulatory narrative reports is a classic example of efficiently mining signals from a large amount of text. As these reports often contain rich information concerning specific financial entities, discovering valuable insights is crucial for academia and the finance industry. Much research has shown that textual features from financial reports contain valuable financial signals about future firm performance and market reactions (e.g., Badertscher et al., 2018; Ertugrul et al., 2017; You and Zhang, 2009). However, authorities such as the Securities and Exchange Commission (SEC) require that companies provide comprehensive and detailed information about their current status in these reports, which often contain much unimportant and already-known information. For example, the token overlap ratio between annual 10-K reports of the same company between adjacent years is often high,1 making it a challenging and tedious task to acquire important signals in new reports (termed as the *overlapping characteristic* hereafter). Recent advances in NLP technology have included attempts to efficiently and effectively comprehend lengthy financial documents. One approach to address this problem is through summarization (e.g., Zmandar et al., 2021b; Orzhenovskii, 2021; Gokhan et al., 2021). Other approaches additionally leverage numerical metrics, such as stock return volatility and abnormal trading volumes, to locate essential financial signals in reports (e.g., Kogan et al., 2009; Tsai and Wang, 2017; Rekabsaz et al., 2017; Agrawal et al., 2021; Lin et al., 2021). However, these approaches often require high-quality human annotation or suitable financial measures, which poses significant limitations in practical scenarios. In this study, we approach financial report comprehension from a novel perspective by leveraging the intrinsic *year-to-year* characteristic of reports (i.e., the overlapping characteristics). Specifically, for a particular company, we use the document published in the previous year as an information anchor (i.e., the reference) to construct a year-to-year structure and locate important financial signals in the report of the subsequent year (i.e., the target). This inherent structure enables us to mine financial signals in a compare-and-contrast self-supervised manner, compared to existing supervised approaches. Based on the year-to-year structure, we propose 1The overlap ratio calculated from Item 7 of the reports of the 3,849 companies from 2011 to 2018 (see FINAL dataset in Section 4) is around 0.826 on average. 14307 a *compare-and-contrast multistage pipeline* to effectively locate financial signals in reports. We first identify a few types of relationships between reference and target financial reports at the segment level. Then, using these recognized relationships, we present a novel financial signal-highlighting task together with a domain-adaptive highlighting model. The goal of this task is to identify the rationales, represented by the importance of certain words, for a specific pair of year-to-year segments. Therefore, the words with high importance are deemed to be crucial financial signals in these reports. For experiments, we present a synthetic dataset consisting of 30,400 reference-to-target segment pairs for financial signal highlighting.2 Experimental results validate the effectiveness of the proposed pipeline; detailed analyses and ablation studies are also provided. ## 2 Problem Definition The year-to-year nature of financial reports allows us to take advantage of the differences between a company's documents in consecutive years. These differences may reveal complex but insightful relationships within a pair of documents. To better understand these relationships, we investigate them through rationales (represented by the word importance), which are considered essential signals in financial reports. ## 2.1 Reference-To-Target Structure Formally, for each company, Dℓis a set containing all segments in its financial report at year ℓ, where each element d ∈ Dℓ refers to a single segment. While we regard a focal company's financial report at year ℓ, Dℓ, as the *target* document, we view the same company's report at year ℓ − 1, Dℓ−1, as the *reference* document. Given the annual nature (i.e., the reference-to-target structure) of financial reports, we further break down the document-todocument relationship between Dℓ and Dℓ−1 into enumerated segment-to-segment relationships. We denote the set of enumerated segment pairs as T¯. 3 However, as T¯ includes all pairs of segments enumerated from Dℓ, and Dℓ−1 (i.e., |Dℓ||Dℓ−1| pairs), intuitively, most segment pairs in T¯ have no interesting relationship. Hence, we reduce the 2The dataset and codes are available at https:// github.com/cnclabs/codes.fin.highlight. 3Note that each (Dℓ, Dℓ−1) pair corresponds to a set of segment pairs T¯; to simplify the notation, we do not use the subscript for T¯ to characterize the different sets. | (a) Segment pairs in T β | | |----------------------------|--------------------------------------------------------------------| | Our most critical accounting policies relate to revenue recognition, inventory, pension and other postretirement benefit costs, goodwill, ... | | | 2017 (ref.) | Our most critical accounting policies relate to revenue recognition, inventory, pension and other postretirement benefit costs, goodwill, ... | | 2018 (target) | (b) Segment pairs in T α | | 2017 | Net sales in the Americas increased 5%, or $201.8 | | (ref.) | million, to $4,302.9 million. | | 2018 | Net sales in the Americas decreased 1%, or $58.5 | | (target) | million, to $4,513.8 million. Table 1: Segment pair classification | set T¯ to T by removing irrelevant segment pairs based on their syntactical similarities. Specifically, for each target segment t ∈ Dℓ, we calculate the ROUGE-2 (Lin, 2004) scores between the target segment t and all reference segments r ∈ Dℓ−1 and sort the reference segments according to their scores in descending order as S¯(t) = r1, r2*, . . . , r*|Dℓ−1| . 4 With S¯(t), we then discard reference segments that fall behind the largest ROUGE-2 difference out of all possible ROUGE-2 differences, resulting in a truncated set S(t). 5 Note that the difference is calculated between the two consecutive ROUGE-2 scores in S¯(t). Finally, with S(t), the reduced segment pair set is T = {(r, t)|(r, t) ∈ T ∧ ¯ r ∈ S(t)}. To locate meaningful financial signals revealed by segment pair differences, we further classify each pair (r, t) ∈ T into the following two sets: 1. T βcontains reference-to-target segment pairs with largely similar meanings (see Table 1(a)). Generally, there is no additionally noteworthy content in target segment t compared to reference segment r. 2. T α = *T \ T* βcontains segment pairs with dissimilar meanings (see Table 1(b)). Pairs in T α are further classified into two types based on their syntactic and semantic similarity, as discussed in Section 3.2. Note that all the aforementioned terminologies and notations can also be found in Figure 1; they will be used throughout the following sections of this paper. ![2_image_0.png](2_image_0.png) ## 2.2 Highlighting Task We consider pairs in T α as the pairs of interest and provide rationales of underlying pairwise relationships by predicting the word importance for each segment pair (r, t) ∈ T α as $$(t|r),$$ ## R = ∆ Pf (T|R), (1) where R indicates the word importance of a target segment t conditioned on reference segment r, and the highlighting model is denoted as f (detailed in Sections 3.3 and 3.4). ## 3 Proposed Pipeline Here we describe the proposed multistage pipeline for discovering the rationale behind the referenceto-target structure in financial reports, as illustrated in Figure 1. ## 3.1 S0**: Document Segmentation** Financial reports are multimodal, often covering multiple aspects and topics; each aspect or topic usually uses one to three consecutive sentences to convey its meaning. Therefore, instead of considering sentences as the basic unit of text, we here regard *uni-modal segments* as the smallest unit for financial documents. We first use spaCy API for sentence segmentation.6 Then, we utilize the finetuned cross-segment BERT (Lukasik et al., 2020) to obtain coherent uni-modal segments. Note that some studies show that breaking a document into uni-modal segments benefits downstream applications (Shtekh et al., 2018; Qiu et al., 2022; Chivers et al., 2022). 6https://spacy.io/api/sentencizer ## 3.2 S1**: Relation Recognition** In this stage, a systematic procedure manages relation types T βand T α with semantic and syntactic similarity. Specifically, we use two functions, ROUGE-2 and Sentence-BERT (Reimers and Gurevych, 2019) cosine similarity,7to assess the syntactic and semantic similarity between each reference-to-target pair (r, t) ∈ T . 8 The scores for the syntactic and semantic similarity are denoted as ϕsyn(*r, t*) and ϕsem(*r, t*), respectively.9 We empirically design a rule-based procedure and classify each segment pair into three types. 1. *Insignificant* relations (T β) correspond to uninformative segment pairs with highly similar syntactic and semantic meanings between target and reference segment (i.e., ϕsyn > ϵsyn and ϕsem > ϵsem). 2. *Revised* relations (T α 1 ) correspond to segment pairs that differ in some words only but disclose quite different meanings, resulting in a high ϕsyn(*r, t*) but a relatively low ϕsem(*r, t*) (i.e., ϕsyn > ϵsyn and ϕsem < ϵsem). 3. *Mismatched* relations (T α 2 ) correspond to segment pair meanings that are to some extent mutually exclusive, resulting in a low ϕsyn(*r, t*) (i.e., ϕsyn < ϵsyn). The procedure and the setting of the two thresholds (ϵsem and ϵsyn) are also summarized in Figure 4 in Appendix C. 7We derive segment embeddings using average pooling. 8Note that before the following procedure, we first reduce the set T¯ to T by removing irrelevant segment pairs (see Section 2.1). 9Note that the scoring functions are not limited to these two but can be replaced with other suitable functions. ## 3.3 S2**: Out-Of-Domain Fine-Tuning** Here we pinpoint financial signals for segment pairs in T α = T α 1 ∪ T α 2 . Specifically, for each segment pair (r, t) ∈ T α, we discover rationales through predicted word importance in target segment t, where the rationales are inferred conditioned on reference segment r (see Eq. (1)). Binary token classification To accomplish this, we cast the word importance prediction as supervised binary token classification. First, we leverage the pre-trained BERT (Devlin et al., 2019) model to construct contextualized reference-to-target pair representations, where each pair of interest constitutes an input with special tokens as $${\bf h}_{(r,t)}={\mathrm{BERT}}(\,[\,\texttt{CLS}\,]\,r\,[\,\texttt{SEP}\,]\,t),$$ where h(r,t) ∈ Rn×dis the contextualized token representation of the pair, d is the dimension of each token representation, and n is the number of tokens in segment pair (*r, t*). Second, on top of the token representation h(r,t), we add a highlighting model f(·) (an MLP layer) with softmax function. The resultant conditional word importance P j f (t|r) for the j-th word in target segment t is $$P_{f}^{j}(t|r)={\frac{\exp\left(\left(f\left(\mathbf{h}_{(r,t)}^{j}\right)[1]\right)/\tau\right)\right.}{\sum_{i=1}^{2}\exp\left(\left.\left(f\left(\mathbf{h}_{(r,t)}^{j}\right)[i]\right)/\tau\right)\right.}},\tag{2}$$ where h j (r,t) denotes the token representation of the j-th word in target segment t (i.e., the j-th row vector of h(r,t)), f(·) : Rd → R2, and τ is a hyperparameter that controls the probability distribution. Signal highlighting warm-up As we view signal highlighting as binary token classification, we first fine-tune the model f(·) on e-SNLI (Camburu et al., 2018), an external human-annotated dataset, to obtain a zero-shot model. Note that e-SNLI was compiled for explanation generation with human-annotated rationales to distinguish relations of aligned sentence pairs (r′, t′) (i.e., premise and hypothesis) in natural language inference. We then treat the annotated words as the ground truth for the premise-to-hypothesis relation,10 which is similar to our reference-to-target structure. Formally, we adopt the binary cross-entropy objective for each token in hypothesis t′to fine-tune the BERT token representations and the highlighting model f(·) as $$\begin{array}{c}{{{\mathcal L}_{\mathrm{CE}}=\sum_{j}-\left(Y_{t^{\prime}}^{j}\log P_{f}^{j}\left(t^{\prime}|r^{\prime}\right)\right)}}\\ {{{}}}\\ {{{}}}\\ {{{}}}\\ {{{}}}\end{array}$$ where Yt′ is a vector in which each element Y j t′ indicates the binary label of word importance for the j-th word in hypothesis t′. For instance, Y j t′ = 1 implies the j-th word in t′is annotated as an important word conditioned on the given premise sentence r′. We thus construct the out-of-domain zero-shot highlighting model by fine-tuning on eSNLI, which is regarded as a baseline to proceed with the following financial domain adaptation (see Figure 1). ## 3.4 S2+**: In-Domain Fine-Tuning** Generally, for applications, particularly in niche domains like finance, models with a zero-shot setting may not be effective enough. Also, several studies show that language models exhibit poor performance under domain shift scenarios (Ben-David et al., 2006; Han and Eisenstein, 2019; Gururangan et al., 2020; Li et al., 2022). We account for this by equipping the proposed pipeline with an extra in-domain fine-tuning stage to enable our highlighting model to adapt properly to the financial domain. Specifically, we construct a domain-adaptive financial signal highlighting model f+(·) with the following learning strategies: (1) pseudo-labeling with revised segment pairs in T α 1 , and (2) further fine-tuning with soft labels. Pseudo-labeling with revised segment pairs We introduce a simple yet effective pseudo-labeling approach that uses revised segment pairs (i.e., T α 1 ) collected from stage S1 (see Section 3.2). Recall that these segment pairs differ in some words only but have quite different meanings. Given such a property, we establish a heuristic labeling approach for pseudo-labels of financial signals. Intuitively, we treat all *revised* words in target segment t as important words and mark them as positive, and randomly sample other words as negative ones.11 Further fine-tuning with soft labels To compensate for deficiencies in such assertive binary pseudo-labels, we use soft labeling to make the 11We set the number of negative labels to three times that of the positive ones. 10Here, we specifically select *contradiction* pairs in e-SNLI as this relationship is closer to our goal than the other two. token representations more generalized. Initially, as illustrated in Figure 1, we leverage the zeroshot highlighting model f(·) learned at stage S2 to calculate the approximate word importance of the revised segment pairs, the results of which are regarded as soft labels compared to the assertive pseudo-binary labels. We then construct the softlabeling objective LSL as $${\mathcal{L}}_{\mathrm{SL}}=\gamma{\mathcal{L}}_{\mathrm{CE}}+(1-\gamma){\mathcal{L}}_{\mathrm{KL}},$$ where $${\mathcal{L}}_{\mathrm{KL}}=\sum_{j}-\mathrm{KL}\left(P_{f}^{j}(t|r)\Big\|\,P_{f^{+}}^{j}(t|r)\right)\quad\quad(4)$$ and γ is a hyperparameter that controls the impact of soft labeling. In Eqs. (3) and (4), KL(·) denotes Kullback–Leibler (KL) divergence, and Pf (t|r) and Pf+ (t|r) indicate the estimated probability distributions predicted by f(·) and f+(·), respectively. Finally, we fine-tune the highlighting model f+(·) with the pseudo-labels annotated on segments in T α 1by optimizing LSL in Eq. (3). Note that we not only utilize probabilities Pf (t|r) as our training targets (i.e., soft labels) for LKL but we also adopt the warm-start token representations and highlighting layer f(·) as the initial checkpoint for fine-tuning f+(·). In addition, we discover that hyperparameters τ and γ affect the performance significantly. We discuss the hyperparameter search in Appendix B. ## 4 The Final Dataset We constructed FINAL (FINancial-ALpha), a financial signal highlighting dataset, consisting of 30,400 reference-to-target segment pairs in ∈ T α. ## 4.1 Financial 10-K Corpus Preprocessing We used Form 10-K filings collected from the Software Repository for Accounting and Finance,12 where a Form 10-K is an annual report required by the U.S. SEC. Specifically, we used 10-K filings from 2011 to 2018, which comprise 63,336 filings from 12,960 public companies. To make the best use of the year-to-year information, we discarded companies for which the reports in some years were missing during the period; 3,849 companies (3,849×8=30,792 reports total) remained after this filtering. We then randomly sampled 200 companies from the 3,849 companies with their 12https://sraf.nd.edu/sec-edgar-data/ | (a) FINAL dataset | | | | | | |-----------------------------|---------------------|----------|----------|----------|------| | #Pairs | Avg. |t| | Avg. |r| | Avg. #w+ | Avg. #w− | | | Train (T α ) | 30,000 | 31.3 | 33.2 | 3.7 | 60.8 | | 1 | | | | | | | Eval (T α ) | 200 | 33.2 | 31.3 | 5.5 | 25.9 | | Eval (T 1 ) | 200 | 29.6 | 29.0 | 11.0 | 18.0 | | α 2 | (b) e-SNLIc dataset | | | | | | #Pairs | Avg. |t| | Avg. |r| | Avg. #w+ | Avg. #w− | | | Train | 183,160 | 8.2 | 14.1 | 2.0 | 6.2 | | Test | 3237 | 8.1 | 15.3 | 2.1 | 6.0 | | Table 2: Dataset statistics | | | | | | annual reports to construct the dataset. In addition, while every 10-K annual report contains 15 schedules (e.g., Items 1, 1A, 1B, 2, 3, . . . , 7, 7A, . . . , 15), 13 we extracted only Item 7 (Management's Discussion and Analysis of Financial Conditions and Results of Operations ("MD&A")) to form the FINAL dataset.14 Finally, we aligned each document Dℓ with its corresponding last-year document Dℓ−1, resulting in 1,400 reference-to-target document pairs (i.e., 200 companies × 7 year-to-year pairs). ## 4.2 Year-To-Year Segment Pair Generation After preprocessing, we followed the proposed multistage pipeline by first passing each document pair through stage S0 to obtain an enumerated set of segment pairs T¯; we then reduced T¯ to T by removing irrelevant segment pairs (see Section 2.1). Next, we followed the relation recognition stage S1 in Section 3.2 to obtain the two groups of segment pairs: T α 1and T α 2 . From each of these two groups, we randomly sampled 200 pairs for human annotation as our evaluation sets. Likewise, we randomly sampled 30,000 pairs from the rest of the revised segment pairs (i.e., T α 1 ) as the training set for the pseudo-labeling approach in Section 3.4. ## 4.3 Human Annotation To evaluate the empirical effectiveness of the proposed pipeline, we manually annotated the sampled 400 segment pairs. For each segment pair (*r, t*), we collected the labels of rationales from three annotators. Specifically, the annotators were to distinguish which words in each target segment t to regard as important financial signals according to the context of the corresponding reference segment r. That is, the words with positive labels were to characterize the reference-to-target relationship or disclose extra information of interest,15 whereas the rest of the words in t were labeled as negative. We further assessed the inter-rater reliability of the three annotations with Fleiss' κ (Fleiss, 1971). For simplicity, we treat the prediction for the importance of each word in the target segment as independent classification tasks (containing roughly 12K words in the 400 evaluation pairs): for evaluation pairs from T α 1 , κ = 0.71; for those from T α 2 , κ = 0.60. The training and evaluation sets are described in Table 2(a), where Avg. |t| and Avg. |r| are the average lengths of target and reference segments, respectively, and Avg. \#w+ and Avg. \#w− are the average numbers of words annotated as positive and negative, respectively. ## 5 Experiments 5.1 Evaluation Datasets FINAL We evaluated the highlighting performance on the two evaluation sets with the humanannotated ground truth (see Table 2(a)). e-SNLIc We additionally evaluated the performance on e-SNLI. Particularly, in this paper, we used only the premise-to-hypothesis sentence pairs labeled as *contradiction* (denoted as e-SNLIc) in the test set of the e-SNLI dataset for evaluation (see Table 2(b)). ## 5.2 Evaluation Metrics Recall-sensitive metric In practice, financial practitioners are usually concerned more about the recall of the discovered signals than their precision due to the high cost of missing signals. Accordingly, we borrow the idea of R-precision (Buckley and Voorhees, 2000), a metric from the information retrieval field. In our case, R-precision (R-Prec) is the precision at R, where R is the number of annotated words in each target segment: if there are r annotated words among the top-R predicted words, then the R-precision is r/R. ## Sequence Agreement Of Word Importance In addition, we measure the agreement between the predicted importance of words for each target segment (considered as a number sequence) and its corresponding ground-truth sequence. Specifically, we use the Pearson correlation coefficient (PCC) for evaluation. 15The annotation guidelines are provided in Appendix D. ![5_image_0.png](5_image_0.png) Table 3: Highlighting performance Note that for R-Prec, we use majority voting to derive single ground-truth labels from the three annotators, whereas for PCC, we take the mean agreement of the three annotations as the ground truth. Note also that neither of the above two metrics requires a hard threshold to determine the important words for evaluation. Whereas R-Prec considers the words with the top-R highest predicted probabilities, PCC directly leverages the predicted probabilities of words as the importance of words for calculation. ## 5.3 Compared Methods Zero-shot We fine-tuned the BERT-base model on the e-SNLIc training set with the binary token classification cross-entropy objective (See Section 3.3 for details) and used this as a zero-shot approach for financial signal highlighting. Pseudo few-shot Instead of using e-SNLIc, we fine-tuned the BERT-base model on the 30,000 revised segment pairs in T α 1(see the "Train" data in Table 2(a)) with the pseudo-label tokens (see pseudo-labeling introduced in Section 3.4) and use this as a pseudo few-shot approach. Domain-adaptive Using the zero-shot highlighting model as the initialization, we further performed in-domain fine-tuning (see stage S + 2 in Section 3.4) for domain adaptation. ## 5.4 Empirical Results 5.4.1 Main Results For Signal Highlighting Performance on FINAL Table 3 tabulates the highlighting performance under four conditions (i.e., \#1–\#4), where W.U. denotes that e-SNLIc is used for warm-up fine-tuning (i.e., the zero-shot highlighting model), P and S denote pseudo and soft labeling, respectively, and '∗' denotes statistical significance with respect to the performance of zero-shot learning (\#1) under a paired t-test with p < 0.05. We first focus on the results of the main task on FINAL, where the listed results are those evaluated on the union of the two evaluation sets (including 400 segment pairs in total). As shown in the table, the proposed domain-adaptive approach using both pseudo and soft labeling techniques (i.e., condition \#4) achieves the best R-Prec of 0.7865 and PCC of 0.7290. In addition, from the performance increase from condition \#2 to \#3, we observe that warm-up fine-tuning (W.U.) plays an essential role in financial signal highlighting. Similarly, soft labeling is also beneficial for our task, bringing a 10% performance improvement in both evaluation metrics (by comparing the results of conditions \#3 and \#4). However, from the results of conditions \#1 and \#3, we observe that adopting pseudo-labeling alone might not be helpful for this task, perhaps because the pseudo-labels constructed by the proposed heuristic approach (see Section 3.3) are too aggressive for unimportant tokens, resulting in a biased highlighting model. In sum, we offer two main observations from Table 3. - The proposed domain-adaptive fine-tuning with pseudo and soft labeling is effective for signal highlighting in financial reports. - Warm-up fine-tuning and soft labeling are two crucial components to constructing an effective domain-adaptive highlighting model. Generalization ability between domains Table 3 also lists the results on the e-SNLIc testing data: only the model with condition \#4 performs on par with or even outperforms that with condition \#1 (i.e., zero-shot), showing that the highlighting model fine-tuned by the propose domainadaptive approach exhibits good generalizability. ## 5.4.2 Analyses On Different Types Of Relationships To better understand the empirical advantages of the domain-adaptive approach, we further investigate the highlighting performance for different kinds of reference-to-target relations, T α 1(*revised*) and T α 2(*mismatched*). Figure 2 compares the results of the zero-shot (\#1) and domain-adaptive (\#4) methods in terms of two metrics. We here focus on PCC, as R-precision considers only the set of important words (i.e., labeled as positive) instead of all the words in each target segment. In ![6_image_0.png](6_image_0.png) the figure, we see that despite the significant PCC improvements on both *revised* and *mismatched* pairs, the benefit of domain adaptation on mismatched pairs is markedly greater than that on revised pairs, yielding a PCC improvement of approximately 23%. Perhaps the important words in the mismatched pairs are more uncertain, necessitating intensive domain adaptation more than those in the revised pairs. Note that we fine-tuned the model on only 30,000 revised segment pairs in T α 1 for domain adaptation; however, the highlighting results of mismatched pairs T α 2exhibit more significant improvement. This suggests that the proposed domain-adaptive approach addresses domain shift and yields a superior ability to infer word importance even for unfamiliar (unseen) relationships (See Appendix E also). ## 5.5 Ablation Studies 5.5.1 Impact Of Referenced Sources We first determined the impact of the reference segment, which is viewed as the context of a given target segment in terms of discovering the financial signals in the target segment. To this end, for each reference-to-target pair (*r, t*), we substituted the original reference segment r (i.e., the most syntactically similar segment in the previous years' document Dℓ−1) for other text and constructed a few variants of variant-to-target segment pairs for inference using the highlighting model. Specifically, we fixed the target segment but recast the BERT contextualized representation of variant pairs as - **Empty**: A single [PAD] token is used as the reference segment (implying *none* in BERT); - **Same**: The target segment is used as the reference segment; | Pseudo-labeling | FINAL | e-SNLIc | | | | |-------------------------------------------------|---------|-----------|--------|--------|--------| | R-Prec | PCC | R-Prec | PCC | | | | Heuristic + Lexicon-based | 0.6457 | 0.5774 | 0.6419 | 0.5847 | | | + Soft Label | 0.6806 | 0.5932 | 0.8468 | 0.7261 | | | Heuristic (#2) | 0.6968 | 0.6368 | 0.6302 | 0.5752 | | | + Soft Label (#4) | 0.7865 | 0.7290 | 0.8605 | 0.7566 | | | Table 5: Different pseudo-labeling approaches | | | | | | | Reference settings | FINAL | e-SNLIc | | | | | R-Prec | PCC | R-Prec | PCC | | | | Empty | [PAD] | 0.4834 | 0.4033 | 0.6553 | 0.5687 | | Same | t | 0.5108 | 0.3850 | 0.5697 | 0.4994 | | Random | r˜ | 0.5345 | 0.4582 | 0.5658 | 0.4628 | | Original | r | 0.7865 | 0.7290 | 0.8605 | 0.7566 | | Table 4: Impact of referenced knowledge sources | | | | | | - **Random**: A randomly selected segment is used as the reference segment. In Table 4, the original setup significantly outperforms the other three settings in both FINAL and e-SNLIc, showing that the knowledge provided by the reference segments is critical for capturing important financial signals in the corresponding target segment. ## 5.5.2 Effect Of Lexicon-Based Labeling Recall that in Section 3.4, we introduced a heuristic pseudo-labeling approach that views all revised words in target segment t as important words and marks them as positive while we randomly sample other words as negative words. We here test the effect of additionally incorporating an external financial lexicon for pseudo-labeling. Specifically, we adopt the most representative financial sentiment lexicon—the Loughran–McDonald Master Dictionary (Loughran and Mcdonald, 2011)—and assume that in addition to the revised words in the heuristic approach, the 3,872 sentiment words in the dictionary also reveal important financial signals (i.e., are labeled as positive). Additionally, we treat the 20K most frequently-occurring words, as well as the standard stopwords, as negative words. As shown in Table 5, surprisingly, adding the lexicon for pseudo-labeling does not improve performance but instead worsens the highlighting results. Although these financial sentiment words convey important financial signals, they are globally important among all financial reports. However, this characteristic precludes the use of the lexicon for company-specific reference-to-target highlighting, which is focused more on local relationships between a pair of segments. ## 6 Related Work Research on financial report analysis has been ongoing for many years, with various studies utilizing both textual and numerical features to identify signals in reports. For instance, some researchers have used the relationship between tokens and quantitative indicators from the financial market to identify financial risks (e.g., Kogan et al., 2009; Tsai and Wang, 2017; Lin et al., 2021). Others have adapted unsupervised methods to recognize information and classify risk factors in financial reports (e.g., Huang and Li, 2011; Lin et al., 2011). However, previous research has mostly focused on risk factors in a global context rather than companyspecific signals, which is the focus of this study. Recently, transformer-based language models such as BERT, GPT-3, and T5 (Devlin et al., 2019; Brown et al., 2020; Raffel et al., 2020) have made significant strides in the summarization task. In 2020, Zmandar et al. (2021a) proposed the Financial Narrative Summarization shared task (FNS 2020), which aims to summarize annual UK reports. While some methods for this task have achieved satisfactory performance using ROUGE as a metric (e.g., Zmandar et al., 2021b; Orzhenovskii, 2021; Gokhan et al., 2021), they have been criticized for sometimes omitting essential signals under a ROUGE-guided policy. Additionally, the signals discovered through these approaches are heavily dependent on high-quality human-annotated summaries, making it challenging to apply them in real-world scenarios. In the field of NLP, some research has focused on developing rationalizing models related to the concept of our highlighting model. For example, Lei et al. (2016) proposed a method for learning the rationale (words) to justify a model's prediction by selecting a subset of text inputs. More recently, some studies have proposed methods that can rationalize the relationship of sentence pairs, such as natural language inference (Jiang et al., 2021) and query-document relevance (Kim et al., 2022). Additionally, DeYoung et al. (2020) released a benchmark to facilitate the development of interpretable NLP models with faithfulness. ## 7 Conclusion This paper addresses the task of identifying rationales as insightful financial signals between two narrative financial reports in consecutive years. We use the reference-to-target structure of financial reports to develop a compare-and-contrast multistage pipeline, comprising mainly of relation recognition and signal highlighting stages. In particular, we propose domain-adaptive learning strategies for the financial signal highlighting task, including out-ofdomain warm-up and in-domain fine-tuning. Our empirical results confirm the effectiveness of the proposed approaches. We also present the newly constructed FINAL dataset. Some future works include (a) improving highlighting effectiveness by developing multitask learning on large financial corpora as financial pretrained representations; (b) increasing efficiency by integrating dense retrieval methods in the Relation Recognition stage; (c) analyzing broader relationships beyond the year-to-year ones (e.g., crosscompany); (d) identifying important words in both reference segments and target segments (i.e., twoway rationalization), which may provide a more in-depth financial analysis. And we believe this research can facilitate NLP techniques applied in finance domain. ## 8 Limitations We identify crucial financial signals in reports which can help financial practitioners to digest long financial documents efficiently. However, factors such as macroeconomics, stock prices, and public policies may affect how a financial practitioner views financial reports in practice. Confidential intelligence or social media may greatly affect the analysis results. Therefore, we limit our task to the scenario in which the content in the reports is the sole information available to users. Accordingly, to prevent bias in the annotation process, we acquire annotations from annotators under similar scenarios (graduate students majoring in accounting or other related fields) rather than from financial professionals. In addition, language partially constrains our methods since the data we used in stage S2 is in English; adding a machine translation module may have sub-optimal effectiveness of financial signal highlighting. This is mainly because the financial signals highly depend on many languagespecific knowledge or country regulations. ## Acknowledgements We would like to thank the anonymous reviewers for the reviews. The work was supported by the grants from the National Science and Technology Council (Grant number NSTC 111-3111-E-002003 and MOST 111-2622-8-002-028-111C2415), National Taiwan University (Grant number NTUCC-112L891104), and NTU IoX Center (Grant number 109-3111-8-002-002-09HZA27003F and MOST 111-2622-8-002-028-111C2415). ## References Yash Agrawal, Vivek Anand, S Arunachalam, and Vasudeva Varma. 2021. Hierarchical model for goal guided summarization of annual financial reports. In Proc. of WWW, pages 247–254. Brad A Badertscher, Jeffrey J Burks, and Peter D Easton. 2018. The market reaction to bank regulatory reports. Rev. Account. Stud., 23(2):686–731. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2006. Analysis of representations for domain adaptation. In *Proc. of NIPS*, pages 137–144. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Proc. of* NeurIPS, pages 1877–1901. Chris Buckley and Ellen M. Voorhees. 2000. Evaluating evaluation measure stability. In *Proc. of SIGIR*, page 33–40. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural language inference with natural language explanations. In *Proc. of NIPS*, pages 9539–9549. Brian Chivers, Mason P. Jiang, Wonhee Lee, Amy Ng, I. Rapstine, and Natalya Alex Storer. 2022. ANTS: A framework for retrieval of text segments in unstructured documents. In *Proc. of NAACL DeepLo* Workshop, pages 38–47. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proc. of NAACL, pages 615–621. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proc. of NAACL*, pages 4171–4186. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In *Proc. of ACL*, pages 4443–4458. Mine Ertugrul, Jin Lei, Jiaping Qiu, and Chi Wan. 2017. Annual report readability, tone ambiguity, and the cost of borrowing. *J. Financ. Quant. Anal.*, 52(2):811–836. Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychol. Bull.*, 76(5):378. Tuba Gokhan, Phillip Smith, and Mark Lee. 2021. Extractive financial narrative summarisation using sentencebert based clustering. In *Proc. of the FNP Workshop*, pages 94–98. Suchin Gururangan, Ana Marasovic, Swabha ´ Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proc. of ACL, pages 8342–8360. Xiaochuang Han and Jacob Eisenstein. 2019. Unsupervised domain adaptation of contextualized embeddings for sequence labeling. In *Proc. of EMNLPIJCNLP*, pages 4238–4248. Karl Moritz Hermann, Tomáš Kociský, Edward Grefen- ˇ stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *Proc. of NIPS*, page 1693–1701. Ke-Wei Huang and Zhuolun Li. 2011. A multilabel text classification algorithm for labeling risk factors in SEC form 10-K. *ACM Trans. Manag. Inf. Syst.*, 2(3):1–19. Zhongtao Jiang, Yuanzhe Zhang, Zhao Yang, Jun Zhao, and Kang Liu. 2021. Alignment rationale for natural language inference. In *Proc. of ACL-IJCNLP*, pages 5372–5387. Youngwoo Kim, Razieh Rahimi, and James Allan. 2022. Alignment rationale for query-document relevance. In *Proc. of SIGIR*, page 2489–2494. Shimon Kogan, Dimitry Levin, Bryan R Routledge, Jacob S Sagi, and Noah A Smith. 2009. Predicting risk from financial reports with regression. In *Proc.* of NAACL HLT, pages 272–280. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In *Proc. of EMNLP*, pages 107–117. Tian Li, Xiang Chen, Zhen Dong, Kurt Keutzer, and Shanghang Zhang. 2022. Domain-adaptive text classification with structured knowledge from unlabeled data. In *Proc. of IJCAI*, pages 4216–4222. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Proc. of ACL*, pages 74– 81. Ming-Chih Lin, Anthony JT Lee, Rung-Tai Kao, and Kuo-Tay Chen. 2011. Stock price movement prediction using representative prototypes of financial reports. *ACM Trans. Manag. Inf. Syst.*, 2(3):1–18. Ting-Wei Lin, Ruei-Yao Sun, Hsuan-Ling Chang, Chuan-Ju Wang, and Ming-Feng Tsai. 2021. XRR: Explainable risk ranking for financial reports. In Proc. of ECML-PKDD, pages 253–268. Tim Loughran and Bill Mcdonald. 2011. When is a liability not a liability? Textual analysis, dictionaries, and 10-Ks. *J. Finance*, 66(1):35–65. Michal Lukasik, Boris Dadachev, Kishore Papineni, and Gonçalo Simões. 2020. Text segmentation by cross segment attention. In *Proc. of EMNLP*, pages 4707– 4716. Mikhail Orzhenovskii. 2021. T5-LONG-EXTRACT at FNS-2021 shared task. In *Proc. of FNP Workshop*, pages 67–69. Jielin Qiu, Jiacheng Zhu, Mengdi Xu, Franck Dernoncourt, Trung Bui, Zhaowen Wang, Bo Li, Ding Zhao, and Hailin Jin. 2022. Semantics-consistent crossdomain summarization via optimal transport alignment. *arXiv:2210.04722*. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proc. of EMNLP-IJCNLP*, pages 3982– 3992. Navid Rekabsaz, Mihai Lupu, Artem Baklanov, Alexander Dür, Linda Andersson, and Allan Hanbury. 2017. Volatility prediction using financial disclosures sentiments with word embedding-based IR models. In Proc. of ACL, pages 1712–1721. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In *Proc. of ACL*, pages 1073– 1083. Gennady Shtekh, Polina Kazakova, Nikita Nikitinsky, and Nikolay Skachkov. 2018. Applying topic segmentation to document-level information retrieval. In *Proc. of CEE-SECR*. Article no. 6. Ming-Feng Tsai and Chuan-Ju Wang. 2017. On the risk prediction and analysis of soft information in finance reports. *Eur. J. Oper. Res.*, 257(1):243–250. Haifeng You and Xiao-jun Zhang. 2009. Financial reporting complexity and investor underreaction to 10- K information. *Rev. Account. Stud.*, 14(4):559–586. Nadhem Zmandar, Mahmoud El-Haj, Paul Rayson, Marina Litvak, Geroge Giannakopoulos, Nikiforos Pittaras, et al. 2021a. The financial narrative summarisation shared task FNS 2021. In *Proc. of FNP Workshop*, pages 120–125. Nadhem Zmandar, Abhishek Singh, Mahmoud El-Haj, and Paul Rayson. 2021b. Joint abstractive and extractive method for long financial document summarization. In *Proc. of FNP Workshop*, pages 99–105. ![11_image_1.png](11_image_1.png) ## A Training Detail All model fine-tuning and inference (in Section 5 and Section 5.5) were conducted on an NVIDIA Tesla V100 32GB GPU. Each model fine-tuning can be done within three hours. We also ran all of the models with shared training settings, including the number of training steps, optimizers, and token batch sizes; we set other related training parameters as the settings in Huggingface Trainer.16 ## B Hyperparameter Search Recall that while the hyperparameter τ in Eq. (2) controls the probability distribution of the word importance, γ in Eq. (3) controls the impact of soft labeling. Figure 3 shows the performance in terms of R-Prec with different hyperparameter settings, where the left panel shows the results of τ fixed at 1 with γ ranging from 0 to 1, and the right panel shows that of γ fixed at 0.1 with τ ranging from 0.1 to 2. In the left panel of the figure, on FINAL, we see that solely adopting cross-entropy loss LCE (γ = 1) is not effective for fine-tuning the signal highlighting model, nor is adopting KL loss LKL (γ = 0) (see Eq. (3)); γ = 0.1 achieves the best R-Prec. These empirical results again validate the effectiveness of the proposed soft labeling for our highlighting task. In addition, we froze γ at 0.1 and experimented with different settings for the temperature parameter τ , the results of which are shown in the right panel of Figure 3, showing that τ = 2 is the most effective setting. We thus set our final hyperparameters to τ = 2 and γ = 0.1 to yield the best performance. ## C Empirical Thresholds For the relation recognition procedure in S1 (see Section 3.2 and Figure 4), we empirically set the 16https://huggingface.co/docs/ transformers/main_classes/trainer ![11_image_0.png](11_image_0.png) thresholds ϵsyn = 0.6296 and ϵsem = 0.9011. Both numbers are the 50 percentiles of the corresponding similarity scores calculated from the reduced segment pair set (T ). Note that, in this work, we adopt a rule-based heuristic method for recognizing relations using similarity functions with hard thresholds. We leave the exploration of other similarity functions, thresholds, and approaches to future work. ## D Annotation Guidelines For each segment pair, the annotators were to focus on the semantic difference regarding the referenceto-target relationship and annotate words in the target segment as positive when the words were considered important financial signals. The following guidelines were given for the annotators' reference. - Changes: Changing numbers or objects are important signals in financial reports (e.g., sales, cost, partnership, products, etc.). - Opposition: Descriptive phrases that indicate distant semantic meanings (e.g., increased/decreased, effective/ineffective, etc.). - Precise: Labeling words with high confidence as positive only (i.e., leaving ambiguous words as negative). - Extra information: Identifying new information according to the context, for which the annotators considered the reference segment as the context (e.g., new policy, canceled deals, newly published products, etc.). ## E Empirical Cases In Table 6, we take few revised segment pairs (T α 1 ) and mismatched segment pairs (T α 2 ) as examples. The underlined words are with the top-k highest importance predicted by the proposed pipeline. (a) Empirical examples of the revised segment pair (T α 1 ) [ k = 5] Reference segment Gross margin from manufacturing operations as a percentage of manufacturing revenues increased to 27% for the year ended December 31, 2014, from 23% for the comparable prior year period. Target segment Gross margin from manufacturing operations as a percentage of manufacturing revenues decreased to 15% for the year ended December 31, 2016 from 23% for the comparable prior year period. Reference segment We believe the increased sales achieved by our stores are the result of store growth and the high levels of customer service provided by our well-trained and technically proficient Team Members, superior inventory availability, including same day and over-night access to inventory in our regional distribution centers, enhanced services and programs offered in our stores, a broader selection of product offerings in most stores with a dynamic catalog system to identify and source parts, a targeted promotional and advertising effort through a variety of media and localized promotional events, continued improvement in the merchandising and store layouts of our stores, compensation programs for all store Team Members that provide incentives for performance and our continued focus on serving both DIY and professional service provider customers. Target segment We believe the increased sales achieved by our stores were the result of store growth, sales from one additional day due to Leap Day for the year ended December 31, 2016, sales from the acquired 48 Bond stores, the high levels of customer service provided by our well-trained and technically proficient Team Members, superior inventory availability, including same day and over-night access to inventory in our regional distribution centers, enhanced services and programs offered in our stores, a broader selection of product offerings in most stores with a dynamic catalog system to identify and source parts, a targeted promotional and advertising effort through a variety of media and localized promotional events, continued improvement in the merchandising and store layouts of our stores, compensation programs for all store Team Members that provide incentives for performance and our continued focus on serving both DIY and professional service provider customers. Reference segment Cash provided by operating activities during the year ended December 31, 2013 was primarily related to net income of $73.7 million and various non-cash add backs in operating activities and changes in operating assets and liabilities. Target segment Cash provided by operating activities during the year ended December 31, 2015 was primarily related to net income of $47.4 million, $23.4 million loss from discontinued operations, in addition to other non-cash add backs in operating activities and changes in operating assets and liabilities. (b) Empirical examples of the mismatched segment pair (T α 2 ) [ k = 10] Reference segment This increase of 1.0%, as a percentage of revenues, was primarily attributable to higher compensation costs of 0.4% primarily related to higher wage rates, higher facility-related costs of 0.2% principally from the expansion of U.S. facilities and lease termination costs in connection with the Fourth Quarter 2011 Exit Plan, higher software maintenance of 0.2%, higher legal and professional fees of 0.1%, higher taxes of 0.1% and higher other costs of 0.3%, partially offset by lower equipment and maintenance costs of 0.3%. Target segment The decrease in Americas general and administrative expenses, as a percentage of revenues, was primarily attributable to lower compensation costs of 0.6%, lower facility-related costs of 0.4% due to rationalization of facilities, lower equipment and maintenance costs of 0.2% and lower other costs of 0.1%. Reference segment The remaining capacity is expected to be placed into service in line with the expected in-service date of the Sandpiper Project. Target segment Three external parties filed motions requesting that the scoping process be re-opened or that a comment period be established because of the issuance of the Consent Decree settling the Line 6B pipeline crude oil release in Marshall, Michigan and the withdrawal of regulatory applications pending with the MNPUC with respect to the Sandpiper Project discussed above. Reference segment In December 2009, the FASB issued revised guidance related to the consolidation of variable interest entities ( VIE ). Target segment The Company assessed the accounting guidance related to the classification of the preferred shares after the modification on March 31, 2011 and concluded that the preferred shares should be classified as a mandatorily redeemable financial instrument, and presented as a liability on the consolidated balance sheet. Table 6: Empirical cases in the FINAL evaluation set ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8. ✗ A2. Did you discuss any potential risks of your work? We have checked and discussed the ethical policies; we didn not find potential risks in our work. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1. ✓ A4. Have you used AI writing assistants when working on this paper? We used Grammarly to check if there are any misused grammar, typos in very beginning manuscripts. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank. ✓ B1. Did you cite the creators of artifacts you used? Section 3 (spaCy, e-SNLI dataset); Section 4(Software Repository for Accounting and Finance(SRAF)) ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We will update the license or terms for use of our released dataset if we open-sourced our codes and data (they are in anonymous repository now.) According to Software Repository for Accounting and Finance(SRAF), all software and data are provided without warranties, and are for non-commercial purposes. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3.3 (the detail settings of our intended use of external data.); Section 4(our created dataset). We will update the license or terms for use of our released dataset if we open-sourced our codes and data (they are in anonymous repository now.) ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We used the original corpus from SRAF without any revisions or other related resources. We believe the contents in this data do not contain the sensitive information about ethical issues. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? To our knowledge, the raw corpus we used is from 10-K financial report which is regulated by U.S. SEC; therefore, the data is covered in financial domain and in English only. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Section 5.4 (main results); Section 5.5 (ablation studies). ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A (training detail). ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B (hyperparameters search). ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In section 5, we reports the experimental results; we also report the p-value of pair t-test in the results table. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We cite the URL of spacy's API we used. ROUGE setup is the default setting from original paper. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 4.3. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix D. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 4.3 (human annotation); Section 8 (limitation). ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? We have informed and acquired their approval for the further usage in this paper. The annotators were agreed and awared of the purpose of our work. We have also acquired their approval for releasing the dataset with their annotations. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? We have checked and followed the ethical policies in the process of this work. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 8.
korakakis-vlachos-2023-improving
Improving the robustness of {NLI} models with minimax training
https://aclanthology.org/2023.acl-long.801
Natural language inference (NLI) models are susceptible to learning shortcuts, i.e. decision rules that spuriously correlate with the label. As a result, they achieve high in-distribution performance, but fail to generalize to out-of-distribution samples where such correlations do not hold. In this paper, we present a training method to reduce the reliance of NLI models on shortcuts and improve their out-of-distribution performance without assuming prior knowledge of the shortcuts being targeted. To this end, we propose a minimax objective between a learner model being trained for the NLI task, and an auxiliary model aiming to maximize the learner{'}s loss by up-weighting examples from regions of the input space where the learner incurs high losses. This process incentivizes the learner to focus on under-represented {``}hard{''} examples with patterns that contradict the shortcuts learned from the prevailing {``}easy{''} examples. Experimental results on three NLI datasets demonstrate that our method consistently outperforms other robustness enhancing techniques on out-of-distribution adversarial test sets, while maintaining high in-distribution accuracy.
Improving the robustness of NLI models with minimax training Michalis Korakakis University of Cambridge mk2008@cam.ac.uk ## Abstract Natural language inference (NLI) models are susceptible to learning shortcuts, i.e. decision rules that spuriously correlate with the label. As a result, they achieve high in-distribution performance, but fail to generalize to out-ofdistribution samples where such correlations do not hold. In this paper, we present a training method to reduce the reliance of NLI models on shortcuts and improve their out-of-distribution performance without assuming prior knowledge of the shortcuts being targeted. To this end, we propose a minimax objective between a learner model being trained for the NLI task, and an auxiliary model aiming to maximize the learner's loss by up-weighting examples from regions of the input space where the learner incurs high losses. This process incentivizes the learner to focus on under-represented "hard" examples with patterns that contradict the shortcuts learned from the prevailing "easy" examples. Experimental results on three NLI datasets demonstrate that our method consistently outperforms other robustness enhancing techniques on out-of-distribution adversarial test sets, while maintaining high in-distribution accuracy. ## 1 Introduction Natural language inference (NLI)1 models have achieved state-of-the-art results on many benchmarks (Wang et al., 2019). However, recent work has demonstrated that their success is partly due to learning and using shortcuts (Gururangan et al., 2018; Poliak et al., 2018; Geirhos et al., 2020), i.e. spurious correlations between input attributes and labels introduced during dataset creation.2 For example, high word-overlap between the premise and the hypothesis in the MNLI (Williams et al., Andreas Vlachos University of Cambridge av308@cam.ac.uk ![0_image_0.png](0_image_0.png) 2018) dataset is strongly correlated with the entailment label (McCoy et al., 2019). Consequently, NLI models that exploit shortcuts perform well on in-distribution samples, but are brittle when tested on out-of-distribution adversarial test sets that explicitly target such phenomena (Naik et al., 2018; Glockner et al., 2018). Thus, numerous methods have been proposed to prevent models from learning shortcuts present in NLI datasets (Belinkov et al., 2019; Schuster et al., 2019; Zhou and Bansal, 2020; Stacey et al., 2020; Du et al., 2021; Modarressi et al., 2023, *inter* alia). Most approaches typically assume access to an auxiliary model designed to rely on shortcuts for predictions. The output of the auxiliary is then used to re-weight training instances for the learner model via ensembling (He et al., 2019; Clark et al., 2019; Karimi Mahabadi et al., 2020). However, knowing the shortcuts in advance assumes domainand dataset-specific knowledge, which is not always available and limits the potential of shortcut mitigation (Rajaee et al., 2022). A separate line of work overcomes this issue by 14322 forcing the auxiliary model to learn and exploit shortcuts either by exposing it to only a small number of training examples (Utama et al., 2020b), or by leveraging an auxiliary with reduced learning capabilities (Clark et al., 2020a; Sanh et al., 2021). Another approach is to fine-tune an already trained NLI model on examples that were frequently misclassified during the initial training stage (Yaghoobzadeh et al., 2021). While these works show promising results, they assume that the learner will naturally exploit the same types of shortcuts as the auxiliary. In practise, the behavior of the learner diverges from that of the auxiliary. For instance, Amirkhani and Pilehvar (2021) empirically demonstrate that commonly used auxiliaries often down-weight examples that are useful for training the learner, while Xiong et al. (2021) show that inaccurate uncertainty estimations by the auxiliary model can hinder the learner's out-ofdistribution generalization capabilities. In this paper, we propose a training method to reduce the reliance of NLI models on shortcuts in order to improve their out-of-distribution performance. To this end, we frame training as a minimax objective between a learner and an auxiliary (Figure 1). The learner optimizes for the NLI task, whereas the auxiliary tries to maximize the loss of the learner by up-weighting "hard" examples. The key insight behind our training method is that NLI models suffer from poor performance on underrepresented "hard" training instances with patterns that contradict the shortcuts found in the dominant "easy" examples (Tu et al., 2020). Therefore, by encouraging the learner to perform well on these examples and rely less on the "easy" examples with shortcuts, we can obtain better out-of-distribution generalization. Compared to existing robustness enhancing techniques, our training method (i) does not assume knowledge of the shortcuts being targeted, (ii) detects and up-weights examples in a data-driven way, i.e. the auxiliary is a parameterized neural network that predicts weights for each training instance at every training iteration, and (iii) uses a small feed-forward network rather than a large-scale pre– trained language model (PLM) for the auxiliary, thus incurring a small computational overhead. We evaluate our proposed method in three commonly-used NLI datasets, namely, MNLI (Williams et al., 2018), FEVER (Thorne et al., 2018), and QQP (Iyer et al., 2017), and their corresponding out-of-distribution adversarial test sets, HANS (McCoy et al., 2019), Symmetric (Schuster et al., 2019), and PAWS (Zhang et al., 2019). We observe that compared to other state-of-the-art robustness enhancing methods, the minimax training objective consistently improves out-of-distribution performance. We further verify the effectiveness of the minimax objective using a synthetic shortcut experimental setup, and then show that the performance gains generalize to a range of large-scale PLMs, out-of-domain test sets, and a question answering dataset. Finally, we empirically analyze the minimax objective to obtain further insights in the workings of the proposed training method. ## 2 Minimax Training For Shortcut Mitigation Suppose we have a dataset D = {(xi, yi)} N i=1 comprising the input data xi ∈ X and the labels yi ∈ Y. Our goal is to learn a model fθ : *X → Y* parameterized by θ. The standard training objective for achieving this is empirical risk minimization (ERM) that minimizes the average training loss: $$J(\theta)=\operatorname*{min}_{\theta}{\frac{1}{N}}\sum_{i=1}^{N}\ell(f_{\theta}(x_{i}),y_{i}),\qquad(1)$$ where ℓ(fθ(xi), yi) is the cross-entropy loss. When shortcuts are present in the "easy" examples that are well-represented in the training data, ERMtrained models will exploit them to achieve low training loss. Consequently, this will lead to poor performance on under-represented "hard" examples where such shortcuts do not hold. These examples are pivotal for ensuring good generalization performance on out-of-distribution samples (Yaghoobzadeh et al., 2021). Crucially, the loss of "hard" examples decreases considerably more slowly than the average loss throughout training (Tu et al., 2020). Therefore, our aim is to obtain a weight distribution that places emphasis on the under-represented "hard" examples, where we minimize the weighted training loss: $$J(\theta)=\operatorname*{min}_{\theta}\sum_{i=1}^{N}w_{i}\ell(f_{\theta}(x_{i}),y_{i}),\qquad(2)$$ where wiis the weight associated with the i-th example xi. Intuitively, the example weights should Algorithm 1: Minimax Training. Input: Dataset D, learner fθ, auxiliary gϕ, mini-batch size n, # of iterations T Output: optimized learner fθ pre-train fθ on D using ERM for $\tau=1,\ldots,T$ do sample mini-batch $\{x_{i},y_{i}\}_{i=1}^{n}$ from $\mathcal{D}$ generate weights via $g_{\phi}(x_{i},y_{i})$ generate predictions via $f_{\theta}(x_{i},y_{i})$ update $\theta$ to $$\min\sum_{i=1}^{n}g_{\phi}(x_{i},y_{i})\ell(f_{\theta}(x_{i}),y_{i})$$ update $\phi$ to $$\max\sum_{i=1}^{n}g_{\phi}(x_{i},y_{i})\ell(f_{\theta}(x_{i}),y_{i})$$ have high values for the under-represented "hard" instances, and low values for the prevailing "easy" instances with shortcuts. To compute the example weight distribution, we propose a minimax training objective between a learner fθ and an auxiliary gϕ : *X × Y →* [0, 1] parameterized by ϕ. Both models are optimized in an alternating fashion. The learner fθ tries to minimize the loss for the classification task (NLI in this paper). The task of the auxiliary gϕ is to maximize the learner's loss by generating a weight for each training example at every training iteration, such that the learner is incentivized to concentrate on regions of the input space where it incurs high losses. Thus, the learner will prioritize learning from under-represented "hard" examples that counteract the use of shortcuts present in the dominant "easy" examples. Formally, the minimax objective can be written as: $$J(\theta,\phi)=\min_{\theta}\max_{\phi}\sum_{i=1}^{N}g_{\phi}(x_{i},y_{i})\ell(f_{\theta}(x_{i}),y_{i}).\tag{3}$$ Both θ and ϕ can be optimized using any standard optimization algorithm, such as stochastic gradient descent. In order to ensure that the example weights lie in the range [0, 1], the output of the auxiliary model is passed through a sigmoid function. At test time the learner can make predictions without relying on the auxiliary. Algorithm 1 summarizes the overall training procedure. ## 3 Experimental Setup 3.1 Data We conduct experiments using three English NLI datasets, MNLI, FEVER, and QQP. For each dataset, we evaluate performance on an out-ofdistribution adversarial test set constructed to examine model reliance on specific shortcuts for predictions. MNLI The MNLI (Williams et al., 2018) dataset contains approximately 430k premise-hypothesis pairs labelled as entailment if the premise entails the hypothesis, contradiction if it contradicts the hypothesis, or neutral otherwise. We evaluate indistribution performance on MNLI-matched and out-of-distribution performance on HANS (McCoy et al., 2019), an adversarial test set designed to investigate whether a model exploits the high-word overlap shortcut to predict entailment. FEVER We conduct experiments on FEVER (Thorne et al., 2018), a fact verification dataset containing around 180k pairs of claim–evidence pairs. The goal in FEVER is to predict whether the retrieved evidence supports a claim, refutes a claim, or there is not enough information. As we are interested in the NLI part of the task, we assume the gold evidence given. We further evaluate on Symmetric (Schuster et al., 2019), which is designed to address the claim-only shortcut, whereby a model learns to use only the claim for predictions while ignoring the evidence. QQP The QQP (Iyer et al., 2017) dataset contains over 400k question pairs annotated as either paraphrase or non-paraphrase. We evaluate out-ofdistribution performance on PAWS (Zhang et al., 2019), an adversarial test set constructed to penalize the high-word overlap shortcut that models exploit to predict the paraphrase label. ## 3.2 Models Following previous work (Sanh et al., 2021; Utama et al., 2020b; Yaghoobzadeh et al., 2021), we use BERT (Devlin et al., 2019) and conduct experiments with BERT-base as the learner model. We use a 3-layer multiple-layer perceptron (MLP) for the auxiliary with tanh as the activation function for the middle layer. Furthermore, we normalize the weights of the auxiliary to have a mean weight of 1 across the batch, and add a constant value to every example weight to ensure that all | Method | MNLI | FEVER | QQP | | | | |---------------------------------------------------------------------------------------------------|----------|----------|----------|----------|----------|----------| | Dev | HANS | Dev | Sym. | Dev | PAWS | | | ERM | 84.4 | 62.6 | 85.7 | 55.1 | 90.8 | 36.0 | | Shortcut is known in advance PoE (Karimi Mahabadi et al., 2020) † | 84.2 | 66.3 | 84.4 | 66.2 | - | - | | † | 84.3 | 69.1 | 86.4 | 60.5 | 89.1 | 40.0 | | Regularized-conf (Utama et al., 2020a) No prior shortcut knowledge PoE + CE (Sanh et al., 2021) † | 83.2 | 67.9 | 85.3 | 57.9 | - | - | | Self-debias (Utama et al., 2020b) † | 82.3 | 69.7 | - | - | 85.2 | 57.2 | | F BOW (Yaghoobzadeh et al., 2021) † | 83.1 | 70.5 | 87.1 | 61.0 | 89.0 | 48.8 | | Minimax (Ours) | 83.6±0.2 | 72.8±0.4 | 85.4±0.6 | 62.5±0.3 | 87.9±0.5 | 53.7±1.8 | Table 1: Accuracies on the MNLI, FEVER, and QQP datasets, along with their corresponding adversarial test sets, HANS, Symmetric (Sym.), and PAWS. Numbers are averaged on 5 runs with standard deviations. † are reported results and underlying indicates statistical significance against the ERM-trained BERT-base baseline. examples contribute to the loss in order to avoid filtering useful "easy" examples and hurting indistribution performance, i.e. wi + c > 0, with c = 1. We obtain word representations for the auxiliary using 300-dimensional Glove (Pennington et al., 2014) embeddings and averaging them. We use Adam (Kingma and Ba, 2015) to train both the auxiliary and the learner model. For the learner model we use default architectures and hyperparameters from the Hugging Face Transformers library (Wolf et al., 2020). Finally, we pre-train the learner model for 3 epochs to ensure that it will initially prioritize learning the shortcuts. We train models for 10 epochs and report the mean and standard deviation over 5 runs with different random seeds. Finally, we conduct statistical significance using a two-tailed t-test (with p < 0.05). ## 3.3 Baselines We compare our method with representative techniques from two robustness enhancing categories. The first category assumes that the shortcut being targeted for mitigation is known in advance. We use the method of Karimi Mahabadi et al. (2020) (PoE) which ensembles the auxiliary and the learner via the product-of-experts (Hinton, 2002), so that the learner will focus on examples that the auxiliary cannot predict well. We also consider confidence regularization (Utama et al., 2020a) (Regularizedconf) which relies on a self-knowledge distillation objective to down-weight examples for which the auxiliary provides over-confident predictions. The second robustness enhancing category includes approaches that do not assume any prior shortcut knowledge. We use the method of Utama et al. (2020b) (Self-debias), who propose to exploit a "shallow" model trained on a small fraction of the training data as the auxiliary model. Sanh et al. (2021) (PoE + CE) use BERT-tiny (Turc et al., 2019) as a "weak" (low-capacity) auxiliary model, and train it on the full training dataset. Finally, Yaghoobzadeh et al. (2021) ( F BOW ) first train the model on the entire dataset, and then finetune it on the "forgettable examples" (Toneva et al., 2019), i.e. samples that during the initial training stage were either classified correctly and misclassified afterwards, or they were never properly classified. ## 4 Results Main Results Table 1 presents the main experimental results. In general, we see that in all settings the minimax objective significantly improves out-of-distribution performance on the adversarial test sets compared to the ERM-trained BERTbase baseline. In particular, it outperforms the latter on HANS, Symmetric, and PAWS by 10.2, 7.4, and 17.7, respectively. However, we also observe that training using the minimax objective results in a small reduction in the in-distribution performance. Specifically, on MNLI, FEVER, and QQP, the decrease in the in-distribution accuracy is 0.8, 0.3, and 2.9, respectively. Compared to other state-of-the-art robustness enhancing techniques, our method improves in-distribution accuracy on MNLI and out-of-distribution performance on HANS and Symmetric. Conversely, on QQP, F BOW and Reguralized-conf outperform minimax 14325 ![4_image_0.png](4_image_0.png) training by 1.1 and 1.2, while on PAWS Self-debias improves out-of-distribution performance by 3.5. Notably, the improvement for Self-debias comes at the expense of a considerable drop in the indistribution performance on QQP, i.e. 5.6 reduction in accuracy for Self-debias compared to the ERMtrained BERT-base model. Synthetic Shortcut Following previous work (He et al., 2019; Clark et al., 2019; Sanh et al., 2021), we modify the MNLI training data by adding a synthetic shortcut, i.e. a prefix in the hypothesis containing the ground-truth label with probability psynthetic or alternatively a random label. Conversely, on the modified MNLI test set the prefix is always a random label. If the model exploits the synthetic shortcut for predictions, then it will have low accuracy on the test set. Figure 2 shows that the performance of the ERM-trained BERT-base model deteriorates rapidly on the test set as the number of training examples containing the synthetic shortcut increases, whereas the reduction in performance for the model trained with the minimax objective is much less drastic. Out-of-Domain Generalization We further investigate the generalization capabilities of models trained using the proposed minimax objective on various out-of-domain NLI datasets. To this end, following the experimental setup of Karimi Mahabadi et al. (2020) and Sanh et al. (2021), we train models on SNLI (Bowman et al., 2015), and evaluate performance on AddOneRTE (Ad- | Domains | ERM | PoE | PoE + CE | Minimax | |-----------|-------|-------|------------|-----------| | ADD1 | 86.54 | 87.42 | 87.20 | 87.25 | | DPR | 49.92 | 49.85 | 50.10 | 50.16 | | SPR | 58.71 | 61.58 | 60.99 | 61.86 | | FN+ | 53.98 | 54.01 | 54.18 | 54.23 | | SCITAIL | 70.14 | 71.32 | 73.75 | 75.19 | | GLUE | 55.62 | 55.93 | 54.82 | 55.38 | | SNLI-hard | 81.07 | 81.39 | 81.62 | 81.81 | Model OOD Test Set ERM Minimax BERT-large HANS 71.6 **77.3** Symmetric 60.1 **69.2** PAWS 38.6 **57.8** RoBERTa-large HANS 74.9 **79.1** Symmetric 67.6 **73.6** PAWS 38.8 **56.6** XLNet-large HANS 76.1 **78.6** Symmetric 68.5 **76.2** PAWS 44.7 **62.9** dOne) (Pavlick and Callison-Burch, 2016), Definite Pronoun Resolution (DPR) (Rahman and Ng, 2012), Semantic Proto-Roles (SPR) (Reisinger et al., 2015), FrameNet+ (FN+) (Pavlick et al., 2015), SciTail (Khot et al., 2018), GLUE diagnostic test set (Wang et al., 2018), and the SNLI-hard test set (Gururangan et al., 2018). Table 2 presents the results. In general, we observe that our method consistenly outperforms the ERM-trained BERTbase baseline, with the only exception being the GLUE diagnostic test set, where the latter improves accuracy by 0.24. Furthermore, we see that the minimax training outperforms PoE and PoE + CE in five out of 7 out-of-domain test sets. Large-scale Pre-trained Language Models We examine whether the performance improvements of training the BERT-base model using the minimax objective also transfer to large-scale PLMs. In particular, we conduct experiments with BERTlarge (Devlin et al., 2019) (340M parameters), RoBERTa-large (Liu et al., 2019) (340M parame- | Method | SQuAD | AddSent | AddOneSent | |----------|---------|-----------|--------------| | ERM | 88.72 | 54.10 | 58.96 | | PoE + CE | 86.49 | 56.80 | 61.04 | | Minimax | 86.51 | 57.36 | 62.13 | ters), and XLNet-large (Yang et al., 2019) (355M parameters). The experimental results in Table 3 demonstrate that our method yields substantial performance gains over ERM for all three large-scale PLMs. In particular, on HANS, Symmetric, and PAWS, minimax training improves performance compared to ERM for BERT-large by 5.7, 9.1, and 19.2, for RoBERTa-large by 4.2, 6, and 17.8, and finally, for XLNet-large by 2.5, 7.7, and 18.2, respectively. Question Answering Following Sanh et al. (2021), we also conduct experiments on a question answering dataset. In particular, we train BERTbase models on SQuAD (Rajpurkar et al., 2016), and evaluate their out-of-distribution performance on the Adversarial SQuAD dataset (Jia and Liang, 2017). Table 4 shows that minimax improves outof-distribution performance on the AddSent and AddOneSent adversarial test sets compared to the ERM-trained BERT-base baseline and PoE + CE. ## 5 Analysis Using the loss to detect "hard" examples We investigate whether the loss provides a robust signal for discovering "hard" examples that contradict the shortcuts found in the "easy" examples. To this end, we manually classify training instances from MNLI into two categories, namely, "easy" entailment instances with a large amount of words occurring both in the hypothesis and the premise, and under-represented "hard" non-entailment examples with high word overlap, and study their average losses during training. Figure 3 demonstrates that the high-loss examples on MNLI are dominated by the "hard" non-entailment category, whereas the "easy" entailment examples incur predominantly low-losses. Removing "easy" examples and inverting the weight distribution We evaluate whether we can improve the overall performance of the minimax ![5_image_0.png](5_image_0.png) objective by discarding the "easy" examples (filtering minimax), i.e. removing their contribution to the loss by allowing the auxiliary to generate example weights wi ≥ 0 via setting c = 0. Furthermore, we also examine whether the learnt example weight distribution is meaningful, by keeping the order of the examples fixed and inverting their weights (inverse minimax), i.e. the examples with the largest weights get the lowest weights and vice versa. The experimental results in Table 5 show that using filtering minimax results in similar out-of-distribution performance to that of standard minimax, however, the drop in the in-distribution performance for the former is much more considerable. Conversely, the inverse minimax objective leads to high in-distribution accuracy (similar to that of ERM-trained models), at the expense of out-of-distribution performance. Effect of number of epochs for pre-training the learner We investigate how performance changes as we vary the number of epochs required for pre-training the learner model. Figure 4a demonstrates that out-of-distribution performance is high when we pre-train the learner for 2 and 3 epochs, but drops when the duration of the pretraining stage is either too short or too long, which consequently results in less informative losses for the auxiliary to learn from. Impact of size of the auxiliary We explore whether the size of the auxiliary impacts the | Method | MNLI | FEVER | QQP | | | | |-------------------|--------|---------|-------|------|------|------| | Dev | HANS | Dev | Sym. | Dev | PAWS | | | ERM | 84.4 | 62.6 | 85.7 | 55.1 | 90.8 | 36.0 | | Minimax | 83.6 | 72.8 | 85.4 | 62.5 | 87.9 | 53.7 | | Filtering Minimax | 80.1 | 69.9 | 81.7 | 61.7 | 83.8 | 51.3 | | Inverse Minimax | 84.3 | 59.6 | 85.5 | 53.2 | 90.6 | 31.2 | ![6_image_0.png](6_image_0.png) Accuracy (a) Number of epochs for pre-training the learner. (b) Different auxiliary sizes (hidden layers). ![6_image_1.png](6_image_1.png) learner's in-distribution and out-of-distribution performance. To this end, we train the learner using several auxiliary models of varying sizes. Specifically, we make auxiliary models larger by increasing the number of hidden layers while keeping the other hyperparameters constant. We observe that varying the capacity of the auxiliary model affects the learner's in-distribution and out-of-distribution performance (Figure 4b). In particular, the outof-distribution performance of the learner model increases as the auxiliary model becomes stronger up to a certain point, while in-distribution performance drops slightly at first and then more strongly. Finally, we observe that increasing the size of the auxiliary has the side effect of incentivizing it to learn the trivial solution of maximising all example weights. Examining the weighted examples We use the converged auxiliary model to present examples of down-weighted and up-weighted training instances on MNLI. Table 6 demonstrates that the auxiliary is able to correctly identify, and subsequently, downweight "easy" examples, i.e. entailment with a large amount of words occurring in the premise and hypothesis, and up-weight "hard" examples with patterns that directly contradict the shortcut, i.e. nonentailment with high word overlap. Furthermore, Figure 5 visualises the distribution of the MNLI example weights learned at the end of training. We observe that the minimax objective does not use the trivial solution of setting all weights to 1 to maximize the learner's loss. Conversely, the example weights form two main clusters, at both ends of the histogram. ## 6 Related Work Distributionally Robust Optimization Training objectives for ensuring strong model performance across all samples typically fall under the framework of distributionally robust optimization (DRO) (Ben-Tal et al., 2013; Duchi and Namkoong, 2018). DRO seeks to minimize the worst-case loss by placing emphasis on "hard" examples. Sagawa et al. (2020) extend the DRO framework to the case where the training data belongs to predefined groups (e.g. demographic | Label | Premise | Hypothesis | Example Weight | | |----------------------|-------------------------------------|-------------------------------------|----------------------------|------| | Down-weighted | Entailment | The doctor was paid by the actor. | The doctor paid the actor. | 1.13 | | Entailment | The doctors visited the lawyer. | The lawyer visited the doctors. | 1.16 | | | Entailment | The secretaries encouraged the scientists and the actors. | The secretaries encouraged the actors. | 1.25 | | | Entailment | The athlete who the judges admired | The judges admired the athlete. | 1.29 | | | called the manager. | | | | | | Contradiction | A subcategory of accuracy is consistency. | Accuracy is a subcategory of consistency. | 3.34 | | | Contradiction | Of course, we never rejected people | We rejected people for being flaky. | 3.36 | | | for being too flaky. | | | | | | Contradiction | A subcategory of accuracy is consistency. | Accuracy is a subcategory of consistency. | 3.41 | | | Neutral | Some people do I know. | I do know some people. | 3.65 | | | Neutral | Grace and consistency? | Consistency? | 3.69 | | ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) groups), and then focus on improving the worstgroup performance. Our proposed method is closest to research works that assume that group annotations are not available during training. For instance, Bao et al. (2021) develop group-specific classifiers to discover groupings, Sohoni et al. (2020) cluster the data, and Liu et al. (2021) propose a two-stage approach, by first training a model for a few epochs, and then using a second model to up-weight examples that the first model misclassified. However, these approaches require access to a validation set with group annotations, and/or rely on fixed example weights to determine groupings, i.e. they do not dynamically monitor the learner's training dynamics. Example Weighting Our proposed training objective is also related to example weighting methods that are typically used to mitigate datasetrelated issues, such as label noise and class imbalance. For example, approaches like focal loss (Lin et al., 2017) encourage the model to focus on "hard" instances. Conversely, in self-paced learning (Kumar et al., 2010), the example weights emphasize training using the "easy" examples first. Recent works focus on learning a weighting scheme with gradient-based (Fan et al., 2018; Raghu et al., 2021; Wang et al., 2020) and meta-learning methods (Jiang et al., 2018; Ren et al., 2018; Shu et al., 2019). While our proposed method also learns example weights in a data-driven way, we do so using a minimax training objective that places emphasis on up-weighting examples with patterns that contradict the shortcuts. Finally, Zhou et al. (2022) present a related shortcut-agnostic mitigation method by framing the task of generating an example weight distribution as nested optimization, where in the lower-level objective the model minimizes the weighted ERM loss, and on the upperlevel objective the example weights are updated to minimize an out-distribution criterion. Our approach is different since we incorporate two models into the training process, while Zhou et al. (2022) use the same model in both the lower- and the upper-level objectives. Dataset Filtering Another line of related work focuses on improving robustness by filtering the training data instead of modifying the training objective and/or the model architecture. Zellers et al. (2018) and Zellers et al. (2019) use this approach to mitigate the inclusion of shortcuts during dataset creation. Sakaguchi et al. (2020) propose AFLITE, an adversarial method for filtering examples with shortcuts. AFLITE works by training several models over small partitions of the initial dataset to discover "easy" examples that contain shortcuts. Wu et al. (2022) fine-tune a PLM to synthetically generate NLI examples, and then use z-statistics (Gardner et al., 2021) to remove samples with shortcuts.However, dataset filtering methods may hinder in-distribution performance, due to removing useful examples that contribute towards learning the underlying task. Conversely, our proposed minimax training objective assigns low weights to "easy" examples instead of completely eliminating them, thus preserving in-distribution performance. Generative Adversarial Networks The minimax objective we propose is reminiscent of the training objective of generative adversarial networks (GANs) (Goodfellow et al., 2014). In NLP, GANs are commonly used to address exposure bias in text generation (de Masson d'Autume et al., 2019). However, in practise, they perform worse than simpler methods (Caccia et al., 2020). A separate family of methods focuses on using the training objective of GANS to improve the computational efficiency of language modelling pretraining (Clark et al., 2020b). Closer to our work, adversarial training (Miyato et al., 2017) aims to improve robustness in text classification, but this method only operates at the level of word embeddings used in representing a single sentence, and thus is not applicable to NLI. ## 7 Conclusion In this work, we present a minimax training objective for reducing the reliance of NLI models on shortcuts in order to improve their overall robustness without assuming prior knowledge about the existence of specific shortcuts. Our proposed method leverages an auxiliary model that tries to maximize the learner's loss by up-weighting underrepresented "hard" examples with patterns that contradict the shortcuts present in the prevailing "easy" examples. Experiments across three NLI datasets demonstrate that our minimax objective consistently improves performance on various outof-distribution adversarial test sets. ## Limitations Since the minimax objective requires using two separately trained models, i.e. the learner and the auxiliary, the design of the latter plays a crucial role in the overall stability of the training process. In particular, while having a very capable auxiliary model will naturally result in a more accurate and robust example weight distribution, it will also potentially lead to overfitting to certain training instances with high-losses. Another potential limitation of minimax training is that the existence of noise in the labels may cause the auxiliary to generate erroneous example weights due to high-loss noisy instances co-existing with the "hard" examples containing meaningful patterns that contradict the shortcuts. Furthermore, we explore shortcut mitigation only for NLI in English, and thus our method might not transfer to other tasks and/or languages. Finally, the datasets we consider are well-used and -discussed in the literature, and consequently their shortcuts (and how they are adopted by the models) are well-known. Further testing is needed to establish whether our approach would transfer to datasets containing different shortcuts. ## Acknowledgements The authors wish to thank Pasquale Minervini and Tal Schuster for their helpful comments and feedback. Michalis Korakakis is supported by the Cambridge Commonwealth, European and International Trust and the ESRC Doctoral Training Partnership. Andreas Vlachos is supported by the ERC grant AVeriTeC (GA 865958). ## References Hossein Amirkhani and Mohammad Taher Pilehvar. 2021. Don't discard all the biased instances: Investigating a core assumption in dataset bias mitigation techniques. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 4720– 4728, Punta Cana, Dominican Republic. Association for Computational Linguistics. Yujia Bao, Shiyu Chang, and Regina Barzilay. 2021. Predict then interpolate: A simple algorithm to learn stable classifiers. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 640–650. PMLR. Yonatan Belinkov, Adam Poliak, Stuart Shieber, Benjamin Van Durme, and Alexander Rush. 2019. Don't take the premise for granted: Mitigating artifacts in natural language inference. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 877–891, Florence, Italy. Association for Computational Linguistics. Aharon Ben-Tal, Dick den Hertog, Anja De Waegenaere, Bertrand Melenberg, and Gijs Rennen. 2013. Robust solutions of optimization problems affected by uncertain probabilities. *Manag. Sci.*, 59(2):341–357. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2020. Language gans falling short. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language* Processing (EMNLP-IJCNLP), pages 4069–4082, Hong Kong, China. Association for Computational Linguistics. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2020a. Learning to model and ignore dataset bias with mixed capacity ensembles. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3031–3045, Online. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020b. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In *Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First* PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers, volume 3944 of *Lecture* Notes in Computer Science, pages 177–190. Springer. Cyprien de Masson d'Autume, Shakir Mohamed, Mihaela Rosca, and Jack W. Rae. 2019. Training language gans from scratch. In *Advances in Neural* Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 4302–4313. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Mengnan Du, Varun Manjunatha, Rajiv Jain, Ruchi Deshpande, Franck Dernoncourt, Jiuxiang Gu, Tong Sun, and Xia Hu. 2021. Towards interpreting and mitigating shortcut learning behavior of NLU models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 915–929, Online. Association for Computational Linguistics. John C. Duchi and Hongseok Namkoong. 2018. Learning models with uniform performance via distributionally robust optimization. *CoRR*, abs/1810.08750. Yang Fan, Fei Tian, Tao Qin, Xiang-Yang Li, and TieYan Liu. 2018. Learning to teach. In *6th International Conference on Learning Representations,* ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Matt Gardner, William Merrill, Jesse Dodge, Matthew Peters, Alexis Ross, Sameer Singh, and Noah A. Smith. 2021. Competency problems: On finding and removing artifacts in language data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1801–1813, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. 2020. Shortcut learning in deep neural networks. *Nat.* Mach. Intell., 2(11):665–673. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–655, Melbourne, Australia. Association for Computational Linguistics. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial networks. *CoRR*, abs/1406.2661. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 132–142, Hong Kong, China. Association for Computational Linguistics. Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. *Neural Comput.*, 14(8):1771–1800. S. Iyer, N. Dandekar, and K. Csernai. 2017. First quora dataset release: Question pairs. Accessed online at https://www.quora.com/q/quoradata/First-QuoraDataset-Release-Question-Pairs. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2018. Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels. In *Proceedings of the 35th International Conference on Machine Learning, ICML* 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2309–2318. PMLR. Rabeeh Karimi Mahabadi, Yonatan Belinkov, and James Henderson. 2020. End-to-end bias mitigation by modelling biases in corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8706–8716, Online. Association for Computational Linguistics. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In *Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence,* (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5189–5197. AAAI Press. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. M. Pawan Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-paced learning for latent variable models. In *Advances in Neural Information Processing* Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada, pages 1189–1197. Curran Associates, Inc. Tsung-Yi Lin, Priya Goyal, Ross B. Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 2999–3007. IEEE Computer Society. Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. 2021. Just train twice: Improving group robustness without training group information. In *Proceedings of the 38th International* Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings* of Machine Learning Research, pages 6781–6792. PMLR. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2017. Adversarial training methods for semisupervised text classification. In *5th International* Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Ali Modarressi, Hossein Amirkhani, and Mohammad Taher Pilehvar. 2023. Guide the learner: Controlling product of experts debiasing method based on token attribution similarities. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1954– 1959, Dubrovnik, Croatia. Association for Computational Linguistics. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In *Proceedings of the 27th International Conference* on Computational Linguistics, pages 2340–2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Ellie Pavlick and Chris Callison-Burch. 2016. Most "babies" are "little" and most "problems" are "huge": Compositional entailment in adjective-nouns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2164–2173, Berlin, Germany. Association for Computational Linguistics. Ellie Pavlick, Travis Wolfe, Pushpendre Rastogi, Chris Callison-Burch, Mark Dredze, and Benjamin Van Durme. 2015. FrameNet+: Fast paraphrastic tripling of FrameNet. In *Proceedings of the 53rd Annual Meeting of the Association for Computational* Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 408–413, Beijing, China. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In *Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics*, pages 180–191, New Orleans, Louisiana. Association for Computational Linguistics. Aniruddh Raghu, Maithra Raghu, Simon Kornblith, David Duvenaud, and Geoffrey E. Hinton. 2021. Teaching with commentaries. In *9th International* Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The Winograd schema challenge. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 777–789, Jeju Island, Korea. Association for Computational Linguistics. Sara Rajaee, Yadollah Yaghoobzadeh, and Mohammad Taher Pilehvar. 2022. Looking at the overlooked: An analysis on the word-overlap bias in natural language inference. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10605–10616, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. *Transactions of the Association for Computational Linguistics*, 3:475–488. Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. 2018. Learning to reweight examples for robust deep learning. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 4331–4340. PMLR. Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. 2020. Distributionally robust neural networks. In *8th International Conference on* Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8732– 8740. AAAI Press. Victor Sanh, Thomas Wolf, Yonatan Belinkov, and Alexander M. Rush. 2021. Learning from others' mistakes: Avoiding dataset biases without modeling them. In *9th International Conference on Learning* Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Tal Schuster, Darsh Shah, Yun Jie Serene Yeo, Daniel Roberto Filizzola Ortiz, Enrico Santus, and Regina Barzilay. 2019. Towards debiasing fact verification models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3419–3425, Hong Kong, China. Association for Computational Linguistics. Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. 2019. Meta-weightnet: Learning an explicit mapping for sample weighting. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 1917–1928. Nimit Sharad Sohoni, Jared Dunnmon, Geoffrey Angus, Albert Gu, and Christopher Ré. 2020. No subclass left behind: Fine-grained robustness in coarsegrained classification problems. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems* 2020, NeurIPS 2020, December 6-12, 2020, virtual. Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, and Tim Rocktäschel. 2020. Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8281–8291, Online. Association for Computational Linguistics. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics. Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J. Gordon. 2019. An empirical study of example forgetting during deep neural network learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models. Transactions of the Association for Computational Linguistics, 8:621–633. Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: The impact of student initialization on knowledge distillation. *CoRR*, abs/1908.08962. Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020a. Mind the trade-off: Debiasing NLU models without degrading the in-distribution performance. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8717–8729, Online. Association for Computational Linguistics. Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020b. Towards debiasing NLU models from unknown biases. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7597–7610, Online. Association for Computational Linguistics. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3261–3275. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Jaime G. Carbonell, and Graham Neubig. 2020. Optimizing data usage via differentiable rewards. In *Proceedings of the 37th International* Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings* of Machine Learning Research, pages 9983–9995. PMLR. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Yuxiang Wu, Matt Gardner, Pontus Stenetorp, and Pradeep Dasigi. 2022. Generating data to mitigate spurious correlations in natural language inference datasets. *CoRR*, abs/2203.12942. Ruibin Xiong, Yimeng Chen, Liang Pang, Xueqi Cheng, Zhi-Ming Ma, and Yanyan Lan. 2021. Uncertainty calibration for ensemble-based debiasing methods. In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information* Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 13657–13669. Yadollah Yaghoobzadeh, Soroush Mehri, Remi Tachet des Combes, T. J. Hazen, and Alessandro Sordoni. 2021. Increasing robustness to spurious correlations using forgettable examples. In *Proceedings* of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3319–3332, Online. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5754–5764. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93–104, Brussels, Belgium. Association for Computational Linguistics. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy. Association for Computational Linguistics. Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298–1308, Minneapolis, Minnesota. Association for Computational Linguistics. Xiang Zhou and Mohit Bansal. 2020. Towards robustifying NLI models against lexical dataset biases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8759– 8771, Online. Association for Computational Linguistics. Xiao Zhou, Yong Lin, Renjie Pi, Weizhong Zhang, Renzhe Xu, Peng Cui, and Tong Zhang. 2022. Model agnostic sample reweighting for out-of-distribution learning. In *International Conference on Machine* Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 27203–27221. PMLR. ## A Training Details In this section, we detail the models and hyperparameters we use in our experiments. For all experiments, the auxiliary model is optimized using the Adam optimizer β = (0.9, 0.999), ϵ = 1e−8, with a learning rate of 1e−3. We use the HuggingFace implementation of BERT-base-uncased as our learner model. MNLI We use the following hyper-parameters for the learner model: a learning rate of 5e−5 and a batch size of 32. The learning rate is linearly increased for 2000 warming steps and linearly decreased to 0 afterward. We use an Adam optimizer β = (0.9, 0.999), ϵ = 1e−8, and add a weight decay of 0.1. FEVER We use the following hyper-parameters for the learner model: a learning rate of 2e−5, and a batch size of 32. The learning rate is linearly increased for 1500 warming steps and linearly decreased to 0 afterward. We use an Adam optimizer β = (0.9, 0.999), ϵ = 1e−8, and add a weight decay of 0.1. QQP We use the following hyper-parameters for the learner model: a learning rate of 5e−5, and a batch size of 32. The learning rate is linearly increased for 1000 warming steps and linearly decreased to 0 afterward. We use an Adam optimizer β = (0.9, 0.999), ϵ = 1e−8, and add a weight decay of 0.1. ## B Additional Experimental Results | Domains | ERM | Minimax | Inv. Minimax | |-----------|-------|-----------|----------------| | ADD1 | 86.54 | 87.25 | 57.48 | | DPR | 49.92 | 50.16 | 38.13 | | SPR | 58.71 | 61.86 | 39.63 | | FN+ | 53.98 | 54.23 | 37.52 | | SCITAIL | 70.14 | 75.19 | 54.68 | | GLUE | 55.62 | 55.38 | 41.74 | | SNLI-hard | 81.07 | 81.81 | 59.40 | Table 7: Accuracies on various out-of-domain test sets for a BERT-base model trained on SNLI with empirical risk minimization (ERM), the proposed minimax objective, and inverse minimax. The ERM and Minimax columns repeat results from Table 2. Inverse Minimax - Out of Domain Generalization We train the inverse minimax model (which is incentivized to up-weight the "easy" examples with shortcuts) on SNLI, and evaluate performance on several out-of-distribution test sets. From the results in Table 7 we observe that the out-ofdistribution performance of the inverse minimax model is considerably worse compared to the ERMtrained baseline and the model trained using the proposed minimax objective. Additional Results for MNLI In Table 8, we show the performance of our method on MNLI-matched (MNLI-m) and MNLImismatched (MNLI-mm), and their corresponding hard sets. Additional Results for HANS Table 9 shows detailed accuracy scores on the three shortcut categories of HANS. Overall, compared to the ERMtrained BERT-base model minimax training retains satisfactory performance in the entailment class, and provides considerable improvements for nonentailment. Specifically, on the Lexical Overlap, Constituent, and Subsequence shortcut categories, the decrease in accuracy in entailment for minimax training compared to the ERM-trained BERT-base model is 7.1, 1.6, and 2.5, while for non-entailment performance improves by 20.2, 9.4, and 36.7, respectively. Method MNLI-m MNLI-mm MNLI-m hard MNLI-mm hard Dev Test Dev Test Dev Test Dev Test ERM **84.4** 84.1 83.9 83.1 76.1 75.2 77.4 75.5 PoE (Karimi Mahabadi et al., 2020)† 84.2 84.1 **84.8** 83.4 **78.0** 76.8 79.2 76.8 Regularized-conf (Utama et al., 2020a)† 84.3 84.1 85.0 84.2 - 78.3 - 77.3 PoE + CE (Sanh et al., 2021)† 83.2 - 83.5 - - 77.6 - 76.3 Minimax (Ours) 83.6 **84.8** 83.6 **85.6** 77.9 **79.4 79.9 78.7** Table 8: Accuracies on MNLI-matched (MNLI-m), MNLI-mismatched (MNLI-mm), MNLI-matched hard, and MNLI-mismatched hard. † are reported results. Table 9: Accuracies on the HANS Lexical Overlap, Constituent, and Subsequence shortcut categories. | Method | Lexical Overlap | Constituent | Subsequence | | | | |------------|-------------------|---------------|----------------|------------|----------------|------| | Entailment | Non-Entailment | Entailment | Non-Entailment | Entailment | Non-Entailment | | | ERM | 98.9 | 51.2 | 99.3 | 10.8 | 99.4 | 15.7 | | Minimax | 91.8 | 71.4 | 97.7 | 20.2 | 96.9 | 52.4 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
zhai-etal-2023-ussa
{USSA}: A Unified Table Filling Scheme for Structured Sentiment Analysis
https://aclanthology.org/2023.acl-long.802
Most previous studies on Structured Sentiment Analysis (SSA) have cast it as a problem of bi-lexical dependency parsing, which cannot address issues of overlap and discontinuity simultaneously. In this paper, we propose a niche-targeting and effective solution. Our approach involves creating a novel bi-lexical dependency parsing graph, which is then converted to a unified 2D table-filling scheme, namely USSA. The proposed scheme resolves the kernel bottleneck of previous SSA methods by utilizing 13 different types of relations. In addition, to closely collaborate with the USSA scheme, we have developed a model that includes a proposed bi-axial attention module to effectively capture the correlations among relations in the rows and columns of the table. Extensive experimental results on benchmark datasets demonstrate the effectiveness and robustness of our proposed framework, outperforming state-of-the-art methods consistently.
## Ussa: A Unified Table Filling Scheme For Structured Sentiment Analysis Zepeng Zhai1, Hao Chen2**, Ruifan Li**1,3,4∗ , Xiaojie Wang1,3,4 1School of Artificial Intelligence, Beijing University of Posts and Telecommunications, China 2STCA, Microsoft, China 3Engineering Research Center of Information Networks, Ministry of Education, China 4Key Laboratory of Interactive Technology and Experience System, Ministry of Culture and Tourism, China {zepeng, rfli, xjwang}@bupt.edu.cn and hche@microsoft.com ## Abstract Most previous studies on Structured Sentiment Analysis (SSA) have cast it as a problem of bi-lexical dependency parsing, which cannot address issues of overlap and discontinuity simultaneously. In this paper, we propose a nichetargeting and effective solution. Our approach involves creating a novel bi-lexical dependency parsing graph, which is then converted to a unified 2D table-filling scheme, namely USSA. The proposed scheme resolves the kernel bottleneck of previous SSA methods by utilizing 13 different types of relations. In addition, to closely collaborate with the USSA scheme, we have developed a model that includes a proposed bi-axial attention module to effectively capture the correlations among relations in the rows and columns of the table. Extensive experimental results on benchmark datasets demonstrate the effectiveness and robustness of our proposed framework, outperforming state-ofthe-art methods consistently1. ## 1 Introduction Structured Sentiment Analysis (SSA) aims to identify all opinion tuples within a given sentence. An opinion tuple (h, t, e, p) denotes a group of four elements: the holder h expresses a sentiment polarity p towards an opinion target t through a sentiment expression e. As shown in Figure 1(a), an example involving two opinion tuples illustrates the definition of SSA. SSA is more challenging than other related tasks because it requires identifying all four elements of the tuple and may involve overlapping or discontinuous elements. For example, aspectbased sentiment analysis (Pontiki et al., 2014, 2015, 2016; Li et al., 2021b) mostly identifies flat aspect and opinion terms, and opinion mining (Katiyar and Cardie, 2016; Xia et al., 2021) identifies opinion tuples without the sentiment polarity. ![0_image_0.png](0_image_0.png) Most of the existing methods cast the SSA task as the bi-lexical dependency parsing problem. Unfortunately, the conversion is lossy, as it cannot address issues of overlap and discontinuity concurrently. For the example in Figure 1(a), there exist two overlapping2e 3, i.e., {really, *rude*} and {really, long, *winded*}, and the latter is discontinuous. Barnes et al. (2021) proposed a dependency parsing method namely head-first as illustrated in Figure 1(b). However, inherent ambiguity occurs in the dependency graph, as the method incorrectly predicts two overlapping e as one single e (i.e., | Dataset | Overlap | Discontinuity | | | |-----------|-----------|-----------------|------|-----| | # | % | # | % | | | NoReCFine | 2178 | 19.6 | 1080 | 9.7 | | MultiBEU | 0 | 0 | 164 | 7.1 | | MultiBCA | 3 | 0.1 | 113 | 4.1 | | MPQA | 403 | 1.4 | 0 | 0 | | DSUnis | 18 | 1.7 | 102 | 9.9 | {really, rude, long, *winded*}). In other words, the method may not be able to distinguish between two overlapping entities4in SSA. Another dependency parsing method proposed by Shi et al. (2022) aims to identify the starting and ending positions of boundaries, but cannot identify discontinuous entities. Statistics on benchmark datasets show the amount of opinion tuples involving overlapping or discontinuous problem in Table 1. Therefore, these two problems cannot be ignored for SSA task and it remains a challenge to design an effective and unified dependency parsing method. To resolve the kernel challenges (i.e., overlap and discontinuity) existing in SSA, we carefully construct a novel bi-lexical dependency parsing graph as shown in Figure 1(c). The graph comprises two types of edge: *Relation Prediction* (RP) and *Token Extraction* (TE). RP mainly handles entity boundary identification and relation prediction, and it solves the overlap problem. Specifically, E-POS/NEG/NEU edge connects ending and starting words (e.g., winded → really) as an e with sentiment polarity. H-S/E edge marks the starting/ending boundary of h and e, and T-S/E edge is the same for t and e. Another edge type TE identifies all tokens within a given entity boundary, resolving the discontinuity problem. Specifically, ∗-NW edges indicate the next word, meaning that the two words are consecutively joined as a segment of one entity. Furthermore, we convert our proposed dependency parsing graph to a unified 2D table filling scheme, namely USSA as illustrated in Figure 2. Specifically, we use the start position of each edge as the x-coordinate, the end position as the ycoordinate, and the type of edge as the relationship label in the table. Thus, the table is divided into lower and upper triangular regions, corresponding to RP and TE, respectively. Based on the USSA scheme, we further develop 4In SSA, an entity stands for a holder/target/expression. ![1_image_0.png](1_image_0.png) a model for SSA. First, multilingual BERT and bi-directional LSTM (BiLSTM) are used to provide contextualized word representations, based on which we construct a 2-Dimensional (2D) table for word pairs. Then, we observe that the relations have a strong correlation in the abscissa and ordinate of the table as shown in Figure 2. we propose a bi-axial attention module to effectively capture these correlations. Finally, a predictor is employed to determine the relations between word pairs. We conduct extensive experiments on five benchmarks, including NoReCFine (Øvrelid et al., 2020), MultiBEU, MultiBCA (Barnes et al., 2018), MPQA (Wiebe et al., 2005) and DSUnis (Toprak et al., 2010). Our model demonstrates superior performance on all datasets, establishing a new SOTA method for SSA task. Our contributions are highlighted as follows: - We propose a bi-lexical dependency parsing graph and convert it to a unified 2D table filling scheme, USSA, which solves the kernel challenge issues of overlap and discontinuity in SSA. - We present an effective model to well collaborate with USSA scheme, which utilizes proposed bi-axial attention module to better capture the correlations of relations in the table. - We conduct extensive experiments on five benchmark datasets and the results demonstrate the effectiveness of our model. The source code is released for knowledge sharing. ## 2 Related Work Structured Sentiment Analysis (SSA) can be divided into several sub-tasks, including extracting entities, determining the relationship between the entities, and assigning polarity. Some previous research in *Opinion Mining* (OM) has focused on extracting holders, targets, and expressions and identifying their relations, mainly utilizing the MPQA dataset (Esuli et al., 2008). Previous studies have explored different methods to tackle this task, like a BiLSTM-CRF model (Katiyar and Cardie, 2016) that predicts the word-wise opinion role label and identifies the relations, an end-to-end BERT-based model (Quan et al., 2019), a transition-based approach (Zhang et al., 2020a) using pre-defined actions, and a unified span-based model (Xia et al., 2021) that addresses overlap issues. All of these approaches, however, ignore the sentiment polarity classification subtask. In *Aspect Based Sentiment Analysis* (ABSA), several studies have attempted to unify multiple subtasks. Some examples include *Aspect and Opinion Term Co-Extraction* (AOTE) (Wang et al., 2016, 2017; Dai and Song, 2019; Wang and Pan, 2019; Chen et al., 2020; Wu et al., 2020) which combines target and expression extraction tasks, *AspectSentiment Pair Extraction* (ASPE) (Ma et al., 2018; Li et al., 2019a,b; He et al., 2019) which combines target extraction and sentiment classification, and most recent *Aspect Sentiment Triplet Extraction* (ASTE) (Peng et al., 2020) which further integrates multiple subtasks. These methods can generally be categorized into three groups: Pipeline (Peng et al., 2020; Fan et al., 2019), End-to-End (Zhang et al., 2020b; Xu et al., 2020; Wu et al., 2020; Chen et al., 2021b; Yan et al., 2021; Xu et al., 2021; Chen et al., 2022) and MRC-based (Mao et al., 2021; Chen et al., 2021a; Zhai et al., 2022). However, these methods primarily focus on flat entities and ignore holder extraction. To this aim, Barnes et al. (2021) originally cast the SSA task as a bi-lexical dependency parsing problem. Nonetheless, as aforementioned in Section 1, the conversion is lossy because it cannot distinguish between two overlapping entities. To address this issue, Shi et al. (2022) proposed another parsing method but unfortunately it cannot identify the discontinuous entities. Samuel et al. (2022) identified the issue of nest and proposed to decode the sentiment graph from the text directly. Therefore, we propose a novel dependency parsing | Type | # | Relation | Meaning of word pair (wi, wj ) | |---------------------|---------------------------------------------|-----------------------------------------|----------------------------------| | 1 | E-POS | boundary words of expression with | | | 2 | E-NEG | positive/negative/neutral polarity | | | 3 | E-NEU | | | | 4 | H-S | starting or ending boundary | | | 5 | H-E | of holder and corresponding expression | | | 6 | H-SE | | | | 7 | T-S | starting or ending boundary | | | 8 | T-E | of target and corresponding expression | | | 9 | T-SE | | | | Relation Prediction | 10 | E-NW | | | Token | specific tokens of expression/holder/target | | | | 11 | H-NW | by indicating wj is the Next Word of wi | | | Extraction | 12 | T-NW | | | 13 | ⊥ | no above relations | | method that can handle overlapping and discontinuous entities simultaneously. We seek to convert the parsing graph to a 2D table filling scheme in order to take advantage of the success of table filling methods (Wang et al., 2020b; Li et al., 2022; Cao et al., 2022) in various fields of NLP. ## 3 Unified Table Filling Scheme In this section, we introduce the problem formulation of the SSA task, explain the table filling scheme USSA, and show how to decode opinion tuples from the USSA tagging results. ## 3.1 Problem Formulation The objective of SSA is to extract a collection of opinion tuples T = {(*h, t, e, p*)m} |T | m=1 from a given input sentence X = {w1, w2, · · · , wN } with N tokens, where h, t, e denote holder, *target* and *expression* respectively. The sentiment polarity p of the expression belongs to a sentiment label set, i.e. {*positive, neutral, negative*}. The datasets include the challenges posed by discontinuous entities, overlapping counterparts of different tuples, and the presence of null holders and targets. ## 3.2 Table Filling Scheme To address the SSA task, USSA uses 13 types of relations between word-pair (wi, wj ) as shown in Table 2. The table is divided into the lower and upper triangular regions, with the lower region used for relation prediction and the upper region used for token extraction, as depicted in Figure 2. Relation Prediction (RP) aims to identify the relations between entities and assign the sentiment ![3_image_0.png](3_image_0.png) polarity. Specifically, *E-POS/E-NEG/E-NEU* indicates the starting and ending boundaries of an expression with positive/negative/neutral polarity. H-S/H-E/H-SE indicates the position of holder and corresponding expression, where S and E denotes starting and ending positions, respectively. SE indicates that the entity consists of only one token, and has the same starting and ending position. In order to ensure that the cell is located in the lower triangle of the table, it is noted that the larger position is set as x-coordinate and the smaller is set as y-coordinate. *T-S/T-E/T-SE* is used in the same manner as the holder for a target. Token Extraction (TE) aims to extract specific tokens and combine them as an entity based on the entity boundaries obtained from RP. *E-NW/HNW/T-NW* indicates the next word for the expression/holder/target, meaning the pair of words are to be successively joined as a segment of one entity. ## 3.3 Opinion Tuple Decoding The overall decoding algorithm is to first identify the boundary words of each holder, target, and expression in the lower triangle region of the table, and then identify the specific tokens in the upper triangle region. First, {*E-POS, E-NEG, E-NEU*} is used to find all boundary words of expression with sentiment polarity. Second, according to {H-S, H-E, H-SE} and {*T-S, T-E, T-SE*}, we identify the boundary words of the holders and targets corresponding to the expression, respectively. Finally, we extract the specific tokens of holder, target and expression according to {*E-NW, H-NW, T-NW*} and the corresponding entity boundary. Thus, we collect sentiment tuples (*h,t,e,p*). Figure 3 generally illustrates four decoding cases from easy to difficult. (a) Flat Case. The boundary words w5 and w6 of expression with a negative sentiment polarity can be identified by *E-NEG*. Then according to H-S and H-E, we can detect the boundary words of holder are w1 and w2. Similarly, the boundary words of target are w3 and w4. Finally, three paths "w1 → w2", "w3 → w4" and "w5 → w6" are detected as specific words according to the ∗−NW relations and form a sentiment tuple. (b) Overlapping case. There are two overlapping expressions and they can be distinguished by two *E-NEG* relations. Therefore, RP relation type contributes to the overlapping issue. (c) Discontinuous case. There is one discontinuous target in the case. One path "w5 → w7" can be found according to the *T-NW* relation. Therefore, TE relation type can help handling the discontinuous problem. (d) Complex case. Consider the complex and rare case, where there are two overlapping expressions {w3, w5, w7} and {w4, w5, w6}, and the former is discontinuous. If only use RP, discontinuous expression will incorrectly identified as continuous one (i.e., {w3, w4, w5, w6, w7}). If only use TE, it is impossible to identify correct expressions because we can find four paths in the ambiguous case. Therefore, we can obtain correct tuples by collaboratively using both relation types. ## 4 Model Structure This section elaborates upon our model, as depicted in Figure 4, which is designed to effectively integrate the USSA scheme. Our model is mainly composed of four components: the encoder layer, the word-pair representation layer, the refining strategy, and the prediction layer. ## 4.1 Encoder Layer Given the input sentence X = {w1, w2, · · · , wN } with N tokens, the encoding layer outputs ![4_image_0.png](4_image_0.png) the hidden representation sequence H = {h1, h2, · · · , hN }, leveraging BiLSTM as the sentence encoder. Following previous work, we further enhance the token representations with a pretrained contextualized embeddings from frozen multilingual BERT (Devlin et al., 2019). Note that we do not use part-of-speech (POS), lemma, and character-level embedding, which may put our model at a disadvantage in comparison to other models that do. ## 4.2 Word-Pair Representation Layer Evidently, the relation of USSA is asymmetric ((wi, wj ) ̸= (wj , wi)). Inspired by Yu et al. (2021) and Wang et al. (2021), we utilize Conditional Layer Normalization (CLN) to model the conditional word-pair representation R as, $$\begin{array}{c}{{r_{i,j}=\mathrm{CLN}(h_{i},h_{j})}}\\ {{=\gamma_{i}\odot\left(\frac{h_{j}-\mu}{\sigma}\right)+\lambda_{i}}}\end{array}\qquad\qquad(1)$$ where ⊙ denotes the element-wise product. In addition, scale factor γi and shift factor λi can incorporate extra contextual information, and two parameters µ and σ are the mean and standard deviation of hj , i.e., $$\gamma_{i}=\mathrm{W}_{\gamma}h_{i}+b_{\gamma},\quad\lambda_{i}=\mathrm{W}_{\lambda}h_{i}+b_{\lambda}$$ and $$\mu={\frac{1}{d}}\sum_{k=1}^{d}h_{j k},\quad\sigma={\sqrt{\frac{1}{d}}}\sum_{k=1}^{d}\left(h_{j k}-\mu\right)^{2}\ \ (3)$$ where hjk denotes the k-th dimension of hj . Wγ, Wλ, bγ and bλ are learnable parameters. ## 4.3 Refining Strategy Bi-Axial Attention Module. The relations in USSA exhibit a strong correlation in the rows and columns of the table. As an example, Figure 2 illustrates that for the *E-NEG* at position (12,8), the corresponding H-S/H-E/T-S/T-E must be located in row 8 or row 12 or column 8 or column 12 if it exists, and there must be *E-NW* relations in row 8 and in column 12. We propose to adopt bi-axial attention module to capture the correlation of relations and ensure the global connection, drawing inspiration from the success of axial attention (Ho et al., 2019; Wang et al., 2020a; Huang et al., 2019) in computer vision. First, we define a single axial attention as follows, $$\begin{array}{c}{{a_{i,j}=\mathrm{MultiHead}(r_{i,j},r o w_{i},r o w_{i})+}}\\ {{\mathrm{MultiHead}(r_{i,j},c o l_{j},c o l_{j})}}\end{array}\quad\mathrm{(4)}$$ where MultiHead, rowi, colj represent multi-head attention, the i-th row, and the j-th column of wordpair representation R, respectively. Then we utilize another symmetric axial attention and the wordpair representation itself to construct the contextual representation C as, $$(5)$$ $$c_{i,j}=a_{i,j}\oplus r_{i,j}\oplus a_{j,i}$$ $$(2)$$ where ⊕ denotes the concatenation operation. Feature Enhancement. To further improve the representation, we introduce the distance feature as shown in Figure 4. In light of the fact that the relation in USSA is sensitive to the relative distance of word pairs (e.g., the greater the NW span, the more words are spaced for the next word), we use distance feature to represent the relative distance information. Additionally, it helps to distinguish between the lower and upper triangular regions. | Dataset | Split | Sentences | Holders | Targets | Expressions | POS | NEU | NEG | | | | | | |-----------|---------|-------------|-----------|-----------|---------------|-------|-------|-------|------|-----|------|-----|------| | all | over. | dis. | all | over. | dis. | all | over. | dis. | | | | | | | train | 8634 | 898 | 0 | 0 | 6778 | 0 | 39 | 8448 | 1655 | 781 | 5684 | 0 | 2756 | | dev | 1531 | 120 | 0 | 0 | 1152 | 0 | 5 | 1432 | 261 | 131 | 988 | 0 | 443 | | test | 1272 | 110 | 0 | 0 | 993 | 0 | 6 | 1235 | 262 | 125 | 875 | 0 | 358 | | train | 1064 | 205 | 0 | 4 | 1285 | 0 | 23 | 1684 | 0 | 91 | 1406 | 0 | 278 | | dev | 152 | 33 | 0 | 0 | 153 | 0 | 1 | 204 | 0 | 15 | 168 | 0 | 36 | | test | 305 | 58 | 0 | 6 | 337 | 0 | 4 | 440 | 0 | 23 | 375 | 0 | 65 | | train | 1174 | 169 | 0 | 1 | 1695 | 0 | 23 | 1981 | 0 | 61 | 1272 | 0 | 708 | | dev | 168 | 15 | 0 | 0 | 211 | 0 | 1 | 258 | 3 | 6 | 151 | 0 | 107 | | test | 336 | 52 | 0 | 0 | 430 | 0 | 8 | 518 | 0 | 18 | 313 | 0 | 204 | | train | 5873 | 1431 | 0 | 0 | 1487 | 241 | 0 | 1715 | 6 | 0 | 671 | 337 | 698 | | dev | 2063 | 414 | 0 | 0 | 503 | 80 | 0 | 581 | 2 | 0 | 223 | 126 | 216 | | test | 2112 | 434 | 0 | 0 | 462 | 80 | 0 | 518 | 0 | 0 | 159 | 82 | 223 | | train | 2253 | 65 | 0 | 0 | 836 | 16 | 0 | 836 | 0 | 82 | 349 | 104 | 383 | | dev | 232 | 9 | 0 | 0 | 104 | 0 | 0 | 104 | 0 | 8 | 31 | 16 | 57 | | test | 318 | 12 | 0 | 0 | 142 | 2 | 0 | 142 | 0 | 12 | 59 | 12 | 71 | Then the final representation V is obtained as, $$v_{i,j}=c_{i,j}\oplus d_{i-j}$$ $\text{time-distance-combined}$ where di−j is the relative distance embedding. ## 4.4 Prediction Layer To obtain the label probability distribution pi,j for each cell in the table, we feed the refined word pair representation vi,j into a feed-forward network (FFN) and input hi,j into the biaffine predictor as an enhancement. FFN Predictor. For the word pair representation vi,j , we utilize an FFN to obtain the relation score as, $$s_{i,j}^{f}=\mathrm{FFN}_{f}(v_{i,j})$$ where s f i,j ∈ Rm is the relation score, and m is the number of relation type. Biaffine Predictor. Biaffine has proven effective for dependency parsing (Dozat and Manning, 2017), and it can work collaboratively with FFN predictor for relation classification according to previous research (Li et al., 2021a, 2022). We use biaffine module in our model to obtain the relation score s b i,j between the word pair (wi, wj ) as an enhancement, i.e., $$h_{i}^{a}=\mathrm{FNN}_{a}(h_{i})\eqno(8)$$ $$h_{j}^{b}=\mathrm{FNN}_{b}(h_{j})\eqno(9)$$ $$s_{i,j}^{b}=h_{i}^{a\mathrm{T}}U_{1}h_{j}^{b}+U_{2}\left(h_{i}^{a}\oplus h_{j}^{b}\right)+b\eqno(10)$$ $$(6)$$ where U1, U2 and b are trainable weights and bias. Thus, the relation score s b i,j ∈ Rm is obtained. Finally, the label probability distribution is calculated by combining the relation scores s f i,j and s b i,j as, $$p_{i,j}=\mathrm{softmax}\left(\alpha s_{i,j}^{f}+(1-\alpha)s_{i,j}^{b}\right)\quad(11)$$ where α is hyper-parameter used to adjust the influence of the corresponding predictor. ## 4.5 Loss Function Our objective is to minimize the following crossentropy loss as follows, $${\mathcal{L}}=-\sum_{i}^{N}\sum_{j}^{N}\sum_{r\in{\mathcal{R}}}\mathbb{I}(y_{i j}=r)\log(p_{i,j|r})\quad(12)$$ $$(7)$$ where N is the number of tokens in the sentence and R is pre-defined relation set in USSA. ## 5 Experiments 5.1 Datasets And Configuration Following the previous work, we conduct experiments on five benchmark datasets in four languages. The statistics are shown in Table 3. NoReCFine (Øvrelid et al., 2020) is a professional reviews dataset in Norwegian. MultiBEU and MultiBCA (Barnes et al., 2018) annotates hotel views in Basque and Catalan, respectively. MPQA (Wiebe et al., 2005) contains English news wire text and the content of DSUnis (Toprak et al., 2010) is online universities reviews in English. | Dataset | Model | Span | Sent. Graph | | | | |------------------------------------|----------------------------------------------------------------------|-----------|---------------|-------|------|----| | Holder F1 ↑ | Target F1 ↑ | Exp. F1 ↑ | NSF1 ↑ | SF1 ↑ | | | | RACL-BERT (Chen and Qian, 2020) | _ | 47.2 | 56.3 | _ | _ | | | Head-first (Barnes et al., 2021) | 51.1 | 50.1 | 54.4 | 37.0 | 29.5 | | | Head-final (Barnes et al., 2021) | 60.4 | 54.8 | 55.5 | 39.2 | 31.2 | | | Frozen PERIN (Samuel et al., 2022) | 48.3 | 51.9 | 57.9 | 41.8 | 35.7 | | | TGLS (Shi et al., 2022) | 60.9 | 53.2 | 61.0 | 46.4 | 37.6 | | | USSA (Ours) | 66.3 | 54.3 | 61.4 | 47.7 | 39.6 | | | NoReCFine | RACL-BERT (Chen and Qian, 2020) | _ | 59.9 | 72.6 | _ | _ | | Head-first (Barnes et al., 2021) | 60.4 | 64.2 | 73.9 | 58.0 | 54.7 | | | Head-final (Barnes et al., 2021) | 60.5 | 64.0 | 72.1 | 58.2 | 54.7 | | | Frozen PERIN (Samuel et al., 2022) | 55.5 | 58.5 | 68.8 | 53.1 | 51.3 | | | TGLS (Shi et al., 2022) | 62.8 | 65.6 | 75.2 | 61.1 | 58.9 | | | USSA (Ours) | 63.4 | 66.9 | 75.4 | 63.5 | 60.4 | | | MultiBEU | RACL-BERT (Chen and Qian, 2020) | _ | 67.5 | 70.3 | _ | _ | | Head-first (Barnes et al., 2021) | 43.0 | 72.5 | 71.1 | 62.0 | 56.8 | | | Head-final (Barnes et al., 2021) | 37.1 | 71.2 | 67.1 | 59.7 | 53.7 | | | Frozen PERIN (Samuel et al., 2022) | 39.8 | 69.2 | 66.3 | 60.2 | 57.6 | | | TGLS (Shi et al., 2022) | 47.4 | 73.8 | 71.8 | 64.2 | 59.8 | | | USSA (Ours) | 47.5 | 74.2 | 72.2 | 67.4 | 61.0 | | | MultiBCA | RACL-BERT (Chen and Qian, 2020) | _ | 20.0 | 31.2 | _ | _ | | Head-first (Barnes et al., 2021) | 43.8 | 51.0 | 48.1 | 24.5 | 17.4 | | | Head-final (Barnes et al., 2021) | 46.3 | 49.5 | 46.0 | 26.1 | 18.8 | | | Frozen PERIN (Samuel et al., 2022) | 44.0 | 49.0 | 46.6 | 30.7 | 23.1 | | | TGLS (Shi et al., 2022) | 44.1 | 51.7 | 47.8 | 28.2 | 21.6 | | | USSA (Ours) | 47.3 | 58.9 | 48.0 | 36.8 | 30.5 | | | MPQA | RACL-BERT (Chen and Qian, 2020) | _ | 44.6 | 38.2 | _ | _ | | Head-first (Barnes et al., 2021) | 28.0 | 39.9 | 40.3 | 31.0 | 25.0 | | | Head-final (Barnes et al., 2021) | 37.4 | 42.1 | 45.5 | 34.3 | 26.5 | | | Frozen PERIN (Samuel et al., 2022) | 13.8 | 37.3 | 33.2 | 24.5 | 21.3 | | | TGLS (Shi et al., 2022) | 43.7 | 49.0 | 42.6 | 36.1 | 31.1 | | | USSA (Ours) | 44.2 | 50.2 | 46.6 | 38.0 | 33.2 | | | DSUnis | Table 4: Experiment results on five benchmark datasets for SSA task. | | | | | | We obtain the frozen token representations from the pre-trained BERT-multilingual-base to ensure a fair comparison with other methods. Furthermore, we use 4-layer BiLSTMs with an output size of 768. We train our model for 60 epochs with a linear warm-up for 10% of the training steps and save the model parameters based on the highest SF1 score on the development set. We use an NVIDIA A100 to train the model for an average of 45 minutes. The reported results are the averages from five runs with different random seeds. See Appendix A for more details. ## 5.2 Baseline Methods We compare our proposed method with five stateof-the-art baselines. **RACL-BERT** (Chen and Qian, 2020) is a relation aware collaborative learning framework which allows the subtasks of ABSA to work coordinately. Barnes et al. (2021) employ it as a baseline for SSA. **Head-first** and **Headfinal** (Barnes et al., 2021) are two different bilexical dependency parsing methods that use a reimplementation of the neural parser (Dozat and Manning, 2018). **Frozen PERIN** (Samuel et al., 2022) applies PERIN (Samuel and Straka, 2020), a graph-based parser to model a superset of graph features into a frozen XLM-R (Conneau et al., 2020) backbone. **TGLS** (Shi et al., 2022) is a bi-lexical dependency parsing method and it is equipped with the graph attention network. ## 5.3 Evaluation Metrics Following the previous work (Samuel et al., 2022), we mainly use **Sentiment Graph F1 (SF1)** to evaluate our models. SF1 defines a sentiment tuple as a true positive when it is an exact match at graph- NoReCFine MultiBEU MultiBCA MPQA DS**Unis** ![7_image_1.png](7_image_1.png) ![7_image_0.png](7_image_0.png) (-0.91) (-0.78) (-0.99) (-0.63) (-1.02) (-0.26) (-0.48) (-0.58) (-0.48) (-0.31) (-0.17) (-0.38) (-0.35) (-0.25) (-0.20) (-0.36) (-0.28) (-0.18) (-0.47) (-0.17) (-1.86) (-1.16) (-1.42) (-1.10) (-1.60) (-1.29) (-0.57) (-0.61) (-0.04) (-1.01) level, weighting the overlap between the predicted and gold spans for each span, and averaging across all three spans. We also include Holder F1, **Target** F1, and **Exp. F1** for token extraction of *Holders*, Targets, and *Expressions*, as well as **Nonpolarity** Sentiment Graph F1 (NSF1) for further analysis. ## 5.4 Main Results In Table 4, we compare our USSA with other baselines using multiple metrics. We find that our USSA generally outperforms the other baselines in terms of the Span F1 metric across all datasets, and it surpasses the performance of suboptimal method by an average of 1.47% F1 score. It includes significant improvements, such as 7.2% F1 score for extracting targets on MPQA and 5.4% F1 score for extracting holders on NoReCFine. However, the performance of our USSA in extracting targets is slightly weaker, with a 0.5% lower F1 score. Considering the Sentiment Graph metric, which is crucial for comprehensively evaluating entity, relation and polarity predictions, our USSA consistently outperforms all other methods in both NSF1 and SF1. Compared with another strong baseline TGLS, our USSA surpasses its performance by averages 3.48 NSF1 score and 3.14% SF1 score, despite of not using POS, lemma, or character-level embedding. The improvement is attributed to our USSA's ability to effectively address the issues of overlap and discontinuity simultaneously. ## 6 Discussion In this section, we will conduct a deeper analysis to answer the following questions. ## 6.1 Are The Components Of The Model Valid? Table 5 presents the findings of ablation experiments. The results reveal that the bi-axial attention module is useful for good performance, as its ![7_image_2.png](7_image_2.png) ![7_image_3.png](7_image_3.png) removal resulted in an obvious decline in performance across all five datasets. On the other hand, substituting the bi-axial attention module with a single one or removing the distance feature has a less pronounced effect on performance. Furthermore, we find that while the FFN predictor played a dominant role, the biaffine predictor also makes a positive impact, with gains up to 0.47% observed at most. Lastly, discarding the ∗-NW relations cause a noticeable drop in F1 scores across all datasets, particularly on NoReCFine (↓1.29%) and DSUnis (↓1.01%). This is because these datasets contain a higher proportion of discontinuous entities, and without the ∗-NW relations, such entities would be incorrectly identified as continuous ones. In short, the results demonstrate the effectiveness of each module and emphasize the significance of the ∗-NW relations. 14347 ## 6.2 Is Bi-Axial Attention Module Effective? Previous research has demonstrated the effectiveness of convolutional neural networks (CNNs) in table filling methods (Li et al., 2022; Yan et al., 2022). However, when the table is large, it can be challenging for CNNs to fast capture global information (Peng et al., 2021). We conduct a direct comparison with the CNN method used in (Li et al., 2022) as shown in Figure 5. The results indicate that the performance of CNNs decreases across all five datasets, and it is likely due to the fact that many sentences in SSA tasks are long. In addition, we visualize the bi-axial attention scores applied to the E-POS cell as shown in Figure 6. The visualization shows the attention on related relations of E-POS, such as T-S and T-E. To sum up, bi-axial attention mechanism can effectively help identify relations in the table. ## 7 Conclusion In this paper, we propose a novel bi-lexical dependency parsing graph and convert it to a unified 2D table-filling scheme, namely USSA to address the overlapping and discontinuous issues simultaneously. We also develop a model that includes a novel bi-axial attention module to better refine the word-pair representation. Additionally, our proposed framework may serve as an inspiration for other tasks involving the extraction of tuples that both present overlap and discontinuity challenges. ## Limitations Our approach has proven to be superior to previous methods on multiple public benchmark datasets. However, one major disadvantage of the table filling method is the increased training time and memory usage. The computational resources are required for the 2D table representation of word-pair relations for constructing and storing the table. In comparison, using a sequence representation as input could be generally more efficient. Our approach also faces the computational challenge. ## Acknowledgements This work was supported in part by the National Natural Science Foundation of China under Grant 62076032. We appreciate constructive feedback from the anonymous reviewers for improving the final version of this paper. ## References Jeremy Barnes, Toni Badia, and Patrik Lambert. 2018. MultiBooked: A corpus of Basque and Catalan hotel reviews annotated for aspect-level sentiment classification. In *Proceedings of the Eleventh International* Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Jeremy Barnes, Robin Kurtz, Stephan Oepen, Lilja Øvrelid, and Erik Velldal. 2021. Structured sentiment analysis as dependency graph parsing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3387–3402, Online. Association for Computational Linguistics. Hu Cao, Jingye Li, Fangfang Su, Fei Li, Hao Fei, Shengqiong Wu, Bobo Li, Liang Zhao, and Donghong Ji. 2022. OneEE: A one-stage framework for fast overlapping and nested event extraction. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 1953– 1964, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Hao Chen, Zepeng Zhai, Fangxiang Feng, Ruifan Li, and Xiaojie Wang. 2022. Enhanced multi-channel graph convolutional network for aspect sentiment triplet extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2974–2985, Dublin, Ireland. Association for Computational Linguistics. Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang, and Ziming Chi. 2020. Synchronous double-channel recurrent network for aspect-opinion pair extraction. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 6515– 6524, Online. Association for Computational Linguistics. Shaowei Chen, Yu Wang, Jie Liu, and Yuelin Wang. 2021a. Bidirectional machine reading comprehension for aspect sentiment triplet extraction. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(14):12666–12674. Zhexue Chen, Hong Huang, Bang Liu, Xuanhua Shi, and Hai Jin. 2021b. Semantic and syntactic enhanced aspect sentiment triplet extraction. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 1474–1483, Online. Association for Computational Linguistics. Zhuang Chen and Tieyun Qian. 2020. Relation-aware collaborative learning for unified aspect-based sentiment analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3685–3694, Online. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Hongliang Dai and Yangqiu Song. 2019. Neural aspect and opinion term extraction with mined rules as weak supervision. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5268–5277, Florence, Italy. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In *5th International Conference on Learning* Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In *Proceedings of the 56th Annual Meeting of* the Association for Computational Linguistics (Volume 2: Short Papers), pages 484–490, Melbourne, Australia. Association for Computational Linguistics. Andrea Esuli, Fabrizio Sebastiani, and Ilaria Urciuoli. 2008. Annotating expressions of opinion and emotion in the Italian content annotation bank. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA). Zhifang Fan, Zhen Wu, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2509–2518, Minneapolis, Minnesota. Association for Computational Linguistics. Hao Fei, Yafeng Ren, and Donghong Ji. 2020. Boundaries and edges rethinking: An end-to-end neural model for overlapping entity relation extraction. *Information Processing & Management*, 57(6):102311. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 504–515, Florence, Italy. Association for Computational Linguistics. Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. 2019. Axial attention in multidimensional transformers. *arXiv preprint* arXiv:1912.12180. Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, and Wenyu Liu. 2019. Ccnet: Criss-cross attention for semantic segmentation. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 603–612. IEEE. Arzoo Katiyar and Claire Cardie. 2016. Investigating LSTMs for joint extraction of opinion entities and relations. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 919–929, Berlin, Germany. Association for Computational Linguistics. Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022. Unified named entity recognition as wordword relation classification. *Proceedings of the AAAI* Conference on Artificial Intelligence, 36(10):10965– 10973. Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren, and Donghong Ji. 2021a. MRN: A locally and globally mention-based reasoning network for document-level relation extraction. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1359–1370, Online. Association for Computational Linguistics. Ruifan Li, Hao Chen, Fangxiang Feng, Zhanyu Ma, Xiaojie Wang, and Eduard Hovy. 2021b. Dual graph convolutional networks for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6319–6329, Online. Association for Computational Linguistics. Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019a. A unified model for opinion target extraction and target sentiment prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):6714– 6721. Xin Li, Lidong Bing, Wenxuan Zhang, and Wai Lam. 2019b. Exploiting BERT for end-to-end aspect-based sentiment analysis. In *Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)*, pages 34–41, Hong Kong, China. Association for Computational Linguistics. Dehong Ma, Sujian Li, and Houfeng Wang. 2018. Joint learning for targeted sentiment analysis. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4737–4742, Brussels, Belgium. Association for Computational Linguistics. Yue Mao, Yi Shen, Chao Yu, and Longjun Cai. 2021. A joint training dual-mrc framework for aspect based sentiment analysis. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(15):13543–13551. Lilja Øvrelid, Petter Mæhlum, Jeremy Barnes, and Erik Velldal. 2020. A fine-grained sentiment dataset for Norwegian. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5025– 5033, Marseille, France. European Language Resources Association. Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2020. Knowing what, how and why: A near complete solution for aspect-based sentiment analysis. *Proceedings of the AAAI Conference on* Artificial Intelligence, 34(05):8600–8607. Zhiliang Peng, Wei Huang, Shanzhi Gu, Lingxi Xie, Yaowei Wang, Jianbin Jiao, and Qixiang Ye. 2021. Conformer: Local features coupling global representations for visual recognition. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pages 357–366. IEEE. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, and Gül¸sen Eryigit. ˘ 2016. SemEval-2016 task 5: Aspect based sentiment analysis. In *Proceedings of the 10th International* Workshop on Semantic Evaluation (SemEval-2016), pages 19–30, San Diego, California. Association for Computational Linguistics. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In *Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)*, pages 486–495, Denver, Colorado. Association for Computational Linguistics. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In *Proceedings of the 8th* International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35, Dublin, Ireland. Association for Computational Linguistics. Wei Quan, Jinli Zhang, and Xiaohua Tony Hu. 2019. End-to-end joint opinion role labeling with bert. In 2019 IEEE International Conference on Big Data (Big Data), pages 2438–2446. David Samuel, Jeremy Barnes, Robin Kurtz, Stephan Oepen, Lilja Øvrelid, and Erik Velldal. 2022. Direct parsing to sentiment graphs. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 470–478, Dublin, Ireland. Association for Computational Linguistics. David Samuel and Milan Straka. 2020. ÚFAL at MRP 2020: Permutation-invariant semantic parsing in PERIN. In *Proceedings of the CoNLL 2020* Shared Task: Cross-Framework Meaning Representation Parsing, pages 53–64, Online. Association for Computational Linguistics. Wenxuan Shi, Fei Li, Jingye Li, Hao Fei, and Donghong Ji. 2022. Effective token graph modeling using a novel labeling strategy for structured sentiment analysis. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4232–4241, Dublin, Ireland. Association for Computational Linguistics. Cigdem Toprak, Niklas Jakob, and Iryna Gurevych. 2010. Sentence and expression level annotation of opinions in user-generated discourse. In *Proceedings* of the 48th Annual Meeting of the Association for Computational Linguistics, pages 575–584, Uppsala, Sweden. Association for Computational Linguistics. Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan L. Yuille, and Liang-Chieh Chen. 2020a. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. In Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part IV, volume 12349 of Lecture Notes in Computer Science, pages 108–126. Springer. Wenya Wang and Sinno Jialin Pan. 2019. Transferable interactive memory network for domain adaptation in fine-grained opinion extraction. *Proceedings* of the AAAI Conference on Artificial Intelligence, 33(01):7192–7199. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 616– 626, Austin, Texas. Association for Computational Linguistics. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3316–3322. AAAI Press. Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu, and Limin Sun. 2020b. TPLinker: Single-stage joint extraction of entities and relations through token pair linking. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 1572–1582, Barcelona, Spain (Online). International Committee on Computational Linguistics. Yucheng Wang, Bowen Yu, Hongsong Zhu, Tingwen Liu, Nan Yu, and Limin Sun. 2021. Discontinuous named entity recognition as maximal clique discovery. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 764–774, Online. Association for Computational Linguistics. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. *Lang. Resour. Evaluation*, 39(2-3):165– 210. Zhen Wu, Chengcan Ying, Fei Zhao, Zhifang Fan, Xinyu Dai, and Rui Xia. 2020. Grid tagging scheme for aspect-oriented fine-grained opinion extraction. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2576–2585, Online. Association for Computational Linguistics. Qingrong Xia, Bo Zhang, Rui Wang, Zhenghua Li, Yue Zhang, Fei Huang, Luo Si, and Min Zhang. 2021. A unified span-based approach for opinion mining with syntactic constituents. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1795–1804, Online. Association for Computational Linguistics. Lu Xu, Yew Ken Chia, and Lidong Bing. 2021. Learning span-level interactions for aspect sentiment triplet extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4755–4766, Online. Association for Computational Linguistics. Lu Xu, Hao Li, Wei Lu, and Lidong Bing. 2020. Position-aware tagging for aspect sentiment triplet extraction. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 2339–2349, Online. Association for Computational Linguistics. Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021. A unified generative framework for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2416–2429, Online. Association for Computational Linguistics. Hang Yan, Yu Sun, Xiaonan Li, and Xipeng Qiu. 2022. An embarrassingly easy but strong baseline for nested named entity recognition. *CoRR*, abs/2208.04534. Bowen Yu, Zhenyu Zhang, Jiawei Sheng, Tingwen Liu, Yubin Wang, Yucheng Wang, and Bin Wang. 2021. Semi-open information extraction. In *Proceedings of the Web Conference 2021*, WWW '21, page 1661–1672, New York, NY, USA. Association for Computing Machinery. Zepeng Zhai, Hao Chen, Fangxiang Feng, Ruifan Li, and Xiaojie Wang. 2022. COM-MRC: A COntextmasked machine reading comprehension framework for aspect sentiment triplet extraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3230–3241, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Bo Zhang, Yue Zhang, Rui Wang, Zhenghua Li, and Min Zhang. 2020a. Syntax-aware opinion role labeling with dependency graph convolutional networks. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 3249– 3258, Online. Association for Computational Linguistics. Chen Zhang, Qiuchi Li, Dawei Song, and Benyou Wang. 2020b. A multi-task learning framework for opinion triplet extraction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 819–828, Online. Association for Computational Linguistics. ## A Hyper-Parameter Settings Global Hyper-Parameter Settings | Hyperparameter | Assignment | |----------------------------|--------------| | Contextualized Embedding | mBERT | | Embeddings Trainable | False | | Num of Epochs | 60 | | Batch Size | 16 | | Hidden LSTM | 768 | | Dim Distance Feature | 100 | | Gradient Accumulation Step | 2 | ## Local Hyper-Parameter Settings | Dataset | MaxTokenLen | LearningRate | α | |-----------|---------------|----------------|-------| | NoReCFine | 150 | 2e-3 | 0.650 | | MultiBEU | 150 | 2e-3 | 0.500 | | MultiBCA | 386 | 1e-3 | 0.650 | | MPQA | 210 | 2e-3 | 0.725 | | DSUnis | 386 | 1e-3 | 0.650 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations section on Page 9 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract section and section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5.1 ✓ B1. Did you cite the creators of artifacts you used? Section 5.1 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5.1 ## C ✓ **Did You Run Computational Experiments?** Section 5.4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 5.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5.1 and Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5.1 ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
he-etal-2023-pad
{PAD}-Net: An Efficient Framework for Dynamic Networks
https://aclanthology.org/2023.acl-long.803
Dynamic networks, e.g., Dynamic Convolution (DY-Conv) and the Mixture of Experts (MoE), have been extensively explored as they can considerably improve the model{'}s representation power with acceptable computational cost. The common practice in implementing dynamic networks is to convert the given static layers into fully dynamic ones where all parameters are dynamic (at least within a single layer) and vary with the input. However, such a fully dynamic setting may cause redundant parameters and high deployment costs, limiting the applicability of dynamic networks to a broader range of tasks and models. The main contributions of our work are challenging the basic commonsense in dynamic networks and proposing a partially dynamic network, namely PAD-Net, to transform the redundant dynamic parameters into static ones. Also, we further design Iterative Mode Partition to partition dynamic and static parameters efficiently. Our method is comprehensively supported by large-scale experiments with two typical advanced dynamic architectures, i.e., DY-Conv and MoE, on both image classification and GLUE benchmarks. Encouragingly, we surpass the fully dynamic networks by $+0.7\%$ top-1 acc with only 30{\%} dynamic parameters for ResNet-50 and $+1.9\%$ average score in language understanding with only 50{\%} dynamic parameters for BERT. Code will be released at: \url{https://github.com/Shwai-He/PAD-Net}.
# Pad-Net: An Efficient Framework For Dynamic Networks Shwai He1 Liang Ding2∗ Daize Dong3 Boan Liu4 Fuqiang Yu5 **Dacheng Tao**2 1University of Maryland, College Park 2The University of Sydney 3University of Electronic Science and Technology of China 4Wuhan University 5Shandong University shwaihe@umd.edu, liangding.liam@gmail.com ## Abstract Dynamic networks, e.g., Dynamic Convolution (DY-Conv) and the Mixture of Experts (MoE), have been extensively explored as they can considerably improve the model's representation power with acceptable computational cost. The common practice in implementing dynamic networks is to convert the given static layers into fully dynamic ones where all parameters are dynamic (at least within a single layer) and vary with the input. However, such a fully dynamic setting may cause redundant parameters and high deployment costs, limiting the applicability of dynamic networks to a broader range of tasks and models. The main contributions of our work are challenging the basic commonsense in dynamic networks and proposing a partially dynamic network, namely PAD-Net, to transform the redundant dynamic parameters into static ones. Also, we further design Iterative Mode Partition to partition dynamic and static parameters efficiently. Our method is comprehensively supported by large-scale experiments with two typical advanced dynamic architectures, i.e., DYConv and MoE, on both image classification and GLUE benchmarks. Encouragingly, we surpass the fully dynamic networks by +0.7% top-1 acc with only 30% dynamic parameters for ResNet-50 and +1.9% average score in language understanding with only 50% dynamic parameters for BERT. Code will be released at: https://github.com/Shwai-He/PAD-Net. ## 1 Introduction Deep neural networks have been continuously pushing the state-of-the-art performance in the tasks of computer vision (Girshick et al., 2014; Dosovitskiy et al., 2021) and natural language processing (Dai and Le, 2015; Brunet et al., 2019; Zan et al., 2022; Zhong et al., 2022b) in past years, at the cost of increasing training cost (Shen et al., 2023). However, most prevalent architectures perform infer- ∗Corresponding author ence statically where both the computational graph and network parameters are fixed once after training, which limits the representation power. Dynamic networks (Han et al., 2021), as opposed to static ones, adapt their parameters or architectures to each specific input, improving the model representation power with acceptable computational cost, e.g., Switch Transformers (Fedus et al., 2021). The common practice of implementing dynamic networks is transforming static networks (or modules) with counterpart dynamic ones. For example, Dynamic Convolution (DY-Conv) (Chen et al., 2020b) replaces traditional convolution by adopting k adaptive convolutional kernels; Mixture of Experts (MoE) (Shazeer et al., 2017) replaces a single fully connected layer with multiple feed-forward neural networks in a parallel manner. The success of dynamic networks motivates practitioners to design dynamic networks (Fedus et al., 2021; Li et al., 2021a), which often follow a fully dynamic approach, where all parameters are dynamic (at least within a single layer) and change with the input. Previous works (Chen et al., 2020b) show that dynamic networks often outperform their static counterpart, and using more dynamic layers intriguingly leads to ever-increasing performance. For instance, dynamic convolution promotes the performance on the ImageNet when more static convolution layers turn into dynamic ones. However, these prior works do not explain the need for a fully dynamic mechanism, and it remains unclear whether to convert static parameters into dynamic and to what extent if yes. On the other hand, such a fully dynamic manner is resource expensive and may cause redundancy, limiting the applicability of dynamic networks. For instance, the total parameters of BERTBase equipped with MoE are ~506.3M (with 8 experts) compared to only ~108.9M for vanilla BERT-Base. In addition, an MoE layer also multiplies the computation. It seems more is better 14354 when transforming static layers into dynamic ones, but how about the dynamic parameters within a dynamic network: Do all of them lead to the promotion? This urges us to reflect *whether there exist* redundant dynamic parameters, in fully dynamic network layers? Based on the above scrutinization, we hypothesize that less is more for dynamic parameters in fully dynamic networks. Motivated by this hypothesis, we propose the Iterative Mode Partition (IMP) algorithm to progressively convert less important dynamic parameters into static ones for higher efficiency, while maintaining performance at a competitive level. Given a fully dynamic network initialized with all parameters in dynamic mode, we attempt to partition a subset of static parameters out from them. Specifically, we iteratively transform dynamic parameters based on their impact on loss values. If the transformation of the i-th element of dynamic parameters results in only a minimal loss difference, we safely make it static. Given a desired dynamic ratio (the proportion of dynamic parameters), we can balance the trade-off between dynamic and static parameters. Since static parameters are less costly to deploy, we prune redundant parameters after mode partition, obtaining a lightweight architecture, namely Partially Dynamic Networks (**PAD-Net**), which contains two modes of parameters (dynamic parameters that vary with inputs and static parameters that are fixed during inference). Empirically, we extensively validate this hypothesis and our proposed PAD-Net, including GLUE benchmark (Wang et al., 2019) for MoE and visual image classification (Deng et al., 2009) for dynamic convolution. Experiment results reveal that we successfully converted redundant dynamic parameters into static ones and PAD-Net achieves the highest performance in all tasks with lightweight architectures. Given the superiority of PAD-Net in both effectiveness and efficiency, we show that less dynamic is more efficient in fully dynamic networks, successfully verifying the above hypothesis. The inspiration of partially dynamic can be extended to other dynamic networks and even inform future efficient architectures designation. In short, our contributions are threefold: - We hypothesize that a fully dynamic network contains partially dynamic subnetworks that maintain or exceed the representation power of the original network. - Following our hypothesis, we propose the novel PAD-Net framework to achieve a partially dynamic mechanism and devise an *Iterative Mode Partition* (IMP) algorithm to partition static and dynamic parameters. - We empirically validate our hypothesis and PAD-Net on both NLP and CV tasks across two representative dynamic networks, including MoE and dynamic convolution. ## 2 Related Work Dynamic Networks. The dynamic neural network is an emerging research topic in deep learning, which adapts structures or parameters to different inputs, leading to notable advantages in terms of accuracy, and computational efficiency. Han et al. (2021) classify dynamic networks into two categories: dynamic architecture networks and dynamic parameter networks. Dynamic architecture networks adaptively adjust architectures conditioned on each sample. Specifically, they adjust the network depth (Wang et al., 2018), width (Mullapudi et al., 2018), or route based on the input (Huang et al., 2018). Instead of changing the model architecture, dynamic parameter networks boost representation power by adapting parameters or activation functions to the input (Yang et al., 2019; Liu et al., 2021). Existing works often transform various types of static parameters into dynamic versions (Chen et al., 2020b). Among them, dynamic convolution and mixture-of-experts are the typical examples that aggregate multiple convolution parameters (and experts) dynamically based on the input, leading to significant improvement with negligible computational cost. Network Pruning. Past works in network pruning have explored effective techniques to find efficient subnetworks (Lee et al., 2019; Evci et al., 2020; He et al., 2022) and zero out redundant parameters. According to the lottery ticket hypothesis (LTH) pioneered by Frankle and Carbin (2019), dense, randomly initialized, feed-forward networks contain the subnetwork (winning tickets) that maintains comparable test performance of the original network after training for the same iterations. This hypothesis inspires a series of follow-up works in network pruning. However, these methods always sacrifice performance because of pruned parameters. As for dynamic networks, instead of directly pruning dynamic parameters, we considered changing them to static ones. In Section 5.4, we show our approach significantly and consistently outperforms fully dynamic networks in the GLUE benchmark (Wang et al., 2019), while the pruned model performed worse than the original network. ## 3 Review Of Fully Dynamic Networks Basic Concept. Dynamic networks first adjust computational parameters and then compute the input using adjusted parameters, rather than directly using intrinsic parameters to compute the input. In a fully dynamic network, all intrinsic parameters are used as dynamic factors to generate computational parameters Θˆ , which are dependent on two parts: the input x and the intrinsic parameters Θ. Let us denote W as the dynamic function, computational parameters is formulated as Θ = ˆ W(x, Θ). Given an input sample x, the output of is y = F(x, Θ) for a conventional network with static parameters and y = F(x, Θ) ˆ for a dynamic network. Existing dynamic networks, though using different dynamic functions, tend to follow a fully dynamic manner: Networks take all intrinsic parameters to generate the computational parameters where all elements are dynamic and vary with the input. We call such networks fully dynamic networks and, in the following, introduce instantiations coming from dynamic architecture networks, i.e., *Mixture of Experts*, and dynamic parameter networks, i.e., *Dynamic Convolution*, respectively. Mixture of Experts. We talk about dynamic architecture networks by taking the Mixture of Experts (MoE) (Jacobs et al., 1991; Shazeer et al., 2017) as an instantiation. MoE prepares m parallel static experts with parameters Θ(i)(i = 1, 2*, . . . , m*) and only selects n experts with the highest scores (n ≤ m). Given a specific input, we denote G(x) as the output scores of gating and T as the indices of the selected experts. For the i-th selected expert, we denote the combination of the score GTi (x) and parameters Θ(Ti)as w (Ti) = GTi (x), Θ(Ti) . The dynamic function of MoE can be represented as: $$\mathcal{W}(\mathbf{x},\Theta)=\{w^{(\mathcal{T}_{1})},\ldots,w^{(\mathcal{T}_{n})}\},\tag{1}$$ where $w^{(\mathcal{T}_{i})}=\{G_{\mathcal{T}_{i}}(\mathbf{x}),\Theta^{(\mathcal{T}_{i})}\}$. Dynamic Convolution. As a typical example of dynamic parameter networks, Dynamic Convolution (Chen et al., 2020b) prepares k parallel static kernels Θ(i)(i = 1, 2*, . . . , k*) as intrinsic parameters and utilizes the linear combination of them as the aggregated kernel. The linear scale is aggregated dynamically via a channel-wise attention block (Hu et al., 2018) denoted as Attention, so the dynamic function can be written as: $$\mathcal{W}(\mathbf{x},\Theta)=\sum_{i=1}^{k}\pi_{i}(\mathbf{x})\cdot\Theta^{(i)},\qquad\qquad(2)$$ where $\pi(\mathbf{x})=\mathrm{Attention}(\mathbf{x})$. Limitation Discussions. Mainstream dynamic networks usually replace static layers with fully dynamic layers, where all elements of dynamic parameters require corresponding dynamic factors co-working with input samples. However, this situation causes redundant parameters and high deployment costs, limiting the applicability of dynamic networks to a border range of resource-constrained situations and large-scale models. For this fully dynamic manner, we raise two questions: (1) **Is it** necessary to pay the cost of enormous parameters and computations to aggregate dynamic parameters? (2) **Is it necessary to make all computational** parameters dynamic, to maintain the performance improvement? We propose the Partially Dynamic Network (PAD-Net) that mixes dynamic and static parameters to answer the above questions. ## 4 Methodology 4.1 Pad-Net: Partially Dynamic Network In response to the limitation of fully dynamic networks, we question whether it is necessary to make all parameters dynamic. To this end, we try to detect the less important dynamic parameters and transform them into input-agnostic static parameters. Specifically, we utilize a mask Mi(i = 1, 2*, . . . , m*) to indicate whether the i-th element of Θˆ is dynamic or static: Mi = 1 means the i-th element of Θˆ is dynamic and vice versa. We use Θ˜ ∈ R m to denote the dynamic parameters and Θ¯ ∈ R m to represent the static parameters, then computational parameters Θˆ are reformulated as: $${\hat{\Theta}}_{i}=\begin{cases}{\hat{\Theta}}_{i}={\mathcal W}_{i}({\bf x},\Theta)&\mathrm{if\,\,\,M}_{i}=1\\ {\hat{\Theta}}_{i}&\mathrm{otherwise}\end{cases},\qquad(3)$$ where Θˆi(i = 1, 2*, . . . , m*) represents the i-th element of Θˆ , and Θ denotes the dynamic factors. In our architecture, intrinsic parameters include dynamic factors Θ and static parameters Θ¯ . Note that M partitions the computational parameters into two non-overlapping parts, forming a network with ![3_image_0.png](3_image_0.png) only a part of the parameters dynamic, i.e., Partially Dynamic Network (PAD-Net). Details of the procedure of generating the computational parameters from intrinsic are visualized in Figure 1. To overcome the aforementioned challenges and limitations, we propose a novel network architecture, Partially Dynamic Network (PAD-Net). We also devise a new algorithm *Iterative Mode Partition* (IMP) to build this model efficiently. In addition, we set two scale factors to describe the relative intensity of these subnetworks separately in terms of magnitude, namely λs and λd. With these scale factors, we factorize our method into a more general formulation: $$\hat{\Theta}_{i}=\begin{cases}\lambda_{d}\cdot\tilde{\Theta}_{i}&\mathrm{if\;\;M_{i}=1}\\ \lambda_{s}\cdot\tilde{\Theta}_{i}&\mathrm{otherwise}\end{cases},\qquad\qquad(4)$$ where we constrain λs + λd = 2(λs, λd > 0), and Equation 3 is the special situation when both λs and λd are equal to 1. Similar to the constraint Pk i=1 πi in dynamic convolution (Chen et al., 2020b), this constraint compresses the parameters space and significantly simplifies the joint optimization of scale factors and the counterpart parameters. ## 4.2 Iterative Mode Partition In the above section, we present the architecture of PAD-Net, which includes dynamic parameters and counterpart static parameters. Next, we further discuss our method in how to generate indicator masks to partition dynamic and static parameters. Let us first formulate this partition as an optimization problem, where our goal is to minimize loss values L. Given a dataset D = {(xi, yi)} n i=1 and a desired dynamic ratio κ of M, we briefly formulate mode partition as the following constrained optimization problem: $$\begin{array}{ll}\min_{\rm M}L(\hat{\Theta},{\rm M};{\cal D})=\min_{\rm M}\frac{1}{n}\sum_{i=1}^{n}\ell(\hat{\Theta},{\rm M};({\bf x}_{i},{\bf y}_{i})),\\ \mbox{s.t.}&{\rm M}\in\{0,1\}^{m},\quad\|{\rm M}\|_{0}\leq\kappa\cdot m,\end{array}\tag{5}$$ where ℓ(·) denotes the standard loss function (e.g., cross-entropy loss), Θˆ is the set of computational parameters of the neural network, *∥ · ∥*0 is the standard L0 norm, m is the total number of parameters. The conventional approach to optimize the above problem is adding sparsity enforcing penalty term M (Carreira-Perpinán and Idelbayev, 2018), while it often requires heavily tuned hyperparameter settings and several trials. On the other hand, LTH-based (Chen et al., 2020a; Evci et al., 2020) methods find the mask by several iterations, but it is prohibitively time-consuming. Also, considering the large-scale dynamic networks, it is unnecessary to deploy redundant parameters. We tend to partition the two modes before training to prune redundant parameters and avoid timeconsuming training iterations. Inspired by Lee et al. (2019)'s gradient-based pruning strategy, we propose an algorithm to make excessive dynamic parameters static. We resort to mini-batches of training data Db = {(xi, yi)} b i=1 ∼ D to detect redundant dynamic parameters. Given a dynamic parameter Θˆj at the j-th element of Θˆ , we compute its importance of being dynamic based on the loss difference ∆Lj caused by making Θˆj static (changing the value of Mj from 1 to 0): $\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)$ 3. $\frac{1}{2}$ . $\mathcal{L}^{\mu}$ ∆Lj (M, Θ; ˆ Db) = *L(M,* Θ; ˆ Db) − L(M − tj , Θ; ˆ Db), (6) 14357 ![4_image_0.png](4_image_0.png) where tj is the indicator vector of j-th element of M (i.e., zeros everywhere except at the index j where it is one). We only consider transforming redundant dynamic parameters into static ones, so the loss difference ∆Lj is zero if Θˆj is static. Note that computing ∆Lj for each dynamic parameter is prohibitively expensive, as it usually requires millions of forwarding passes over the dataset, so we resort to a simple and effective approximate alternative. Specifically, we release binary constraints of M and make it differentiable and utilize the derivative of L with respect to Mj to approximate ∆Lj : $$\begin{array}{l}\Delta L_{j}(\mathrm{M},\hat{\Theta};\mathcal{D}_{b})\approx g_{j}(\hat{\Theta};\mathcal{D}_{b})=\left.\frac{\partial L_{j}(\mathrm{M},\hat{\Theta};\mathcal{D}_{b})}{\partial\mathrm{M}}\right.\\ \\ =\left.\lim_{\delta\to0}\frac{L_{j}(\mathrm{M},\hat{\Theta};\mathcal{D}_{b})-L_{j}(\mathrm{M}-\delta\mathrm{t}_{j},\hat{\Theta};\mathcal{D}_{b})}{\delta}\right|_{\mathrm{t}=1},\end{array}\tag{7}$$ where gj (Θ; ˆ Db) denotes the j-th element in derivative g(Θ; ˆ Db). We accumulate the derivatives for all j by one forward-backward pass using automatic differentiation. Note that if the magnitude of gj is high, it essentially means that making parameter Θˆj static has a considerable effect on the loss, and it has to be dynamic. In contrast, the parameter should be static if the magnitude of gj is low. Therefore, We take the normalized magnitude of the derivatives of g as the criteria: $$s_{j}=\left|g_{j}(\hat{\Theta};{\mathcal{D}}_{b})\right|/\sum_{k=1}^{m}\left|g_{k}(\hat{\Theta};{\mathcal{D}}_{b})\right|.\tag{8}$$ Given the dynamic ratio κ, we take the sκ (the κ-th percentile of s) as the threshold and transform the mask elements whose scores are below zero: $$\mathrm{M}=1\left[s-s_{\kappa}\geq0\right],$$ where 1[·] is an element-wise indicator function where the output will be 1 if the condition [·] meets else it will be zero. Note that the indicator mask M prunes out redundant parameters in dynamic parameters Θ˜ and static parameters Θ¯ respectively. Also, for fewer dynamic parameters to generate, we can also utilize the binary mask to prune redundant dynamic factors. Taking MoE as an example, M can be directly applied to parallel experts: Θ(i) ← M ⊙ Θ(i), ∀i ∈ {1, 2*, . . . , k*}. In addition, we can decrease the computational cost of generating based on dynamic factors. Inspired by the success of the iterative strategy in pruning at initialization (Verdenius et al., 2020; de Jorge et al., 2021), we start from a fully dynamic network and adopt an iterative strategy shown in Figure 2 to transform dynamic parameters into static parameters iteratively, where we increase the zero ratios of M exponentially. The effectiveness of the mode partition and the iterative mode partition is experimentally verified in Section 5.3. ## 5 Empirical Evaluation 5.1 Implementation Details Mixture of Experts. We use Adam (Kingma and Ba, 2015) as the optimizer with β1, β2 = 0.9, 0.98. For regularization, we set the weight decay as 0.1 and grid-search the learning rate from {1e-5, 5e-5, 1e-4, 5e-4}, where we warm up the learning rate in the first 10% steps (of the total training steps). For different data scales, we grid-search training epoch and batch size from {5, 10, 15, 20} and {8, 16, 32, 64}, respectively. The maximum length is 128 for all tasks. Following Shazeer et al. (2017), we initialize dynamic and static parameters with pretrained parameters. $\eqref{eq:walpha}$. Dynamic Convolution. We use an SGD optimizer (Ruder, 2016) with 0.9 momentum, follow- | Method | BERT | ALBERT | | | | | | | | | | | |----------|---------|----------|----------|----------|----------|---------|--------|----------|----------|----------|----------|------| | #Param. | CoLA | RTE | MRPC | STS-B | Avg. | #Param. | CoLA | RTE | MRPC | STS-B | Avg. | | | Static | 108.9M | 54.6±0.4 | 66.4±0.7 | 84.6±0.3 | 85.8±0.3 | 72.9 | 11.1M | 54.2±0.7 | 76.6±0.7 | 87.2±0.4 | 90.6±0.3 | 77.2 | | MoE | 506.3M | 58.0±0.9 | 69.3±1.2 | 85.0±0.4 | 87.1±0.2 | 74.9 | 44.2M | 56.8±1.2 | 77.2±0.8 | 87.4±0.4 | 90.7±0.3 | 78.0 | | PAD-Net | 308.0M | 59.7±0.8 | 71.5±1.4 | 85.5±0.4 | 90.3±0.6 | 76.8 | 30.0M | 57.4±1.4 | 77.6±0.5 | 88.4±0.3 | 90.9±0.2 | 78.6 | | Method | RoBERTa | ELECTRA | | | | | | | | | | | | #Param. | CoLA | RTE | MRPC | STS-B | Avg. | #Param. | CoLA | RTE | MRPC | STS-B | Avg. | | | Static | 124.1M | 62.8±1.0 | 77.6±1.6 | 90.0±0.5 | 91.0±0.3 | 80.4 | 108.9M | 67.3±1.5 | 82.6±1.7 | 89.0±0.5 | 90.6±0.1 | 82.4 | | MoE | 521.5M | 63.6±1.1 | 78.0±1.4 | 90.2±0.4 | 91.1±0.2 | 80.8 | 506.3M | 67.6±1.1 | 83.0±1.4 | 89.3±0.3 | 90.8±0.2 | 82.7 | | PAD-Net | 323.1M | 64.2±0.8 | 79.4±1.2 | 90.7±0.3 | 91.4±0.3 | 81.4 | 308.0M | 68.2±1.3 | 84.1±1.5 | 89.5±0.4 | 91.2±0.2 | 83.3 | | Depth | Model | Params | FLOPs | Top-1(w/dev) | Width | Model | Params | FLOPs | Top-1(w/dev) | |-----------------|---------|----------|----------|----------------|---------|---------|----------|---------|----------------| | Static | 5.2M | 0.89G | 63.1±0.4 | | | | | | | | CondConv | 36.7M | 0.92G | 66.9±0.2 | | | | | | | | ResNet-10 | DY-Conv | 18.6M | 0.91G | 67.4±0.3 | | | | | | | PAD-Net | ✶6.9M | ✶0.90G | 68.1±0.2 | Static | 2.0M | 97.0M | 65.7±0.3 | | | | CondConv | 15.5M | 113.0M | 68.8±0.2 | | | | | | | | ×0.5 | DY-Conv | 4.0M | 101.4M | 69.6±0.1 | | | | | | | PAD-Net | ✶2.7M | ✶98.3M | 70.4±0.2 | | | | | | | | Static | 11.1M | 1.81G | 70.6±0.3 | | | | | | | | CondConv | 81.4M | 1.89G | 71.9±0.2 | | | | | | | | ResNet-18 | DY-Conv | 42.7M | 1.86G | 72.4±0.3 | | | | | | | PAD-Net | ✶15.1M | ✶1.83G | 73.0±0.3 | Static | 2.6M | 209.1M | 69.2±0.4 | | | | CondConv | 17.5M | 233.9M | 72.1±0.3 | | | | | | | | ×0.75 | DY-Conv | 8.0M | 220.1M | 72.6±0.1 | | | | | | | PAD-Net | ✶5.2M | ✶212.4M | 73.5±0.2 | | | | | | | | Static | 23.5M | 3.86G | 76.2±0.2 | | | | | | | | CondConv | 129.9M | 3.98G | 76.9±0.3 | | | | | | | | ResNet-50 | DY-Conv | 100.9M | 3.97G | 77.2±0.2 | | | | | | | PAD-Net | ✶33.8M | ✶3.90G | 77.9±0.2 | | | | | | | | (a) ResNet | Static | 3.5M | 300.8M | 72.1±0.3 | | | | | | | CondConv | 27.5M | 329.0M | 74.4±0.2 | | | | | | | | ×1.0 | DY-Conv | 11.1M | 312.9M | 74.8±0.2 | | | | | | | PAD-Net | ✶6.1M | ✶304.4M | 75.3±0.1 | | | | | | | | (b) MobileNetV2 | | | | | | | | | | ing cosine learning rate scheduling and warmup strategy. The learning rate rises to the maximum linearly in the first ten epochs and schedules to arrive at zero within a single cosine cycle. We follow Chen et al. (2020b)'s temperature annealing strategy to avoid the unstable output values of the softmax function in the first epochs. We train ResNet for 100 epochs with the max learning rate of 0.1. We train the MobilenetV2 for 300 epochs with the max learning rate of 0.05. The weight decay is 1e-4 for ResNet and 4e-5 for MobilenetV2. The training batch size is 256 for all models. ## 5.2 Main Results Natural Language Understanding. We evaluate the performance of PAD-Net for MoE on various datasets from the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019). Like previous works (Lee et al., 2020; Dodge et al., 2020; Zhong et al., 2022a), we fine-tune pretrained models, e.g., BERT (Devlin et al., 2019), ALBERT (Lan et al., 2020), RoBERTa (Liu et al., 2019), ELECTRA (Clark et al., 2020) on the training set and directly report results on validation set using the last checkpoint, since the test results are only accessible by the leaderboard with submission limitation. Following Shazeer et al. (2017); Gao et al. (2022), we replace feed-forward layers with MoE layers where we prepare 8 experts and select the top-2 experts for each input. We set the dynamic ratio κ = 50% because it is close to the optimal value. Table 1 shows that PAD-Net outperforms MoE on the GLUE benchmark with a 0.95% average increase for four backbones. Specifically, PAD-Net improves BERT by 1.9% and RoBERTa by 0.6% | Method | #Param. | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | Avg. | |----------|-----------|--------|---------|--------|---------|-------|--------|--------|-------|--------| | BERT | 108.9M | 54.6 | 91.4 | 84.6 | 85.8 | 90.6 | 83.7 | 90.4 | 66.4 | 81.2 | | w/ MoE | 506.3M | 58.0 | 91.7 | 85.0 | 87.1 | 90.8 | 83.8 | 90.8 | 69.3 | 82.1 | | κ = 70% | 387.3M | 58.5 | 92.4 | 85.5 | 89.6 | 90.9 | 83.9 | 90.9 | 70.6 | 82.8 | | κ = 50% | 308.0M | 59.7 | 92.2 | 85.4 | 90.3 | 90.9 | 84.2 | 91.0 | 71.5 | 83.2 | | κ = 30% | 228.6M | 59.0 | 92.0 | 85.3 | 89.4 | 91.0 | 84.0 | 90.9 | 71.2 | 82.9 | | κ = 10% | 149.3M | 57.5 | 91.1 | 85.4 | 88.3 | 90.4 | 83.6 | 90.6 | 70.2 | 82.1 | | RoBERTa | 124.1M | 62.1 | 94.0 | 89.6 | 90.6 | 91.0 | 86.9 | 91.8 | 77.4 | 85.4 | | w/ MoE | 521.5M | 63.6 | 94.8 | 90.2 | 91.1 | 91.7 | 87.7 | 92.9 | 78.0 | 86.3 | | κ = 70% | 402.5M | 64.6 | 95.0 | 91.0 | 91.0 | 91.8 | 87.7 | 92.9 | 78.2 | 86.5 | | κ = 50% | 323.1M | 64.4 | 95.2 | 90.7 | 91.4 | 91.9 | 88.0 | 93.0 | 79.4 | 86.8 | | κ = 30% | 243.8M | 63.4 | 94.6 | 90.5 | 91.2 | 91.4 | 87.8 | 93.2 | 78.8 | 86.4 | | κ = 10% | 164.5M | 63.9 | 94.4 | 90.4 | 90.8 | 90.9 | 87.4 | 92.6 | 78.2 | 86.1 | ![6_image_0.png](6_image_0.png) Visual Image Classification. We also report the superiority of PAD-Net in visual image classification. In Table 2, we compare PAD-Net with static convolution (Krizhevsky et al., 2012), CondConv (Yang et al., 2019) and Dynamic Convolution (Chen et al., 2020b) on ImageNet (Deng et al., 2009) classification for ResNet (He et al., 2016) and MobileNetV2 (Sandler et al., 2018) in the same experimental setting with previous works, by adjusting all convolution layers except the first layer. Before training, we first partition two modes of parameters with a given dynamic ratio κ using ten batches of examples. PAD-Net improves accuracy with significantly lighter architecture and fewer FLOPs (Floating Point Operations). For instance, PAD-Net outperforms DY-Conv by 0.7% top-1 accuracy with 33.9% parameters and 0.1G fewer FLOPs in ResNet-50. ## 5.3 Ablation Study Effect of Dynamic Ratio. Inspired by Wettig et al. (2022), we investigate the impact of different dynamic ratios κ, and the results are shown in Table 3 For MoE and Figure 3 for DY-Conv. Because PAD-Net with low dynamic ratios significantly outperforms fully dynamic networks, we only consider ratios of less than 70%, allowing for more sparsity and efficiency. We empirically find that κ = 50% is nearly the optimal ratio for MoE to achieve the highest performance, while the best performance of DY-Conv is achieved when κ = 30%. We believe that different dynamic functions contribute | Model | Option | RTE | STS-B | |-------------|----------|----------|----------| | - | 69.6 | 87.4 | | | λs | 70.7 | 88.1 | | | λd | 70.9 | 89.6 | | | λs, λd | 71.3 | 89.8 | | | λs + λd = 2 | 71.5 | 90.3 | | | Model | Option | CIFAR-10 | ImageNet | | BERT-base | - | 93.9 | 77.1 | | λs | 94.3 | 77.2 | | | λd | 94.5 | 77.4 | | | λs, λd | 96.0 | 77.6 | | | λs + λd = 2 | 96.6 | 77.8 | | | ResNet-50 | | | | Figure 4: **Comparison of different partition methods**, including random partition "Random", mode partition "MP", and iterative mode partition "IMP". We also report dynamic convolution "Dynmiac" as a baseline. ![7_image_0.png](7_image_0.png) Effect of Scale Factors. We also conduct an ablation study on the proposed scale factors and verify their necessity. Table 4 summarizes the impact of scale factors on different architectures. We initially tried to gain scale factors from a SENet structure (Hu et al., 2018), while it did not contribute to the improvement of performance. So we just set scale factors as trainable parameters to avoid redundant parameters and operations. Besides the setting "λs+λd = 2" in Equation 4, we consider other situations: only using one factor ("λs" and "λd") , and no scale factors used ("–"). We conduct experiments on CIFAR-10 (Krizhevsky, 2009) and ImageNet for ResNet-50, RTE, and STS-B for BERT. λs and λd enhance performance substantially, and their coexistence leads to further improvement. To explore the impact of the summation constraint, we release it and denote this setting as "λs, λd". Clearly, without summation constraint, the performance of ResNet-50 and BERT decreases significantly, i.e., -0.4% and -0.35% on average. ## Effectiveness Of Iterative Mode Partition. We compare different partition strategies in Figure 4. Compared to fully dynamic networks, accuracy degrades when we partition two modes randomly, which means this naive partition method mistakes some important dynamic parameters. In contrast, mode partition contributes to a better combination of dynamic and static parameters, improving the accuracy. IMP shows its effectiveness by achieving the best performance. Figure 5: **Visualization description of the computation cost for PAD-Net on MoE.** Given a specific input X, we denote the computation cost for selected experts and static parameters. ![7_image_1.png](7_image_1.png) ## 5.4 Detailed Analysis Reduced Computation. We show the computation cost of PAD-Net in Figure 5. Compared to vanilla MoE, PAD-Net reduces the computation between selected experts and the input, yTi = ETi (x), where ETi denotes the i-th selected experts. Because the two methods share the same gating mechanism, we temporally ignore its computation for simplicity. We denote the computation of the i-th expert as CTi where CT1 = *· · ·* = CTn = c, and the total computation of multi-experts is nc if we select n experts within m ones. In PAD-Net, given the dynamic ratio κ, it is reduced to nκc. Together with the computation (1 − κ)c, the computation of a PAD-Net layer is nκc+ (1−κ)c. Integrated with PAD-Net, an MoE layer can reduce the computation by (n − 1)(1 − κ)c. When κ is low enough, the computation of PAD-Net can be close to static networks. For DY-Conv, the reduced computation lies in the linear combination of parallel kernels Pk i=1 πi(x) · Θ(i), which is sparse in PAD-Net. In short, the degree of reduced computation depends on the specific dynamic function. Difference with Model Pruning. Mode partition maintains important dynamic parameters while making redundant ones static, which may be similar to network pruning. In Table 5, we compare mode partition with network pruning (Lee et al., 2019) on the GLUE benchmark for BERTbase and reveal their difference empirically. PADNet achieves the best performance among all tasks listed, with 1.2% average improvements over vanilla MoE. In contrast, we discover that network pruning lowers the performance of MoE significantly by 1.1% on average. Considering maintain- | Method | #Param. | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | Avg. | |----------|-----------|--------|---------|--------|---------|-------|--------|--------|-------|--------| | Static | 108.9M | 54.6 | 91.4 | 84.6 | 85.8 | 90.6 | 83.7 | 90.4 | 66.4 | 81.2 | | MoE | 506.3M | 58.0 | 91.7 | 85.0 | 87.1 | 90.8 | 83.8 | 90.4 | 69.3 | 82.0 | | MoE-P | 308.0M | 55.6 | 91.6 | 84.7 | 85.8 | 90.8 | 82.4 | 90.2 | 65.7 | 80.9 | | PAD-Net | 59.7 | 92.2 | 85.4 | 90.3 | 90.9 | 84.2 | 91.0 | 71.5 | 83.2 | | Figure 6: **Dynamic property calculation.** We plot layer-wise curves of parameter variance and output variance for ResNet-50. ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) ![8_image_2.png](8_image_2.png) ing the performance of a fully dynamic network, it is preferable to convert unimportant dynamic parameters into static ones than to prune them. Dynamic Property. Dynamic property refers to variant numerical characteristics of a dynamic network caused by different input samples. The ideal dynamic network maintains two capacities: assigning specific parameters for the input and making counterpart output discriminative. Inspired by Li et al. (2021b), we take two levels of variance as metrics (parameter variance and output variance) to measure the dynamic property and show the result in Figure 6. Static convolution, dynamic convolution, and PAD-Net (κ = 30%) show different properties given the same samples from ImageNet. We see that dynamic convolution retains a high degree of parameter variance while it has the lowest output variance. Static convolution performs the opposite. The outputs of PAD-Net are discriminative, which may contribute to its superiority in performance. ## 6 Conclusion And Future Work In this work, we first reveal parameter redundancy and high deployment costs of fully dynamic networks. To resolve these problems, we proposed the partially dynamic network (PAD-Net) to advance both performance and efficiency. PAD-Net demonstrated its superiority on MoE and DY-Conv frameworks. Extensive experiments on both NLP and CV tasks empirically show its effectiveness and efficiency against fully dynamic networks, significantly improving performance with much fewer dynamic parameters and less computation. Our proposed method could be extensively integrated with other mainstream architectures and inspire future work in efficient neural network designation and other fields. ## Acknowledgements We are grateful to the anonymous ACL reviewers and the area chair for their insightful comments and suggestions. ## 7 Limitations Despite the progress we made, there still exist limitations in our work. On the one hand, we only investigated some classic dynamic networks and found that the proposed method contribute to the best performance in selected criteria. However, other advanced partition methods that further improve the performance and efficiency may exist, which deserve exploration in future work. On the other hand, since we only consider MoE and DY-Conv in limited tasks, it would be valuable to consider other architectures (e.g., Switch Transformer (Fedus et al., 2021)), machine learning methods (e.g., reinforcement learning (Li et al., 2022)) and tasks (e.g., machine translation (Ding et al., 2020, 2021)). ## Ethics Statement We take ethical considerations seriously and strictly adhere to the ACL Ethics Policy. This paper focuses on the higher efficiency of dynamic networks, e.g., the mixture of experts. Both the datasets and models used in this paper are publicly available and have been widely adopted by researchers. We ensure that the findings and conclusions of this paper are reported accurately and objectively. ## References Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. 2019. Understanding the origins of bias in word embeddings. In ICML. Miguel A Carreira-Perpinán and Yerlan Idelbayev. 2018. "learning-compression" algorithms for neural net pruning. In *CVPR*. Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, and Michael Carbin. 2020a. The lottery ticket hypothesis for pretrained bert networks. In *NeurIPS*. Yinpeng Chen, Xiyang Dai, Mengchen Liu, Dongdong Chen, Lu Yuan, and Zicheng Liu. 2020b. Dynamic convolution: Attention over convolution kernels. In CVPR. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. In *ICLR*. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In *NeurIPS*. Pau de Jorge, Amartya Sanyal, Harkirat S Behl, Philip HS Torr, Gregory Rogez, and Puneet K Dokania. 2021. Progressive skeletonization: Trimming more fat from a network at initialization. *ICLR*. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In *CVPR*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*. Liang Ding, Longyue Wang, and Dacheng Tao. 2020. Self-attention with cross-lingual position representation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1679–1685, Online. Association for Computational Linguistics. Liang Ding, Di Wu, and Dacheng Tao. 2021. Improving neural machine translation by bidirectional training. EMNLP. Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. *arXiv*. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*. Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. 2020. Rigging the lottery: Making all tickets winners. In *ICML*. William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. *arXiv*. Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. *ICLR*. Ze-Feng Gao, Peiyu Liu, Wayne Xin Zhao, ZhongYi Lu, and Ji-Rong Wen. 2022. Parameter-efficient mixture-of-experts architecture for pre-trained language models. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 3263–3273, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR. Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, and Yulin Wang. 2021. Dynamic neural networks: A survey. *IEEE Transactions on Pattern* Analysis and Machine Intelligence. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *CVPR*. Shwai He, Liang Ding, Daize Dong, Miao Zhang, and Dacheng Tao. 2022. Sparseadapter: An easy approach for improving the parameter-efficiency of adapters. In *EMNLP*. Jie Hu, Li Shen, and Gang Sun. 2018. Squeeze-andexcitation networks. In *CVPR*. Gao Huang, Shichen Liu, Laurens Van der Maaten, and Kilian Q Weinberger. 2018. Condensenet: An efficient densenet using learned group convolutions. In CVPR. Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. 1991. Adaptive mixtures of local experts. *Neural Computation*. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *ICLR*. Alex Krizhevsky. 2009. Learning multiple layers of features from tiny images. Technical report. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In *NeurIPS*. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In *ICLR*. Cheolhyoung Lee, Kyunghyun Cho, and Wanmo Kang. 2020. Mixout: Effective regularization to finetune large-scale pretrained language models. *ICLR*. Alexander Wettig, Tianyu Gao, Zexuan Zhong, and Danqi Chen. 2022. Should you mask 15% in masked language modeling? *arXiv*. Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. 2019. Snip: Single-shot network pruning based on connection sensitivity. In *ICLR*. Chao Li, Aojun Zhou, and Anbang Yao. 2021a. Omnidimensional dynamic convolution. In *ICLR*. Qian Li, Hao Peng, Jianxin Li, Jia Wu, Yuanxing Ning, Lihong Wang, Philip S. Yu, and Zheng Wang. 2022. Reinforcement learning-based dialogue guided event extraction to exploit argument relations. IEEE ACM Trans. Audio Speech Lang. Process. Yunsheng Li, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Dongdong Chen, Ye Yu, Lu Yuan, Zicheng Liu, and Others. 2021b. Revisiting dynamic convolution via matrix decomposition. In *ICLR*. Chuan Liu, Yi Gao, and Jiancheng Lv. 2021. Dynamic normalization. *arXiv*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Others. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv*. Ravi Teja Mullapudi, William R Mark, Noam Shazeer, and Kayvon Fatahalian. 2018. Hydranets: Specialized dynamic architectures for efficient inference. In CVPR. Sebastian Ruder. 2016. An overview of gradient descent optimization algorithms. *arXiv*. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In *CVPR*. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In *ICLR*. Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, and Dacheng Tao. 2023. On efficient training of large-scale deep learning models: A literature review. arXiv. Stijn Verdenius, Maarten Stol, and Patrick Forré. 2020. Pruning via iterative ranking of sensitivity statistics. arXiv. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In *ICLR*. Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, and Joseph E Gonzalez. 2018. Skipnet: Learning dynamic routing in convolutional networks. In *ECCV*. Brandon Yang, Gabriel Bender, Quoc V. Le, and Jiquan Ngiam. 2019. Condconv: Conditionally parameterized convolutions for efficient inference. In *NeurIPS*. Changtong Zan, Keqin Peng, Liang Ding, Baopu Qiu, Boan Liu, Shwai He, Qingyu Lu, Zheng Zhang, Chuang Liu, Weifeng Liu, Yibing Zhan, and Dacheng Tao. 2022. Vega-MT: The JD explore academy machine translation system for WMT22. In WMT. Qihuang Zhong, Liang Ding, Li Shen, Peng Mi, Juhua Liu, Bo Du, and Dacheng Tao. 2022a. Improving sharpness-aware minimization with fisher mask for better generalization on language models. In EMNLP. Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, Li Shen, Juhua Liu, Baosheng Yu, Bo Du, Yixin Chen, et al. 2022b. Toward efficient language model pretraining and downstream adaptation via self-evolution: A case study on superglue. arXiv. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 ✗ A2. Did you discuss any potential risks of your work? Both the methods we cite and the improvement we make involve no potential risk. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. ✗ B1. Did you cite the creators of artifacts you used? Left blank. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** 5.1 5.2 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5.1 5.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
mehrabi-etal-2023-resolving
Resolving Ambiguities in Text-to-Image Generative Models
https://aclanthology.org/2023.acl-long.804
Natural language often contains ambiguities that can lead to misinterpretation and miscommunication. While humans can handle ambiguities effectively by asking clarifying questions and/or relying on contextual cues and common-sense knowledge, resolving ambiguities can be notoriously hard for machines. In this work, we study ambiguities that arise in text-to-image generative models. We curate the Text-to-image Ambiguity Benchmark (TAB) dataset to study different types of ambiguities in text-to-image generative models. We then propose the Text-to-ImagE Disambiguation (TIED) framework to disambiguate the prompts given to the text-to-image generative models by soliciting clarifications from the end user. Through automatic and human evaluations, we show the effectiveness of our framework in generating more faithful images aligned with end user intention in the presence of ambiguities.
# Resolving Ambiguities In Text-To-Image Generative Models Ninareh Mehrabi***, Palash Goyal, Apurv Verma, Jwala Dhamala,** Varun Kumar, Qian Hu, Kai-Wei Chang, Richard Zemel, Aram Galstyan, Rahul Gupta Amazon Alexa AI-NU ## Abstract Natural language often contains ambiguities that can lead to misinterpretation and miscommunication. While humans can handle ambiguities effectively by asking clarifying questions and/or relying on contextual cues and commonsense knowledge, resolving ambiguities can be notoriously hard for machines. In this work, we study ambiguities that arise in text-to-image generative models. We curate the Text-toimage Ambiguity Benchmark (TAB) dataset to study different types of ambiguities in text-toimage generative models.1 We then propose the Text-to-ImagE Disambiguation (TIED) framework to disambiguate the prompts given to the text-to-image generative models by soliciting clarifications from the end user. Through automatic and human evaluations, we show the effectiveness of our framework in generating more faithful images aligned with end user intention in the presence of ambiguities. ## 1 Introduction Natural conversations contain inherent ambiguities due to potentially multiple interpretations of the same utterance. Different types of ambiguities can be attributed to *syntax* (e.g., "the girl looks at the boy holding a green bag" - is the girl holding the green bag?), *semantics* (e.g., "a picture of cricket" - is cricket referring to an insect or a game?), and *underspecification* (e.g., "doctor talking to a nurse" - is the doctor/nurse male or female?). Ambiguities pose an important challenge for many natural language understanding tasks and have been studied extensively in the context of machine translation (Stahlberg and Kumar, 2022), conversational question answering (Guo et al., 2021), and task-oriented dialogue systems (Qian et al., 2022), among others. In this paper, we study the effect of ambiguity in text-to-image generative models (Ramesh et al., *mninareh@amazon.com 1Data located at: https://github.com/Ninarehm/TAB. ![0_image_0.png](0_image_0.png) 2021, 2022; Saharia et al., 2022; Yu et al., 2022) and demonstrate that ambiguous prompts provided to such models might result in undesired outcomes and poor user experience. In particular, ambiguities due to underspecification can lead to biased outcomes with possible implications on fairness of the underlying models (e.g., when prompted with "doctor talking to a nurse", the model might generate images with disproportionate number of male doctors and female nurses). We also propose a framework for mitigating ambiguities existing in prompts. We choose this setting to study ambiguity as visual scenes provide readily human-interpretable alternative understandings of text, thus helping in evaluating ambiguities as well as mitigation strategies. Figure 1 illustrates a few ambiguous prompts and corresponding outputs from the state-of-theart text-to-image model, DALL-E (Ramesh et al., 2022; Dayma et al., 2021). We observe that ambi14367 ![1_image_0.png](1_image_0.png) TIED Framework guity in prompts confuses the model resulting in a varied set of generated images. Humans tend to resolve ambiguities by asking clarifying questions, relying on other forms of modalities (such as vision), using contextual signals, and leveraging common-sense and/or an external source of knowledge (Achimova et al., 2022). Inspired by this observation, we propose a new framework (Figure 2) in which we incorporate a language model-based *prompt disambiguation* filter to process the prompts fed to text-to-image generative models. This filter is capable of either asking clarifying questions or generating different possible textual descriptions of visual setups which would later be resolved through end user interactions (in this work, we define *end user* as a human-agent who interacts with the system and might interchangeably use human to refer to the end user). Ultimately, the disambiguation filter helps the text-to-image model to identify a single visual setup for image generation. In this work, we define visual setups as textual descriptions of different possible visual interpretations of a given ambiguous prompt. To better understand the weaknesses of current text-to-image generative models, and to evaluate the effectiveness of our proposed mitigation framework, we curate a benchmark dataset consisting of ambiguous prompts covering different types of ambiguities that are especially relevant to text-toimage models. We propose new automatic evaluation procedures to evaluate faithfulness of generations in text-to-image generative models to end user intention. We perform various automatic and human evaluation experiments (on over 15k images) and observe that our proposed framework can improve faithful image generation by an overall average of over 11% through prompt disambiguation. Overall, we make the following contributions: 1. We introduce a Text-to-image Ambiguity Benchmark (TAB) dataset containing different types of ambiguous prompts along with different visual setups (Section 2). 2. We propose the Text-to-ImagE Disambiguation (TIED) framework which can be applied on top of any text-to-image model for ambiguity resolution (Section 3). 3. We propose new automatic evaluation procedures to evaluate ambiguity resolution in textto-image models. We perform automatic and human experiments to demonstrate the effect of TIED on ambiguity resolution and faithful image generations (Sections 4 and 5). ## 2 Text-To-Image Ambiguity Benchmark (Tab) Our Text-to-image Ambiguity Benchmark (TAB) is a modified version of the LAVA corpus (Berzak et al., 2015). The original LAVA corpus contains various types of ambiguous sentences that can only be resolved through visual signals from their corresponding images/videos. We use the ambiguous ![2_image_0.png](2_image_0.png) prompts (templates) from LAVA and not the images - as images in our case would be generated by text-to-image generative models. We make various modifications to the LAVA corpus to create TAB. These modifications include: (i) adding new ambiguous sentences to TAB to cover more diverse objects, scenes, and scenarios compared to existing ambiguous sentences in LAVA, (ii) removing examples relevant to video domain from LAVA and only keeping examples relevant to static images in TAB, (iii) including fairness prompts in TAB that cover different activities (Zhao et al., 2017) and occupations (Nadeem et al., 2021) in which the identities of the individuals are ambiguous, (iv) adding more structurally complex sentences, and (v) curating additional labels for TAB (e.g., whether the visual setup or interpretation of an ambiguous sentence is commonsensical or not). As a result of some of the modifications mentioned above, TAB ends up covering 963 more ambiguous sentences (prompts) and 4,192 more visual setups (textual descriptions of possible visual interpretations for each ambiguous sentence) compared to LAVA. LAVA covers 237 ambiguous sentences and 498 visual setups, while TAB covers 1,200 ambiguous sentences and 4,690 visual setups. On a high level, TAB covers six main types of prompt ambiguities, including fairness and linguistic type ambiguities. We add some additional complex cases on top of the six main types of prompt ambiguities. In these complex cases, we take a sample of prompts from TAB and manually make structurally more complex version of each sentence. This process is done in a way such that the ambiguities and meaning of the constituent sentences are kept intact, but the structure of a sentence is made more complex through addition of more information, extra words, adverbs, and adjectives. For instance, "*The girl waves at the old man and* woman." representing an example for syntax conjunction type ambiguity can be turned into "The girl gracefully waves at the old man and woman to show respect and gratitude." with a more complex sentence structure. We also add some additional miscellaneous cases, which are not covered by six main types of ambiguities. In addition, we add combination cases where we combine fairness and linguistic type ambiguities and make new variations from our existing prompts. Each of the 1,200 ambiguous prompts in TAB are accompanied with possible visual setups. We also curate questions that can be surfaced to the end user to clarify the visual setup they have in mind amongst the set of visual setups available for a given ambiguous prompt. The objective of TIED framework is to either generate different visual setups or clarifying questions to the end user to disambiguate the prompts through end user interaction. Therefore, we use these questions and visual setups in TAB as ground truth to evaluate the TIED framework (details in Section 4). Each of the elements (e.g., visual setups) in TAB are generated by an expert annotator and are cross-checked by two additional annotators to ensure that they are sensible. Additional detailed statistics about TAB can be found in Table 1. Appendix A has definitions, additional examples for each of the ambiguities covered in TAB, details of modifications made to LAVA, and the dataset schema. ## 3 Text-To-Image Disambiguation (Tied) Framework Given an ambiguous prompt, our Text-to-ImagE Disambiguation (TIED) framework uses a Language Model (LM) to obtain disambiguation feedback from the end user through an interactive system. Our goal in TIED is to use in-context learning to seek user feedback that can help us disambiguate the prompts. After obtaining the disambiguation feedback, TIED concatenates the obtained signals to the original ambiguous prompts and creates final disambiguated prompts as shown in Figure 2. TIED then uses the final disambiguated prompts to generate images using text-to-image models. We test two resolution approaches in TIED: 1) Question Answering-TIED ("*QA-TIED*") which resolves ambiguities by language models generating questions and seeking answers to disambiguate prompts; 2) Visual Setup-TIED ("*VSTIED*") which resolves ambiguities by language models directly generating different possible visual setups (textual descriptions of different visual scenarios/interpretations) given an ambiguous prompt and seeking signals in the form of a visual setup being chosen. The overall TIED framework along with QA-TIED and VS-TIED ambiguity resolution approaches are shown in Figure 2. ## 3.1 Qa-Tied In QA-TIED, we perform in-context learning on the language model with few-shot examples containing ambiguous prompts as well as related clarifying questions that can result in visual resolution. Then, given an ambiguous prompt at inference time, we ask the model to generate clarifying questions. The question generated by the language model is presented to an end user, who is expected to provide a disambiguating response (assuming the question is useful to the end user and they can express their intention as a response). If the question is irrelevant, we expect the question to be left unanswered. The end user response is then concatenated to the original ambiguous prompt and a final disambiguated prompt is obtained as shown in Figure 2 (left). After obtaining the final disambiguated prompt, the prompt is provided to the text-to-image model and the corresponding image is generated. ## 3.2 Vs-Tied In VS-TIED, we perform in-context learning on the language model with few-shot examples containing ambiguous prompts as well as textual descriptions of possible visual scenarios that can result in visual resolution. Then, given an ambiguous prompt at inference time, we ask the model to generate possible textual descriptions of visual scenarios. Similar to the QA-TIED setup, the end user interacts with the language model and, this time instead of providing answers to clarifying questions, they pick the textual description of visual scenario that they have in mind out of all the possible ones generated by the language model. The chosen textual description of the visual scenario is then concatenated to the original ambiguous prompt and a final disambiguated prompt is obtained as shown in Figure 2 (right). If the generated visual scenarios are irrelevant or may not result in visual resolution, we expect the end user to not pick any generated scenario. Lastly, after obtaining the final disambiguated prompt, the prompt is provided to the text-to-image model and the corresponding image is generated. ## 4 Experiments We evaluate TIED on two complementary aspects: (i) whether language models incorporated in TIED generate appropriate clarifying questions (in the case of QA-TIED) or textual descriptions of visual setups (in the case of VS-TIED), resulting in visual resolution; (ii) whether modified prompts generated through TIED result in faithful image generation aligned with end-user intention. We discuss respective experiments for both of these evaluations below. In our experiments, we use three language models: GPT-2 (Radford et al., 2019), GPTneo (Black et al., 2021), and OPT (Zhang et al., 2022). In addition, we use OpenAI's DALL-E (Ramesh et al., 2022) and DALL-E Mega (Dayma et al., 2021) models as our text-to-image models to generate images in our experiments. ## 4.1 Language Model-Based Disambiguation To evaluate the ability of language models in generating disambiguation questions and/or textual descriptions of visual scenarios, we provide each of the three language models one example from each of the main six types of ambiguities as few-shot examples that are externally sourced and not present in TAB (note that we only need a handful of fewshot examples which are tabulated in Appendix B). We then perform automatic and human evaluations on the results generated by the language models given the ambiguous prompts in TAB as test instances as described below. Automatic Evaluations. In automatic evaluations, we report the alignment of generations by language models to ground truths provided in the TAB dataset. We remind the reader that in TAB, for each ambiguous prompt (e.g., "*The girl looks* at the bird and the butterfly; it is green") different possible textual descriptions of visual interpretations that can be associated to a prompt are present (e.g., (1) "*The bird is green*", (2) "*The butterfly* is green"). These visual interpretations serve as our ground truth in our automatic evaluations for the VS-TIED approach. In addition to visual interpretations, TAB contains the question format of each of those interpretations (e.g., (1) "*Is the bird* green?", (2) "*Is the butterfly green?*") that serve as our ground truth in our automatic evaluations for the QA-TIED approach. In automatic evaluations, ![4_image_0.png](4_image_0.png) we report the BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) scores by comparing the generations to the ground truth visual setups/clarifying questions in the TAB dataset. Human Evaluations. It is possible that the automatic metrics might not capture different variations of the ground truth labels. It is also possible that the automatic metrics might give high scores even if the generated prompts are not useful for disambiguation. Therefore, we perform human evaluations to ensure that the generations are relevant and that the automatic metrics are reliable metrics for this use-case. In human evaluations, we report the fraction of generations by language models for which an end user provides an answer or a selection in the TIED framework indicating the fraction of successful generations. Due to human evaluation task being labor intensive, we report the human evaluation results only on the GPT-neo model as we obtained best automatic results for this model. ## 4.2 Faithful Image Generation In Tied To evaluate the effectiveness of TIED in faithful image generation through prompt disambiguation, we compare generated images using disambiguated prompts obtained through TIED vs the original ambiguous prompts coming from TAB. For each prompt, four images are generated through the textto-image models used in our experiments. Overall, we generate and study over 15k images for different experimental setups and models (Appendix C has more details and statistics on prompts and images). Automatic Evaluations. In our automatic evaluations, we use a Visual Question Answering (VQA) model to check whether end user's intention is satisfied in the generated image. TAB provides each prompt associated with an image with an end user intention in the question format. We use both image and its corresponding question as inputs to the VQA model as shown in Figure 3. Ideally, if the image aligns to end user intention, we would expect the VQA model to output a "Yes" as an answer to the question. Thus, we report the fraction of times the VQA model outputs a "Yes" as an answer as the fraction of faithful generations aligned with end user intention amongst all the generations. For our automatic evaluation, we use the VILT VQA model (Kim et al., 2021). Human Evaluations. In addition to the proposed evaluation framework, we perform human evaluation in which we replace the VQA model with a human evaluator which checks whether the image satisfies the end user intention. Human evaluations would give us an unbiased understanding of TIED's effectiveness in faithful image generation by text-to-image models and would identify if human and VQA approaches agree. The human evaluation experiments are performed on Amazon's mechanical turk (mturk) platform. Overall, 400 images are annotated by mturk workers. Each image is annotated by three workers; thus, we obtain 1200 annotations in total (Appendix C has more details on the mturk experiments along with our survey). Paraphrasing Evaluations. To disambiguate the prompts in TIED, we concatenate the disambiguation signals with the original ambiguous prompts. This can give us complex and unnatural looking sentences. It might be beneficial to restate the sentences to obtain better results. Thus, we explore the effect that paraphrasing the disambiguated prompts can have on creating prompts more aligned with end user intention and hence more faithful image generation. Here, we take all the disambiguated prompts obtained through TIED, which are concatenation of disambiguated signals provided by the end user to the ambiguous prompts, and apply sentence paraphrasing model fine-tuned on BART (Lewis et al., 2020) over them. We then compare the results from providing the text-to-image model the ambiguous prompt vs the disambiguated prompt which is obtained from simple concatenation of end user provided signal to the original prompt vs a paraphrased version of | GPT-2 | GPT-neo | OPT | | | | | |----------------------------------|-----------|---------|--------|---------|--------|---------| | Ambiguity Type | BLEU ↑ | ROUGE ↑ | BLEU ↑ | ROUGE ↑ | BLEU ↑ | ROUGE ↑ | | Total Benchmark | 0.39 | 0.58 | 0.46 | 0.60 | 0.42 | 0.59 | | Syntax Prepositional Phrase (PP) | 0.21 | 0.64 | 0.06 | 0.63 | 0.22 | 0.65 | | Syntax Verb Phrase (VP) | 0.60 | 0.81 | 0.75 | 0.84 | 0.67 | 0.83 | | Syntax Conjunction | 0.17 | 0.63 | 0.23 | 0.65 | 0.06 | 0.56 | | Discourse Anaphora | 0.30 | 0.69 | 0.19 | 0.60 | 0.74 | 0.83 | | Discourse Ellipsis | 0.48 | 0.69 | 0.22 | 0.47 | 0.55 | 0.75 | | Fairness | 0.36 | 0.55 | 0.60 | 0.59 | 0.50 | 0.58 | ![5_image_0.png](5_image_0.png) ROUGE and Human **BLEU and Human** ![5_image_1.png](5_image_1.png) ![5_image_2.png](5_image_2.png) Pearson 0.863 0.546 Table 3: Pearson correlation between human evaluations and automatic metrics. the disambiguated prompt from the previous step. We report whether paraphrasing helps the model to generate more faithful images to end user intention. ## 4.3 Ablation Studies In our first ablation study, we demonstrate the effect of the number of few-shot examples provided to a language model on its performance. In this study, for a given ambiguity type we vary the amount of few-shot examples provided to the model. We then report model's performance on resolving the specific type of ambiguity for which the few-shot examples are given and its generalization ability in resolving other ambiguity types. In our second ablation study, we test model's ability to resolve ambiguities for complex vs simple sentence structures existing in TAB. In this study, we compare the performance disparities between language models' ability in resolving existing ambiguities in simple sentences vs similar sentences with more complex sentence structures that we curate in TAB. ## 5 Results 5.1 Language Model-Based Disambiguation Automatic Evaluations. From results in Table 2, we observe that language models perform reasonably well to generate good quality clarifying questions when given an ambiguous prompt as an input according to BLEU (∼ 0.40) and ROUGE (∼ 0.60) metrics in the few-shot QA-TIED setup. Here, we report the results for the QA-TIED approach in which language models generate one clarifying question per given prompt. Additional results for VS-TIED and the case in which we generate multiple clarifying questions per prompt in QATIED are in Appendix B.1. Similarly, we observe reasonable BLEU and ROUGE scores for the VSTIED setup. However, we note that better scores are obtained in QA-TIED setup compared to VSTIED. This suggests that the task of directly generating multiple scenarios given few-shot examples is harder for these models than generating clarifying questions given an ambiguous prompt. We believe that better scores are obtained for QA-TIED compared to VS-TIED since writing a disambiguation question has less diversity and is less complicated while there might be different ways that one would describe a disambiguated scenario. Thus, providing one way of generating a disambiguation scenario for some in-context examples would not be enough for the model to generalize and learn the task efficiently. In addition to reporting overall results on our overall TAB benchmark dataset, fine-grained results for the six different ambiguity types (as reported in Table 2) suggest that there exists disparity in how different ambiguities are handled by each of these language models. For instance, language models obtain higher BLEU and ROUGE scores on generating clarifying question in QA-TIED for ambiguity type *Syntax Verb Phrase (VP)* than ambiguity type *Syntax Propositional Phrase (PP)*. This ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) ![6_image_2.png](6_image_2.png) suggests that some types of ambiguities are easier for the language models to resolve compared to others, although they see similar number of examples provided per ambiguity type as few-shot examples using the in-context inference strategy. These results also demonstrate that the sentence structure has more room for variation in some ambiguity types making it harder to be resolved in the in-context learning setup. Human Evaluations. Figure 4 shows results from our human evaluation studies which shows similar trend as automatic evaluations discussed above. We report the fraction of generations that are successful according to end user (results for VS-TIED in Appendix B.1). From our human evaluation results as demonstrated in Table 3, we report the Pearson correlation across ambiguity types between human and ROUGE score of 0.863 and between human and BLEU score of 0.546. These results further show the agreement between our performed automatic and human evaluations. ## 5.2 Faithful Image Generation In Tied Human Evaluations. First, we demonstrate the effectiveness of QA-TIED in generating faithful images aligned with end user intention according to human evaluations. As per Fleiss Kappa (Fleiss, 1971), we observe an inter-annotator agreement of 0.86, denoting significant agreement. Figure 5 shows the fraction of times the models generate faithful images. We observe that overall, disambiguation helps with faithful generations by improving the results from the baseline that uses the original ambiguous prompts. Despite the overall positive impact of disambiguation, the fine-grained results in Figure 5 demonstrate that disambiguation has an adverse effect for some ambiguity types (e.g., PP type ambiguity due to the inherent struggle of text-to-image models with prepositions). In addition, we observe that it is harder to generate faithful images with correct interpretations for some ambiguity types (e.g., Ellipsis) due to the complexity of the prompts in this ambiguity category for text-toimage models. Automatic Evaluations. Second, we show similar results for our proposed automatic evaluation method to those of humans in Figure 6. We report Pearson correlation between human results vs automatic to be 0.83 and 0.75 for DALL-E Mega and OpenAI's DALL-E, respectively. This shows that ![7_image_0.png](7_image_0.png) the proposed automatic metric is in agreement with human annotators and can be used as a proxy to human evaluation, saving time and cost. For additional results on other setups (e.g., VS-TIED) and more evaluated images refer to Appendix C.1. Fairness Evaluations. Figure 7 demonstrates the effect that disambiguation via TIED has on generating more diverse images with fairness implications. By specifying identities associated with an individual, more diverse images can be generated. The LM based system (TIED) provides the user an opportunity to specify their intention more clearly. This can improve user satisfaction and encourage these models to generate diverse images. Paraphrasing Evaluations. Lastly, we report the effect paraphrasing the disambiguated prompts has over simply concatenating the disambiguation signal to the end of the ambiguous prompts. Figures 5 and 6 demonstrate that paraphrasing disambiguated prompts can overall have very slight and not significant improvement over simply concatenating the disambiguation signal to the end of the ambiguous prompts.2 ## 5.3 Ablation Studies We report our main findings here (see Appendix B for details). From the results, we observe that although increasing the number of few-shot examples can in some cases have positive impact on performance both in domain and out of domain generalization ability, the nature of the prompt (prompt format and ordering) also plays an important role. Our results also match the previous findings in (Zhao et al., 2021) in which authors study the effect of few-shot examples provided to language models to perform various tasks. In addition, we demonstrate that language models obtain lower performance for complex sentence structures compared to their simple sentence counterparts which is expected. ## 6 Related Work Resolving ambiguities in different NLP applications has been a prominent research direction due to its importance. For instance, word sense disambiguation is one of the areas in NLP that has gained significant attention (Wang and Wang, 2021). Resolving ambiguities in question answering (Min et al., 2020), conversational question answering (Guo et al., 2021), and task-oriented dialogue systems (Qian et al., 2022) has also been previously studied. Ambiguity resolution has also been studied in multi-modal applications, such as multi-modal machine translation (Li et al., 2022) or matching images or videos to disambiguated interpretation of a sentence (Berzak et al., 2015). Despite those recent efforts, not much attention has been paid to ambiguities in text-to-image generative models. On the other hand, the growing popularity of those models, both in academic and non-academic circles, make it imperatives to better understand potential issues with those systems due to language ambiguity (Hutchinson et al., 2022). ## 7 Conclusion In this work, we study the role of prompt ambiguity in text-to-image generative models and propose a disambiguation framework to generate more faithful images better aligned with user intention. We curate a benchmark dataset consisting of different types of ambiguities. We measure the ability of various language models in obtaining disambiguation signals through end user interaction by either generating clarifying questions or visual setups. After obtaining the signals and performing different automatic and human evaluations, we measure the faithfulness of image generations by text-toimage generative models given ambiguous, disambiguated, and paraphrased disambiguated prompts. In addition, we frame and analyze fairness in these systems from a new and different perspective. Although we demonstrate our framework's ability in distinguishing the existence of different interpretations given ambiguous prompts and their resolution, future work can investigate these two intertwined issues separately. In this work, our focus was on ambiguity resolution. Future work can focus on ambiguity detection in this domain. ## Limitations We acknowledge that our benchmark dataset does not cover all the existing ambiguities and that ambiguities related to fairness do not cover all the possibilities. It is also challenging to address all the existing ambiguities considering all the dimensions at once. If we want to consider all the existing ambiguities at once, we would need to deal with a combinatorial explosion of potential ambiguities. We acknowledge that our framework is not designed for combinatorial cases; however, our benchmark and framework is designed to showcase some of the existing problems related to more prominent ambiguities in text-to-image generative models. We encourage future work to expand on this work to consider all the existing possibilities. In addition, although our framework is able to result in more faithful image generations on overall cases, a few ambiguity types in our fine-grained results are shown to be harder to result in faithful image generations. We encourage future work to investigate this issue further to improve the results for these specific ambiguity types. ## Ethical Considerations In this work, we study and propose solutions to resolve existing ambiguities in prompts given to text-to-image generative models. In addition to resolving ambiguities in prompts, this work not only frames and analyzes fairness from a new and different perspective, but it also results in more faithful image generations aligned with end user intention. These aspects can contribute to numerous positive impacts to the research community. Not only one can generate more diverse images through disambiguating fairness type ambiguities, but our framework can also improve user satisfaction by generating aligned images to end user's intention despite existing ambiguities in the provided prompts. Resolving ambiguities can also avoid spread of misinformation and development of fallacies. Despite the aforementioned positive impacts, we also acknowledge the limitations associated with this work. We acknowledge that our benchmark dataset is just a very small sample of different types of ambiguous prompts that can be provided to a system. In addition, for the fairness type ambiguities, we only consider gender (male vs female), skin color (dark vs light), and age (young vs old). We acknowledge that these are only a limited number of characteristics that can represent identity of an individual and that we do not cover all the cases possible. We agree that we do not cover all the cases possible; however, our intent is to showcase a few examples through our benchmark (TAB) and highlight existing flaws associated with these systems encountering ambiguous prompts. In our experiments, we also utilize human annotators. We ensure to provide appropriate guidelines with a proper compensation to our workers (around 12$ per hour). We also utilize master workers based in the United States with proper expertise (completion of more than 1000 HITs with an acceptance rate above 85%). In addition, we provide the workers the opportunity to raise any concerns about our task. Based on the feedback, we believe that the task and the pay was satisfactory to the workers. We hope that our study can provide valuable insights to the research community with the positive implications out-weighting its limitations. We also open-source our benchmark dataset for the community to benefit from our work. As future work, researchers can investigate and propose better alternatives than our proposed framework for resolving ambiguities in text-to-image generative models along with extension of our work to semantic ambiguities in addition to the ones studied in this paper. Our benchmark dataset can also serve as a valuable resource for research in commonsense reasoning studies in text-to-image generative models which is less explored in our current work. We provide information in our benchmark dataset (whether an interpretation is commonsensical or not) which can be accessible to interested researchers in this area. ## References Asya Achimova, Gregory Scontras, Christian Stegemann-Philipps, Johannes Lohmann, and Martin V. Butz. 2022. Learning about others: Modeling social inference through ambiguity resolution. *Cognition*, 218:104862. Yevgeni Berzak, Andrei Barbu, Daniel Harari, Boris Katz, and Shimon Ullman. 2015. Do you see what I mean? visual resolution of linguistic ambiguities. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 1477–1487, Lisbon, Portugal. Association for Computational Linguistics. Sid Black, Gao Leo, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with MeshTensorflow. If you use this software, please cite it using these metadata. Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê Khc, Luke Melas, and Ritobrata Ghosh. 2021. Dall·e mini. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*, 76(5):378. Meiqi Guo, Mingda Zhang, Siva Reddy, and Malihe Alikhani. 2021. Abg-coQA: Clarifying ambiguity in conversational question answering. In *3rd Conference on Automated Knowledge Base Construction*. Ben Hutchinson, Jason Baldridge, and Vinodkumar Prabhakaran. 2022. Underspecification in scene description-to-depiction tasks. In *Proceedings of the* 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1172– 1184, Online only. Association for Computational Linguistics. Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In *International Conference on Machine Learning*, pages 5583–5594. PMLR. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Yihang Li, Shuichiro Shimizu, Weiqi Gu, Chenhui Chu, and Sadao Kurohashi. 2022. Visa: An ambiguous subtitles dataset for visual scene-aware machine translation. *arXiv preprint arXiv:2201.08054*. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5783– 5797, Online. Association for Computational Linguistics. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Kun Qian, Satwik Kottur, Ahmad Beirami, Shahin Shayandeh, Paul Crook, Alborz Geramifard, Zhou Yu, and Chinnadhurai Sankar. 2022. Database search results disambiguation for task-oriented dialog systems. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1158–1173, Seattle, United States. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821–8831. PMLR. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*. Felix Stahlberg and Shankar Kumar. 2022. Jam or cream first? modeling ambiguity in neural machine translation with SCONES. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4950–4961, Seattle, United States. Association for Computational Linguistics. Ming Wang and Yinglin Wang. 2021. Word sense disambiguation: Towards interactive context exploitation from both word and sense perspectives. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 5218–5229, Online. Association for Computational Linguistics. Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. 2022. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In *Proceedings of the 2017* Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark. Association for Computational Linguistics. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In *International Conference on Machine Learning*, pages 12697–12706. PMLR. ## Appendix In this appendix, we will include details that were left out from the main text of the paper due to space limitations including experimental setup details as well as additional results and discussions. We ran all the experiments on an AWS p3.2xlarge EC2 instance. ## A Details About Benchmark Dataset Here, we will first define each of the different types of ambiguities existing in our benchmark dataset (TAB) with a corresponding example. We will then list the details of the modifications along with the extensions made to the original LAVA (Berzak et al., 2015) corpus to make TAB. ## A.1 Definitions Syntax Prepositional Phrase (PP): For this type of syntactic ambiguity, we borrowed the following template NNP V DT [JJ] NN1 *IN DT [JJ] NN*2 from the LAVA corpus (Berzak et al., 2015) to construct most of the cases in TAB. An example for this type of ambiguity can be: *The girl approaches* the shelf with a green plate. It is possible that 1. the green plate is with the girl or 2. is on the shelf. Syntax Verb Phrase (VP): For this type of syntactic ambiguity, we borrowed the following template NNP1 V [IN] NNP2 *V [JJ] NN* from LAVA to construct most of the cases in TAB. An example for this type of ambiguity can be: The girl hits the boy holding a birthday cake. It is possible that 1. the girl is holding the birthday cake or 2. the boy is holding the birthday cake. Syntax Conjunction: For this type of syntactic ambiguity, we borrowed the following templates NNP1 [and NNP2] V DT JJ NN1 *and NN*2 and NNP V DT NN1 or DT NN2 *and DT NN*3 from the LAVA corpus (Berzak et al., 2015) to construct most of the cases in TAB. An example for this type of ambiguity can be: *The girl looks at the red butterfly and bird*. It is possible that 1. the bird is red or 2. the bird is not red. Discourse Anaphora: For this type of discourse ambiguity, we borrowed the following template NNP V DT NN1 and DT NN2*. It is JJ* from the LAVA corpus (Berzak et al., 2015) to construct most of the cases in TAB. An example for this type of ambiguity can be: *The girl looks at the bird and* the butterfly; it is red. It is possible that 1. the bird is red or 2. the butterfly is red. Discourse Ellipsis: For this type of discourse ambiguity, we borrowed the following template NNP1 V NNP2*. Also NNP*3 from the LAVA corpus (Berzak et al., 2015) to construct most of the cases in TAB. An example for this type of ambiguity can be: The wolf eats the rabbit. Also the cat. It is possible that 1. the wolf eats both the rabbit and the cat or 2. the wolf and the cat eat the rabbit. Fairness: For this type of ambiguity, we borrowed list of activities that are possible to visually represent previously curated by (Zhao et al., 2017) (e.g. eating) and formed the following template: A person eating in which *eating* will be replaced by other activities existing in our activity list. In this case, there exists ambiguity since there is no enough information about the identity of the person doing the activity (whether the person is a female or male, their racial category, etc.). For the second type of fairness ambiguity, we borrowed list of different occupations (e.g. doctor) previously curated by (Nadeem et al., 2021) for fairness studies and formed the following template: An image of a doctor in which *doctor* will be replaced by other occupations existing in our occupation list. In this case, again there exists ambiguity since there is no enough information about the identity of the person involved in the occupation (whether the doctor is a female or male, what racial category, etc.). We only consider gender, age, and skin color of the individual as different possible interpretations and realize that this might be a limitation of our work since we do not cover all the possible cases as well as intersectional fairness; however, exhausting all the cases in our benchmark would have been challenging and we leave it to future work. Complex: In this case, we sampled some of the existing prompts created in our benchmark dataset following templates that were discussed above and manually made the structurally more complex version of them such that the meaning and ambiguity was kept the same but the structure was made more complex by addition of more information, words, adjectives, and adverbs. For instance, we converted the following simple ambiguous prompt: The girl waved at the old man and woman to the more complex version *The girl waved at the old man and* woman gracefully to show respect. Combination: In this case, we combined fairness type ambiguities with linguistic type ambiguities existing in our benchmark dataset. For instance, The police threatened the doctor with a gun combines the existing linguistic type ambiguity in our Example Visual Setups Commonsensical or Uncommonsensical **Question Format of Visual Setup** An elephant and a bird flying [the elephant is flying, the elephant is not flying] [UCS,CS] [is the elephant flying?, is the elephant not flying?] Table 4: Dataset schema of our benchmark (TAB) along with one provided example. The example contains the ambiguous prompt. The visual setup contains a list of different possible interpretations given an ambiguous example prompt. UCS represents that the interpretation is uncommonsensical and CS represents that the interpretation is commonsensical. We also include question format of each interpretation that is used in our automatic evaluations as inputs to VQA model. benchmark dataset since it is not clear whether the police is with the gun or the doctor. The same example also covers the fairness type ambiguity from our benchmark dataset since the identities of police and doctor are not specified. miscellaneous: In this case, we added some additional examples that were not covered in any of the previous types discussed above (e.g., porcelain egg container in which it is not clear whether the egg is porcelain or the container). Our benchmark schema is shown in Table 4. Each row of the dataset contains an example that represents the ambiguous prompt. The visual setup contains a list of different possible interpretations given an ambiguous example prompt. UCS represents that the interpretation is uncommonsensical and CS represents that the interpretation is commonsensical. We also include question format of each interpretation that is used in our automatic evaluations as inputs to VQA model. ## A.2 Modifications And Extensions Additions: From the original LAVA corpus (Berzak et al., 2015), we borrowed 112 examples (prompts) that were suitable for our usecase (e.g., there were applicable to static images) and added 1088 additional examples to our benchmark dataset. The original 112 examples, covered only 236 visual scenes (interpretations per ambiguous prompt); however, our extended cases added 4454 additional visual scenes to our benchmark dataset. Thus, in total our benchmark dataset covers 1200 ambiguous prompts (112 coming from LAVA and 1088 additional examples we curated) with 4690 total visual scenes (236 coming from LAVA and 4454 from our crafted examples). Our extensions included addition of different objects, scenes, and scenarios as well as addition of new types of ambiguities, such as the fairness. Modifications: In addition to expanding the LAVA corpus, we made various modifications to this dataset: 1. Our benchmark only contains ambiguous prompts and unlike LAVA we did not need videos/images to be part of our dataset as those will be generated by the text-to-image generative models. We would then evaluate faithfullness of generations using our benchmark dataset. 2. LAVA originally covered only few objects (3), we expanded the corpus to many different objects in diverse settings. 3. Added the fairness component. 4. Added the complex component. 5. Added the combination component in which we combined fairness ambiguity with linguistic ambiguity. 6. Added commonsensical vs uncommensensical label which represents whether each of the interpretations associated to a scene is commonsensical or not. E.g., for the ambiguous prompt An elephant and a bird flying, the first interpretation in which the elephant is not flying is commonsensical and the second interpretation in which the elephant is flying is uncommonsensical. Although we did not directly use this label in our work, we believe that this would be a valuable resource for future work in commonsense reasoning and its relation to ambiguity in such generative models. 7. Lava used proper names to address people in the images/videos. For our usecase, this would not be applicable, so we replaced proper names with girl vs boy to make the distinction possible. 8. Removed cases that were specific to video domain and not applicable to static images. ## B Details For Lm Experiments For this set of experiments we utilized three different language models: GPT-2, GPT-neo, and OPT. For the GPT-2 model, we utilized the 117M parameter pretrained model from huggingface 3. for the GPT-neo model, we utilized the 2.7B parameter model from huggingface 4. Lastly, for the OPT model, we utilized the 350M parameter pretrained model from huggingface 5. For the few-shot prompts provided to these lan- ![13_image_0.png](13_image_0.png) guage models refer to Table 5. We used the same set of prompts for the ablation study in which we compared simple vs complex sentence structures. For the ablation study in which we changed the number of few-shot examples provided to these models for each type of ambiguity specifically refer to Tables 6 and 7. Notice that for this set of experiments, we only considered the setup where the language model would generate one clarifying question per given ambiguous prompt (QA-TIED); thus, the prompts are provided as such. In addition, we used these prompts in order (meaning for one-shot setting, we used the first example. For two-shot setting, we used the first and second examples and so on.). For the automatic evaluation metrics, we used BLEU-4 6and ROUGE-1 7scores and their implementations from huggingface. In the main text, we refer to ROUGE-1 score as ROUGE and BLEU-4 as BLEU for simplicity. ## B.1 Results Automatic evaluation results from generating multiple clarifying questions as well as generating different possible visual setups (VS-TIED) can be found in Tables 8 and 9 respectively. Human evaluation results for generating different possible visual setups (VS-TIED) is demonstrated in Figure 8. ![13_image_1.png](13_image_1.png) The Pearson correlation between ROUGE and human scores are 0.829 and 0.424 between BLEU and human scores. For the first ablation study in which we vary the number of few-shot examples provided to the GPT-2 language model refer to Tables 11 through 16 for each of the ambiguity type separately. Results from the second ablation study in which we compared complex vs simple structures and the differences between language models' ability in generating one clarifying question, generating multiple clarifying questions, and generating multiple visual setups directly can be found in Table 10. In addition, we noticed some interesting patterns that we show the result qualitatively in Table 17. We noticed that even for the ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) same sentence, usage of different words caused the model to generate different outcomes. For instance, as shown in Table 17, for the linguistic type ambiguity, replacement of the word ladybug with giraffe results the model into generating a useful clarifying question that can actually be helpful in resolving the ambiguity vs just repeating the sentence in a question format. Similar pattern holds for fairness type ambiguity in which for the programmer the model generates a useful clarifying question that resolves ambiguities associated to the identity of the individual as given in the few-shot prompt, while for biologist the question is irrelevant, or for other cases the question is not helpful in resolving ambiguities attached to identity of the depicted individuals. These results demonstrate that even for the same sentences, words used in them play a significant role. | GPT-2 | GPT-neo | OPT | | | | | |----------------------------------|-----------|-------|------|-------|------|-------| | Ambiguity Type | BLEU | ROUGE | BLEU | ROUGE | BLEU | ROUGE | | Total Benchmark | 0.31 | 0.56 | 0.43 | 0.57 | 0.41 | 0.58 | | Syntax Prepositional Phrase (PP) | 0.12 | 0.66 | 0.08 | 0.61 | 0.16 | 0.65 | | Syntax Verb Phrase (VP) | 0.50 | 0.77 | 0.60 | 0.79 | 0.64 | 0.82 | | Syntax Conjunction | 0.18 | 0.65 | 0.25 | 0.68 | 0.09 | 0.57 | | Discourse Anaphora | 0.12 | 0.53 | 0.13 | 0.54 | 0.69 | 0.82 | | Discourse Ellipsis | 0.42 | 0.70 | 0.41 | 0.62 | 0.62 | 0.79 | | Fairness | 0.25 | 0.53 | 0.54 | 0.56 | 0.48 | 0.57 | Table 8: Automatic results from language models generating multiple clarifying questions. Ambiguity Type BLEU ROUGE BLEU ROUGE BLEU ROUGE Total Benchmark 0.23 0.52 0.20 0.44 0.31 0.60 Syntax Prepositional Phrase (PP) 0.07 0.61 0.06 0.58 0.07 0.60 Syntax Verb Phrase (VP) 0.39 0.80 0.30 0.69 0.39 0.81 Syntax Conjunction 0.15 0.64 0.14 0.56 0.12 0.67 Discourse Anaphora 0.0 0.57 0.06 0.47 0.0 0.76 Discourse Ellipsis 0.0 0.58 0.14 0.60 0.20 0.76 Fairness 0.29 0.50 0.19 0.41 0.40 0.60 GPT-2 GPT-neo OPT Table 9: Automatic results from language models directly generating multiple visual setups (VS-TIED). Ambiguity Type BLEU ROUGE BLEU ROUGE BLEU ROUGE simple complex simple complex simple complex simple complex simple complex simple complex One Clarifying Question 0.43 0.31 0.65 0.57 0.48 0.34 0.66 0.60 0.45 0.28 0.66 0.56 Multiple Clarifying Questions 0.34 0.24 0.62 0.55 0.44 0.31 0.63 0.58 0.42 0.27 0.65 0.56 Multiple Visual Setups 0.24 0.17 0.59 0.47 0.21 0.18 0.48 0.48 0.32 0.23 0.66 0.56 Table 10: Comparing sub-sample of structurally simple cases that had corresponding complex sentence structures. 1-shot 2-shot 3-shot 4-shot 5-shot 6-shot Ambiguity Type BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE Total Benchmark 0.13 0.38 0.20 0.46 0.27 0.48 0.21 0.47 0.28 0.47 0.32 0.50 Syntax Prepositional Phrase (PP) 0.12 0.42 0.19 0.54 0.29 0.60 0.23 0.58 0.28 0.61 0.32 0.61 Syntax Verb Phrase (VP) 0.29 0.48 0.27 0.42 0.42 0.64 0.43 0.62 0.47 0.67 0.56 0.69 Syntax Conjunction 0.0 0.38 0.09 0.46 0.15 0.55 0.04 0.51 0.0 0.53 0.0 0.51 Discourse Anaphora 0.0 0.40 0.0 0.33 0.0 0.48 0.0 0.42 0.0 0.45 0.0 0.47 Discourse Ellipsis 0.04 0.26 0.0 0.46 0.25 0.55 0.13 0.47 0.26 0.57 0.15 0.50 Fairness 0.0 0.36 0.18 0.50 0.12 0.44 0.10 0.46 0.13 0.43 0.15 0.48 Table 11: The effect of number of few-shot examples provided to GPT-2 model of type Syntax Prepositional Phrase (PP) on generating one clarifying question (QA-TIED) for different types of ambiguities. | GPT-2 | GPT-neo | OPT | |---------|-----------|-------| 1-shot 2-shot 3-shot 4-shot 5-shot 6-shot Ambiguity Type BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE Total Benchmark 0.05 0.36 0.30 0.58 0.33 0.54 0.36 0.56 0.34 0.57 0.33 0.55 Syntax Prepositional Phrase (PP) 0.0 0.34 0.0 0.62 0.0 0.53 0.0 0.53 0.0 0.58 0.10 0.59 Syntax Verb Phrase (VP) 0.22 0.46 0.52 0.77 0.55 0.75 0.63 0.81 0.56 0.79 0.57 0.79 Syntax Conjunction 0.0 0.42 0.21 0.66 0.21 0.60 0.16 0.64 0.23 0.67 0.21 0.64 Discourse Anaphora 0.0 0.11 0.08 0.50 0.13 0.52 0.26 0.66 0.10 0.51 0.0 0.66 Discourse Ellipsis 0.0 0.24 0.43 0.70 0.44 0.70 0.42 0.69 0.42 0.70 0.41 0.70 Fairness 0.0 0.33 0.29 0.58 0.20 0.54 0.22 0.55 0.24 0.57 0.19 0.53 Table 12: The effect of number of few-shot examples provided to GPT-2 model of type Syntax Verb Phrase (VP) on generating one clarifying question (QA-TIED) for different types of ambiguities. Table 13: The effect of number of few-shot examples provided to GPT-2 model of type Syntax Conjunction on generating one clarifying question (QA-TIED) for different types of ambiguities. | 1-shot | 2-shot | 3-shot | 4-shot | 5-shot | 6-shot | | | | | | | | |----------------------------------|----------|------------|------------|------------|------------|------------|-------|------|------|------|------|------| | Ambiguity Type | BLEU | ROUGE BLEU | ROUGE BLEU | ROUGE BLEU | ROUGE BLEU | ROUGE BLEU | ROUGE | | | | | | | Total Benchmark | 0.18 | 0.49 | 0.28 | 0.52 | 0.40 | 0.56 | 0.42 | 0.57 | 0.37 | 0.56 | 0.34 | 0.52 | | Syntax Prepositional Phrase (PP) | 0.0 | 0.44 | 0.0 | 0.53 | 0.0 | 0.57 | 0.13 | 0.54 | 0.08 | 0.53 | 0.0 | 0.46 | | Syntax Verb Phrase (VP) | 0.28 | 0.51 | 0.48 | 0.61 | 0.72 | 0.82 | 0.75 | 0.83 | 0.70 | 0.81 | 0.52 | 0.61 | | Syntax Conjunction | 0.0 | 0.53 | 0.23 | 0.62 | 0.21 | 0.62 | 0.19 | 0.64 | 0.17 | 0.59 | 0.32 | 0.58 | | Discourse Anaphora | 0.0 | 0.5 | 0.0 | 0.46 | 0.10 | 0.57 | 0.24 | 0.64 | 0.15 | 0.56 | 0.0 | 0.48 | | Discourse Ellipsis | 0.04 | 0.26 | 0.08 | 0.31 | 0.09 | 0.40 | 0.0 | 0.35 | 0.11 | 0.51 | 0.0 | 0.36 | | Fairness | 0.14 | 0.54 | 0.17 | 0.55 | 0.32 | 0.57 | 0.34 | 0.57 | 0.34 | 0.55 | 0.19 | 0.54 | 1-shot 2-shot 3-shot 4-shot 5-shot 6-shot Ambiguity Type BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE Total Benchmark 0.16 0.42 0.19 0.43 0.25 0.48 0.27 0.44 0.25 0.48 0.34 0.53 Syntax Prepositional Phrase (PP) 0.0 0.42 0.07 0.52 0.15 0.53 0.10 0.59 0.0 0.56 0.0 0.58 Syntax Verb Phrase (VP) 0.23 0.53 0.33 0.61 0.40 0.64 0.36 0.68 0.40 0.62 0.53 0.74 Syntax Conjunction 0.07 0.52 0.0 0.53 0.15 0.58 0.22 0.63 0.13 0.54 0.30 0.67 Discourse Anaphora 0.86 0.84 0.58 0.76 0.79 0.77 1.0 0.87 0.20 0.46 1.0 0.87 Discourse Ellipsis 0.0 0.34 0.04 0.36 0.11 0.41 0.11 0.42 0.05 0.46 0.07 0.39 Fairness 0.05 0.39 0.11 0.38 0.12 0.43 0.13 0.34 0.13 0.44 0.17 0.50 Table 14: The effect of number of few-shot examples provided to GPT-2 model of type Discourse Anaphora on generating one clarifying question (QA-TIED) for different types of ambiguities. 1-shot 2-shot 3-shot 4-shot 5-shot 6-shot Ambiguity Type BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE Total Benchmark 0.14 0.34 0.24 0.51 0.23 0.51 0.21 0.49 0.21 0.50 0.21 0.50 Syntax Prepositional Phrase (PP) 0.0 0.47 0.0 0.64 0.0 0.66 0.0 0.67 0.0 0.67 0.0 0.66 Syntax Verb Phrase (VP) 0.37 0.60 0.48 0.73 0.44 0.69 0.34 0.62 0.42 0.68 0.31 0.56 Syntax Conjunction 0.0 0.38 0.15 0.61 0.16 0.61 0.17 0.60 0.16 0.59 0.17 0.63 Discourse Anaphora 0.0 0.38 0.0 0.45 0.0 0.44 0.0 0.45 0.0 0.44 0.0 0.44 Discourse Ellipsis 0.0 0.33 0.50 0.73 0.42 0.70 0.70 0.76 0.42 0.70 0.93 0.89 Fairness 0.01 0.25 0.15 0.48 0.10 0.48 0.09 0.47 0.09 0.48 0.11 0.50 Table 15: The effect of number of few-shot examples provided to GPT-2 model of type Discourse Ellipsis on generating one clarifying question (QA-TIED) for different types of ambiguities. 1-shot 2-shot 3-shot 4-shot 5-shot 6-shot Ambiguity Type BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE Total Benchmark 0.35 0.48 0.35 0.50 0.24 0.44 0.20 0.43 0.33 0.49 0.53 0.56 Syntax Prepositional Phrase (PP) 0.0 0.39 0.0 0.45 0.0 0.38 0.0 0.32 0.0 0.45 0.0 0.44 Syntax Verb Phrase (VP) 0.13 0.47 0.17 0.54 0.14 0.48 0.0 0.43 0.31 0.55 0.31 0.55 Syntax Conjunction 0.0 0.40 0.0 0.42 0.0 0.39 0.0 0.40 0.0 0.42 0.0 0.49 Discourse Anaphora 0.0 0.44 0.0 0.45 0.0 0.43 0.0 0.44 0.0 0.44 0.0 0.44 Discourse Ellipsis 0.0 0.20 0.0 0.23 0.0 0.20 0.0 0.20 0.0 0.24 0.0 0.34 Fairness 0.74 0.55 0.59 0.56 0.41 0.48 0.39 0.48 0.48 0.52 0.92 0.62 Table 16: The effect of number of few-shot examples provided to GPT-2 model of type Fairness on generating one clarifying question (QA-TIED) for different types of ambiguities. Table 17: Qualitative examples from both linguistic as well as fairness type ambiguities showing that even for the same sentence replacement of the words (e.g., ladybug with giraffe) can result in different generations (helpful clarifying questions vs repeating the sentence in question format or a non-useful generation). | Prompt | Generation | |----------------------------------------------------------|-----------------------------------------------------------| | Ling The ladybug sits next to the girl eating vegetable. | is the ladybug eating vegetable? | | The giraffe sits next to the girl eating vegetable. | is the giraffe sitting next to the girl eating vegetable? | | An image of a biologist. | is the animal a mammal? | | Fairness An image of a dancer. | is the dancer dancing? | | An image of a programmer. | is the programmer a female? | | An image of a travel guide. | is the travel guide a woman? | | An image of a cook. | is the cook in the kitchen? | ## C Details For Text-To-Image Experiments Through end user vs GPT-neo interactions in the setup where GPT-neo would generate one clarifying question (QA-TIED), we obtained 812 visual setups disambiguated by the end user (3248 images for 4 images per prompt and 4872 for six images per prompt) that represented our prompts for this setup. For the setup in which GPT-neo would generate multiple visual setups (VS-TIED) 805 scenarios were disambiguated by the end user (3220 images for 4 images per prompt and 4830 for 6 images per prompt). For DALL-E Mega, we generated 4 images per each of these prompts in each setup. We also have additional results reported in the Appendix for six images generated per prompt. We also did this generation for the disambiguated prompts, original ambiguous ones (for the sake of comparison between disambiguated vs ambiguous), as well as paraphrased prompts. For OpenAI's DALL-E due to their policies, restrictions, and limitations we were able to obtain images for 744 of these prompts in the setup where GPT-neo would generate one clarifying question (QA-TIED) and generated 4 images per prompt for each of the initial ambiguous prompts and final disambiguated ones by humans. For some portion of the prompts we have six images per prompt. This is due to the fact that OpenAI changed their policy in generating less images (4 instead of 6) after a period of time. However, we report the results on 4 images per prompt since this is the most amount of images that we have for all the prompts available. For the mturk experiments, Amazon mechanical turk workers annotated 150 DALL-E Mega images for the case where GPT-neo would generate one clarifying question (QA-TIED) and end user would provide clarifying answer. 150 DALL-E Mega images for the setup in which GPT-neo would generate multiple visual setups (VS-TIED) and end user would pick the intended one, and 100 OpenAI's DALL-E images for the setup where GPTneo would generate one clarifying question and end user would provide an answer (QA-TIED). Overall, this gave us 400 images. Each image was annotated by 3 mturk workers; thus, overall we ended up with 1200 annotations. The mturk survey provided to mturk workers is included in Figure 16. We recruited master workers from the platform with specific qualifications (completion of more than 1000 HITs with an acceptance rate above 85%). We provided the workers the opportunity to comment on our task and compensated them for approximately 12$ per hour. ## C.1 Results Automatic as well as human evaluation results reporting the percentage of faithful image generations in DALL-E Mega for the setup in which different possible visual setups are generated (VSTIED) by the language model and end user picking the best option and generated images from this signal attached to the initial ambiguous prompt is demonstrated in Figures 9 and 10 respectively. In addition, we report the same set of automatic results both for the case of language model generating clarifying question (QA-TIED) and the end user providing clarifying signals through answering the question as well as language model generating different possible visual setups (VS-TIED) and end user picking the best option for more generated images per prompt (six images per prompt) in Figures 11 and 12. In the previous sets of results we generated four images per prompt; however, in this set of results, we generated six images per prompt. Notice that we report these sets of results only for the DALL-E Mega model as we had quota limitations accessing OpenAI's DALLE. However, since results are similar to those with fewer images per prompt, we believe that the same would hold for OpenAI's DALL-E. These results are additional sets of results covering more images and serve as a sanity check. In addition, we performed experiments in which instead of providing the VQA model with the ground truth questions coming from our benchmark dataset, we provided the VQA model with questions generated by GPTneo in the setup where GPT-neo would generate one clarifying question (QA-TIED). This is done to show whether DALL-E Mega generates faithful images with regards to GPT-neo's generated questions regardless of our overall framework. The results for the case where we generated four images per prompt is demonstrated in Figure 13 and six images per prompt in Figure 14. In this case, instead of reporting the percentage of "Yes"s outputed by the VQA model, we reported the percentage of answers that matched end user provided answers to generated questions by GPT-neo (to report the faithfulness to end user intention). We demonstrate qualitative results comparing the generated images between ambiguous prompts provided to the system vs the disambiguated ones in Figure 15. ![18_image_0.png](18_image_0.png) ![18_image_2.png](18_image_2.png) ![18_image_4.png](18_image_4.png) ![18_image_1.png](18_image_1.png) ![18_image_3.png](18_image_3.png) ![18_image_5.png](18_image_5.png) ![19_image_0.png](19_image_0.png) ![19_image_1.png](19_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? The limitations are discussed under the "Limitations" section after conclusion. ✓ A2. Did you discuss any potential risks of your work? Potential risks are discussed in the "Ethical Considerations" section. ✓ A3. Do the abstract and introduction summarize the paper's main claims? The abstract and introduction summarize the paper's main claims made throughout the paper. ✗ A4. Have you used AI writing assistants when working on this paper? We have only used overleaf and its built in spell-checker. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We used scientific artifacts in our experiments (section 4). ✓ B1. Did you cite the creators of artifacts you used? Experiments section (section 4). ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We used only open-sourced artifacts and properly cited them as required by the provider (section 4). B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. We used only open-sourced artifacts and properly cited them as required by the provider (section 4). B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Our data used does not contain any sensitive information. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix section contains details about artifacts used. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 2 contains detailed statistics about the dataset we used/created (along with additional information in the appendix section). ## C ✓ **Did You Run Computational Experiments?** Section 4 And 5. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix section contains detailed experimental setup discussions along with sections 4 and 5. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 and appendix section. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sections 4, 5, and appendix. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Sections 4, 5, and appendix. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 4 and 5 under human evaluation. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? appendix includes the amazon mechanical turk survey screenshot including the instructions and additional details along with sections 4 and 5. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? appendix, sections 4 and 5 as well as "Ethical Considerations" section includes detailed discussions. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? appendix, sections 4 and 5 as well as "Ethical Considerations" section includes detailed discussions on our human studies. ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? appendix, sections 4 and 5 as well as "Ethical Considerations" section includes detailed discussions on our human studies. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? appendix, sections 4 and 5 as well as "Ethical Considerations" section includes detailed discussions on our human studies.
jang-etal-2023-knowledge
Knowledge Unlearning for Mitigating Privacy Risks in Language Models
https://aclanthology.org/2023.acl-long.805
Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for LMs has mostly focused on data preprocessing and differential privacy methods, both requiring re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs post hoc. We show that simply performing gradient ascent on target token sequences is effective at forgetting them with little to no degradation of general language modeling performances for larger-sized LMs. We also find that sequential unlearning is better than trying to unlearn all the data at once and that unlearning is highly dependent on which kind of data (domain) is forgotten. By showing comparisons with previous methods known to mitigate privacy risks for LMs, we show that our approach can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori while being much more efficient and robust.
# Knowledge Unlearning For Mitigating Privacy Risks In Language Models Joel Jang1∗ Dongkeun Yoon 1 Sohee Yang1 **Sungmin Cha**3 Moontae Lee2,4 Lajanugen Logeswaran2 **Minjoon Seo**1 1KAIST 2LG AI Research 3 Seoul National University 4University of Illinois joeljang@kaist.ac.kr ## Abstract Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for LMs has mostly focused on data preprocessing and differential privacy methods, both requiring re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs *post hoc*. We show that simply performing gradient ascent on target token sequences is effective at forgetting them with little to no degradation of general language modeling performances for larger-sized LMs. We also find that *sequential* unlearning is better than trying to unlearn all the data at once and that unlearning is highly dependent on which kind of data (domain) is forgotten. By showing comparisons with previous methods known to mitigate privacy risks for LMs, we show that our approach can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori while being much more efficient and robust 1. ## 1 Introduction Recent work has shown that an adversary can extract training data from Pretrained Language Models (LMs) including Personally Identifiable Information (PII) such as names, phone numbers, and email addresses, and other information such as licensed code, private clinical notes, and 128-bit UUIDs (Carlini et al., 2021; Lee et al., 2022; Huang et al., 2022; Lehman et al., 2021). In 2021, an AI chatbot *Iruda* became the first AI system to be sued for violating the Personal Information Protection Act after generating the exact home addresses and bank account numbers of actual individuals unintentionally (Park, 2021). Heikkilä (2022) has also ∗work done during internship at LG AI Research. 1We release the code and dataset needed to replicate our results at https://github.com/joeljang/knowledge-unlearning. shown that GPT-3 (Brown et al., 2020), one of the most well-known LM currently in commercial use, offered detailed private information about the Editor-in-Chief of MIT Technology Review including his family members, work address, and phone number. Considering findings that show extracting training data gets easier as LMs scale to larger sizes (Carlini et al., 2022a) and that it is common practice for practitioners to release billion parameters pretrained LMs for public use (Gao et al., 2020; Black et al., 2021; Zhang et al., 2022), it has become important to provide privacy guarantees for large LMs. Practitioners are required to *delete* personal information from the LMs by individuals' request because each individual has the "Right To Be Forgotten (RTBF)" (Mantelero, 2013; Graves et al., 2021) and can limit the direct and indirect commercial use of their personal information (Villaronga et al., 2018). Previous methods addressing privacy risks for language models attempt to remove all private information from the training data (data preprocessing) (Aura et al., 2006; Dernoncourt et al., 2017; Lison et al., 2021; Kandpal et al., 2022) or attempt to design algorithms that ensure differential privacy (DP) (Dwork, 2008; Dwork et al., 2006; Abadi et al., 2016; Anil et al., 2021; Li et al., 2022; Yu et al., 2022). Both approaches require *retraining* the underlying LM every time individuals want to practice their RTBF, which makes them inadequate for large LMs that are extremely costly to retrain. Furthermore, as pointed out by Brown et al. (2022), data preprocessing methods assume private information to be easily identifiable, specified, and removed and DP algorithms can only guarantee protection for information that has clear privacy borders, which makes them inadequate in the real-world scenarios where the standard of privacy might differ by each individual. To this end, we propose *knowledge unlearning* (Figure 1) as an efficient solution that can be ap14389 ![1_image_0.png](1_image_0.png) plied with just a few parameter updates instead of pretraining the underlying LM again. We perform experiments on GPT-Neo LMs (125M, 1.3B, 2.7B) (Black et al., 2021) and show that simply changing the gradient descent to the opposite direction during language modeling (which can also be seen as *maximizing* instead of *minimizing* the loss function) is effective at protecting target sequences from extraction attacks with little to no performance degradation on the initial LM capabilities measured via 13 downstream NLP tasks: 9 common classification benchmarks and 4 dialogue tasks. For some cases, *knowledge unlearning* unexpectedly shows significant improvements in LM performance for some of the benchmarks. We compare our approach with data deduplication method (Kandpal et al., 2022) and differential privacy decoding method (Majmudar et al., 2022) which are both known to mitigate privacy risks and show the effectiveness of knowledge unlearning by providing strong privacy protection while being much more efficient and robust. We also provide a general guideline that can be used to quantify the *memorization* and *extraction likelihood* of target token sequences and suggest when we can empirically consider them to have been "forgotten". Specifically, we introduce a novel metric that measures the extraction likelihood by varying the prefix length of the target token sequence and quantifying how much of the suffix is actually extracted from the LM. Surprisingly, for *knowledge unlearning*, we find that it is easier to forget a chunk of instances *sequentially* rather than trying to forget them all at once. We provide further analysis and show that the difficulty of *knowledge unlearning* depends heavily on the target data being forgotten, especially the domain of the target data. We also provide empirical examples of performing extraction attacks and how exactly *knowledge unlearning* provides privacy protection for the LM. To summarize, our main contributions are fourfold: - We compare *knowledge unlearning* with two approaches from literature known to mitigate privacy risks: a data preprocessing approach and a Differential Privacy (DP) Decoding approach. We show that our approach results in little to no performance degradation of general capabilities (sometimes resulting in improvement) while providing strong privacy protections in situations individuals practice their RTBF whereas the data preprocessing approach provides weaker privacy protection while being orders of magnitude computationally demanding and the DP Decoding approach results in severe degradation of LM performance. - We perform additional experiments to determine which factors contribute to the difficulty of knowledge unlearning and find that (1) trying to forget many samples at once results in substantial LM performance degradation which can be mitigated by *sequentially* forgetting chunks of data and that (2) the domain of the target data (Code, License, Wikipedia, etc.) plays a critical role in determining how hard they are to forget. - We provide a novel metric and a general guideline for quantifying the privacy risks for LMs and determine when they should be considered to have "forgotten" a given target sequence. - *Knowledge unlearning* surprisingly seems to make LMs stronger where the extreme cases bring *+8.0%* (37.6% → 45.6%), *+10.1%* (57.4% → 67.5%), and *+7.9%* (62.2% → 70.1%) improvements on Lambada for GPTNEO 125M, 1.3B, and 2.7B, respectively. ## 2 Related Work 2.1 Privacy Methods For Language Models Prior work that tries to mitigate privacy risks for LMs can be divided mainly into data pre/postprocessing methods and differential privacy methods. (Data) Pre/Post-Processing Data preprocessing aims to sanitize the training data; it aims to get rid of all data that might violate any kind of privacy from the training data prior to training. These methods mostly utilize measures such as parsers and classification models that try to identify and predict patterns that constitute private information. This is effective at identifying well-formatted private information such as social security numbers or special forms of medical notes (Aura et al., 2006; Dernoncourt et al., 2017; Lison et al., 2021; Kandpal et al., 2022). However, as pointed out by Brown et al. (2022), considering that private information is mostly context-dependent and sometimes in a nonspecific format, data preprocessing methods cannot fully claim that they provide privacy guarantees, especially guarantees that match each individual's standards. Methods that attempt to utilize *postprocessing* methods such as applying censorship to the LM outputs still face the same limitations. In this work, we compare our proposed method with a data preprocessing approach proposed by Kandpal et al. (2022) which shows that deduplicating the training corpora before pretraining helps pretrain LMs that show stronger robustness against extraction attacks than an LM pretrained under the same circumstances without deduplicating the pretraining corpora. However, we highlight that this approach, which may still be effective at mitigating the overall privacy risks, is not the most suitable approach when considering a realistic scenario of individuals requesting the removal of their information from the implicit parameters of the LMs. Differential Privacy Differential Privacy (DP) aims to guarantee that the effect of an individual input on the output of a specific function is bounded (Dwork, 2008; Dwork et al., 2006). In the context of deep neural networks, DP, which needs to be applied during the training phase, aims to construct models that can provide *general* guarantees that the individual information within the training data cannot be inferred (Abadi et al., 2016). While DP has shown to be surprisingly effective at fine-tuning LMs (Li et al., 2022; Yu et al., 2022), pretraining LMs with DP still suffers from substantial performance gap, expensive computation, and slow convergence (Anil et al., 2021). Furthermore, as pointed out by Brown et al. (2022), DP can only provide limited guarantees for LMs because DP requires a unified definition for privacy boundaries, which is inherently impossible for natural language data. Most importantly, in a realistic scenario where individuals may practice their Right-To-Be-Forgotten (RTBF) dynamically after model deployment, it is nontrivial to apply existing descent-based DP algorithms such as DP-SGD to only protection against *targeted* extraction attacks. ## 2.2 Machine Unlearning Machine unlearning has received attention as an alternative approach to overcome data privacy issues in machine learning (Cao and Yang, 2015; Ginart et al., 2019; Bourtoule et al., 2021; Graves et al., 2021). Several studies attempt to explore machine unlearning for deep neural networks (Golatkar et al., 2020; Mehta et al., 2022). However, they mostly focus on proposing algorithms for image classification models where they aim to forget a whole class; that is, achieve random performance for specific image classes such as "cats" or "ships". We are the first, to the best of our knowledge, to explore unlearning a specific sequence of tokens for LMs which is a quite different set-up from traditional image classification models (∼tens of image classes vs. a sequence of tokens that can each be classified into V ∈ R∼50,000). In this work, we coin this approach as *knowledge unlearning* since we are more focused on forgetting specific *knowledge* represented by sequences of tokens. Zhou et al. (2022) focus on how *forgetting* can be leveraged to improve the performance of the underlying model. They propose "forget-and-relearn" that unifies existing iterative training algorithms by selectively removing undesirable information and re-learning good features, helping boost performance for the task of image classification and multi-agent emergence communication. The underlying assumption is that it is often easier to define and stop unwanted behavior than to teach good behavior. We also show this phenomenon in Section 4 where we unintentionally find unlearning just a few sequences of tokens sometimes boosts general LM capabilities. ## 2.3 Memorization In Language Models Previous work that explores to which extent LMs have memorized their training data approach the phenomenon with two different viewpoints. Some work view memorization of LMs simply as a threat to individual privacy (Carlini et al., 2021, 2022a; Jagielski et al., 2022) and utilize metrics that quantify how much the LMs are susceptible to adversarial attacks. These metrics are mostly dependent on the specific types of attacks such as the membership inference attack (Shokri et al., 2017) and measure the privacy risks of LMs by quantifying the success rate of these attacks. In our work, we instead focus on more *targeted* extraction attacks. Another line of work simply quantifies how much *knowledge* is accumulated and forgotten during pretraining by extracting relational knowledge about the world (Petroni et al., 2019; Lazaridou et al., 2021; Jang et al., 2022b,a). This line of work does not view memorization as a negative trait, but as a positive one that can be leveraged to extract world knowledge from its implicit parameters and perform knowledge-intensive tasks such as question answering or training knowledgeable conversation agents. Our work is highly related to Jagielski et al. (2022)'s work where they also assert that forgetting can be a relaxed version of differential privacy. However, there are two main differences between our work and theirs. First, they only analyze forgetting as a *passive* form of mitigating privacy, asserting that data seen early in large-scale training obtain privacy benefits, whereas we suggest a more active form of forgetting. Second, they only show analysis results with image classification and audio generation models while we specifically focus on large LMs. ## 3 Knowledge Unlearning 3.1 Methodology We propose simply *negating* the original training objective of minimizing the negative log-likelihood of the token sequences as our main method of knowledge unlearning in LMs. Specifically, given a sequence of tokens x = (x1*, ..., x*T ), our unlearning training objective is simply *maximizing* the following loss function: $${\mathcal{L}}_{U L}(f_{\theta},\mathbf{x})=-\sum_{t=1}^{T}\log(p_{\theta}(x_{t}|x_{<t}))\quad\mathrm{(1)}$$ where x<t denotes the token sequence x = (x1*, ..., x*t−1) and pθ(xt|x<t) denotes the conditional probability of predicting the next token to be xt when given x<t to an LM f with parameters θ. ## 3.2 Quantifying Privacy Risks Of Language Models In this subsection, we introduce two metrics we use to quantify the privacy risks given a specific token sequence and how we empirically define the token sequence to be forgotten. In this work, we do not utilize metrics such as membership inference attack recall (Shokri et al., 2017) since we are not interested in quantifying the *general* privacy risks of LMs, but instead the privacy risks on the specific target token sequences. Extraction Likelihood (EL) We first introduce a new metric, EL. Given a sequence of tokens x = (x1*, ..., x*T ) and an LM f with pre-trained parameters θ, we define EL to be as follows: $$\operatorname{EL}_{n}(\mathbf{x})={\frac{\sum_{t=1}^{T-n}\operatorname{OverlAP}_{n}(f_{\theta}(x_{<t}),x_{\geq t})}{T-n}}\quad{\mathrm{(2)}}$$ $$\operatorname{OverlAP}_{n}(\mathbf{a},\mathbf{b})={\frac{\sum_{c\in n g(\mathbf{a})}\mathbb{1}\left\{c\in n g(\mathbf{b})\right\}}{|n g(\mathbf{a})|}}\quad{\mathrm{(3)}}$$ where ng(·) denotes the list of n-grams in the given token sequence and fθ(x<t) denotes the output token sequences from the LM fθ when given x<t as input that can have max lengths |x≥t| but may be shorter when the EOS (end-of-sequence) token is generated beforehand. The process of varying the prefix length |x<t| can be seen as varying the *strength* of adversarial attacks. This is based on the assumption that the more prior information is provided about the target token sequence, the easier the LM will be able to extract it. Overall, EL can be seen as estimating the general *extraction likelihood* since we are measuring the average success rate of varying extraction attacks quantified via getting the n-gram overlap of generated and target token sequences. While previous metrics quantifying the privacy risks of LMs are dependent on specific adversarial attacks, this characteristic of EL allows it to quantify the general likelihood of extraction without any dependency on specific extraction attacks. We regard n to be a hyper-parameter that can be varied depending on the stringency of privacy standards. The higher n is set, the stricter we set the standard for a successful extraction attack. Memorization Accuracy (MA) First proposed by Tirumala et al. (2022), Memorization Accuracy (MA) is defined as follows: $$\mathbf{MA}(\mathbf{x})=\frac{\sum_{t=1}^{T-1}\mathbbm{1}\{\text{argmax}(p_{\theta}(\cdot|x_{<t}))=x_{t}\}}{T-1}\tag{4}$$ MA quantifies how much fθ has memorized the given token sequences and can be used to analyze the training dynamics of large LMs. Empirical Definition of Forgetting By utilizing both ELn and MA, we empirically define a specific token sequence x to be forgotten and is no longer susceptible to extraction attacks when both of the following conditions are met: $$\mathrm{and}$$ $$\mathrm{EL}_{n}(\mathbf{x})\leq\frac{1}{|D^{\prime}|}\sum_{\mathbf{x}^{\prime}\in D^{\prime}}\mathrm{EL}_{n}(\mathbf{x}^{\prime})\tag{5}$$ $$\mathrm{MA}(\mathbf{x})\leq\frac{1}{|D^{\prime}|}\sum_{\mathbf{x}^{\prime}\in D^{\prime}}\mathrm{MA}(\mathbf{x}^{\prime})\tag{6}$$ where D′represents a validation corpora not seen during training. In other words, we define x to be forgotten when the ELn(x) and MA(x) reach a value that is lower than the average ELn and MA on token sequences that were not seen during training. ## 4 Experiments 4.1 Models, Datasets, And Configurations Baselines For the experiments, we use the GPTNEO (125M, 1.3B, 2.7B) LMs (Black et al., 2021) initially pretrained on all of the Pile corpora (825GB) (Gao et al., 2020), and the OPT (125M, 1.3B, 2.7B) LMs (Zhang et al., 2022), pretrained on a subset of the *deduplicated* version of the Pile as well as other corpora from different domains. For the experiments, we perform unlearning the GPT-NEO LMs and quantify the privacy risks of the target data compared to the OPT LMs to measure how effective our proposed approach is in contrast to deduplicating the training corpora before pretraining the underlying LM Kandpal et al. (2022). We do not use the exact LMs from Kandpal et al. (2022) because the LMs were not opensourced, and thus use the OPT LMs instead. We also consider the Differential Privacy (DP) Decoding (Majmudar et al., 2022) as one of the baselines; This approach proposes a decoding strategy that performs linear interpolation of the original logits with the uniform distribution and performs nucleus sampling, which they theoretically show provides DP guarantees. λ is set as the linear interpolation weight where λ = 0 performs nucleus sampling from the uniform distribution and λ = 1 performs regular nucleus sampling, using the logits as weights during random sampling. Target Data For the actual target data used to quantify the privacy risks of the LMs, we sample instances from the Training Data Extraction Challenge 2 where 15,000 examples (each are 200 token sequences long) from 16 different domains of the Pile corpora that are identified to be somewhat easyto-extract are provided. For our experiments, we randomly sample s samples from the 15,000 examples and make the underlying LM forget the s samples at once. As a default, we show the average results of 5 random samplings of s samples for all of our experimental settings. We only provide the average of the 5 samplings and do not separately report the standard deviation. Instead, we provide the results of each individual run in Appendix A. 2https://github.com/google-research/lm-extraction-benchmark Evaluation Datasets Providing stronger privacy protections for LMs may become meaningless if it requires sacrificing their original capabilities. Thus, while quantifying the privacy risks of LMs, we also quantify the original LM capabilities by evaluating the LMs on 9 classification tasks quantifying the general capabilities: Hellaswag (Zellers et al., 2019) and Lambada (Paperno et al., 2016) benchmarks to measure linguistic reasoning abilities, Winogrande (Sakaguchi et al., 2021) and COPA (Gordon et al., 2012) to measure commonsense reasoning abilities, and ARC-Easy (Clark et al., 2018), ARC-Challenge (Clark et al., 2018), Piqa (Bisk et al., 2020), MathQA (Amini et al., 2019), PubmedQA (Jin et al., 2019) benchmarks to measure the scientific reasoning abilities. We also evaluate on 4 dialogue tasks (Wizard of Wikipedia (Dinan et al., 2019), Empathetic Dialogues (Rashkin et al., 2019), Blended Skill Talk (Smith et al., 2020), and Wizard of Internet (Komeili et al., 2022)) to evaluate the generation capabilities of the LMs. We use the test set for Lambada and the validation set for the rest of the datasets. We also show the results of measuring the perplexity on the validation corpora of Pile and Wikitext in Appendix B. We do not include measuring perplexity as one of the main evaluations because perplexity might not be the most suitable metric for quantifying general LM performance, especially in the case of unlearning (further explanation given in Appendix B. We evaluate DP Decoding only on the 4 dialogue tasks because the decoding strategy cannot be applied for performing the classification tasks which is evaluated by utilizing a *verbalizer*. Configurations For the learning rate, we set it to 5e-5. We show the effect of varying learning rates in Appendix D. We use a constant learning rate scheduling throughout the run. We fix the global batch size to be the same as s (how many samples are forgotten at once) because having global batch sizes smaller than s proved to degrade general LM capabilities 3. For ELn, we set n=10 which means EL measures the extraction likelihood of extracting n consecutive tokens of varying extraction attack 4. For calculating EL10 and MA, we use a | Model (Size) | EL10(%) | MA(%) | |----------------|-----------|---------| | Threshold | Threshold | | | GPT-NEO (125M) | 4.99 | 29.94 | | GPT-NEO (1.3B) | 5.68 | 33.27 | | GPT-NEO (2.7B) | 5.53 | 34.02 | Table 1: Forgetting Threshold for GPT-NEO LMs naïve greedy decoding strategy. We set both the dropout and weight decay rates to 0. Lastly, while we provide a guideline of empirically deciding a single token sequence to be forgotten in Section 3.2, for considering a *chunk* of s token sequences to be forgotten, we use the average EL10 and MA as an approximation of the individual EL10 and MA. ## 4.2 Main Experiments Forgetting Threshold First, we show how we get the Forgetting Threshold for EL10 and MA, the values where we consider the token sequence to be forgotten and unsusceptible from extraction attacks, for all model sizes of GPT-NEO LMs in Table 1. For D′, we perform weighted sampling (same domain distribution as the Pile training corpora) of 10,000 instances each with token lengths 200 from the Pile validation corpora, and measure the average EL10 and MA (Equation 5-6), which are empirically set as the Forgetting Threshold values. Main Results Table 2 shows the main results of performing unlearning on LMs of varying sizes and the baselines. While we provide the average performances of the 5 random samplings in Table 2, we provide each individual runs in Appendix A for reference. We highlight five main observations regarding the results. (1) OPT LMs show a much lower EL10 and MA than GPT-NEO LMs, confirming that deduplicating the pretraining corpora is indeed helpful for mitigating privacy risks. (2) + DPD+ enables effective protection against extraction attacks demonstrated via the lowest EL and MA score; however, it brings severe degradation of generation capabilities measured via the average F1 score of the 4 dialogue generation tasks. (3) + UL+ results in severe degradation of both classification and dialogue tasks for the 125M, only severe degradation of dialogue tasks for 1.3B LM while for the 2.7B LMs, it enables retaining most of its previous capabilities. (4) While the LMs scale to larger sizes, it takes fewer epochs for the | Model | # | EL10 | MA | Classification Avg. | Dialogue Avg. | Epoch | |---------|-------|--------|---------|-----------------------|-----------------|---------| | Params | (%) ↓ | (%) ↓ | (ACC) ↑ | (F1) ↑ | | | | OPT | 125M | 8.6 | 52.9 | 42.4 | 10.2 | - | | NEO | 125M | 30.9 | 77.4 | 43.4 | 9.4 | - | | + DPD+ | 125M | 0.0 | 27.4 | N/A | 7.3 | - | | + UL | 125M | 3.7 | 50.1 | 42.6 | 8.0 | 11.0 | | + UL+ | 125M | 1.0 | 27.4 | 39.9 | 2.6 | 17.2 | | OPT | 1.3B | 23.3 | 67.1 | 50.6 | 12.4 | - | | NEO | 1.3B | 67.6 | 92.2 | 49.8 | 11.5 | - | | + DPD+ | 1.3B | 0.0 | 21.4 | N/A | 7.1 | - | | + UL | 1.3B | 11.0 | 62.2 | 49.7 | 11.6 | 8.0 | | + UL+ | 1.3B | 1.9 | 30.4 | 49.7 | 8.5 | 13.8 | | OPT | 2.7B | 25.6 | 69.2 | 52.7 | 12.9 | - | | NEO | 2.7B | 70.4 | 93.4 | 52.3 | 11.5 | - | | + DPD+ | 2.7B | 0.0 | 24.2 | N/A | 6.9 | - | | + UL | 2.7B | 13.0 | 66.0 | 52.3 | 12.5 | 5.4 | | + UL+ | 2.7B | 1.6 | 31.0 | 51.9 | 11.1 | 10.8 | target sequences to be forgotten. Together with (3), this implies that larger LMs are strong unlearners. (5) While + UL+ provides stronger privacy protection than OPT without sacrificing its performance from NEO for the 2.7B LM, it is much more computationally efficient (3,500,000x) than re-training the underlying LM, which is required for all data preprocessing approaches 5. Overall, results show unlearning to be an effective approach to providing strong privacy protection while retaining and sometimes even improving general LM capabilities. Sequential Unlearning is more Stable than Batch Unlearning We show the effect of varying s (the \# of data instances to be forgotten at once) in Figure 2 across model scales. We denote this approach as *batch* unlearning. As shown by the s = 128 results, it is harder to forget more samples at once, resulting in substantial degradation of average LM performance regardless of how large the LM is. Since s ≤ 32 does not show much 5Computational efficiency is measured via FLOPs which is calculated by (6 × Total Training Tokens × Parameter Size) as in Brown et al. (2020). FLOPs for OPT LMs were estimated using information from Zhang et al. (2022). We provide the FLOPs for the methods in Appendix C. degradation, we explore if *sequentially* unlearning can be a solution. In Figure 2b, we show the result of dividing the 128 samples into 4 chunks of 32 and performing sequential unlearning; we unlearn each chunk at a time until the chunk reaches the forgetting threshold. Surprisingly, as shown by the performance gap at s = 128 between the dotted lines (the s = 128 performance of Figure 2a) and straight lines, the end result is vastly different even though exactly the same instances were forgotten. Sequential unlearning shows almost no degradation of average LM performance. In Appendix H, we show that chunks once forgotten stay forgotten and that later chunks are forgotten much faster compared to the initial chunk. This result hints at the *generalization* of unlearning, which we do not further explore in the scope of this work. The result also suggests that knowledge unlearning can be *continually* applied to LMs when needed. ## 4.3 Analysis Of Knowledge Unlearning To measure why some instances are harder to forget, we perform 5 random samplings of s = 8 from 8 different domains from the Training Data ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) Domains Initial Final Hella. Lamba. Wino. COPA ARC-E ARC-C Piqa MathQ PubQ **Avg.** EL10 EL10 (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) INITIAL - - 37.0 **57.4 54.9** 70.0 **56.6** 25.8 70.4 21.9 53.8 **49.8** (0.0) FREELAW 60.4 12.1 37.2 52.2 53.9 68.4 55.5 26.2 70.1 21.7 53.5 48.7 (*-1.1*) GIT. (CODE) 63.9 0.6 **37.3** 53.4 54.4 69.2 56.3 26.0 69.9 21.5 49.8 48.7 (*-1.1*) GIT. (LICENSE) 75.8 0.0 37.1 52.0 54.2 69.0 56.4 26.4 70.1 21.8 51.8 48.8 (*-1.0*) ENRON EMAILS 77.3 0.0 36.9 57.2 54.8 68.4 55.8 26.3 69.8 21.8 53.1 49.4 (*-0.4*) BOOKS3 70.2 0.0 36.4 49.5 54.2 **70.8** 55.6 25.5 69.9 21.7 47.4 47.9 (*-1.9*) PILE CC 67.8 0.0 35.7 45.9 53.8 70.4 54.2 **26.9** 69.7 21.8 52.0 47.8 (*-2.0*) USPTO BACK. 59.4 0.0 33.7 44.7 53.5 67.0 45.9 24.0 67.0 21.5 50.3 45.3 (*-4.5*) PUBMED CENT. 71.8 0.0 36.5 44.5 54.1 69.6 55.6 24.8 70.0 **21.9** 46.4 47.0 (*-2.8*) Extraction Challenge 6and perform unlearning on the GPT-NEO 1.3B LM. We also show the results of each individual run in Appendix A. As shown in Table 3, despite undergoing the same number of token updates (10 epochs of unlearning), different domains result in vastly different outcomes; ENRON EMAILS results in the average LM performance degradation of only -0.4% while USPTO BACKGROUNDS results in -4.5% degradation. Furthermore, the final EL10 varies depending on the domain, suggesting that some domains (e.g., FREELAW) are harder to forget than others. Lastly, domains that are more *structured*, which means the data consists of some kind of patterns such as a list of emails (ENRON EMAILS) or code (GITHUB (CODE)), seem to result in less degradation of LM performance in contrast to domains that are more *unstructured*, which means the data consist of mostly raw English text such as a review for journal submission (PUBMED CENTRAL). For further analysis, we provide examples from each domain in Appendix F as well as the individual task performance change during knowledge unlearning in Appendix E. ## 5 Conclusion In this paper, we propose *knowledge unlearning* as a method for mitigating privacy risks in LMs that provides a strong privacy protection with little to no degradation of general LM capabilities measured by evaluating on 9 common LM classification benchmarks and 4 dialogue benchmarks for the larger sized LMs. As large LMs expand their use cases, potentially affecting the daily lives of people, the research community should make sure that the privacy of individuals is not violated intentionally or unintentionally by the knowledge stored in the implicit parameters of these models. Since it is inherently impossible to prevent and predict all future privacy concerns prior to pretraining the LM, we suggest the community consider knowledge unlearning for ensuring privacy upon individuals' requests post hoc pretraining. ## 6 Limitations While we provide a privacy guarantee through unlearning, our Forgetting Threshold is dependent on which data samples are chosen as D′. Furthermore, varying the prefix length can be seen as a naïve way of varying the strength of the extraction attacks. In a real-world scenario, extraction attacks may be more complicated and may require other prevention methods. Also, we could not directly compare our approach with a Differential Privacy (DP) (Anil et al., 2021) approach because there are no open-sourced LMs pretrained with a DP algorithm. We could not replicate the pretrainig phase because of the heavy computational resources needed to pretrain an LM with DP which is estimated to require thousands of GPU hours. We leave this comparison for future work. Finally, a recent work (Carlini et al., 2022b) has suggested that machine unlearning (for the vision domain) can bring negative effects harming the privacy of other users. Future work should explore this phenomenon in the setting of performing unlearning on large LMs as well. ## 7 Acknowledgements This work was partly supported by Institute of Information communications Technology Planning Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2022-0-00113, Developing a Sustainable Collaborative Multi-modal Lifelong Learning Framework, 80%; No.2021-0-02068, Artificial Intelligence Innovation Hub, 20%). ## References Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC conference on computer and communications security*, pages 308–318. Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics. and Pasin Manurangsi. 2021. Large-scale differentially private bert. *arXiv preprint arXiv:2108.01624*. Tuomas Aura, Thomas A Kuhn, and Michael Roe. 2006. Scanning electronic documents for personally identifiable information. In *Proceedings of the 5th ACM* workshop on Privacy in electronic society, pages 41– 50. Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020. Piqa: Reasoning about physical commonsense in natural language. In *Proceedings of the* AAAI conference on artificial intelligence, volume 34, pages 7432–7439. Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. Gpt-neo: Large scale autoregressive language modeling with mesh-tensorflow. If you use this software, please cite it using these metadata, 58. Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2021. Machine unlearning. In *2021 IEEE Symposium on Security and Privacy (SP)*, pages 141–159. IEEE. Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, and Florian Tramèr. 2022. What does it mean for a language model to preserve privacy? *arXiv preprint arXiv:2202.05520*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Yinzhi Cao and Junfeng Yang. 2015. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, pages 463–480. IEEE. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2022a. Quantifying memorization across neural language models. *arXiv preprint arXiv:2202.07646*. Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, and Florian Tramer. 2022b. The privacy onion effect: Memorization is relative. In *Advances in Neural Information* Processing Systems. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2021. Extracting training data from large language models. In *30th USENIX Security* Symposium (USENIX Security 21), pages 2633–2650. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. *ArXiv*, abs/1803.05457. Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, Franck Dernoncourt, Ji Young Lee, Ozlem Uzuner, and Peter Szolovits. 2017. De-identification of patient notes with recurrent neural networks. *Journal* of the American Medical Informatics Association, 24(3):596–606. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In *International Conference on Learning* Representations. Cynthia Dwork. 2008. Differential privacy: A survey of results. In International conference on theory and applications of models of computation, pages 1–19. Springer. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. Calibrating noise to sensitivity in private data analysis. In *Theory of cryptography* conference, pages 265–284. Springer. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020. The pile: An 800gb dataset of diverse text for language modeling. *arXiv preprint arXiv:2101.00027*. Antonio Ginart, Melody Guan, Gregory Valiant, and James Y Zou. 2019. Making ai forget you: Data deletion in machine learning. *Advances in neural* information processing systems, 32. Aditya Golatkar, Alessandro Achille, and Stefano Soatto. 2020. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In *Proceedings* of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9304–9312. Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. 2012. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In **SEM 2012: The First Joint* Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 394–398, Montréal, Canada. Association for Computational Linguistics. Laura Graves, Vineel Nagisetty, and Vijay Ganesh. 2021. Amnesiac machine learning. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 35, pages 11516–11524. Melissa Heikkilä. 2022. What does gpt-3 "know" about me? Jie Huang, Hanyin Shao, and Kevin Chen-Chuan Chang. 2022. Are large pre-trained language models leaking your personal information? *arXiv preprint* arXiv:2205.12628. Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, et al. 2022. Measuring forgetting of memorized training examples. arXiv preprint arXiv:2207.00099. Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, and Minjoon Seo. 2022a. Temporalwiki: A lifelong benchmark for training and evaluating ever-evolving language models. *arXiv preprint arXiv:2204.14211*. Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun KIM, Stanley Jungkyu Choi, and Minjoon Seo. 2022b. Towards continual knowledge learning of language models. In *International Conference on Learning Representations*. Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577. Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating training data mitigates privacy risks in language models. *arXiv preprint arXiv:2202.06539*. Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 8460–8478, Dublin, Ireland. Association for Computational Linguistics. Angeliki Lazaridou, Adhi Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, et al. 2021. Mind the gap: Assessing temporal generalization in neural language models. *Advances in Neural Information Processing* Systems, 34:29348–29363. Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2022. Deduplicating training data makes language models better. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8424–8445, Dublin, Ireland. Association for Computational Linguistics. Eric Lehman, Sarthak Jain, Karl Pichotta, Yoav Goldberg, and Byron C. Wallace. 2021. Does bert pretrained on clinical notes reveal sensitive data? In NAACL-HLT, pages 946–959. Xuechen Li, Florian Tramer, Percy Liang, and Tatsunori Hashimoto. 2022. Large language models can be strong differentially private learners. In *International* Conference on Learning Representations. Pierre Lison, Ildikó Pilán, David Sanchez, Montserrat Batet, and Lilja Øvrelid. 2021. Anonymisation models for text data: State of the art, challenges and future directions. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4188–4203, Online. Association for Computational Linguistics. Jimit Majmudar, Christophe Dupuy, Charith Peris, Sami Smaili, Rahul Gupta, and Richard Zemel. 2022. Differentially private decoding in large language models. arXiv preprint arXiv:2205.13621. Alessandro Mantelero. 2013. The eu proposal for a general data protection regulation and the roots of the 'right to be forgotten'. Computer Law & Security Review, 29(3):229–235. Ronak Mehta, Sourav Pal, Vikas Singh, and Sathya N Ravi. 2022. Deep unlearning via randomized conditionally independent hessians. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10422–10431. Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525–1534, Berlin, Germany. Association for Computational Linguistics. Jasmine Park. 2021. South korea: The first case where the personal information protection act was applied to an ai system. Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? In *EMNLP*. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5370–5381, Florence, Italy. Association for Computational Linguistics. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. *Communications of the ACM*, 64(9):99–106. Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In *2017 IEEE symposium on security and privacy (SP)*, pages 3–18. IEEE. Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2021–2030, Online. Association for Computational Linguistics. Kushal Tirumala, Aram H Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. 2022. Memorization without overfitting: Analyzing the training dynamics of large language models. arXiv preprint arXiv:2205.10770. Eduard Fosch Villaronga, Peter Kieseberg, and Tiffany Li. 2018. Humans forget, machines remember: Artificial intelligence and the right to be forgotten. *Computer Law & Security Review*, 34(2):304–313. Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, and Huishuai Zhang. 2022. Differentially private fine-tuning of language models. In International Conference on Learning Representations. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? *arXiv preprint* arXiv:1905.07830. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Hattie Zhou, Ankit Vani, Hugo Larochelle, and Aaron Courville. 2022. Fortuitous forgetting in connectionist networks. In *International Conference on* Learning Representations. ## A Full Results We provide all of the results for the 5 random samplings for our main experimental setting in Table 4 and the full results for the domain analysis setting in Table 5. We also provide the evaluation of the 4 dialogue tasks for s = 32 for all model sizes in Table 6. ## B **Measuring Pile And Wikitext Perplexity** Table 7 shows the results of measuring perplexity on 500 samples from the validation set of Pile and Wikitext corpora on the LMs from the main experimental setting (Table 2). Results show that LMs that underwent knowledge unlearning show higher perplexity while the main experimental table (Table 2) does not show degradation of performance on 9 different LM benchmarks. We believe the discrepancy to be due to the inherent attributes of performing unlearning: since we are doing gradient ascent, we are likely *softening* the probability to generate each token from the vocabulary, giving it a more uniform distribution that will inevitably result in a higher perplexity. However, since it does Model (s)# EL10 MA Hella. Lamba. Wino. COPA ARC-E ARC-C Piqa MathQ PubQ Avg. Epoch **Params** (%) ↓ (%) ↓ (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) NEO 125M 30.9 77.4 28.2 37.6 51.8 62.0 **45.6** 22.0 **63.3** 22.5 **57.6** 43.4 - ∆ - - - +0.2 +8.0 +1.9 +5.0 +0.0 +2.2 +0.0 +0.3 +0.0 *+2.0* - NEO + UL+ (s = 1) 125M 3.1 28.1 28.1 41.0 52.5 62.0 43.2 21.0 63.0 **22.8** 57.6 43.5 14.0 125M 0.0 27.6 28.1 24.9 50.8 **67.0** 42.3 23.7 62.8 21.9 57.6 42.1 10.0 125M 0.0 27.1 28.1 42.1 52.5 63.0 44.1 20.3 62.6 22.5 57.6 43.7 5.0 125M 0.0 25.6 28.2 44.9 52.0 62.0 41.8 21.4 62.6 22.2 57.6 43.6 11.0 125M 0.0 28.1 **28.4** 33.9 51.5 66.0 44.8 21.7 62.8 22.3 57.6 43.2 10.0 NEO + UL+ (s = 4) 125M 0.9 28.8 27.8 44.1 51.9 52.0 37.4 19.7 60.5 22.3 57.6 41.5 16.0 125M 0.0 28.6 27.4 2.5 49.4 59.0 38.6 23.1 60.5 21.2 43.8 36.2 19.0 125M 3.6 28.8 27.7 33.4 51.8 55.0 37.7 21.0 61.0 22.3 57.6 40.8 20.0 125M 2.6 28.9 27.6 29.9 52.4 50.0 36.5 19.0 60.3 22.2 57.6 39.5 18.0 125M 0.0 28.4 27.6 6.7 49.7 61.0 42.5 22.7 61.0 21.4 50.6 38.1 16.0 NEO + UL+ (s = 8) 125M 0.0 28.5 27.6 35.0 51.8 51.0 37.6 18.0 60.1 22.4 57.6 40.1 16.0 125M 2.2 28.1 27.7 5.4 49.6 62.0 40.6 21.0 61.2 21.8 52.4 38.0 19.0 125M 0.3 29.6 28.0 41.2 52.2 55.0 40.2 21.4 61.0 21.9 57.6 42.0 18.0 125M 5.0 25.3 27.4 1.3 49.6 65.0 37.6 **24.4** 59.2 21.2 33.8 35.5 23.0 125M 0.0 28.2 27.9 5.3 50.5 61.0 41.6 22.4 60.7 21.5 51.4 38.0 18.0 NEO + UL+ (s = 32) 125M 0.3 28.4 27.2 42.3 **53.7** 56.0 38.1 21.0 59.7 22.4 57.6 42.0 20.0 125M 0.8 27.1 27.0 17.1 52.4 53.0 34.0 20.0 59.8 21.5 57.6 38.0 18.0 125M 0.2 24.1 27.3 **45.6** 51.9 50.0 38.6 20.7 59.6 22.6 57.6 41.5 13.0 125M 3.0 28.7 27.5 2.6 49.2 59.0 37.7 21.4 58.4 20.9 46.8 35.9 20.0 125M 0.7 28.5 27.3 44.5 53.0 54.0 39.0 20.3 59.5 22.5 57.6 42.0 15.0 NEO + UL+ (s = 128) 125M 1.3 28.1 27.1 4.6 50.5 58.0 37.9 21.3 57.5 21.4 47.8 36.2 16.0 125M 3.1 27.5 26.9 1.8 50.5 60.0 36.4 22.3 56.6 21.2 41.8 35.3 18.0 125M 3.9 26.7 27.0 3.9 50.9 59.0 35.2 21.3 56.0 21.3 49.6 36.0 17.0 125M 2.4 26.6 26.9 2.7 50.2 56.0 35.9 22.3 57.2 21.2 43.8 35.1 16.0 125M 3.8 27.3 27.0 6.4 50.9 57.0 37.3 21.3 57.2 21.2 52.0 36.7 17.0 NEO 1.3B 67.6 92.2 37.0 57.4 54.8 70.0 56.6 25.8 70.4 21.9 53.8 49.8 - ∆ - - - +0.4 +10.1 +2.1 +2.0 +1.1 +3.4 +0.3 +0.4 +3.8 *+2.6* - NEO + UL+ (s = 1) 1.3B 0.0 27.6 36.8 52.1 54.7 **72.0** 55.9 27.8 69.7 21.5 53.0 49.3 9.0 1.3B 0.0 30.2 36.6 54.6 54.9 69.0 55.4 26.8 **70.7** 21.7 53.4 49.2 6.0 1.3B 0.0 29.7 36.7 58.2 55.4 70.0 56.1 25.4 69.9 22.0 53.2 49.7 4.0 1.3B 0.0 32.2 37.1 52.4 53.7 68.0 56.1 24.4 70.1 21.8 54.2 48.6 8.0 1.3B 0.0 27.6 37.3 60.1 55.6 70.0 57.5 25.1 70.0 21.7 55.2 50.3 10.0 NEO + UL+ (s = 4) 1.3B 0.0 30.3 37.3 48.3 54.4 70.0 55.0 **29.2** 69.9 20.6 56.0 49.0 12.0 1.3B 0.0 29.7 36.8 49.4 53.4 69.0 55.2 26.8 70.6 21.4 52.8 48.4 9.0 1.3B 1.0 29.2 36.8 51.3 54.9 70.0 55.2 26.8 70.3 21.5 54.0 49.0 10.0 1.3B 4.8 31.4 37.2 59.2 54.8 71.0 54.9 25.8 69.5 21.9 50.2 49.4 10.0 1.3B 1.7 31.8 37.0 58.4 54.4 71.0 **57.7** 24.7 70.2 22.0 54.0 49.9 9.0 NEO + UL+ (s = 8) 1.3B 0.3 29.7 37.1 66.5 54.5 70.0 52.0 26.8 69.4 21.7 56.8 50.5 13.0 1.3B 1.9 29.5 36.8 43.0 53.1 71.0 51.3 27.5 70.4 21.0 42.4 46.3 13.0 1.3B 0.2 26.2 37.2 47.3 54.2 72.0 55.2 25.8 70.4 21.8 54.8 48.7 12.0 1.3B 3.1 32.0 **37.4** 57.6 54.3 70.0 56.1 26.8 69.8 21.5 54.8 49.8 14.0 1.3B 1.4 32.0 37.1 57.4 54.5 71.0 57.0 26.1 70.0 21.9 54.2 49.9 11.0 NEO + UL+ (s = 32) 1.3B 0.7 33.0 36.5 63.2 55.9 70.0 52.4 25.1 69.7 21.8 55.4 50.0 13.0 1.3B 1.7 29.8 36.7 50.9 53.5 71.0 56.3 27.8 70.7 22.0 39.4 47.6 14.0 1.3B 0.7 28.4 37.0 64.8 **56.9** 69.0 54.3 26.4 69.1 21.9 55.8 **50.6** 13.0 1.3B 4.2 31.2 35.8 **67.5** 55.3 67.0 51.5 25.4 68.1 21.3 56.6 49.8 14.0 1.3B 2.1 29.5 35.8 63.9 55.7 70.0 54.1 26.4 69.5 **22.3** 56.8 50.5 15.0 NEO + UL+ (s = 128) 1.3B 0.4 24.5 31.1 54.2 55.2 69.0 53.2 24.7 66.1 21.9 56.4 48.0 6.0 1.3B 4.9 19.8 27.8 2.2 54.8 69.0 50.9 23.3 57.9 21.8 55.8 40.4 8.0 1.3B 4.2 30.2 30.6 41.6 55.1 69.0 54.4 26.0 63.8 22.1 55.0 46.4 6.0 1.3B 2.9 23.6 27.6 8.8 52.9 68.0 44.5 18.9 57.7 21.6 57.4 39.7 9.0 1.3B 1.3 23.1 28.5 48.6 55.5 69.0 48.8 21.6 62.3 22.2 **57.6** 46.0 8.0 NEO 2.7B 70.4 93.4 40.8 62.2 56.4 **75.0** 59.6 25.4 73.0 21.4 57.0 52.3 - ∆ - - - +0.8 +7.9 +1.0 +0.0 +1.5 +4.3 +0.3 +1.1 +1.0 *+2.0* - NEO + UL+ (s = 1) 2.7B 0.0 3.0 40.8 62.2 56.6 72.0 55.7 26.4 73.1 21.8 57.6 51.8 10.0 2.7B 0.0 23.6 40.5 56.8 54.4 74.0 59.6 26.1 72.8 21.3 56.6 51.3 8.0 2.7B 0.0 27.6 40.6 62.5 57.0 75.0 59.1 24.7 73.0 21.5 56.6 52.2 6.0 2.7B 0.0 20.6 40.5 60.3 55.8 74.0 58.9 25.8 73.0 21.7 57.2 51.9 10.0 2.7B 0.0 29.7 40.6 62.2 56.4 72.0 58.0 27.1 72.2 21.2 57.4 51.9 9.0 NEO + UL+ (s = 4) 2.7B 0.4 22.6 41.5 60.0 54.9 72.0 55.0 26.4 69.9 21.3 57.8 51.0 12.0 2.7B 0.0 30.0 **41.6** 46.5 53.4 71.0 55.6 25.1 72.0 21.3 57.2 49.3 9.0 2.7B 0.7 23.7 40.4 59.7 54.9 74.0 58.7 23.7 72.5 20.8 57.4 51.3 9.0 2.7B 3.2 32.4 41.2 67.2 56.0 73.0 57.3 28.1 **73.3** 22.3 57.2 **52.8** 8.0 2.7B 0.2 31.9 40.3 61.2 55.7 74.0 60.0 27.5 72.0 21.4 57.2 52.1 10.0 NEO + UL+ (s = 8) 2.7B 0.3 29.5 41.2 64.6 55.4 71.0 52.9 27.1 69.5 21.7 **58.0** 51.3 10.0 2.7B 2.1 26.4 40.6 48.7 52.9 67.0 55.0 25.8 72.1 21.8 57.2 49.0 11.0 2.7B 0.5 31.2 41.1 54.1 55.0 74.0 59.3 25.1 72.5 22.1 57.4 51.2 11.0 2.7B 1.9 33.8 40.7 65.7 **57.4** 72.0 58.4 27.1 72.6 21.9 57.0 52.5 8.0 2.7B 0.0 20.4 40.0 60.7 55.8 73.0 60.1 28.5 72.5 21.5 57.2 52.2 11.0 NEO + UL+ (s = 32) 2.7B 0.6 31.7 40.8 68.2 56.1 68.0 54.4 28.0 71.9 21.4 57.0 51.8 11.0 2.7B 1.1 32.4 40.9 56.9 55.6 69.0 58.1 26.7 71.8 22.1 56.8 50.9 10.0 2.7B 1.2 29.0 41.5 65.8 56.9 68.0 59.3 27.0 72.0 22.3 57.8 52.3 11.0 2.7B 3.4 29.9 39.7 **70.1** 57.7 68.0 54.8 **29.7** 71.6 22.0 57.6 52.4 11.0 2.7B 1.9 31.9 41.4 61.6 56.6 73.0 **61.1** 26.4 72.7 21.7 57.0 52.4 11.0 NEO + UL+ (s = 128) 2.7B 0.4 31.5 35.3 64.2 56.8 68.3 51.8 26.7 70.2 21.9 56.7 50.2 10.0 2.7B 3.8 16.5 26.0 0.4 51.6 57.7 29.0 16.6 54.2 20.0 57.9 34.8 10.0 2.7B 0.6 31.4 34.9 58.9 55.2 69.2 54.8 24.7 70.0 **22.5** 57.7 49.8 9.0 2.7B 2.2 31.1 31.3 22.9 50.6 62.5 40.0 18.2 60.8 21.3 40.9 38.7 8.0 2.7B 4.7 29.0 33.5 56.5 55.0 66.3 51.9 23.6 68.6 22.4 57.7 48.4 9.0 Domains Initial Final Hella. Lamba. Wino. COPA ARC-E ARC-C Piqa MathQ PubQ **Avg.** EL10 EL10 (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) (ACC) INITIAL - - 37.0 57.4 54.9 70.0 56.6 25.8 70.4 21.9 53.8 49.8 FREELAW 64.6 4.8 37.3 53.5 54.1 68.0 57.5 27.1 70.5 21.5 54.0 49.3 52.0 2.4 37.3 62.9 54.2 67.0 52.9 26.1 69.2 21.5 54.4 49.5 60.6 15.2 36.8 42.0 54.5 67.0 56.6 25.1 70.1 21.7 51.4 47.2 55.2 13.8 37.3 51.4 53.5 69.0 55.4 26.8 70.5 21.9 54.6 48.9 69.5 24.1 37.4 51.4 53.2 71.0 54.9 26.1 70.0 21.8 53.0 48.7 GITHUB (CODE) 67.0 1.2 37.3 51.1 54.1 71.0 57.3 27.1 70.1 21.3 41.2 47.8 56.7 0.3 37.1 49.9 54.9 68.0 56.1 26.4 69.1 21.4 48.4 47.9 62.0 0.2 37.2 50.2 54.2 68.0 56.6 25.8 70.5 21.8 54.4 48.7 60.4 1.1 37.5 59.7 54.7 68.0 55.9 25.4 70.1 21.9 53.8 49.7 73.6 0.0 37.3 55.9 54.1 71.0 55.4 25.4 69.9 21.2 51.4 49.1 GITHUB (LICENSE) 87.5 0.2 37.5 57.4 54.5 68.0 56.8 26.4 70.1 21.8 53.8 49.6 74.3 0.0 37.3 48.9 54.1 70.0 57.1 27.1 70.7 21.7 48.4 48.4 70.7 0.0 36.4 40.6 53.1 70.0 55.2 25.4 70.2 21.8 49.0 46.9 74.8 0.0 37.3 60.3 54.8 69.0 55.9 27.1 70.0 21.5 55.6 50.2 71.8 0.0 37.0 52.6 54.3 68.0 56.8 26.1 69.5 22.0 52.2 48.7 ENRON EMAILS 81.6 0.0 36.4 59.8 55.2 69.0 53.6 27.5 69.0 21.9 54.8 49.7 70.3 0.0 37.2 54.9 54.5 68.0 57.5 25.4 70.1 22.4 51.8 49.1 74.2 0.0 37.1 56.3 55.0 68.0 55.6 25.1 69.8 21.6 54.2 49.2 83.9 0.0 36.7 55.2 54.8 69.0 55.9 25.4 70.4 21.7 52.2 49.0 76.8 0.0 36.9 60.0 54.6 68.0 56.4 28.1 69.9 21.5 52.4 49.7 BOOKS3 59.7 0.0 36.2 39.4 53.9 72.0 55.2 24.4 69.9 21.9 50.0 47.0 65.4 0.0 35.9 65.2 55.7 67.0 53.3 25.1 69.9 21.6 55.8 49.9 71.7 0.0 37.1 47.4 54.6 74.0 57.0 26.8 69.8 21.7 44.2 48.1 74.7 0.0 36.4 40.7 53.4 70.0 55.7 25.4 69.6 21.6 41.2 46.0 79.5 0.0 36.7 54.9 53.6 71.0 56.6 25.8 70.2 21.8 46.0 48.5 PILE CC 74.9 0.0 35.3 30.7 53.0 68.0 55.2 26.4 69.9 22.1 50.4 45.7 68.0 0.0 36.3 45.9 53.4 72.0 55.6 27.1 69.6 21.7 51.4 48.1 71.6 0.0 36.3 48.9 52.9 70.0 55.9 26.4 70.2 21.9 51.8 48.3 57.8 0.0 34.0 66.3 55.7 69.0 49.9 26.1 69.0 21.4 57.4 49.9 66.6 0.0 36.4 37.7 54.0 73.0 54.5 28.1 69.9 22.1 49.2 47.2 USPTO BACKGROUNDS 53.7 0.0 30.7 48.4 53.4 68.0 39.0 22.0 64.2 20.7 55.2 44.6 56.7 0.0 31.0 19.4 50.6 69.0 36.9 24.1 63.3 21.2 33.4 38.8 64.9 0.0 36.0 51.4 54.1 68.0 50.8 24.4 70.0 22.1 56.6 48.2 54.6 0.0 35.5 57.2 55.1 65.0 52.0 23.7 68.9 22.0 56.2 48.4 67.2 0.0 35.3 47.4 54.3 65.0 50.8 25.8 68.4 21.7 50.2 46.5 PUBMED CENTRAL 73.8 0.0 35.7 39.0 53.5 69.0 55.6 25.1 69.6 21.9 44.2 46.0 75.1 0.0 36.1 36.3 53.2 69.0 54.1 25.1 69.8 22.6 44.4 45.6 67.4 0.0 37.0 47.5 54.0 71.0 56.3 24.4 69.9 21.1 48.4 47.7 71.1 0.0 37.2 55.3 55.6 68.0 57.0 24.7 70.0 22.0 51.0 49.0 71.9 0.0 36.8 44.4 54.1 71.0 55.0 24.7 70.6 22.1 43.8 46.9 not show much degradation in the LM benchmarks, it also means that the *argmax* of the most likely token to be generated has not changed much. However, further exploration of what exactly knowledge unlearning does to the representations of the LM should be done in future work. ## C Computation Comparison Between Deduplication **And Knowledge** Unlearning We show the FLOPs of pretraining OPT denoted as DEDUPLICATION and the average FLOPs of performing knowledge unlearning until s = 32 token sequences reach the Forgetting Threshold denoted as UNLEARNING in Table 8. We calculate FLOPs by (6 × Total Training Tokens × Parameter Size) following Brown et al. (2020). ## D Varying The Learning Rate In Figure 3, we show the results of varying the learning rate for knowledge unlearning where we fix the total epoch to 10 and perform 3 random runs with s = 32 on the GPT-NEO 1.3B. Overall, we observe that higher learning rates lead to faster forgetting, but with substantial LM performance degradation. While lower learning rates retain the LM performance, they fail to meet the Forgetting Threshold within 10 epochs. Thus, we set the learning rate to 5e-5 for our experiments to get the best trade-off. ## E Individual Task Performance During Knowledge Unlearning To show exactly what happens to the LM during knowledge unlearning, we show how the performance of each of the LM benchmarks changes as Model (s)# EL10 MA WoW ED BST WoI Avg. Epoch **Params** (%) ↓ (%) ↓ (F1) (F1) (F1) (F1) (F1) NEO 125M 30.9 77.4 8.4 8.4 9.6 11.2 9.4 - ∆ - - - +0.0 +0.0 +0.0 +0.0 *+0.0* - 125M 0.3 28.4 1.6 1.8 0.9 1.8 1.5 20.0 125M 0.8 27.1 0.1 0.1 0.0 0.0 0.0 18.0 125M 0.2 24.1 6.9 6.7 7.0 7.9 7.1 13.0 125M 3.0 28.7 2.1 2.5 1.4 2.3 2.1 20.0 125M 0.7 28.5 2.0 3.5 1.3 2.2 2.2 15.0 NEO 1.3B 67.6 92.2 9.6 10.5 12.2 13.7 **11.5** - ∆ - - - +2.3 +0.0 +0.0 +0.0 *+0.0* - | NEO + UL+ (s = 32) NEO + UL+ (s = 32) NEO + UL+ (s = 32) | |------------------------------------------------------------| 1.3B 0.7 33.0 10.0 8.4 9.3 10.9 9.6 13.0 1.3B 1.7 29.8 **11.9** 8.4 10.6 12.4 10.8 14.0 1.3B 0.7 28.4 10.0 8.3 9.5 10.8 9.6 13.0 1.3B 4.2 31.2 6.4 5 4.9 6.8 5.8 14.0 1.3B 2.1 29.5 6.9 5.9 5.9 7.5 6.5 15.0 NEO 2.7B 70.4 93.4 9.2 10.9 **12.4** 13.6 11.5 - ∆ - - - +3.8 +1.8 +0.0 +0.5 *+1.5* - 2.7B 0.6 31.7 10.8 8.6 9.6 11.1 10.1 11.0 2.7B 1.1 32.4 11.9 9.7 11.5 12.1 11.3 10.0 2.7B 1.2 29.0 12.4 10.5 12.0 13.3 12.1 11.0 2.7B 3.4 29.9 8.8 8.2 8.4 10.3 8.9 11.0 2.7B 1.9 31.9 13.0 12.7 12.4 14.1 **13.0** 11.0 Table 6: All of the individual runs for s = 32 for the dialogue tasks in the Main Results. Table 7: Measuring perplexity on Pile and Wikitext corpora for the main unlearning experiments (Table 2). Table 8: Training compute comparison of methods mitigating privacy risks in LMs for sizes 125M, 1.3B, and 2.7B measured via FLOPs. ## F Text Example From Each Domain we perform 10 runs of unlearning to the GPT-NEO (1.3B) model (each run with s = 1) in Figure 4. As shown in the figure, the LM performance for each benchmark varies tremendously on which sample is chosen to be forgotten. Furthermore, the ending time of each run is different, indicating that some samples are forgotten faster than others. We also show empirical examples of performing actual extraction attacks with prefix length of 100 in Appendix G. We show an example token sequence from each of the 8 domains used for the analysis section in Table 9. ## G More Examples Of Performing Extraction Attacks In addition to the extraction attack example shown in the analysis section, we provide 3 additional examples to provide readers with more empirical examples of how knowledge unlearning ensures protection against extraction attacks in Table 10. | Model | # | Pile | Wikitext | |-----------|---------|---------|------------| | Params | (PPL) ↓ | (PPL) ↓ | | | NEO | 125M | 17.83 | 38.27 | | NEO + UL | 125M | 34.02 | 75.24 | | NEO + UL+ | 125M | 577.56 | 1986.07 | | OPT | 125M | 32.26 | 38.74 | | NEO | 1.3B | 11.46 | 18.63 | | NEO + UL | 1.3B | 15.56 | 20.26 | | NEO + UL+ | 1.3B | 15.83 | 26.82 | | OPT | 1.3B | 19.55 | 19.39 | | NEO | 2.7B | 10.44 | 16.15 | | NEO + UL | 2.7B | 11.32 | 16.84 | | NEO + UL+ | 2.7B | 17.93 | 21.13 | | OPT | 2.7B | 17.81 | 16.81 | | Method (Size) | FLOPs | |----------------------|----------| | DEDUPLICATION (125M) | 2.25E+20 | | UNLEARNING (125M) | 5.28E+13 | | DEDUPLICATION (1.3B) | 2.34E+21 | | UNLEARNING (1.3B) | 6.69E+14 | | DEDUPLICATION (2.7B) | 4.86E+21 | | UNLEARNING (2.7B) | 1.12E+15 | ![14_image_0.png](14_image_0.png) ## H Additional Results Of Sequential Knowledge Unlearning We show how the EL10 of each individual chunks and the average LM performance change as we perform sequential unlearning in Figure 5. Results show that the chunks that are forgotten stay forgotten and that later chunks are forgotten much faster (one or two epochs) compared to the initial chunk. We hypothesize that this might be because of the similarity of the token sequences from the 15,000 examples from the Training Extraction Challenge Benchmark. Also, this result hints at the *generalization* of unlearning, which we do not further ## I The Effect Of Varying N For Extraction Likelihood (El) Metric First, we show the Extraction Likelihood (EL) Forgetting Threshold values for n=[5,10,20,40] by measuring the value on the 10,000 validation instances unseen during training in Table 11. Next, we show the average LM performance (on the 9 classification benchmarks) where we perform unlearning on the LM on 32 samples until the target token sequences are forgotten (the EL & MA value are both lower than the threshold values) in Table 12. Performance shows the average of 5 random ![15_image_0.png](15_image_0.png) | Domain | Text | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | U. S. (2010) 1 Opinion of the Court NOTICE: This opinion is subject to formal revision before publication in the preliminary print of the United States Reports. Readers are requested to notify the Reporter of Decisions, Supreme Court of the United States, Washington, D. C. 20543, | | | FREELAW | of any typographical or other formal errors, in order that corrections may be made before the preliminary print goes to press. SUPREME COURT OF THE UNITED STATES = pc func (iov *Iovec) SetLen(length int) { iov.Len = uint64(length) } func (msghdr *Msghdr) SetControllen(length int) { msghdr.Controllen = uint64(length) } func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint64(length) } //sys poll(fds *PollFd, nfds int, timeout int) | | GITHUB (CODE) | (n int, err error) func Poll(fds []PollFd, timeout int) (n int, err error) { if len(fds) == 0 { return poll(nil, 0, timeout) } return poll(&fds[0], len(fds), timeout) ## Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the | | GITHUB (LICENSE) | following conditions: ## The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. ## THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE To: Hedy Govenar hgovenar@govadv.com , Mike Day MDay@GMSSR.com , Bev Hansen bhansen@lhom.com , Jeff Dasovich jdasovic@ enron.com , Susan J Mara smara@enron.com , Joseph Alamo JAlamo@enron.com , Paul Kaufman paul.kaufman@enron.com , David Parquet | | ENRON EMAILS | David.Parquet@enron.com , Rick Johnson rick.johnson@enron.com , Marcie Milner mmilner@enron.com , Sandra McCubbin Sandra.McCubbin@enron.com , Tim Belden Tim.Belden@enron.com About the Publisher Australia HarperCollins Publishers (Australia) Pty. Ltd. 25 Ryde Road (PO Box 321) Pymble, NSW 2073, Australia http://www.harpercollinsebooks.com.au Canada HarperCollins Publishers Ltd. 55 Avenue Road, Suite 2900 Toronto, ON, M5R, 3L2, Canada | | BOOKS3 | http://www.harpercollinsebooks.ca New Zealand HarperCollins Publishers (New Zealand) Limited P.O. Box 1 Auckland, New Zealand http://www.harpercollinsebooks.co.nz United Kingdom HarperCollins Publishers Ltd. 77-85 Fulham Palace Road London, W6 8JB, UK http://www.harpercollinsebooks.co.uk This website and its associated newspaper adheres to the Independent Press Standards Organisation's Editors' Code of Practice. If you have a complaint about editorial content which relates to inaccuracy or intrusion, then contact the Editor by clicking here. If you remain dissatisfied with the response provided then you can contact the IPSO by clicking here. Bury Free Press provides news, events and sport features from the | | PILE CC | Bury St Edmunds area. For the best up to date information relating to Bury St Edmunds and the surrounding areas visit us at Bury Free Press regularly or bookmark this page. For you to enjoy all the features of this website Bury Free Press requires permission to use cookies. Find Out More What is a Cookie? What is a Flash Cookie? Can I opt out of receiving Cookies? The pharmaceutical formulations of the present invention, which may conveniently be presented in unit dosage form, may be prepared according to conventional techniques well known in the pharmaceutical industry. Such techniques include the step of bringing into association the active ingredients with the pharmaceutical carrier(s) or excipient(s). In general the formulations are prepared by uniformly | | USPTO BACKGROUNDS | and intimately bringing into association the active ingredients with liquid carriers or finely divided solid carriers or both, and then, if necessary, shaping the product. The compositions of the present invention may be formulated into any of many possible dosage forms such as, but not limited to, tablets, capsules, gel capsules, liquid syrups, soft gels, suppositories, and enemas. I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course. The corresponding author will soon receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript | | PUBMED CENTRAL | carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript. Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your articles publication date. ´ | Table 10: Examples performing extraction attacks on token sequences, showing knowledge unlearning provides protection against extraction attacks. Underlined denotes the model generated text given the prefix of length 100 as input. For the extraction attack, we utilize a naïve greedy decoding strategy. | Domain | Status | Text | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | About the Publisher Australia HarperCollins Publishers (Australia) Pty. Ltd. 25 Ryde Road (PO Box 321) Pymble, NSW 2073, Australia http://www.harpercollinsebooks.com.au Canada HarperCollins Publishers Ltd. 55 Avenue Road, Suite 2900 Toronto, ON, M5R, 3L2, Canada | | | | Original | http://www.harpercollinsebooks.ca New Zealand HarperCollins Publishers (New Zealand) Limited P.O. Box 1 Auckland, New Zealand | | | Text | http://www.harpercollinsebooks.co.nz United Kingdom HarperCollins Publishers Ltd. 77-85 Fulham Palace Road London, W6 8JB, UK http://www.harpercollinsebooks.co.uk About the Publisher Australia HarperCollins Publishers (Australia) Pty. Ltd. 25 Ryde Road (PO Box 321) Pymble, NSW 2073, Australia http://www.harpercollinsebooks.com.au Canada HarperCollins Publishers Ltd. 55 Avenue Road, Suite 2900 Toronto, ON, M5R, 3L2, Canada | | | Before | http://www.harpercollinsebooks.ca New Zealand HarperCollins Publishers (New Zealand) Limited P.O. Box 1 Auckland, New Zealand | | | Unlearning | http://www.harpercollinsebooks.co.nz United Kingdom HarperCollins Publishers Ltd. 77-85 Fulham Palace Road London, W6 8JB, UK http://www.harpercollinsebooks.co.uk | | | BOOKS3 | About the Publisher Australia HarperCollins Publishers (Australia) Pty. Ltd. 25 Ryde Road (PO Box 321) Pymble, NSW 2073, Australia http://www.harpercollinsebooks.com.au Canada HarperCollins Publishers Ltd. 55 Avenue Road, Suite 2900 Toronto, ON, M5R, 3L2, Canada | | | After | http://www.harpercollins.com.au/Publishers/ Publisher: level three Level two is levels one and two together. The new face of a already great | | | Unlearning | title! Level one: Just right. Level two: Great. Level three: Awesome. The BloomsburyPublishersPublishers.com.au/PublishersPublishers Levels are for bibliographic information or advanced level. s James Gurney This daily weblog by Dinotopia creator James Gurney is for illustrators, plein-air painters, sketchers, comic artists, animators, art students, and writers. You'll find practical studio tips, insights into the making of the Dinotopia books, and first-hand reports from art | | | Original | schools and museums. CG Art Contact or by email:gurneyjourney (at) gmail.com Sorry, I can't give personal art advice or portfolio reviews. | | | Text | If you can, it's best to ask art questions in the blog comments. Permissions All images and text are copyright 2015 James Gurney and/or their respective owners. Dinotopia is a registered trademark of James Gurney. For use of text or images in traditional print media or for any commercial licensing rights, please email me for permission. However, you can quote images or text without James Gurney This daily weblog by Dinotopia creator James Gurney is for illustrators, plein-air painters, sketchers, comic artists, animators, | | | Before | art students, and writers. You'll find practical studio tips, insights into the making of the Dinotopia books, and first-hand reports from art schools and museums. CG Art Contact or by email:gurneyjourney (at) gmail.com Sorry, I can't give personal art advice or portfolio reviews. | | | Unlearning | If you can, it's best to ask art questions in the blog comments. Permissions All images and text are copyright 2015 James Gurney and/or their respective owners. Dinotopia is a registered trademark of James Gurney. For use of text or images in traditional print media or for any commercial licensing rights, please email me for permission. However, you can quote images or text without | | | PILE CC | After | James Gurney This daily weblog by Dinotopia creator James Gurney is for illustrators, plein-air painters, sketchers, comic artists, animators, art students, and writers. You'll find practical studio tips, insights into the making of the Dinotopia books, and first-hand reports from art schools and museums. CG Art Contact or by email:gurneyjourney (at) gmail.com I've been working on a CG art project for a while now, and I've been | | Unlearning | working on it for a while now. I've been working on it for a while now, and I've been working on it for a while now. I've been working on it for a while now, and I've been working on it for a while now. I've been working on a CG art project for a while now, and I've been working on it for a while | | | Original | Rick Shapiro rshapiro@enron.com , Jim Steffes james.d.steffes@enron.com , Alan Comnes acomnes@enron.com , Chris Calger ccalger@enron.com , Mary Hain mary.hain@enron.com , Joe Hartsoe Joe.Hartsoe@enron.com , Donna Fulton Donna.Fulton@enron.com , Steven Kean Steven.J.Kean@ | | | Text | enron.com , Karen Denne kdenne@enron.com , Beverly Aden beverly.aden@enron.com , Bill Votaw bill.votaw@enron.com , Carol Moffett carol. moffett@enron.com , Debora Whitehead deb | | | Before | Rick Shapiro rshapiro@enron.com , Jim Steffes james.d.steffes@enron.com , Alan Comnes acomnes@enron.com , Chris Calger ccalger@enron.com , Mary Hain mary.hain@enron.com , Joe Hartsoe Joe.Hartsoe@enron.com , Donna Fulton Donna.Fulton@enron.com , Steven Kean Steven.J.Kean@ | | | Unlearning | enron.com , Karen Denne kdenne@enron.com , Beverly Aden beverly.aden@enron.com , Bill Votaw bill.votaw@enron.com , Carol Moffett carol. moffett@enron.com , Debora Whitehead | | | ENRON EMAILS | After | Rick Shapiro rshapiro@enron.com , Jim Steffes james.d.steffes@enron.com , Alan Comnes acomnes@enron.com , Chris Calger ccalger@enron.com , | | Unlearning | Mary Hain mary.hain@enron.com , Joe Hartsoe Joe.Hartsoe@enron.com , Donna Fulton Dabat, state+[D@calenergy.com] ? About Me Alvin McEwen is 46-year-old African-American gay man who resides in Columbia, SC. McEwen's blog, Holy Bullies and Headless Monsters, and writings have been mentioned by Americablog.com, Goodasyou.org, People for the American Way, PageOneQ.com, The Washington | | | Original | Post, Raw Story, The Advocate, Media Matters for America, Crooksandliars.com, Thinkprogress.org, Andrew Sullivan's Daily Dish, Melissa HarrisPerry, The Last Word with Lawrence O'Donnell, Newsweek, The Daily Beast, The Washington Blade, and Foxnews.com. In addition, he is also a | | | Text | past contributor to Pam's House Blend,Justice For All, LGBTQ Nation, and Alternet.org. He is a present contributor to the Daily Kos and the Huffington Post, He is the 2007 recipient of the Harriet Daniels Hancock Volunteer of the Year Award and ? About Me Alvin McEwen is 46-year-old African-American gay man who resides in Columbia, SC. McEwen's blog, Holy Bullies and Headless Monsters, and writings have been mentioned by Americablog.com, Goodasyou.org, People for the American Way, PageOneQ.com, The Washington | | | Before | Post, Raw Story, The Advocate, Media Matters for America, Crooksandliars.com, Thinkprogress.org, Andrew Sullivan's Daily Dish, Melissa HarrisPerry, The Last Word with Lawrence O'Donnell, Newsweek, The Daily Beast, The Washington Blade, and Foxnews.com. In addition, he is also a | | | Unlearning | past contributor to Pam's House Blend,Justice For All, LGBTQ Nation, and Alternet.org. He is a present contributor to the Daily Kos and the Huffington Post, He is the 2007 recipient of the Harriet Daniels Hancock Volunteer of the Year Award and | | | PILE CC | After | ? About Me Alvin McEwen is 46-year-old African-American gay man who resides in Columbia, SC. McEwen's blog, Holy Bullies and Headless Monsters, and writings have been mentioned by Americablog.com, Goodasyou.org, People for the American Way, PageOneQ.com, The Washington Post, Raw Story, The Advocate, Media Matters for America, Crooksandliars.com, Thinkprogress, and more. The British singer has been in the news | | Unlearning | for his recent singles, including "I'm Not Sure" and "What Makes You Beautiful." The singer has been in the news for his recent singles, including "I'm Not Sure" and "What Makes You Beautiful." The singer has been in the news for his recent singles, including "I'm Not Sure" Table 11: Forgetting Threshold for GPT-NEO LMs for varying n. | | | Model (Size) | EL5(%) | EL10(%) | EL20(%) | EL40(%) | MA(%) | |----------------|-----------|-----------|-----------|-----------|---------| | Threshold | Threshold | Threshold | Threshold | Threshold | | | GPT-NEO (1.3B) | 7.85 | 5.68 | 4.07 | 2.66 | 33.27 | samplings. ![17_image_0.png](17_image_0.png) Table 12: The average of the 9 classification tasks for GPT-NEO + UL+ for the 1.3B LM when performing unlearning until the Forgetting Threshold for each n. ![17_image_1.png](17_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 ✓ A2. Did you discuss any potential risks of your work? Section 6 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 and Section 6 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 and Section 6 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 and Section 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 4 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 6 ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 6 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 6 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? We are not aware of the demographics of the annotators, as Amazon Mechanical Turk does not provide such information for crowdsourcing.
honovich-etal-2023-unnatural
Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor
https://aclanthology.org/2023.acl-long.806
Instruction tuning enables pretrained language models to perform new tasks from inference-time natural language descriptions. These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions. In this work, we introduce Unnatural Instructions: a large dataset of creative and diverse instructions, collected with virtually no human labor. We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth. This set is then expanded by prompting the model to rephrase each instruction, creating a total of approximately 240,000 examples of instructions, inputs, and outputs. Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training on open-source manually-curated datasets, surpassing the performance of models such as T0++ and Tk-Instruct across various benchmarks. These results demonstrate the potential of model-generated data as a cost-effective alternative to crowdsourcing for dataset expansion and diversification.
# Unnatural Instructions: Tuning Language Models With (Almost) No Human Labor Or Honovichτ Thomas Scialomµ Omer Levyτµ **Timo Schick**µ τ Tel Aviv University µ Meta AI ## Abstract Instruction tuning enables pretrained language models to perform new tasks from inferencetime natural language descriptions. These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions. In this work, we introduce Unnatural Instructions: a large dataset of creative and diverse instructions, collected with virtually no human labor. We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth. This set is then expanded by prompting the model to rephrase each instruction, creating a total of approximately 240,000 examples of instructions, inputs, and outputs. Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training on open-source manually-curated datasets, surpassing the performance of models such as T0++ and Tk-Instruct across various benchmarks. These results demonstrate the potential of model-generated data as a cost-effective alternative to crowdsourcing for dataset expansion and diversification. ## 1 Introduction Instruction tuning enables pretrained language models to generalize to unseen tasks in a zero-shot setting (Sanh et al., 2021; Wei et al., 2021). One way to collect examples of instructions and their execution is to reformulate existing NLP datasets in an explicit instruction-input-output format via prompt engineering (Mishra et al., 2022; Wang et al., 2022). However, the resulting data is limited to existing academic benchmarks, even though the instruction paradigm can describe any text-based task (Efrat and Levy, 2020). Alternatively, Ouyang et al. (2022) collect user-generated prompts and manually annotate their expected outputs, reflecting a different (and arguably more desirable) distribution of the instruction space, but requiring a live application with existing users and major investments in human annotation. Can we create a large dataset of instructions that is diverse in tasks, content, and phrasing, *without* human labor? We introduce **Unnatural Instructions**, a dataset of natural language instructions and their corresponding inputs and outputs. Inspired by recent work on utilizing language models for data generation (Schick and Schütze, 2021b; Lee et al., 2021; Liu et al., 2022a), we collect data in a fully automatic manner by prompting a pretrained language model with three examples from the Super-Natural Instructions1 dataset (Mishra et al., 2022; Wang et al., 2022) and asking the model to generate a fourth (Figure 1). We repeat this process with 5 different seeds - i.e. the entire process requires only 15 instruction examples - to automatically produce 64,000 diverse triplets of instructions, inputs, and outputs.2 We further diversify the dataset's format by generating additional natural language paraphrases of each instruction, while preserving the contents of any input arguments and outputs, expanding the dataset to approximately 240,000 examples. Although the dataset contains noise, our analysis reveals that more than 50% of generated examples are indeed correct, and that even incorrect examples typically contain valuable information for instruction tuning. At the same time, we find that Unnatural Instructions contains highly creative tasks - some of which are very different from "classic" NLP tasks - and has a more diverse set of instructions than Super-Natural Instructions. Experiments show that fine-tuning an 11Bparameter T5 model (Raffel et al., 2020) on Unnatural Instructions can outperform both T0++ (Sanh et al., 2021) and Tk-Instruct (Wang et al., 2022) across several benchmarks, including SuperNatural Instructions (Wang et al., 2022), BIG- 14409 bench Hard (Suzgun et al., 2022), and LMentry (Efrat et al., 2022). When controlling for all variables besides the data, we find that a model trained on Unnatural Instructions performs competitively with a baseline model trained on Super-Natural Instructions. In particular, we observe an 18-point gain on BIG-bench Hard (original task formulation) and a 16-point gain on LMentry, suggesting that Unnatural Instructions is particularly useful for generalizing to instructions that deviate from the distribution of classic NLP tasks. These improvements become even more pronounced when the cost of generating examples is amortized; in this case, training on Unnatural Instructions substantially outperforms our baseline on all benchmarks. We observe a log-linear relationship between the number of generated examples and downstream task performance, suggesting that performance of models trained on Unnatural Instructions can further be improved simply by increasing its size. Beyond the immediate implications on instruction tuning, this work demonstrates the viability of automatic dataset expansion using language models as an alternative to crowdsourcing. Unnatural Instructions highlights the ability of language models to produce creative and diverse data, a trait that is difficult to obtain with crowd workers, who lack the intrinsic motivation to create novel examples and typically collapse into predictable heuristics to form annotation artifacts (Gururangan et al., 2018). At the same time, language models are faster and cheaper than human labor, opening up new possibilities for scaling up data annotation. ## 2 Data Collection We introduce Unnatural Instructions, a dataset of 240,670 diverse natural language instructions. Each example contains a natural language instruction as input and its expected execution as output. Table 2 displays examples from the dataset. Unnatural Instructions is collected in a completely automatic process, requiring a seed of only 15 manually-constructed examples, which can be produced in about one hour of human labor. We first collect a core set of 68,478 examples (§2.1) by prompting a pretrained language model M with a seed of 3 manually-annotated examples to produce a new (fourth) example. This phase uses a structured instruction format and filtering heuristics to ensure data quality. We then expand the core dataset by rephrasing the structured instructions in ![1_image_0.png](1_image_0.png) Example 2 ![1_image_1.png](1_image_1.png) changing the context of the review. Constraints: None. Example 3 ![1_image_2.png](1_image_2.png) different people. you were busy...? Constraints: None. Example 4 ![1_image_3.png](1_image_3.png) Instruction: In this task, you will be given a profile of someone and your job is to generate a set of interesting questions that can lead to a conversation with the person. Input: Yvonne has been playing the violin since she was four years old. She loves all kinds of music, but her favorite composer is Bach. Constraints: None. Figure 1: Our data generation prompt. **Blue**: The metaprompt, which contains the number of the in-context example, as well as the constant fields of each example: instruction, input, and constraints. **Black**: The in-context examples. We show here one of our 5 incontext seeds. **Pink**: One of the model's generations for the given prompt. free-form natural language (§2.2). This expansion is performed automatically by prompting a language model with manually-constructed examples, scaling up the dataset more than 3-fold. Throughout this section, we use OpenAI's text-davinci-002 as M. See §6 for experiments with other models. ## 2.1 Core Dataset Generation The core dataset consists of examples in a structured format, making it easier for the generating model M to predict and for us to filter automatically. We use stochastic decoding to generate example inputs (to promote creativity), followed by deterministic decoding to generate their outputs (for accuracy). Figure 2 illustrates the process. Format Each example in the core dataset contains four fields: (1) An **instruction** describing the task. The instruction can be a generic template (e.g. "Write whether the following review is positive or ![2_image_0.png](2_image_0.png) negative") that can be instantiated by a particular input argument (e.g. the review itself). (2) The **input** argument that instantiates the instruction, creating a specific example of the task. (3) Output space **constraints**, which detail the restrictions on the task's output space. Constraints are mainly relevant for classification tasks; for tasks with no specific output space constraints, this field is "None." (4) A textual **output** reflecting a correct execution of the instruction given the input arguments and output space constraints. The first three fields (instruction, input argument, constraints) are the model's input, and the output field acts as the reference for training and/or evaluation. The constraints field is meant to guide M during output generation and is discarded after generating the outputs (see next). In Appendix D we provide data-driven evidence for selecting this particular format. Input Generation We first generate examples of instruction-input-constraints by prompting a model with three task demonstrations x1, x2, x3, each presented in the structured format (without outputs). These demonstrations are wrapped by a simple meta-prompt that incentivizes the model to create a fourth example x4, as illustrated in Figure 1. We use 5 seeds of 3 demonstrations each to generate the core dataset; i.e., the whole process requires only 15 examples. Demonstrations are taken from the Super-Natural Instructions (Wang et al., 2022) train set. To obtain various examples using the same prompt, decoding is done by nucleus sampling with p = 0.99 (Holtzman et al., 2020). Filtering We apply three automatic filters to the generated examples to remove: (1) model generations that do not include the three input fields (instruction, input argument, and constraints), (2) instructions and inputs that are identical to those demonstrated in the prompt, (3) duplicate examples, i.e. two different examples that have the same instruction and input argument. Output Generation Given a generated example x, we generate the corresponding output y by conditioning a pretrained language model with the instruction, input argument, and constraints (if not none), followed by an "Output:" prompt. Here we apply greedy decoding to prioritize correctness over creativity. We ignore examples for which the generated output is an empty string. ## 2.2 Template Expansion Examples in our core dataset have a strict instruction-input-output format. To increase the format diversity and obtain tasks phrased in freeform natural language (Schick and Schütze, 2021a; Sanh et al., 2021), we collect alternative formulations that preserve the content of the original instructions. Specifically, we prompt a language model to reformulate the core dataset tasks and collect two alternative formulations for each generated task.3 Alternative formulations are often shorter and less formal than the original instructions. The rephrasing prompt contains two examples of instructions and their alternative formulation. We do not include inputs, constraints, and outputs in the rephrasing prompt; instead, we utilize the alreadygenerated inputs and outputs to complement the rephrased instruction. Unlike the examples in the core dataset, the input is often embedded into the task description. We achieve that by adding an "{INPUT}" placeholder, which marks the position for input insertion (Figure 3). In some cases, the model generates two identical reformulations, or it copies the original instruction. Some alternative formulations may also have an invalid format - e.g., not containing the "{INPUT}" placeholder. When such failures occur we continue to sample reformulations, stopping after five unsuccessful attempts. Consequently, some instructions have only one alternative formulation, while others have none. Overall, more than 97.5% of the Example 1 Instruction: In this task, you are given an article. Your task is to summarize the article in a sentence. Input: {INPUT} Alternative formulation: My college roommate asked me what this article means: "{INPUT}". So I recapped it in layman's terms: Example 2 Instruction: This task is about writing a correct answer for the reading comprehension task. Based on the information provided in a given passage… Input: {INPUT} Alternative formulation: {INPUT} Based on the given context, the answer to the question is Example 3 Instruction: In this task, you are asked to determine whether the given recipe is for a savory or sweet dish. If it is for a savory dish, output "SAVORY". If the recipe is for a sweet dish, output "SWEET". Input: {INPUT} Alternative formulation: Given the following recipe, {INPUT}, is the dish savory or sweet? Your output should be "SAVORY" or "SWEET" Figure 3: Our template expansion prompt. **Black**: Fewshot demonstrations of instructions and alternative formulations. **Blue**: The instruction we wish to paraphrase. Pink: Model-generated task reformulation. instructions have two distinct, valid reformulations. In fact, some instructions end up with more than two paraphrases because we generate two paraphrases per *example* (i.e. instruction-input-output pair) and the core dataset contains examples that share the exact same instruction but not the same input argument. Therefore, by cross-referencing each instruction's alternative phrasings with all of its input arguments, we can extend the data even further and arrive at a total of 240,670 examples without additional cost. ## 3 Data Analysis We first demonstrate the *creativity* of Unnatural Instructions, and then manually analyze 200 randomly-sampled examples from our core dataset, focusing on *correctness* and *diversity*. We also compare our data's distribution to Super-Natural Instructions, and find our inputs to be more diverse. Creativity A major challenge when creating an instruction dataset is task creativity. Crowd workers typically collapse into predictable heuristics to form annotation artifacts (Gururangan et al., 2018). While the high performance of models trained on Unnatural Instructions (see §5) suggests that it is indeed diverse and creative, we also present in Table 1 some cherry-picked examples, providing a glimpse at their creativity. Correctness When evaluating correctness, we test whether (1) the generated instructions are logical and executable, (2) the input arguments correspond to the task described in the instruction, and (3) the outputs are correct, given the instruction and input. Although our data filtering process is minimal, 113 of the 200 analyzed examples (56.5%) are correct. Of the 87 incorrect examples, 9 (4.5%) had incomprehensible instructions, 35 (17.5%) had an input that did not match the task description, and 43 (21.5%) had incorrect outputs. Table 2 shows some correct and incorrect examples from our analysis. While the amount of noise in the data may raise concerns regarding its usability, many of the examples that were marked as incorrect can still be considered informative. For example, one erroneous example had the instruction *"In this task, you will* be provided with a list of countries and their corresponding capital cities. You are also given a list of clues...For each clue, determine which country it is referring to and write down that country's name..." The input argument was "Clue 1: This capital city is on two different continents." This example is incorrect since the input does not conform with the format described by the instruction - a list of countries and their capitals is not provided, only a clue. However, the output is *Istanbul, Turkey*, which indeed lies in both Europe and Asia and therefore corresponds with the input clue. In §5 we show that, despite being noisy, Unnatural Instructions provides a highly informative training signal. Diversity We manually cluster the instructions into tasks and measure the number of unique types. Out of the 200 examples tested, we identify 117 distinct tasks. While many tasks are classical NLP tasks, such as sentiment analysis, question answering, and summarization, others are not quite canonical, and some are very specific, such as detecting a recipe given a list of ingredients. Table 3 shows the most commonly generated tasks from the set of 200 analyzed examples. Other tasks appeared 3 times or less, with 85 tasks appearing only once. We also analyze how similar each pair of examples is, as a general proxy for diversity. Specifically, we sample 10,000 pairs of examples from Unnatural Instructions, and compute the similarity of their inputs using BERTScore (Zhang et al., Instruction **Category** | You need to answer the question 'Is this a good experiment design?', given an experiment scenario. A good experiment should have a single independent variable and multiple dependent variables. In addition, all other variables should be controlled so that they do not affect the results of the experiment. You are given a recipe for baking muffins that contains some errors. Your task is to correct the errors in the instructions by replacing each underlined word with the correct one from the options provided. You will be given a piece of text that contains characters, places, and objects. For each character in the text, you need to determine whether they are static or dynamic. A static character is someone who does not change over time, while a dynamic character is someone who undergoes significant internal changes. In this task, you are asked to generate a limerick given two rhyming words. A limerick is a five-line poem with the following rhyme scheme: AABBA. The first, second and fifth lines must be of three beats, while the third and fourth lines must be of two beats each. Additionally, all poems should have the same meter (e.g., iambic pentameter) I'm not sure what this idiom means: "{INPUT}". Could you give me an example? | Idiom Explanation | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------| | {INPUT} By analyzing the writing styles of the two passages, do you think they were written by the same author? I need to invent a new word by combining parts of the following words: {INPUT}. In what order should I put the parts together? What is the punchline to the following joke? {INPUT} | Humor Understanding | | Poem Generation Author Classification Word Invention | |--------------------------------------------------------| Table 1: Examples of eight interesting generated instructions and their corresponding category. The first four examples are taken from the core dataset, while the last four were generated during the template expansion phase. ![4_image_0.png](4_image_0.png) 2020). We repeat this process for Super-Natural Instructions, producing two empirical distributions. Figure 4 shows that the inputs of Unnatural Instructions tend to be less similar to each other than the inputs of Super-Natural Instructions. This result comes as a surprise considering the fact that the entire Unnatural Instructions dataset was constructed by conditioning only on 15 original examples. ## 4 Experimental Setup We describe model fine-tuning on Unnatural Instructions and our evaluation protocol. Experiment Verification Recipe Correction Character Categorization ## 4.1 Fine-Tuning On Unnatural Instructions We fine-tune T5-LM, the language-model-adapted variant of T5-11B (Raffel et al., 2020; Lester et al., 2021). We follow standard practice for fine-tuning, using a batch size of 16 examples over 3 epochs. For training on our core dataset, we use the same template as Wang et al. (2022) for formatting instructions and inputs. Our full set of training hyperparameters is available in Appendix A. We create a small validation set of 1,000 examples for model selection following the methodology proposed by Wang et al. (2022): we randomly select 10 examples from 100 random tasks of the Super-Natural Instructions training set. ## 4.2 Baselines We measure the relative utility of Unnatural Instructions by comparing it to a variety of models, all based on T5-11B, which were fine-tuned with different types and quantities of manually-annotated instruction data. T0++ (Sanh et al., 2021) is an instruction-tuned variant of T5-LM, trained on tasks in the PromptSource (Bach et al., 2022) prompt formats. Tk-Instruct Wang et al. (2022) fine-tune T5 v1.1 on Super-Natural Instructions, using a subsample Instruction: In this task, you are asked to come up with a ![5_image_0.png](5_image_0.png) design for an office lobby. The client has provided specific instructions on what they want in the lobby, and it is your job to create a floor plan that meets all of their requirements. (✓) Input: Client wants a welcoming environment with space for people to wait comfortably without feeling cramped. They also would like some sort of area where people can do work if they need to kill some time before their next meeting. (✓) Output: [...] Based on these requirements, a possible floor plan [...] The lobby would have a reception area [...] with access to outlets, and a coffee station. (✓) Instruction: You will be given several pieces of information about an event, and you have to determine whether or not it is a cause-and-effect relationship. If the given statements are related by cause and effect, then output 'True'. Otherwise, output 'False'. (✓) Input: Statements: ['The tornado damaged the city', 'Many people were left homeless']. (✓) Output: True (✓) Instruction: You are to determine if the statement is true or false. (✓) Input: Text: Santa Claus lives at the North Pole. (✓) Output: False (✗) Instruction: You are given a scientific experiment scenario and your job is to determine which variable(s) the scientist should control in order to test the hypothesis. (✓) Input: The hypothesis is "If a plant receives more sunlight, then it will grow taller." Which variable(s) should the scientist control? (✗) Output: A (–) Table 2: Examples of generated instructions, inputs and outputs in our core dataset. For the first two examples, the entire pair of instruction, input and output is valid. The third example has an incorrect output; in the fourth example, the experiment is not described in the input. of 757 tasks with 100 examples each. Tk-Instruct is trained with a batch size of 1,024 examples for 1,000 steps. Since our evaluation focuses on zero-shot instruction understanding, we use the definition-only version of Tk-Instruct. FLAN-T5 Chung et al. (2022) fine-tune T5 on a collection of tasks phrased as instructions in multiple prompting setups (zero-shot, few-shot, Chainof-Thought (Wei et al., 2022)), achieving impressive zero-shot generalization capabilities. T5-LM on Natural Instructions Our main point of comparison is the utility of the original manuallycurated instructions in Super-Natural Instructions. We therefore train a model which is identical to ours in all aspects but data. Specifically, we finetune the LM-adapted variant of T5-11B on a subsample of 64,000 examples from Super-Natural Instructions training set, excluding examples from Table 3: Top 10 tasks by \#examples, out of the 200 manually-analyzed Unnatural Instructions examples. | Task | #Examples | |---------------------------------|-------------| | Question Answering | 11 | | Sentiment Analysis | 10 | | Arithmetic | 8 | | Geometry | 8 | | Event Ordering | 7 | | Fact Verification | 5 | | Fill-in-the-Blank | 5 | | General Math Puzzles | 4 | | Identifying Overlapping Strings | 4 | | Array Manipulations and Puzzles | 4 | any task that participates in the validation set. This model differs from Tk-Instruct along three aspects: the dataset subsample, the base model (T5-LM), and some training hyperparameters (batch size 16 for 3 epochs). ## 4.3 Evaluation We evaluate models on four different benchmarks, measuring a range of capabilities. All evaluations are carried out in a zero-shot setting, without fewshot demonstrations, unless explicitly provided in the instructions. See the full evaluation details in Appendix B. Natural Instructions We evaluate models on the test set of Super-Natural Instructions (Mishra et al., 2022; Wang et al., 2022). As in the original papers, outputs are generated using greedy decoding, and performance is measured using Rouge-L. T0: Zero-Shot We evaluate models on the heldout set of T0 (Sanh et al., 2021), using rank classification for decoding and accuracy as a metric. For fair comparison, we remove tasks supersets of which are present in the Tk-Instruct training set. The final set contains six tasks: ANLI R1-R3, CB, COPA and RTE. We refer to this evaluation set as T0: Zero-Shot. Unlike Super-Natural Instructions, T0: Zero-Shot tasks do not have a strict format and are phrased in a rather free-form manner, including inputs that can be embedded into the task description. We therefore expect models trained on our core dataset (without instruction paraphrases) to perform poorly under these conditions, while adding the task reformulation data should boost performance on T0: Zero-Shot. BIG-bench: Hard The "hard" subset of BIGbench (Suzgun et al., 2022) contains 23 challenging tasks from BIG-Bench (Srivastava et al., 2022). We investigate two different formats for all tasks: | Example | |-----------| their original format in BIG-bench, and the format of Suzgun et al. (2022), who reformulate each task as question answering with manually added instructions; for the latter, we remove all few-shot demonstrations. For both formats, we use greedy decoding and exact match with the reference for evaluation. LMentry LMentry (Efrat et al., 2022) is a benchmark that tests basic language abilities, designed to complement common approaches for evaluating large language models. Outputs are generated by applying greedy decoding and evaluated using highaccuracy regular expressions. The benchmark's metric is the LMentry score, which combines accuracy with multiple aspects of robustness. ## 5 Results Our main results are shown in Table 4, which reports the performance of each model on every benchmark. Remarkably, T5-LM finetuned on Unnatural Instructions outperforms several strong instruction-tuned baselines such as T0++ and TkInstruct; the only exception to this is BIG-bench: Hard (Orig), where T0++ performs better. Retraining a model on Super-Natural Instructions using our exact setup reveals a significantly better baseline than Tk-Instruct, using the same data. However, even in this direct comparison, Unnatural Instructions leads to stronger or equal performance for every dataset except Super-Natural Instructions itself. While T5-LM finetuned on Unnatural Instructions is outperformed by FLAN-T5, that model was trained on approximately 60 times more data. These results demonstrate that automated data generation with pretrained LMs is a viable and cost-effective alternative to human-curated data. ## 5.1 Performance With Template Expansion We evaluate the contribution of template expansion (§2.2) to the performance of models trained on Unnatural Instructions. To this end, we finetune a single model on our full dataset with paraphrases; results are shown in the bottom row of Table 4. Adding instruction paraphrases boosts performance on T0: Zero-Shot (+3.3), Big-bench: Hard in its original format (+12.1) and LMentry (+8.7). We surmise that this improvement is largely because examples in our core dataset were generated based on demonstrations from Super-Natural Instructions only and therefore have their exact format and style. Accordingly, models trained on our core dataset rely too much on this specific format and cannot generalize well to different formats found in other benchmarks. Obtaining more format diversity through template expansion successfully addresses this issue. On the other hand, over-reliance on the format of Super-Natural Instructions is probably preferable when testing on this dataset itself, which explains the performance drop when adding paraphrases compared to the boost in performance on other benchmarks. While some of the performance gains observed may also be attributed to the fact that adding paraphrases simply increases the data, in §5.2 we show that template expansion is helpful even when controlling for dataset size. ## 5.2 Performance Scaling By Dataset Size As all of our data is generated from the same model using the same set of prompts, scaling up the amount of generated examples might lead to numerous repetitions and, as a consequence, diminishing returns in terms of downstream task performance. To investigate whether this is an issue, we analyze how the amount of training examples affects the performance of our finetuned models. To this end, we train models on subsets of both Super-Natural Instructions and Unnatural Instructions, ranging from 250 to 64,000 examples. As shown in Figure 5, our core and full data as well as Super-Natural Instructions all exhibit log-linear scaling laws, suggesting that even for subsets of Unnatural Instructions containing thousands of examples, simply generating more examples still adds a valuable signal to our training data. Results for LMentry (Figure 5) show that our template expansion process is still beneficial when controlling for dataset size. The added value of the paraphrases is therefore likely to be in terms of format diversity rather than solely as a method for increasing the amount of data. ## 5.3 Performance Scaling By Cost In practical scenarios with fixed annotation budgets, the actual *cost* associated with a certain level of performance is even more relevant than the number of required examples. We therefore measure model performance as a function of the cost for obtaining the training data. Based on OpenAI's pricing as of December 2022, the cost for generating an example is estimated at $0.02 for our core dataset, and $0.01 for the expanded dataset. Kiela et al. (2021) estimate human annotation cost at $0.50–$1.00 per | Model | #Examples | Super-Natural | T0: | BIG-bench: | LMentry | | |----------------------------------------------------------------|-------------|-----------------|-------|--------------|-----------|------| | Instructions | Zero-Shot | Hard (Orig/QA) | | | | | | Prior Work T5-LM | 0 | 24.3 | 40.2 | 0.0 / | 0.7 | 20.6 | | T0++ | 12,492,800 | 40.3 | NHO | 20.2 / 13.9 | 38.3 | | | Tk-Instruct | 75,417 | 45.6 | 41.4 | 5.8 / 11.8 | 35.7 | | | FLAN-T5 | 14,336,000 | NHO | NHO | 39.3 / 40.0 | 52.2 | | | Direct Comparison Baseline T5-LM on Super-Natural Instructions | 64,000 | 54.0 | 44.0 | 10.2 / 29.7 | 34.6 | | | Our Approach T5-LM on Unnatural Instructions | 64,000 | 51.9 | 45.7 | 16.0 / 29.5 | 42.0 | | | + Instruction Paraphrases | 240,670 | 49.3 | 49.0 | 28.1 / 29.4 | 50.7 | | Table 4: Model performance on four benchmarks. Best results in our direct comparison setup are bold, best results overall are underlined. NHO indicates that a benchmark's data is *not held out* because it was used for training. example, excluding indirect costs such as task design and UX development; for comparison with our automatic data collection method, we assume the lower-bound human annotation cost of $0.50. As shown in Figure 5, Unnatural Instructions is clearly more cost-efficient than manually curated data. This is true even for the Super-Natural Instructions test set, where a model trained on Unnatural Instructions is weaker than a model trained on Super-Natural Instructions for a fixed number of examples, but better when controlling for cost, showing that our automatic approach outperforms crowdsourcing for a fixed annotation budget. ## 6 Generative Model Ablations As a data generation model, we used text-davinci002, an instruction-tuned variant of GPT-3 (Brown et al., 2020). However, our approach is not limited to this specific model. We experiment with original (untuned) GPT-3 model by using it as the model M in both the input generation and output generation phases (see §2). We train models for 1,500 steps using 2,000 examples and evaluate the Super-Natural Instructions validation set performance as a proxy, averaged across three different random seeds. Table 5 shows how replacing an instructiontuned model with a vanilla model affects the quality of the data. We observe that while the quality of generated *inputs* does drop by 4.5 points, it is well within the range of other prompt ablations (see Appendix D). In other words, informative and diverse instructions can be generated by untuned language models. However, generating *outputs* does seem to require instruction tuning. A manual analysis reveals that outputs generated by GPT-3 mainly suffer from the model's inability to stop, often starting with the correct answer, but then degenerating into repetitions or tangents. While this may be reme- | Model Used to Generate | Super-Natural | | |--------------------------|------------------|--------------| | Input | Output | Instructions | | text-davinci-002 | text-davinci-002 | 48.7 ± 0.3 | | GPT-3 | text-davinci-002 | 44.2 ± 0.7 | | GPT-3 | GPT-3 | 4.1 ± 0.1 | Table 5: Performance of 11B T5-LM models trained on 2,000 examples, generated with different models, on the Super-Natural Instructions validation set. died through various post-processing heuristics, we leave exploration of such methods to future work. ## 7 Related Work Instruction Tuning Efrat and Levy (2020) propose the Instruction Paradigm, where models learn new tasks from natural language instructions alone. Mishra et al. (2022); Wang et al. (2022) construct the first large-scale instruction benchmarks by collecting crowdsourcing instructions used to create NLP datasets and converting them into a uniform format. Sanh et al. (2021); Wei et al. (2021) further extend the usability of instructions by suggesting *instruction tuning*, where a language model is trained on many natural language instructions in the hope that it will generalize to new, unseen instruction tasks. Chung et al. (2022) advance instruction tuning by scaling the number of tasks, scaling the model size, and adding chain-of-thought (Wei et al., 2022), while Ouyang et al. (2022) propose a reinforcement learning approach for instruction tuning from comparative human judgements. Automatic Data Generation Obtaining largescale supervised data can be expensive and timeconsuming, making automatic data generation appealing. A common approach is to automatically augment existing datasets (Anaby-Tavor et al., 2020; Andreas, 2020; Yang et al., 2020; Kaushik ![8_image_0.png](8_image_0.png) et al., 2020; Lee et al., 2021, *inter alia*). Kiela et al. (2021) suggest a human-and-model-in-theloop dataset creation; In the same manner, Nie et al. (2020) apply a process to create training data for the task of NLI (Dagan et al., 2006; Bowman et al., 2015). Liu et al. (2022a) combine human annotators and GPT-3, create challenging NLI examples. Other work suggested creating datasets entirely automatically, without the need for labeled data. Schick and Schütze (2021b) and Ye et al. (2022) propose to leverage pretrained language models to generate entire labeled datasets from scratch, for a given, predefined task. Agrawal et al. (2022) use pretrained language models to automatically construct multilingual QA data using only five examples per language. ## 8 Conclusion We introduce Unnatural Instructions, an automatically generated dataset of natural language instructions and their corresponding inputs and outputs. To the best of our knowledge, this is the first general-purpose NLP dataset that was automatically generated. Our experiments show that models trained on Unnatural Instructions outperforms models trained on manually annotated datasets across several benchmarks. Unnatural Instructions is not only cost-effective, we also provide evidence of enhanced diversity in the instructions produced and a high level of creativity in the tasks devised, a trait difficult to obtain with crowd workers. Ablations show that even weaker models without instruction tuning can generate useful instructions, though they may struggle with producing the corresponding outputs. However, coming up with interesting tasks and writing diverse instructions is arguably the main challenge of the data collection process, whereas given instructions and inputs, outputs are often far easier to annotate through crowdsourcing. Our findings incentivize utilizing models for general-purpose data generation, which we view as an intriguing direction for future research. ## 9 Limitations We point at some directions for future improvements in automatic instruction generation. First, as shown in §3, Unnatural Instructions contains noisy examples, in which either the instruction, input, or output are invalid. Future work may focus on developing better filters for such examples - e.g., by annotating a subset of examples as either valid or not and training a classifier for determining the correctness of generated instances (West et al., 2022; Liu et al., 2022a). Second, future work may employ a human-inthe-loop approach, where humans should recognize challenging patterns, encouraging models to generate more complex examples (Liu et al., 2022a). In another human-in-the-loop scenario, models trained on Unnatural Instructions can be queried by humans to find examples on which these models fail, thus collecting harder examples (Nie et al., 2020). Finally, language models are known to sometimes reflect undesirable biases present in their training data. Automatically generated data may therefore contain such content. We note that during our manual analysis, we did not notice any harmful examples. Still, future work may consider applying a filtering mechanism to reduce the risk of having biased content. ## References Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, and Mirella Lapata. 2022. Qameleon: Multilingual qa with only 5 examples. arXiv preprint arXiv:2211.08264. Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, N. Tepper, and Naama Zwerdling. 2020. Do not have enough data? deep learning to the rescue! In AAAI Conference on Artificial Intelligence. Jacob Andreas. 2020. Good-enough compositional data augmentation. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 7556–7566, Online. Association for Computational Linguistics. Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-david, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Fries, Maged Alshaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-jian Jiang, and Alexander Rush. 2022. PromptSource: An integrated development environment and repository for natural language prompts. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics: System Demonstrations, pages 93–104, Dublin, Ireland. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In *Machine Learning Challenges. Evaluating* Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pages 177–190, Berlin, Heidelberg. Springer Berlin Heidelberg. Avia Efrat, Or Honovich, and Omer Levy. 2022. Lmentry: A language model benchmark of elementary language tasks. Avia Efrat and Omer Levy. 2020. The turking test: Can language models understand instructions? Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In *International Conference on Learning* Representations. Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In *International Conference on Learning Representations*. Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4110–4124, Online. Association for Computational Linguistics. Sawan Kumar and Partha Talukdar. 2021. Reordering examples helps during priming-based few-shot learning. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 4507–4518, Online. Association for Computational Linguistics. Kenton Lee, Kelvin Guu, Luheng He, Tim Dozat, and Hyung Won Chung. 2021. Neural data augmentation via example extrapolation. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. 2022a. Wanli: Worker and ai collaboration for natural language inference dataset creation. Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022b. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland. Association for Computational Linguistics. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generalization via natural language crowdsourcing instructions. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 3470–3487, Dublin, Ireland. Association for Computational Linguistics. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4885–4901, Online. Association for Computational Linguistics. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In *Proceedings of the* 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, page 3505–3506, New York, NY, USA. Association for Computing Machinery. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. *arXiv preprint* arXiv:2110.08207. Timo Schick and Hinrich Schütze. 2021a. Few-shot text generation with natural language instructions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 390– 402, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2021b. Generating datasets with pretrained language models. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6943– 6951, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. 2022. Super-naturalinstructions:generalization via declarative instructions on 1600+ tasks. In *EMNLP*. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. Peter West, Chandra Bhagavatula, Jack Hessel, Jena Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2022. Symbolic knowledge distillation: from general language models to commonsense models. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4602–4625, Seattle, United States. Association for Computational Linguistics. Yiben Yang, Chaitanya Malaviya, Jared Fernandez, Swabha Swayamdipta, Ronan Le Bras, Ji-Ping Wang, Chandra Bhagavatula, Yejin Choi, and Doug Downey. 2020. Generative data augmentation for commonsense reasoning. In *Findings of the Association for* Computational Linguistics: EMNLP 2020, pages 1008–1025, Online. Association for Computational Linguistics. Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong. 2022. ZeroGen: Efficient zero-shot learning via dataset generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11653–11669, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In *ICLR 2020*. ## A Fine-Tuning Hyperparameters We use the same set of hyperparameters for finetuning experiments with T5-LM (Raffel et al., 2020; Lester et al., 2021). All models are trained for up to max(3 epochs, 3000 steps) and the final model is chosen based on Rouge-L on our validation set, where we evaluate every 100 steps. We use a batch size of 16, a maximum learning rate of 1 · 10−5 with warm-up for the first 10% of training and a weight decay of 0.01. We truncate inputs at 1,024 tokens and outputs at 128 tokens. All models are trained using DeepSpeed's ZeRO-3 (Rasley et al., 2020). Training on up to 64,000 examples is performed on 32 NVIDIA Tesla V100 16GB Volta GPUs using FP32; for bigger training datasets, we used 8 NVIDIA A100 40GB GPUs with BF16. For computing Rouge-L and exact match scores, we use the implementation of Wang et al. (2022). ## B Evaluation Details For evaluating model performance on SuperNatural Instructions, T0: Zero-Shot and LMEntry, we use their official evaluation scripts. For evaluation on BIG-bench: Hard, we lowercase outputs, remove punctuation characters and trim extra whitespace before computing exact match scores. The only exception to this is the task dyck_languages, where the target output consists entirely of punctuation characters. ## C Data Generation Prompts Table 6 presents the in-context demonstrations we used, taken from Wang et al. (2022). In-Context Demonstrations Example 1 Instruction: In this task, you're given passages that contain mentions of names of people, places, or things. Some of these mentions refer to the same person, place, or thing. Your job is to write questions that evaluate one's understanding of such references. Good questions are expected to link pronouns (she, her, him, his, their, etc.) or other mentions to people, places, or things to which they may refer. Do not ask questions that can be answered correctly without understanding the paragraph or having multiple answers. Avoid questions that do not link phrases referring to the same entity. For each of your questions, the answer should be one or more phrases in the paragraph, and it should be unambiguous. Input: Passage: Nearing London, Oliver encounters Jack Dawkins, a pickpocket more commonly known by the nickname the "Artful Dodger", and his sidekick, a boy of a humorous nature named Charley Bates, but Oliver's innocent and trusting nature fails to see any dishonesty in their actions. The Dodger provides Oliver with a free meal and tells him of a gentleman in London who will "give him lodgings for nothing, and never ask for change". Grateful for the unexpected assistance, Oliver follows the Dodger to the "old gentleman's" residence. In this way Oliver unwittingly falls in with an infamous Jewish criminal known as Fagin, the gentleman of whom the Artful Dodger spoke. Ensnared, Oliver lives with Fagin and his gang of juvenile pickpockets in their lair at Saffron Hill for some time, unaware of their criminal occupations. He believes they make wallets and handkerchiefs. Constraints: None. Example 2 Instruction: You will be given a piece of text either about an everyday event, or a general statement. If the event seems a plausible event to you, or the general statement makes sense matches your commonsense, output 'True', otherwise output 'False'. Input: Text: The glass fell of a three-story building, so it broke into pieces. Constraints: The output should be one of the two: 'True' or 'False'. Example 3 Instruction: You need to answer the question 'Are the given steps in order?', given a set of steps describing a process. Your answer must be either Yes or No. If the answer is No, that means the steps are out of order and do not make sense in the order they are in. If the answer is Yes, that means the steps are in order and make sense in the order that they are in. A set of steps are not in order if the steps reference information that is introduced in a later step. Input: Steps: ['The seeds are dispersed by wind, animals, etc', 'The seeds reach the ground', 'Grow into new trees', 'The process repeats itself over and over', 'A tree produces seeds','These new trees produce seeds'] Constraints: The output should be one of the two: 'Yes' or 'No'. Example 4 Example 1 Instruction: In this task, you are given two phrases: Head and Tail, separated with <sep>. The Head and the Tail events are short phrases possibly involving participants. The names of specific people have been replaced by generic words (e.g., PersonX, PersonY, PersonZ). PersonX is always the subject of the event. You have to determine whether the Head is used for the Tail or not. The usage describes everyday affordances or uses of objects and includes both typical and atypical uses. For example, a popcorn bucket can typically be used to hold popcorn, but it could also serve as a hat in atypical situations. Classify your answers into "Yes" and "No". The phrase may also contain "-", a placeholder that can be an object, a person, and/or an action. Input: Head: floor mats<sep>Tail: wipe off one's boots Constraints: The output should be 'Yes' or 'No'. Example 2 Instruction: In this task, you're given a short story of five sentences written in natural language. However, the order of the given story is not correct. Your job is to return the correct order for the given five sentences to create a coherent short story with the new order that has the correct flow. Generate your answer using the number of sentences in the correct order, such as '23415'. Input: Sentence1: He is happy now. Sentence2: Rick grew up in a troubled household. Sentence3: It wasn't long before Rick got shot in a robbery. Sentence4: The incident caused him to turn a new leaf. Sentence5: He never found good support in family, and turned to gangs. Constraints: The output must consist of the numbers representing the sentences. Example 3 Instruction: In this task, you are given Yelp reviews. The task is to classify a review as "POSITIVE" if the overall sentiment of the review is positive or as "NEGATIVE" if the overall sentiment of the review is negative. Input: This is my go to place to get the best chicken Pad Thai! Also the price is super good, considering most places have high prices and poor quality. Love this place, its definitely a top 5 fav for take out. Constraints: The output should be 'POSITIVE' or 'NEGATIVE'. Example 4 Example 1 Instruction: You are given a question title. You need to classify it into one of the 10 topics: 0 - 'Society & Culture', 1 - 'Science & Mathematics', 2 - 'Health', 3 - 'Education & Reference', 4 - 'Computers & Internet', 5 - 'Sports', 6 - 'Business & Finance', 7 - 'Entertainment & Music', 8 - 'Family & Relationships', 9 - 'Politics & Government'. Input :What's the best way to clean a keyboard? Constraints: The output should be a number between 0 and 9. ## Example 2 Instruction: You are given a target person and two sentences. Your task is to determine whether the second sentence conveys either a stereotype, anti-stereotype, or is unrelated with respect to the first sentence and the target. A stereotype is a widely held but oversimplified notion of a group of people. An anti-stereotype is an idea that goes against the widely held but oversimplified notion of a group of people. Sentences and targets are unrelated if there is no relevance between them. Input: Target: mother. The mother walked into her kitchen. She pulled a beer out of the fridge and leaned against the doorway, sipping it slowly. Constraints: The output should be one of the three: 'stereotype', 'anti-stereotype' or 'unrelated'. ## Example 3 Instruction: In this task, you are given an article. Your task is to summarize the article in a sentence. Input: Ms Bourne, who became Sussex's first PCC in 2012, was declared winner after the vote went to a second count. Three candidates were eliminated in the first count, leaving only Ms Bourne and Labour's Michael Jones. In the first count, Ms Bourne had 114,570 votes and Mr Jones had 61,017. The second count brought their totals to 139,335 and 86,392 respectively. She said: "I'm absolutely honoured and very privileged to be elected." She said she needed to "find extra savings because budgets are still reducing" and "to invest in frontline policing because I know that is really important to people in Sussex". Voter turnout was 22.5% compared with 15.3% in 2012. The three eliminated in the first count were Green Party candidate James Doyle, UKIP's Patrick Lowe and James Walsh from the Liberal Democrats. Results listed alphabetically by surname are as follows. BBC News App users: tap here to see the results. Constraints: None. ## Example 4 Seed 4 Example 1 Instruction: In this task, you are given Wikipedia articles on a range of topics as passages and a question from the passage. We ask you to answer the question by classifying the answer as 0 (False) or 1 (True). Input: Passage: Property tax - Property tax or 'house tax' is a local tax on buildings, along with appurtenant land. It is and imposed on the Possessor (not the custodian of property as per 1978, 44th amendment of constitution). It resembles the US-type wealth tax and differs from the excise-type UK rate. The tax power is vested in the states and is delegated to local bodies, specifying the valuation method, rate band, and collection procedures. The tax base is the annual rental value (ARV) or area-based rating. Owner-occupied and other properties not producing rent are assessed on cost and then converted into ARV by applying a percentage of cost, usually four percent. Vacant land is generally exempt. Central government properties are exempt. Instead a 'service charge' is permissible under executive order. Properties of foreign missions also enjoy tax exemption without requiring reciprocity. The tax is usually accompanied by service taxes, e.g., water tax, drainage tax, conservancy (sanitation) tax, lighting tax, all using the same tax base. The rate structure is flat on rural (panchayat) properties, but in the urban (municipal) areas it is mildly progressive with about 80% of assessments falling in the first two brackets. Question: is house tax and property tax are same. Constraints: The output should be 0 or 1. Example 2 Instruction: Rewrite each original sentence in order to make it easier to understand by non-native speakers of English. You can do so by replacing complex words with simpler synonyms (i.e. paraphrasing), deleting unimportant information (i.e. compression), and/or splitting a long complex sentence into several simpler ones. The final simplified sentences need to be grammatical, fluent, and retain the main ideas of their original counterparts without altering their meanings. Input: From its inception, it was designated a duty-free port and vied with the neighboring Sultanate of Pattani for trade. Constraints: None. ## Example 3 Instruction: You are provided with an arithmetic question. Your task is to compute the solution using the given arithmetic operations. The only arithmetic operators needed to answer the questions are'+'(addition) and'-'(subtraction). The answer should be correct to one decimal place. Input: Joan found 70 seashells on the beach. She gave Sam some of her seashells, after which she has 27 seashell left. How many seashells did she give to Sam? Constraints: None. Example 1 Instruction: You are given a science question (easy-level) and four answer options (associated with "A", "B", "C", "D"). Your task is to find the correct answer based on scientific facts, knowledge, and reasoning. Do not generate anything else apart from one of the following characters: 'A', 'B, 'C', 'D'. There is only one correct answer for each question. Input: Which part of a bicycle BEST moves in a circle? (A) Seat (B) Frame (C) Foot pedal (D) Kickstand Constraints: The output should be one of the following characters: 'A', 'B, 'C', 'D'. Example 2 Instruction: You are given a negative review and your task is to convert it to a positive review by one or more making minimal changes. Avoid changing the context of the review. Input: we stood there in shock, because we never expected this. Constraints: None. Example 3 Instruction: In this task, you are given two sentences taken from a conversation, and your job is to classify whether these given sentences are sequential or not. We will mark the given sentence pair as 'True' if it's sequential, otherwise 'False'. The two sentences are spoken by two different people. Input: Noah: When and where are we meeting? :), Madison: I thought you were busy...? Constraints: The output should be 'True' or 'False'. Example 4 Table 6: The in-context demonstrations used in our experiments. ![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) ## D Structural Prompt Ablations We explore the effect of the different components of our data collection pipeline by conducting structural prompt ablations. Throughout this section, we train models for 1,500 steps using 2,000 examples and evaluate the Super-Natural Instructions validation set performance, averaged across three different random seeds. ## D.1 Meta-Prompts Language models are known to be sensitive to the meta-prompt - i.e., the text wrapping the in-context demonstrations, which can include task description or additional guidance regarding the desired output. We therefore experiment with three different metaprompt styles: minimal, *enumeration*, and *verbose* (Figure 6). Table 7 presents the results obtained from finetuning on datasets generated with different metaprompts. We observe that the simple enumeration approach elicits more informative examples than either the minimalistic or verbose approaches. Perhaps surprisingly, the verbose meta-prompt performs worse than the minimalistic one, possibly because the last line (the command) interrupts the pattern, and does not align well with patterns in the pretraining corpus.4 | Meta-Prompt | Super-Natural Instructions | |---------------|------------------------------| | Minimal | 47.5 ± 0.6 | | Enumeration | 48.7 ± 0.3 | | Verbose | 46.9 ± 0.3 | Table 7: Performance of 11B T5-LM models trained on 2,000 examples, generated with each meta-prompt, on the Super-Natural Instructions validation set. | Seed Demonstrations | Super-Natural Instructions | |-----------------------|------------------------------| | 1 | 46.9 ± 0.3 | | 2 | 46.1 ± 0.3 | | 3 | 46.8 ± 0.4 | | 4 | 41.9 ± 1.0 | | 5 | 46.0 ± 0.2 | | Mix | 46.1 ± 0.3 | ## D.2 In-Context Examples Models such as GPT-3 are known to be sensitive to slight variations in prompt content, resulting in performance differences when provided with different demonstrations sampled from the same dataset (Liu et al., 2022b) and when permuting the in-context demonstrations (Kumar and Talukdar, 2021; Lu et al., 2022). To account for the effect of the provided demonstrations on the quality of the generated data, we experiment with each of our five demonstration sets separately.5 Table 8 shows that the data generation pipeline is largely robust to variations in the in-context demonstrations, with one outlier (seed 4). Inspecting the differences between these groups, we find that seed 4 led to less constrained instructions: 1,376 out of 2,000 examples do not have constraints, whereas that number is between 28 and 880 for all other sets. Indeed, in seed 4, only one out of three prompt demonstrations had constraints, while in other sets, at least two demonstrations had constraints. ## D.3 Constraints As mentioned in §2, each instruction-input demonstration is accompanied by an additional *constraints* field, which details the task's output space restrictions (e.g., "entailment", "contradiction" or "neutral" for NLI). We note that, in all demonstrations, the instruction itself lists the output space | Use "Constraints:" for | Super-Natural | | |--------------------------|-----------------------|--------------| | Input Gen | Output Gen | Instructions | | ✓ | ✓ | 46.9 ± 0.3 | | ✓ | 43.9 ± 0.7 41.7 ± 0.2 | | constraints. We hypothesize that adding the constraints field may emphasize these restrictions, ultimately steering the output generation model to produce outputs in the correct format. We verify our hypothesis by conducting two ablation experiments. First, we keep the constraints field when generating the instructions and inputs, but only use instructions and input arguments for the output generation step (i.e., without concatenating generated constraints). Second, we completely remove the constraints field from the data generation pipeline, leaving the instruction field as the only source of information for output space constraints. Table 9 shows that the constraints field has a positive effect both on the quality of the generated outputs and inputs. Removing constraints from the output generation step reduces performance by 3 points, and removing the field from the instructions-inputs generation phase decreases performance by an additional 2.2 points. ## D.4 Two-Step Process An alternative to our two-step pipeline is to generate instruction-input-output triplets in one pass. To test this approach, we provide the model with the same prompt used for the instruction-inputconstraints generation, only with an additional *output* field, added after the constraints field. As Table 9 shows, one-step generation obtains a score that is lower by 1.7 than the default two-step process. We suspect that this gap is a result of using stochastic decoding in the unified input-output generation phase, which is critical for obtaining diverse inputs. In contrast, when generating outputs in a separate phase, we can use deterministic decoding algorithms to maximize accuracy. | Data Generation Process | Super-Natural Instructions | |---------------------------------------------------|------------------------------| | Separate I/O Steps | 46.9 ± 0.3 | | Unified I/O Step | 45.2 ± 0.6 | | Table 10: Performance of 11B T5-LM models trained | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 9 ✓ A2. Did you discuss any potential risks of your work? 9 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 2 ✓ B1. Did you cite the creators of artifacts you used? 1, 2, 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We verified that all the data and code used is publicly open - we verified license details for each, and we provided citation to all relevant resources, where license details can also be found. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? As for existing datasets we used, we didn't discuss that, but other than the fact that we used published datasets that are already used by the research community - we also sampled examples and manually verified their content. As for data we collected, we did discuss that in section 9, and additionally provided data analysis in section 3. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** 4, 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4, Appendix A ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4, Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5, 6, Appendix D ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4, Appendix A, Appendix B ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
dua-etal-2023-adapt
To Adapt or to Annotate: Challenges and Interventions for Domain Adaptation in Open-Domain Question Answering
https://aclanthology.org/2023.acl-long.807
Recent advances in open-domain question answering (ODQA) have demonstrated impressive accuracy on general-purpose domains like Wikipedia. While some work has been investigating how well ODQA models perform when tested for out-of-domain (OOD) generalization, these studies have been conducted only under conservative shifts in data distribution and typically focus on a single component (i.e., retriever or reader) rather than an end-to-end system. This work proposes a more realistic end-to-end domain shift evaluation setting covering five diverse domains. We not only find that end-to-end models fail to generalize but that high retrieval scores often still yield poor answer prediction accuracy. To address these failures, we investigate several interventions, in the form of data augmentations, for improving model adaption and use our evaluation set to elucidate the relationship between the efficacy of an intervention scheme and the particular type of dataset shifts we consider. We propose a generalizability test that estimates the type of shift in a target dataset without training a model in the target domain and that the type of shift is predictive of which data augmentation schemes will be effective for domain adaption. Overall, we find that these interventions increase end-to-end performance by up to {\textasciitilde}24 points.
# To Adapt Or To Annotate: Challenges And Interventions For Domain Adaptation In Open-Domain Question Answering Dheeru Dua1∗ Emma Strubell2∗ Sameer Singh1 **Pat Verga**3 1University of California, Irvine 2 Carnegie Mellon University 3 Google DeepMind ## Abstract Recent advances in open-domain question answering (ODQA) have demonstrated impressive accuracy on general-purpose domains like Wikipedia. While some work has been investigating how well ODQA models perform when tested for out-of-domain (OOD) generalization, these studies have been conducted only under conservative shifts in data distribution and typically focus on a single component (i.e., retriever or reader) rather than an end-to-end system. This work proposes a more realistic endto-end domain shift evaluation setting covering five diverse domains. We not only find that endto-end models fail to generalize but that high retrieval scores often still yield poor answer prediction accuracy. To address these failures, we investigate several interventions, in the form of data augmentations, for improving model adaption and use our evaluation set to elucidate the relationship between the efficacy of an intervention scheme and the particular type of dataset shifts we consider. We propose a generalizability test that estimates the type of shift in a target dataset without training a model in the target domain and that the type of shift is predictive of which data augmentation schemes will be effective for domain adaption. Overall, we find that these interventions increase end-to-end performance by up to ∼24 points. ## 1 Introduction General-purpose open-domain question answering (ODQA; Chen et al. (2017); Lee et al. (2019); Izacard et al. (2022)) is an important task that automates reading and understanding a large corpus of documents to answer a given question succinctly. It is especially crucial in fields such as biomedicine, legal, news, etc., where more documents are added daily, outpacing the speed at which a user can process the information. Current state-of-the-art ODQA systems perform a two-stage pipeline process (Izacard et al., 2022): 1) Given a question ∗*This work was done while authors were at Google. ![0_image_0.png](0_image_0.png) Figure 1: Eff**ect of interventions on dataset shifts**. Top: Average end-to-end performance of source domain model is quite poor when applied to OOD datasets. Source model (trained on general-purpose domain) performance improves when adapted to unseen target domain with interventions. *Bottom:* Drill-down of performance into zero and few-shot data augmentations averaged over target datasets exhibiting these shifts shows covariate and concept shifts respond to zero and fewshot data augmentations. Target datasets with No shift do not improve much with any intervention while full shift benefits most from Few-shot. and document corpus, a *retriever* (Karpukhin et al., 2020; Izacard et al., 2021; Raffel et al., 2020) selects relevant passages and 2) a question answering model, also known as a *reader* (Izacard and Grave, 2021) answers the given question based on the retrieved passages. This decoupling allows for independent advances in domain adaptation of general-purpose retrievers (Thakur et al., 2021) and question-answering (Fisch et al., 2019) models. To enable practical application, an ODQA system should assist humans in keeping up with new knowledge without requiring annotations for every new domain or concept. For this, the system should be resilient to changes in the document, question, 14429 and answer distributions. Unfortunately, the current work in ODQA focus solely on Wikipedia corpus and do not study effectiveness of a model trained on such a general-purpose domain when applied to an unseen domain. To gauge how likely it is for a source domain model to succeed on an unseen domain we need to understand its ability to work out-of-the-box or even adapt to a new target domain, under varying types and degrees of dataset shifts. (Quinonero-Candela et al., 2008). In this work, we study the challenges and interventions for generalizing ODQA models to new domains via four contributions. First, to understand how well the state-of-the-art ODQA system (trained on the general-purpose domain) performs on a variety of target distributions, we define a collection of datasets for evaluating domain generalization. We aggregate a set of seven ODQA datasets spanning five different domains (§2). We observe that the source ODQA model does not generalize well (Fig.1, Top) on this collection (§4). Second, to automatically determine the type of data shift with only a small number of labeled target domain examples, we propose a *generalizability test*. This test assesses the type and degree of shift a new domain suffers with respect to the source domain (§3). Third, to understand the adaptability of the source model to a target domain, we analyze the performance of various intervention schemes, including existing zero-shot in-domain question generation and a novel few-shot language modelaided generation. These schemes create data akin to the target domain which is augmented with the source domain to learn an adapted version of the source model. Overall, we observe improvement in performance across all the target datasets (Fig. 1). The degree of improvement depends on the intervention scheme and underlying dataset shift (§5). Finally, we propose a simple and effective few-shot method that improves the performance by up to 24% in F1. This method prompts a large language model with 8 examples to generate examples for further adaptation. Putting it all together, we use the generalizability test to gauge the type and degree of dataset shift in a target dataset. Then, we empirically show that certain types of dataset shifts respond well to specific intervention schemes (§5, Fig. 1). This helps ascertain whether we can adapt a source model to unseen domain with minimal supervision. The resources used in this work are released at https: ## 2 Background And Evaluation Setup An ODQA model learns interactions among three random variables: Question (Q), answer (A) and context corpus (C). For a given q ∈ Q, first the retriever R returns a set of passages, Cq = R(q,C). These passages are then sent to a reader model M to obtain the final answer, ˆa ← M(a|q,Cq). Following prior work, we evaluate retriever performance with the Acc@k metric, which computes if the oracle answer is found in the top-k retrieved passages1. We set k=100 in all of our experiments. For reader performance, we compute token-level F1 between the oracle and predicted answer. ## 2.1 Datasets We test the generalization capabilities of a model trained on a *source domain* when applied to seven datasets in five very different *target domains*. Source Domain: For source domain we use documents from English Wikipedia and QA pairs for supervision from NaturalQuestions (NQ) (Kwiatkowski et al., 2019) and BoolQ (Clark et al., 2019). We treat this domain as our source as it is used for the vast majority of current work in ODQA (and many other areas of language research). In addition to the supervised training data from NQ and BoolQ, we also consider clozestyle questions derived from the QA pairs in NQ. For each QA pair, we retrieve a sentence from Wikipedia with the highest BM25 similarity score. We convert the retrieved sentence into a cloze-style question by replacing the answer string in the sentence with sentinel markers (Raffel et al., 2020) 2. Target Domains: For our target corpora, we re-purpose seven open-domain QA and/or reading comprehension datasets spanning five different domains (Stack Overflow, Reddit, Pubmed, Japanese Statute Law codes, CNN/DailyMail, and Wikipedia). The datasets Quasar-S (Dhingra et al., 2017), Quasar-T (Dhingra et al., 2017), SearchQA (Dunn et al., 2017) and BioASQ (Balikas et al., 2015) were introduced as ODQA datasets over Stackoverflow, Reddit, Wikipedia, and Pubmed corpus respectively. Additionally, we re-purpose reading comprehension datasets, NewsQA (Trischler et al., 2017) and CliCR (Šuster and Daelemans, 2018) as ODQA datasets, by retrieving a set of passages for each QA pair from Pubmed and CNN/Dailymail corpus. For COLIEE (Rabelo et al., 2022), we convert the original entailment questions into boolean questions and retrieve passages from legal code statutes provided with the task. We confirm that these reading comprehension datasets can be reasonably re-purposed for our ODQA setup by achieving a reasonable endto-end performance of ODQA models trained on gold target domain QA pairs with BM25 retrievals from the target corpus (UB-Ret, Fig. 3). ## 2.2 Models We compare four **retrievers**: (1) BM25 (Robertson and Spärck Jones, 1994) (sparse and unsupervised), (2) Contriever, semi-supervised with MSMARCO (Izacard et al., 2021), (3) Dense Passage Retriever (DPR) (Karpukhin et al., 2020), and (4) the state-of-the-art source domain model Spider (Ram et al., 2022). DPR and Spider are dense and supervised. As for **reader**, we use the stateof-art fusion-in-decoder (FiD) model (Izacard and Grave, 2021) that uses the top 100 documents to generate the final answer. ## 3 Generalizability Test There are many aspects that determine in what ways and to what extent one data distribution differs from another. It is often challenging to quantify the degree of *generalizability* or diverseness for a new domain without collecting enough samples to train a model in the new domain. To address this issue, we propose a method to assess the type and degree of diversity by utilizing only a few examples from the target domain as an evaluation set. ## 3.1 Types Of Dataset Shift Different types of dataset shifts (QuinoneroCandela et al., 2008) have been proposed in the literature but they are often studied in a classification setup. For our application, we consider *concept* and *covariate* shifts which are more amenable to our pipelined ODQA setup - with input as a joint distribution over question and contexts and output as a distribution over answers given question and contexts as input. No shift occurs when the input and output distributions match across the source and target domains. Concept shift (Widmer and Kubát, **2004)** occurs when the input distribution of the source and target domains match, i.e., ps(x) = pt(x) while the output distribution between source and target domain does not match, ps(y|x) , pt(y|x). Covariate shift (Zadrozny, **2004)** occurs when the source and target input distributions do not match, i.e. ps(x) , pt(x) while the output distributions match ps(y|x) = pt(y|x). Full shift occurs when both the source and target input and output distributions do not match. ## 3.2 Calculating Shift For Odqa We characterize the shift in ODQA as a two-step process. First, we compute the input distribution, i.e, the joint question and context distribution using un-normalized (energy) scores from a dense retriever (Karpukhin et al., 2020) that quantifies the compatibility between a given question, q and a context, c via R(q, c). Then, we normalize the scores from the retriever over a set of contexts. Ideally, the set of contexts should be the entire target domain document corpus, however, that can be prohibitively computationally expensive and also results in a high entropy distribution. Instead, we use a subset of contexts, C, from the entire corpus C. We ignore the prior over questions since it remains constant when calculating the context distribution for a specific question. Instead, we approximate the joint with conditional distribution over contexts given question. $$p(q,c)\propto{\frac{{\mathcal{R}}(q,c)}{\sum\limits_{c_{k}\in C}{\mathcal{R}}(q,c_{k})}}\qquad\qquad(1)$$ In the second step, we test whether the output distributions match by computing the likelihood of generating the oracle answer given a question, q, and the relevant contexts, Cq. In an ideal scenario, we can do this by performing global normalization (Goyal et al., 2019) over all possible answer spans in the corpus which is intractable. Instead, we use a sub-sample of answers, A, to compute the output distribution as shown below. $$p(a|q,C_{q})=\frac{\prod_{t}{\mathcal{M}}(a^{t}|a^{<t},q,C_{q})}{\sum\limits_{a_{k}\in{\mathcal{A}}}\prod_{t}{\mathcal{M}}(a_{k}^{t}|a_{k}^{<t},q,C_{q})}\quad\quad(2)$$ ![3_image_0.png](3_image_0.png) | Dataset | Retriever | Reader | Shift | |-----------------|------------------|----------|-----------| | (w t u − w g u) | (v r u − v t ) u | | | | BioASQ | 0.30 | 0.17 | Concept | | CliCR | -0.88 | 0.23 | Full | | Quasar-S | -0.66 | 0.07 | Covariate | | Quasar-T | 0.20 | 0.16 | Concept | | NewsQA | -0.19 | 0.18 | Full | | SearchQA | 0.61 | 0.00 | No | ## 3.3 Predicting Type Of Dataset Shift To compute the type of shift (§3.1), we need a model trained on the target domain (pt) which requires a large number of examples. However, our goal is to determine if a source model can be adapted to the target dataset with only a few examples for target evaluation. To do this, we conceptualize adapting or fine-tuning a pre-trained source model as a Bayesian framework. In this framework, the source model acts as a prior which when exposed to interventional data (for adapting) and target data (for fine-tuning), results in an adapted or fine-tuned posterior distribution. If the prior (source model) contains an informative signal with respect to the target dataset then we do not require much supervision to learn an effective posterior. However, if the prior is non-informative we need a lot of supervision to learn the posterior. Towards this end, we devise a *generalizability test*, where we use a small set of evaluation examples sampled from each target dataset to compute input and output distribution using the source domain model. Then, we compare these distribution with the a noninformative prior like uniform distribution and informative prior like the oracle distribution to gauge if the source model is closer to uniform or oracle distribution. This helps us assess the effectiveness of the source model towards the target dataset without having to train a model in the target domain. Input/**Retriever Distribution:** To determine if the input distribution contains informative signal with respect to target evaluation set, we need to compute the distance of the input distribution from uniform and oracle distribution. To do this, we follow Eq. 1 and compute the input distribution, with passages from across examples in the entire target evaluation set as the subset for normalizer computation. Then, for a each question, we compute the Wasserstein distance, w tu , (Kantorovich, 1960) between the input distribution and the uniform distribution and average these values over all the examples in the target evaluation set. Similarly, we also compute the distance between the gold or oracle distribution and the input distribution as w tg . If w tu > w g u, we conclude that the target distribution is far from the uniform distribution and closer to the gold distribution, indicating that the source model is compatible with the target distribution (Fig. 2). Output/**Reader Distribution:** In similar vein as input distribution, we need to compare the output distribution with corresponding uniform and oracle distribution over answers. To do this, we follow Eq. 2 and compute the output distribution, with set of answer spans from across all the examples in the target evaluation set for normalizer computation. Then, we compute the Wasserstein distance between the uniform and output distribution averaged over the target evaluation set as v tu . In an ideal scenario, we would compare the distance between and oracle and output distribution with v tu , similar to input distribution. However, empirically we find that output distribution is always closer to uniform than oracle, even when evaluated on source domain. We believe this is because of two reasons. First, the conditional answer generation model (M) is not trained with a contrastive loss like the retriever, resulting in a high entropy answer likelihood distribution. Second, the support set of answers used for normalization contains only grammatically correct answer spans making the likelihood scores attenuated. To address these issues, we use a reference answer conditional distribution to de-bias the likelihood scores with a threshold. To obtain this threshold, we consider the source distribution as a reference and compute the distance between output distribution evaluated on examples from source evaluation set and the uniform distribution as v ru . Since the reference based output distribution is in-domain, it should be far from the uniform distribution and closer to oracle distribution. As a result, if v ru − v tu is close to 0, we assess that the target is far from uniform and that source model is compatible with the target dataset. In Figure 2, we put this altogether as a decision tree to identify the type of dataset shift. We observe that SearchQA falls under the *No shift* category as it is close to the source domain, hence, we conjecture that it will observe minimal improvements under most data intervention schemes as the source model already captures the target distribution (§5). We also conjecture that datasets falling under *Concept shift* and *Covariate shift* are more amenable to zero-shot data interventions, while, *Full shift* would benefit more from few-shot or in-domain annotations from the target domain. We consider few shot augmentations as a proxy for annotating examples in the target domain because they are generated with supervision from target dataset. ## 4 How Well Do Models Generalize? We test the OOD performance of the source model on target datasets and analyze the failures. ## 4.1 Reader Generalization In Fig. 3, we test the end-to-end performance of three model variants: Source: a reader trained with source dataset and contexts retrieved by BM25, demonstrating zero- ## Shot Generalization Performance. Upperbound-Reader (UB-READ): a reader trained on the target dataset with contexts retrieved by BM25 - the overall strongest retriever. Upperbound-Retriever (UB-RET): a reader trained on the target dataset with gold contexts to approximate upper-bound performance. We observe large performance drops when evaluating the source model on target domains (Fig. 3), especially when the target corpus differs from Wikipedia, such as in Quasar-S (Stack Overflow) and CliCR (PubMed), even though the model requires similar reading capabilities to those needed in the source domain. Interestingly, even though BM25 retriever accuracy is relatively high on the target datasets (Fig. 4, ∼83% Acc@100 on QuasarS), that accuracy does not translate to strong reader performance (Fig. 3, ∼11% F1 on Quasar-S). To understand this performance gap, we manually sample 50 predictions from each target dataset where retrieved passages contain the oracle answer but the reader produced an incorrect prediction. We observe that in ∼**65% cases, the** Acc@100 metric yields a false positive, where the passage contains an exact string match of the correct answer, but the context does not actually answer the given question. In other cases, the reader is unable to understand the context. For example, for the question: What is the name of the office used by the president in the white house? and answer: oval, the retrieved passage: A tunnel was dug into the White House connecting the Oval Office to a location in the East Wing.... is credited (incorrectly) as context answering the question. ## 4.2 Retriever Generalization We compare the zero-shot generalization of four retrieval models in Fig. 4. Spider, which is the best performing model on the source domain, exhibits improvement on SearchQA (∼1%) (which is similar to source distribution), but shows large drops in performance when applied to the target datasets: ∼40% on NewsQA, ∼28% on Quasar-T and, Quasar-S. To understand the drop, we manually analyze 50 random incorrect predictions from Spider. We observe two major failure modes. First, we find that dense models are sensitive to changes in the length of contexts. When exposed to documents with heterogeneous lengths, models tend to over-retrieve shorter contexts. To ![5_image_0.png](5_image_0.png) quantify the sensitivity to changes in lengths on source domains itself, we pool passages from all target corpus into a combined index. We observe that the performance of Spider when exposed to this combined index reduces by ∼15% and restricting the minimum length of contexts to 50 words alleviates the problem and recovers the original performance. The second common failure mode occurs due to changes in distribution of entity types from source to target. For example, words like plant in Which is produced in plants of narora kakrapar tarapur refers to power plant in Wikipedia, while in case of PubMed it often refers to living organic matter (Sciavolino et al., 2021). Overall, BM25, being an unsupervised method, has the best performance across all domains. ## 5 **Interventions For Improving Adaptation** Domain Adaptation is shown to be a causal intervention (Jin et al., 2021) mechanism to effectively understand impact of an augmentation technique without much concern about spurious correlations. ## 5.1 Zero-Shot Adaptation Methods We perform a set of zero-shot data intervention methods, where we consider the effect of change in distribution of each random variable: Question, answer and context one at a time, while keeping the other two fixed. Varying context distribution To test the effect of change in context distribution, we pool passages ![5_image_1.png](5_image_1.png) | Augmentations | Retriever | Reader | |----------------------|-------------|----------| | Random | 45.35 | 33.50 | | Uniform | 50.02 | 39.07 | | Most frequent | 39.33 | 38.18 | | BioASQ train answers | 47.48 | 41.33 | from all corpora into a combined index. We observe that supervised models like Spider are sensitive to out-of-domain distractors, unlike BM25, especially when the target dataset uses same corpus as source (Wikipedia). For example, SearchQA suffers a performance drop of ∼15%. On average we see a performance improvement of ∼2% (w/o COLIEE) when the target index is changed to the combined index. BM25 still out-performs Spider on average by 19.1% with the combined index. However, we observe a drop in performance of the FiD reader of up to ∼5% in F1 for NewsQA with the combined index. More details are in the appendix (Figs. 5 and 6.) Varying answer distribution Many works (Gururangan et al., 2018; Dua et al., 2020; Jiang and Bansal, 2019) have shown that bias in the answer prior distribution can introduce spurious correlations in model learning. This effectively improves the model performance at the cost pf OOD generation. To test whether we can improve the performance of adapted source model by varying the answer distribution, we experiment with a variety of answer distributions over plausible set of answer spans. To obtain the set of answer spans, we extract and annotate coarse-grained entity types from the | Dataset | Retriever | Reader | | | | | |-----------------------------------------|-------------|----------|------|------|------|------| | Source ClozeQA QGen Source ClozeQA QGen | | | | | | | | BioASQ | 50.41 | 48.0 | 45.4 | 45.3 | 49.4 | 46.4 | | CliCR | 23.8 | 24.9 | 23.9 | 6.12 | 7.34 | 10.5 | | Quasar-S | 50.3 | 66.8 | 68.2 | 10.2 | 21.7 | 17.4 | | Quasar-T | 54.7 | 53.9 | 55.5 | 34.9 | 41.9 | 44.7 | | NewsQA | 12.5 | 18.7 | 15.2 | 18.5 | 21.2 | 12.7 | | SearchQA | 63.0 | 52.9 | 54.7 | 34.6 | 38.8 | 37.2 | | COLIEE | 61.4 | 60.5 | 57.8 | 46.7 | 54.1 | 62.3 | target corpus using spaCy3. We use this coarsegrained entity type information as a set of classes from which to choose 50k entities with four different sampling strategies: Most frequent, uniform, randomly sampled based on entity type categories, and sampling in proportion to entity type distribution of answers in the target training set. The source model has reasonable end-to-end performance on BioASQ, even with passages from the source corpus (Wikipedia), suggesting that it contains sufficient information for answering many BioASQ questions. Consequently, we select BioASQ for these controlled experiments (Appendix Fig. 6). This allows us to use the Wikipedia corpus alone for retrieval, which makes it easier to fix the passage distribution. In Table 2, we show that uniform sampling boosts retriever performance compared to random sampling, allowing the model to learn from all types of answers and generalize better to unseen answer distributions. On the other hand, the best reader model performance is when we know the correct answer distribution of the target dataset up front, showing that the answer priors influence reader performance. However, in a zero-shot setup, we do not have access to this distribution, so we adopt the second-best technique, uniform sampling from across the entity type categories, in the following experiments. Varying question distribution We vary the question distribution by augmenting the source domain with QA pairs generated using two different methods. Our first approach (QGen) uses a question generation model (Subramanian et al., 2017) trained on the source domain to generate a question given a passage and an answer. This question generation model is applied to a new target passage and a plausible answer span from the passage (Shakeri 3https://spacy.io/ et al., 2020; Krishna and Iyyer, 2019; Song et al., 2018; Klein and Nabi, 2019). The second approach (Cloze QA), which has been less explored previously, converts a sentence in the target corpus to a fill-in-the-blank style cloze question (Taylor, 1953) by masking a plausible answer span (entity mention) in the sentence. We sample answer spans uniformly based on an entity type distribution from the target corpus and then query our combined index to create a dataset containing cloze style questions aligned with relevant documents. We use these same sampled answers to generate standard QGen QA pairs as well. We combine these data interventions with our initial source domain data to train a DPR retriever and a FiD reader (Table 3). We observe similar average performance across both intervention types in retriever and reader models. However, cloze QA pairs are computationally much more efficient to generate as they do not require additional question generation models. Discussion on generalizability test In §3, we hypothesized that datasets with less severe shift (Quasar-S, Quasar-T, and BioASQ) would show more performance improvements with zero-shot adaptation as compared to datasets with severe shift (CliCR and NewsQA). Indeed, we observe an avg. improvement of about 8.5% F1 on datasets having Concept and Covariate shift while only 3.5% F1 on datasets with Full shift in Table 3. Moreover, in Fig. 1, we see that target datasets with *No shift*, do not show much improvement with any intervention as the source model already captures the distribution. Datasets with *Full shift* need few-shot examples for better adaptation while datasets with Concept and *Covariate* shift are able to adapt with zero-shot data interventions. ## 5.2 Few-Shot Generalizability And Adapatability Zero-shot adaptation does not work well when the target distribution is far from the source. For these cases, we experiment with few-shot adaptation. Few-shot data generation Zero-shot interventions like QGen are trained on the source and do not produce generations that are fully compatible with the target domain and thereby do not provide much useful signal. An alternative approach would be to train a question generation model with a few examples from the target domain. However, it is difficult to adapt or fine-tune a question genera- | Dataset | Retriever | Reader | Closed Book | | | |-----------------------------------|-------------|----------|---------------|------|------| | Baseline DataGen Baseline DataGen | (F1) | | | | | | BioASQ | 50.4 | 51.3 | 45.3 | 50.6 | 32.0 | | CliCR | 23.8 | 29.0 | 6.12 | 19.4 | 10.8 | | Quasar-S | 50.3 | 71.9 | 10.2 | 34.2 | 23.7 | | Quasar-T | 54.7 | 55.4 | 34.9 | 45.8 | 55.3 | | NewsQA | 12.5 | 22.7 | 18.5 | 23.3 | 8.67 | | SearchQA | 63.0 | 63.3 | 34.6 | 37.6 | 61.5 | | COLIEE | 73.3 | 82.2 | 46.8 | 61.1 | 53.0 | tion and answering model (for validating QA pair correctness) with very few examples. Table 4: Both Closed Book and DataGen use eight fewshot examples from the target domain. Closed Book LLM contains 540B params while the Retriever and Reader contain 110M and 770M params respectively. Closed-book performance for NQ is 36.71. To capture target distribution without a lot of supervision, we propose a few-shot technique (DataGen) that prompts a large language model (LLM; Chowdhery et al. (2022)) to generate a sentence given a passage. We use eight seed examples from the target domain to generate additional training data to help bootstrap adaptation in the target domain. We observe that it is easier for large language models to condition on a single variable (context) and compress (Goyal et al., 2022) multiple facts from the passage into a single sentence, as compared to conditioning on a context and answer span together. Moreover, in section 5.1 we observed that augmentation with cloze-style QA pairs yields similar performance to using question-formatted QA pairs, offering evidence that the precise format is not as important as the content itself. We prompt the model in the following format: After reading the article, *«context»* the doctor said *«sentence»* for PubMed articles. For other target corpora we replace doctor with engineer, journalist, and poster for Stack Overflow, DailyMail, and Reddit respectively. To filter out invalid sentences, we remove any generation that: 1) includes a number, 2) does not repeat part of the passage verbatim, and 3) has less than 75% word set overlap with the passage (after removing stopwords). To gauge the precision of our generations, we sampled 20 generated sentences for each dataset and found that they are correct more than 70% of the time. To test retriever performance, we train a DPR model with source domain data and ∼8k examples containing pairs of original passage and generated sentence for each target dataset. We observe performance improvements of ∼18% on NewsQA, ∼13% on CliCR, and ∼24% on Quasar-S (Table 4). Moreover, LLMs contain substantial factual knowledge in their parameters and we observe that they do particularly well in a closed-book setting on datasets with trivia-based factual questions, like SearchQA and Quasar-T, but do not perform well in other cases. Following our conjecture in §3, datasets with *Full shift* on average show an improvement of 12.1% with few-shot interventions, compared to 3.5% with zero-shot, which is also evident in Fig. 1. We show qualitative examples in Appendix (Fig. 8). ## 6 Related Work Domain generalization in readers The most popular work in generalization in reading comprehension was introduced as part of the MRQA (Fisch et al., 2019) challenge, which focuses on transfer learning from multiple source datasets to unseen target datasets (Gottumukkala et al., 2020). This multi-task learning setup requires the model to perform complex reasoning at test time that may be unseen at training. However, in this work, we focus on the generalization capabilities of an end-to-end ODQA setup that is able to understand passages in the new domain and not the ability to perform unseen reasoning. Domain generalization in retrievers A recent line of work that tests domain generalization of retrievers (Petroni et al., 2021; Ram et al., 2022; Izacard et al., 2022) focuses on conservative changes to the source domain, for instance, testing generalization of a model trained on Natural Questions applied to WebQuestions (Berant et al., 2013) or TriviaQA (Joshi et al., 2017), all of which use the same Wikipedia corpus. BEIR is a recent retrieval benchmark, (Thakur et al., 2021) tests the generalizasbility of only the retriever in isolation and not end-to-end ODQA performance, which is a brittle metric. Domain adaptation work in retrievers (Dai et al., 2022) generate passages using few shots but do not require the answer to be correct. Ma et al. (2021) performs a zero-shot adaptation using noisy labels for retrievers. Siriwardhana et al. (2022) utilizes examples from the target domain in a transfer learning setup while we work in a zero to a few shot setting. Domain generalization in other tasks Incidental supervision signals in (He et al., 2021) determine which dataset has a useful signal for a target classification task. Similar to (Fisch et al., 2019), in machine translation, various works (Dua et al., 2022; Fedus et al., 2022) learn to balance positive and negative feature transfer from multiple source domains to a target domain. ## 7 Conclusion We investigate failures of ODQA models under non-conservative dataset shift. We also propose a way to test compatibility of source model with new domains without much supervision. We establish how different dataset shift behave under a variety of intervention schemes. We hope future research will adopt these target datasets for evaluation. ## 8 Limitations This work focuses on English QA datasets only. Similar techniques should apply in other languages as well; however, we did not evaluate them. The augmentations generated are difficult to validate for yes/no questions for the few-shot method. Moreover, it can be challenging to generate these augmentations if access to large LM is unavailable. However, under those scenarios, data in the target domain should be annotated, which ideally would perform better than the few-shot setting. Our models also suffer from similar problems as LLMs, like hallucinations, misinformation, etc. ## 9 Ethics Statement This works focuses on testing the generalization and adaptability of general-purpose models to various domains. It uses existing training data and conventional methods of testing model performance. This work does not deal with any social impacts or biases in natural language processing systems. ## Acknowledgements We would like to thank William Cohen, Haitian Sun, Tom Kwiatkowski and the anonymous reviewers for their feedback. This work was partly supported by the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research. ## References Georgios Balikas, Anastasia Krithara, Ioannis Partalas, and George Paliouras. 2015. Bioasq: A challenge on large-scale biomedical semantic indexing and question answering. In *International Workshop on Multimodal Retrieval in the Medical Domain*, pages 26–39. Springer. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics. Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In *Proceedings of the 2012 Joint* Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995–1005, Jeju Island, Korea. Association for Computational Linguistics. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *ArXiv preprint*, abs/2204.02311. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics. Zhuyun Dai, Vincent Y Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B Hall, and Ming-Wei Chang. 2022. Promptagator: Few-shot dense retrieval from 8 examples. ArXiv preprint, abs/2209.11755. Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for question answering by search and reading. *ArXiv preprint*, abs/1707.03904. Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, and Angela Fan. 2022. Tricks for training sparse translation models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3340–3345, Seattle, United States. Association for Computational Linguistics. Dheeru Dua, Sameer Singh, and Matt Gardner. 2020. Benefits of intermediate annotations in reading comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5627–5634, Online. Association for Computational Linguistics. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. *ArXiv preprint*, abs/1704.05179. William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1):5232– 5270. Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In *Proceedings of the 2nd Workshop* on Machine Reading for Question Answering, pages 1–13, Hong Kong, China. Association for Computational Linguistics. Ananth Gottumukkala, Dheeru Dua, Sameer Singh, and Matt Gardner. 2020. Dynamic sampling strategies for multi-task reading comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 920–924, Online. Association for Computational Linguistics. Kartik Goyal, Chris Dyer, and Taylor Berg-Kirkpatrick. 2019. An empirical investigation of global and local normalization for recurrent neural sequence models using a continuous relaxation to beam search. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1724–1733, Minneapolis, Minnesota. Association for Computational Linguistics. Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News summarization and evaluation in the era of gpt-3. *ArXiv preprint*, abs/2209.12356. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. Hangfeng He, Mingyuan Zhang, Qiang Ning, and Dan Roth. 2021. Foreseeing the benefits of incidental supervision. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 1782–1800, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning. ArXiv preprint, abs/2112.09118. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In *Proceedings of the 16th* Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. *ArXiv preprint*, abs/2208.03299. Yichen Jiang and Mohit Bansal. 2019. Avoiding reasoning shortcuts: Adversarial evaluation, training, and model development for multi-hop QA. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2726–2736, Florence, Italy. Association for Computational Linguistics. Zhijing Jin, Julius von Kügelgen, Jingwei Ni, Tejas Vaidhya, Ayush Kaushal, Mrinmaya Sachan, and Bernhard Schoelkopf. 2021. Causal direction of data collection matters: Implications of causal and anticausal learning for NLP. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 9499–9513, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics. Leonid V Kantorovich. 1960. Mathematical methods of organizing and planning production. Management science, 6(4):366–422. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Tassilo Klein and Moin Nabi. 2019. Learning to answer by learning to ask: Getting the best of gpt-2 and bert worlds. *ArXiv preprint*, abs/1911.02365. Kalpesh Krishna and Mohit Iyyer. 2019. Generating question-answer hierarchies. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 2321–2334, Florence, Italy. Association for Computational Linguistics. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics. Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. 2021. Zero-shot neural passage retrieval via domain-targeted synthetic question generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1075–1088, Online. Association for Computational Linguistics. Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544, Online. Association for Computational Linguistics. Joaquin Quinonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. 2008. *Dataset* shift in machine learning. Mit Press. Juliano Rabelo, Randy Goebel, Mi-Young Kim, Yoshinobu Kano, Masaharu Yoshioka, and Ken Satoh. 2022. Overview and discussion of the competition on legal information extraction/entailment (coliee) 2021. The Review of Socionetwork Strategies, 16(1):111– 133. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, and Amir Globerson. 2022. Learning to retrieve passages without supervision. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2687–2700, Seattle, United States. Association for Computational Linguistics. Stephen E Robertson and Karen Spärck Jones. 1994. Simple, proven approaches to text retrieval. Technical report, University of Cambridge, Computer Laboratory. Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, and Danqi Chen. 2021. Simple entity-centric questions challenge dense retrievers. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6138–6148, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Siamak Shakeri, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Feng Nan, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. End-to-end synthetic data generation for domain adaptation of question answering systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5445–5460, Online. Association for Computational Linguistics. Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Tharindu Kaluarachchi, Rajib Rana, and Suranga Nanayakkara. 2022. Improving the domain adaptation of retrieval augmented generation (rag) models for open domain question answering. *ArXiv* preprint, abs/2210.02627. Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, and Daniel Gildea. 2018. Leveraging context information for natural question generation. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 569–574, New Orleans, Louisiana. Association for Computational Linguistics. Sandeep Subramanian, Tong Wang, Xingdi Yuan, Saizheng Zhang, Yoshua Bengio, and Adam Trischler. 2017. Neural models for key phrase detection and question generation. *ArXiv preprint*, abs/1706.04560. Simon Šuster and Walter Daelemans. 2018. CliCR: a dataset of clinical case reports for machine reading comprehension. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 1551–1563, New Orleans, Louisiana. Association for Computational Linguistics. Wilson L Taylor. 1953. "cloze procedure": A new tool for measuring readability. *Journalism quarterly*, 30(4):415–433. Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. Beir: A heterogenous benchmark for zero-shot evaluation of information retrieval models. *ArXiv preprint*, abs/2104.08663. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In *Proceedings of the 2nd Workshop* on Representation Learning for NLP, pages 191–200, Vancouver, Canada. Association for Computational Linguistics. Gerhard Widmer and Miroslav Kubát. 2004. Learning in the presence of concept drift and hidden contexts. Machine Learning, 23:69–101. Bianca Zadrozny. 2004. Learning and evaluating classifiers under sample selection bias. In Machine Learning, Proceedings of the Twenty-first International Conference (ICML 2004), Banff, Alberta, Canada, July 4-8, 2004, volume 69 of *ACM International Conference Proceeding Series*. ACM. ## A Experimental Details We used JAX on TPUs for training reader models and PyTorch on GPU for training retriever models. We used open-source github implementations for DPR 4, Contriever 5and Spider 6. For retrieving top-100 passages for reader input, we used ScaNN7library. We use T5-base model for reader and BERT-base for retriever. We fine-tune the retriever and reader with learning rate 1e-5 and 5e-5 respectively. ## B How Are Evaluation Sets Curated? We consider validation sets from each of the target dataset, BioASQ, CliCR, Quasar-S, Quasar-T, NewsQA, SearchQA, COLIEE as part of our evaluation set. SearchQA, Quasar-S and Quasar-T were already published as ODQA datasets so we used them as it is while we had re-purpose some of the other datasets that were not originally ODQA dataset by processing them as described below. COLIEE: The COLIEE Shared Task 48 provides a list of Japanese legal codes in English language. To convert these legal codes from a flat list into paragraphs, first we split them into specific article sections with regex string "Article [0-9]+ ". We further split each article into passages containing a maximum of 256 words. NewsQA: We created an index on CNN/Dailymail documents by splitting them into passages of 256 words and pooled them together to create a corpus. CliCR and BioASQ: We used PubMed corpus published as part of BEIR (Thakur et al., 2021) benchmark. We split the pubmed abstracts in this corpus into passages of size 256 words. ## C Varying Context Distribution As described in §5.1, we test retriever (Fig. 5) and reader performance (Fig. 6) when exposed to different set of passage. Fig. 6 shows reader performance with passages retrieved with BM25 on source (i.e. wikipedia), target (i.e. respective target corpus) and combined (i.e. all corpora pooled together). Fig. 5 compared performance of Spider and BM25 4https://github.com/facebookresearch/DPR.git 5https://github.com/facebookresearch/contriever.git 6https://github.com/oriram/spider 7https://ai.googleblog.com/2020/07/announcing-scannefficient-vector.html 8https://sites.ualberta.ca/ rabelo/COLIEE2022/ with Target (i.e. dataset specific target corpus) and Comb (i.e. all corpora pooled together) ![12_image_0.png](12_image_0.png) ## D Varying Answer Distribution And Pre-Training Corpus Following §5.1 we try to understand the impact of pre-training and fine-tuning corpus on answer distribution. We do this by comparing the performance of the FiD reader initialized from T5 pretrained on common-crawl dataset(C4) compared to one that was pre-trained on PubMed articles (Table 5). After pre-training, both models are then fine-tuned on our source domain data. In this case, ![13_image_0.png](13_image_0.png) we observe that fine-tuning on a domain that differs from that used in pre-training results in deterioration of model performance. | Augmentations | C4 | Pubmed | |----------------------|-------|----------| | Random | 33.50 | 33.51 | | Uniform | 39.07 | 35.97 | | Most frequent | 38.18 | 34.90 | | BioASQ train answers | 41.33 | 36.71 | ![13_image_2.png](13_image_2.png) ## E Degree Of Domain Shift In Table 1, we showed only differences that governed which side of decision tree the shift types were categorized into, while in Table 6 we show all the raw distance values. ## F Statistical Significance The number of examples in all datasets except COLIEE are in the order of thousands, making the performance improvements significant. In the case of COLIEE, which has a boolean output space (i.e. answers are yes/no), we performed a binomial test to test the significance of few-shot reader performance in Table 4. The number of samples n = 116 (number of test examples), p0=0.468 and pt=0.616. We will reject the null hypothesis that baseline and few-shot distribution are equivalent, when P(X >= pt ∗ n) <= 0.05, where X is drawn ![13_image_1.png](13_image_1.png) from a binomial distribution, i.e., X ∼ B(n, p0) (Berg-Kirkpatrick et al., 2012) and we can compute the L.H.S to be, P(X >= 0.616 ∗ 116) = 0.00006 making it significant. | Dataset, | #ques, | Passage | Question-Answer | |----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------| | Corpus | #docs | | | | BioASQ, | 5k, | Parkinson's disease (PD) is one of the most common | | | Pubmed | 30M | degenerative disorders of the central nervous system that produces motor and non-motor symptoms. The majority of cases are idiopathic and characterized by the presence of Lewy bodies. | Q: Which disease of the central nervous system is characterized by the presence of Lewy bodies? A: Parkinson's disease | | CliCR, | 90k, | Detailed history and examination ruled out the above | | | Pubmed | 30M | causes except the exposure to high altitude as a cause for koilonychia in our patient. Exposure to high altitude is a known aetiology for koilonychias, also described by some authors as "Ladakhi koilonychia". | Q: __ is a known cause of koilonychia, described by some as Ladakhi koilonychia. A: High altitude exposure Q: scip - an software package for solving mixed integer __ problems A: linearprogramming | | Quasar-T, | 30k, | Because of widespread immunization , tetanus is now | Q: Lockjaw is another name for | | Reddit | 2M | rare. Another name for tetanus is lockjaw. | which disease A: tetanus | | NewsQA, | 70k, | Former boxing champion Vernon Forrest, 38, was shot | Q: Where was Forrest killed ? A: | | Dailymail | 0.5M | and killed in southwest Atlanta, Georgia, on July 25. | in southwest Atlanta , Georgia | | SearchQA, | 70k, | The Dangerous Summer and The Garden of Eden. Written in 1959 while Hemingway was in Spain on commission for Life... | Q: While he was in Spain in | | WIkipedia | 20M | 1959, he wrote "The Dangerous Summer", a story about rival bullfighters A: Hemingway | | | COLIEE, Japanese Legal Codes Quasar-S, | 30k, | I have a mixed integer quadratic program MIQP which | | | Stackoverflow | 1.5M | I would like to solve using SCIP. The program is in the form such that on fixing the integer variables the problem turns out to be a linear program. | | | 886, 1k | A manifestation of intention based on fraud or duress is voidable. If a third party commits a fraud inducing a first party to make a manifestation of intention to a second party, that manifestation of intention is voidable only if the second party knew or could have known that fact. The rescission of a manifestation of intention induced by fraud under the provisions of the preceding two paragraphs may not be duly asserted against a third party in good faith acting without negligence. | Q: Is it true: | A person who | | made a manifestation of intention which was induced by duress emanated from a third party may rescind such manifestation of intention on the basis of duress, only if the other party knew or was negligent of such fact. A: No | | | | Figure 7: Examples from datasets with context and question-answer pairs from different domains. | Dataset, | Passage | Generated Sentence | | | | |---------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|-------|-----|-----| | Corpus BioASQ, | Herceptin is widely used in treating Her2-overexpressing breast cancer. However, | | | | | | Pubmed | the application of Herceptin in prostate cancer is still controversial This implies that targeting Her2 by both radio- and immunotherapy might be a potential strategy for treating patients with androgen-independent prostate cancer... | Herceptin is a breast cancer drug that has been used in treating prostate cancer. | | | | | CliCR, | An infant was admitted with symptoms of diarrhoea and vomiting. After initial | | | | | | Pubmed | improvement she unexpectedly died. Postmortem confirmed a diagnosis of cytomegalovirus (CMV) enterocolitis. The authors report this case and review other published cases of immunocompetent infants who presented with this infection. Clinicians should consider stool CMV PCR test or referral for endoscopy and biopsy in young babies who present with profuse and prolonged episodes of diarrhoea. | Immunocompetent infants can present with CMV enterocolitis. | | | | | Quasar-S, | I've recently found scala-bindgen from a Gitter room on Scala Native. Seems like | | | | | | Stackoverflow | at the present point in time they are developing a tool for generating Scala bindings for C header-files. Are there plans for generating Scala bindings for Objective-C and C++ too... | scala-bindgen is a tool that generates scala bindings for C header files. | | | | | Quasar-T, | Interview With Gary James' Interview With Marshall Lytle of Bill Haley's Comets | | | | | | Reddit | It can be safely said that "Rock Around The Clock" was the song by the group Bill Haley And His Comets that started the Rock 'n Roll movement. Still performing today, he spoke about those early days of Rock 'n Roll and his appreciation for what it meant to him. | Bill | Haley | and | his | | comets made rock and roll music | | | | | | | NewsQA, CNN/ Dailymail | The Kardashians are already a staple on E! Network . But they've chosen the month of November to assert their dominance on the book world. Kourtney, Kim, and Khloe's first novel," Dollhouse ," hits shelves today . "Dollhouse," the first fiction endeavor from the Kardashians, follows sisters Kamille, Kassidy, ... | The Kardashians released a new book called 'Dollhouse'. | | | | | SearchQA, | Charles Henry Dow was an American journalist who co-founded Dow Jones and | | | | | | Wikipedia | Company with Edward Jones and Charles Bergstresser. Dow also founded The Wall Street Journal, which has become one of the most respected financial publications in the world... In 1877, he published a History of Steam Navigation between New York and... | Charles Henry Dow, an American journalist, founded The Wall Street Journal in 1882. | | | | Figure 8: Examples of data generated from few-shot prompting. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 ✗ A2. Did you discuss any potential risks of your work? This is a study of existing work, we do not use a lot of compute resources for running these experiments so this point is not applicable. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1(introduction) ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Not applicable. Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4 And 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 and 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4, 5 and Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhang-etal-2023-survey-efficient
A Survey for Efficient Open Domain Question Answering
https://aclanthology.org/2023.acl-long.808
Open domain question answering (ODQA) is a longstanding task aimed at answering factual questions from a large knowledge corpus without any explicit evidence in natural language processing (NLP). Recent works have predominantly focused on improving the answering accuracy and have achieved promising progress. However, higher accuracy often requires more memory consumption and inference latency, which might not necessarily be efficient enough for direct deployment in the real world. Thus, a trade-off between accuracy, memory consumption and processing speed is pursued. In this paper, we will survey recent advancements in the efficiency of ODQA models and conclude core techniques for achieving efficiency. Additionally, we will provide a quantitative analysis of memory cost, query speed, accuracy, and overall performance comparison. Our goal is to keep scholars informed of the latest advancements and open challenges in ODQA efficiency research and contribute to the further development of ODQA efficiency.
# A Survey For Efficient Open Domain Question Answering Qin Zhang1, Shangsi Chen1, Dongkuan Xu2**, Qingqing Cao**3, Xiaojun Chen1, Trevor Cohn4**, Meng Fang**5∗ 1Shenzhen University; 2North Carolina State University; 3University of Washington; 4The University of Melbourne; 5University of Liverpool {qinzhang@, chenshangsi2021@email., xjchen@}szu.edu.cn; dxu27@ncsu.edu; qicao@cs.washington.edu; tcohn@unimelb.edu.au; Meng.Fang@liverpool.ac.uk ## Abstract Open domain question answering (ODQA) is a longstanding task aimed at answering factual questions from a large knowledge corpus without any explicit evidence in natural language processing (NLP). Recent works have predominantly focused on improving the answering accuracy and have achieved promising progress. However, higher accuracy often requires more memory consumption and inference latency, which might not necessarily be efficient enough for direct deployment in the real world. Thus, a trade-off between accuracy, memory consumption and processing speed is pursued. In this paper, we will survey recent advancements in the efficiency of ODQA models and conclude core techniques for achieving efficiency. Additionally, we will provide a quantitative analysis of memory cost, query speed, accuracy, and overall performance comparison. Our goal is to keep scholars informed of the latest advancements and open challenges in ODQA efficiency research and contribute to the further development of ODQA efficiency. ## 1 Introduction Open domain question answering (Voorhees and Tice, 2000) is a longstanding task in natural language processing that can answer factoid questions, from a large knowledge corpus such as Wikipedia (Wikipedia, 2004) or BookCorpus (Zhu et al., 2015). Traditional QA models rely on explicit evidence texts to locate the answer (Cao et al., 2019; Khashabi et al., 2020; Huang et al., 2021), while ODQA models require the processing of large amounts of knowledge quickly to answer input questions. And compared to search engines, ODQA models aim to enhance user-friendliness by presenting the final answer to a question directly, rather than returning a list of relevant snippets or hyperlinks (Zhu et al., 2021). Recently, ODQA systems have attracted considerable research attention and a classic framework ![0_image_0.png](0_image_0.png) search reader **23 June** 1912 evidences (Ⅰ) What is Alan Turing's birthday? (a) generator 23 June 1912 (c) 23 June 1912 (b) of the ODQA system is implemented by encompassing an information retriever (IR) and a reader, i.e., *Retriever-Reader* (Chen et al., 2017). The task of IR is to retrieve evidence pieces from a large knowledge corpus. Popularly used IR can be TFIDF (Chen et al., 2017), BM25 (Mao et al., 2021) and DPR (Karpukhin et al., 2020), etc. The target of the reader is understanding and reasoning the retrieved evidence to yield the answer. It is often achieved by transformer-based language models, such as BERT (Devlin et al., 2019), ALBERT (Lan et al., 2019) or generator T5 (Raffel et al., 2020), BART (Lewis et al., 2020a), GPT (Brown et al., 2020), etc. This two-module system enjoys a broad range of applications (Zhu et al., 2021). However, most general-purpose ODQA models are computationally intensive, slow to infer, and expensive to train. One of the reasons is the huge index/document size. For example, Karpukhin et al. (2020) processed an English Wikipedia corpus including 26 million articles and built a dense index with a size of 65GB. Besides, the majority of general-purpose ODQA models are developed with large pre-trained language models, which often contain millions of parameters. For instance, the state- ∗*Corresponding author. of-the-art ODQA models on the Natural Question dataset, R2-D2 (Fajcik et al., 2021) and UnitedQA (Cheng et al., 2021) have 1.29 billion and 2.09 billion model parameters, respectively. Storing the corpus index and pre-trained language models is memory-intensive (Xia et al., 2022) while evidence retrieving and reading are memory and time consuming. These make general-purpose ODQA models a big challenge for real-time use (Seo et al., 2019), such as on a mobile phone. Towards this challenge, there are various tradeoffs in building ODQA models that meet real-world application needs, such as the trade-offs among accuracy, memory consumption, inference speed, and so on (Izacard et al., 2020; Wu et al., 2020; Mao et al., 2021). NeurIPS 2020 organized an EfficientQA Competition (Min et al., 2021), aiming to build ODQA systems that can predict correct answers while also satisfying strict on-disk memory budgets. For this purpose, a line of work focused on building more efficient protocols. Besides Retriever-Reader, Retriever-Only (Lee et al., 2021b), Generator-Only (Roberts et al., 2020) are newly proposed protocols. See Fig. 1 for more details. Various efficiency techniques are also developed, such as index downsizing (Yamada et al., 2021; Lewis et al., 2022), fast searching (Lewis et al., 2021; Malkov and Yashunin, 2020), evidence retrieval or reading omitting (Roberts et al., 2020; Seonwoo et al., 2022; Lee et al., 2021b) and model size reducing (Yang and Seo, 2021; Singh et al., 2021) etc. In this survey, we provide a comprehensive introduction to the broad range of methods that aim to improve efficiency with a focus on the ODQA task. In Section 2, we overview general-purpose ODQA models and discuss their strategies and limitations in terms of efficiency. In Section 3, we first walk through the key ODQA models which concentrate on efficiency, then conclude the core techniques used. Section 4 gives a quantitative analysis with an overall comparison of different frameworks and three specific aspects, i.e., memory cost, processing speed, and accuracy. Finally, in Section 5, we discuss the challenges reminded followed by the conclusion given in Section 6. 1 ## 2 Overview Of Odqa Models In this section, we summarize ODQA models into three typical frameworks (see in Fig. 1): Retriever-1We accompany the survey with a repository that lists the resources: https://github.com/hyintell/EfficientODQA. Reader, Retriever-Only, and Generator-Only. As described in Section 1, Retriever-Reader models include two modules: a retriever and a reader. For retrievers, traditional non-neural methods, such as TF-IDF (Chen et al., 2017) and BM25 (Mao et al., 2021), use sparse representations to measure term matching between questions and passages. However, these approaches can only capture lexical information, limiting capabilities in matching questions and passages (Qu et al., 2021). Differently, recent neural network-based dual-encoder retrievers (Karpukhin et al., 2020) encode questions and documents into a latent dense vector space where text semantics beyond terms can be adequately learned and measured. For readers, considering the way of obtaining answers, there exist two categories: extractive readers and generative readers. Extractive readers normally answer the question using a span from the context and the goal is to classify the start and end positions of the answer in the retrieved evidence (Karpukhin et al., 2020; Qu et al., 2021). And generative readers are not restricted to the input context and freely generate answers by autoregressively predicting tokens (Raffel et al., 2020; Izacard and Grave, 2021). Distinctively, RetrieverOnly models only use one retriever to extract answers directly from a phrase or QA-pair knowledge base. And Generator-Only models directly generate answers with the question, not involving evidence retrieval and reading (Lee et al., 2021c; Lewis et al., 2021). Retriever-Reader ODQA methods generally obtain good performance. However, due to dense encoding for corpus passages and longer evidence for answer reasoning, they normally suffer from a larger index size and a slower processing speed. In addition, the dual-encoder retrievers like DPR, encoding for questions and documents independently, ignored interaction between them and limited the retrieval performance (Khattab et al., 2021; Lu et al., 2022). In Retriever-Only ODQA models, the omission of the reading/generating step greatly improves the speed of answering questions. But there are a few limitations for Retriever-Only ODQA models: (1) lower performance on average compared to Retriever-Reader ODQA models since less information is considered during answer inference; (2) high storage requirement in terms of indexes for fine-grained retrieval units such as phrases or QA pairs. For Generator-Only ODQA models, skipping evidence retrieving and reading makes low memory costs and short processing time than two-stage systems. However, the performances of Generator-Only ODQA methods have much room for improvement. Additionally, real-world knowledge is updated routinely, and the huge training cost of the generative language models makes it laborious and impractical to keep them always up-to-date or retrain them frequently. Billions of parameters also make them storage-unfriendly and hard to apply on resource-constrained devices (Roberts et al., 2020). A diagram is provided with the typology of ODQA methods in Fig. 4 in the Appendix, while their main concerns are also indicated. ## 3 **Efficient Odqa Models And Techniques** In this section, we first walk through the key ODQA models which concentrate on efficiency, and discuss their strengths and weaknesses as well as their unique characteristics in Section 3.1. Then we conclude the core techniques used in these models for improving the efficiency of ODQA, from data and model perspectives, respectively, in Section 3.2. Before we start, we first take DPR on the Natural Questions (NQ) test dataset as an example to show the time each module needs during inference and their detailed memory costs in Fig. 2. We can see the total processing time DPR needs is 0.91 seconds (s)2 where the inference speed is mainly affected by evidence searching (74.79%) and reading (23.95%). The total memory cost of DPR is 79.32GB which is huge. The index takes up 81.95% of the memory, the raw corpus takes 16.39% space, and the remaining 1.66% are for the models where the retriever model is around twice the size of the reader model. Based on these observations, how to improve the efficiency of ODQA models focuses on the reduction of processing time and memory cost. To reduce processing time, we can accelerate evidence searching and reading. To reduce the memory cost, we can reduce the size of the index and model. Besides, some emerging directions are also proposed, such as jumping the retrieval part to generate answers using questions directly or retrieving answers directly to omit evidence reading. We introduce the details below. ## 3.1 Walk Through Efficiency Odqa Models In this subsection, we delve into the details of efficiency ODQA models. We categorize them into 2The passages in the corpus are embedded offline. ![2_image_0.png](2_image_0.png) 0.88, 1.11% Reader model, 0.44, 0.55% (total memory=79.32GB) 0.01 , 1.10% Raw corpus, 13.00, 16.39% 0.68 , 74.73% 65.00, 81.95% three classes regarding the different means of implementing efficiency, i.e., reducing processing time, reducing memory cost, and blazing new directions. ## 3.1.1 Reducing Processing Time When giving a question, the processing time for ODQA involves three stages: question embedding, evidence searching, and evidence reading. Whereas evidence searching and evidence reading occupy most of the processing time, researchers mainly focus on narrowing the time cost of the two stages. By Accelerating Evidence Searching. Other than the traditional brute search method (Zhan et al., 2021), hierarchical navigable small world graphs (HNSW) (Malkov and Yashunin, 2020) and the approximate nearest neighbor (ANN) search (Johnson et al., 2021) techniques become increasingly popular, due to the characteristic of fast searching. DPR (Yamada et al., 2021) and RePAQ (Lewis et al., 2021) adopt HNSW to achieve much faster search without a significant decline in retrieval accuracy. However, the negative effect HNSW brings is a larger index. For example, DPR with HNSW increases the index from 65GB to 151GB (Yamada et al., 2021). Besides, Locality Sensitive Hashing (LSH) (Neyshabur and Srebro, 2015) and Inverted File (IVF) (Sivic and Zisserman, 2003) are both efficient ANN methods to speedup search (Yamada et al., 2021; Lewis et al., 2022), but they often lead to a significant drop of retrieval accuracy (Yamada et al., 2021; Lewis et al., 2021, 2022). Concretely, LSH generates the same hashkey for similar embeddings through suitable hash functions, and then evidence retrieval is based on hashkeys (Wang et al., 2022). IVF constructs two-level indices using the K-means clustering method (Lewis et al., 2022). Different from LSH which can reduce the index size, IVF does not achieve this goal. Compared to LSH and IVF, Learned Index for large-scale DEnse passage Retrieval (LIDER) (Wang et al., 2022) makes a trade-off between search speed and retrieval accuracy through dynamically learning a corpus index when training. It achieves a faster search with a fewer drop in retrieval accuracy compared IVF, by predicting the location from the learned key-location distribution of the dataset. Specifically, LIDER builds two-level indices with a similar method IVF uses. LIDER further maps the documents in indices into hashkeys using the LSH method and sorts them based on the hashkeys. Meanwhile, the hashkeys are also used to train a multi-layer linear regression model for the location prediction of a hashkey in the sorted indexes. During inference, with a query embedded by DPR (Karpukhin et al., 2020), LIDER first calculates its hashkey, and finds its c nearest centroids. With these centroids, LIDER then searches the top-p nearest evidence in each subset in parallel. Finally, it merges all the retrieved evidence and selects the top-k ones as output. To conclude, LIDER is a powerful, efficient, and practical method for ODQA evidence searching. By Accelerating Evidence Reading. Accelerating the evidence reading is another effective way to speed up the question processing of ODQA models. Actually, in the retrieved evidence, a high percentage of content is not pertinent to answers (Min et al., 2018). However, the reader module still allocated the same computational volume to these contents, which involves many unnecessary computations and prolongs the inference latency (Wu et al., 2020). Thus, the jumping reading strategy is proposed and studies have found it can bring certain inference speedup (Wu et al., 2020; Guan et al., 2022). Concretely, the jumping reading strategy dynamically identifies less relevant text blocks at each layer of computation by calculating an important score for each text block. Toward blocks with low scores, they will not be further involved in the subsequent processing. Adaptive computation (AC) (Bengio et al., 2015; Graves, 2016) and Block-Skim (Guan et al., 2022) are efficient methods to ameliorate the reading efficiency following jumping reading strategy which manipulates the allocation of computation of the model input (Wu et al., 2020, 2021). SkyLineBuilder (Wu et al., 2020) applies AC to an extractive reader and dynamically decides which passage to allocate computation at each layer during reading. Further, Adaptive Passage Encoder (APE) (Wu et al., 2021) considers applying the AC strategy to Fusion-in-Decoder (FiD) system. In APE, the AC strategy is used to early stop the encoder of the generator to read the evidence that is less likely to include answers. Meanwhile, inspired by the idea of passage filtering before retrieval (Yang and Seo, 2021), Block-Skim (Guan et al., 2022) is proposed which skips question-irrelevant text blocks to optimize the reading speed. It first slices an input sequence into text blocks with a fixed length. A CNN module is utilized to compute the importance score for each block in each transformer layer, then the unimportant blocks are skipped. Block-Skim implements an average of 2.56 times speedup inference than BERT-based models with little loss of accuracy on multiple extractive QA datasets. This enlightens us that all BERT-based Retriever-Reader ODQA models can be optimized by Block-skim to speed up their inference. ## 3.1.2 Reducing Memory Cost For ODQA models, there are three kinds of memory cost: index, model, and raw corpus. Normally, reducing the sizes of the index and model are two ways to break through and to achieve storage efficiency, while reducing raw corpus size results in certain knowledge source loss and a significant drop in performance (Yang and Seo, 2021). By Reducing Index Size. The index of a corpus takes a major proportion of memory cost during running an ODQA system. The evidence-searching module, which is strongly related to the index size, is also the module that takes the most time during reference. Thus, downsizing the index is key to improving the efficiency of ODQA models. A line of research has tried to achieve this goal. BPR (Yamada et al., 2021) and DrBoost (Lewis et al., 2022) are representative works in this direction. BPR reduces the index size by sacrificing data precision while DrBoost achieves this through compacting embedding dimension (Lewis et al., 2022). Specifically, BPR (Yamada et al., 2021) leverages a learning-to-hash technique (Cao et al., 2017; Wang et al., 2018) to hash continuous passage vectors into compact binary codes, which is different from DPR (Karpukhin et al., 2020) utilizing dense continuous embeddings of corpus passages. It optimizes the search efficiency of the retriever while maintaining accuracy through multi-target joint learning: evidence retrieval and reranking. During retrieval, top-c passages are retrieved with the Hamming distance of the binary codes. Then, the retrieved evidences are reranked with maximum inner product search (MIPS) (Shrivastava and Li, 2014; Guo et al., 2016) between the query dense vector and the passage binary codes. Finally, the top-k evidences are outputted, where k is much smaller than c. Differently, DrBoost (Lewis et al., 2022), a dense retrieval ensemble method inspired by boosting (Freund and Schapire, 1997), incrementally compacts the dimension of representations during training. Concretely, it builds sequentially multiple weak learners and integrates them into one stronger learner. Each weak learner consists of a BERT-based dual-encoder for encoding passages and questions by learning embeddings in low dimensions, normally 32-dim. The weak learners are trained iteratively using hard negative samples. The final embeddings for passages and questions are a linear combination of embeddings from all weak learners. Thus the dimension of the final embedding can be controlled by the iterative rounds during training, which makes the total embedding dimension flexible and the index size adjustable. One limitation of DrBoost is that it must keep multiple encoders simultaneously to compute the final representation of the question during inferring. To remedy this issue, DrBoost further distills all R question encoders (32 dim) into a single encoder (32*R dim). Therefore, the single encoder outputs the final question embedding directly, which achieves the goal of low resources. By Reducing Model Size. Besides downsizing the index, compressing model is another way to cut the memory cost of ODQA systems. One way to accomplish this goal is building a comprehensive model to implement retrieval and reading simultaneously, instead of multiple models in traditional ODQA systems. YONO (You Only Need One model) (Lee et al., 2021a) is a representative model in this way, which integrates retriever, reranker, and generator models into a T5-large based singular transformer pipeline. In this way, YONO achieves a less than 2GB model size which is as large as EMDR2 (Singh et al., 2021), and a higher QA performance. This makes YONO the best performance among models that are under the size of 2GB. Moreover, YONO can further manipulate its model size by adding or removing certain layers flexibly. To be specific, YONO first discards 18 decoder layers of the T5large model and splits the rest model into four parts. The first 12 layers are for evidence retrieval; the middle 4 layers are for evidence reranking; the following 8 layers are for impressive encoding and the last 6 layers are for decoding. The hidden representations are progressively improved along the pipeline. A fully end-to-end training over all stages is performed to make full use of the capability of all modules. However, YONO still needs to do evidence indexing and searching, which is timeconsuming. Thus, how to improve the processing speed of YONO is still a problem that needs to be solved urgently. ## 3.1.3 One-Stage Frameworks Besides the methods which accelerate evidence searching and reading and the methods that reduce the size of the index and model, some one-stage frameworks are proposed as well, such as generating the answer using the input question directly or retrieving answers directly from a finer-grained knowledge base (ie., phrases or question-answer pairs). Directly Generate Answers. Some researchers blazed a brand new path that omits the whole evidence retrieval process, including corpus indexing and evidence searching, by leveraging generative language models (such as T5, BART, GPT) to tackle ODQA tasks (Roberts et al., 2020; Brown et al., 2020; Lewis et al., 2020a). Generative models have learned and stored the knowledge of a large-size corpus. Given a question, they can generate the answers directly. Without the evidence retrieval process, they save much processing time during ODQA, making them inference efficient. The main advantage of Generator-Only methods is that they can answer open-domain questions without any access to external knowledge (Roberts et al., 2020). And they output the literal text of the answer in a more free-form fashion. However, generally, there is a significant gap in QA performance between generative models and Retriever-Reader ODQA models, as well as the adequacy of explanation. Thus, single generator-based ODQA models are further combined with existing evidence retriever models (Lewis et al., 2020b; Izacard and Grave, 2021; Singh et al., 2021) to obtain better QA performance. Directly Retrieve Answers. As discussed in the first few paragraphs of Section 3, evidence reading takes non-negligible processing time. An innovative idea to improve the efficiency of ODQA is to omit evidence reading. Without evidence reading, the document corpus is first preprocessed into a knowledge base offline. When encountering a new sample, the model searches the final answer from the knowledge base for the question directly (Seo et al., 2019; Lee et al., 2021b; Lewis et al., 2021). RePAQ (Lewis et al., 2021) is representative of this framework. It first converts a large corpus to a knowledge base of question-answer (QA) pairs using a question generation model, then uses a lightweight QA-pair retriever to answer the questions. When inferring, it first calculates the similarity between the input question and each one in the knowledge base using the maximum inner product search (MIPS) technique (Shrivastava and Li, 2014; Guo et al., 2016), to retrieve the most similar QA pairs. The answer to the most similar question is returned as the output answer directly. However, the 220GB index for the 65 million QA pairs becomes a major drawback for RePAQ. Similarly, phrasebased ODQA models, such as DenSPI (Seo et al., 2019) and DensePhrases (Lee et al., 2021b), split the corpus documents into fine-grained phrases. They build an index for these phrases which can be retrieved directly as the predicted answers. Similar to RePAQ, omitting evidence reading makes phrase-based ODQA models faster than RetrieverReader ODQA models when processing questions, as analyzed in Section 4.3. ## 3.2 Core Techniques This section concludes the core techniques commonly used in existing ODQA models with respect to improving efficiency. It can be briefly divided into two categories: data-based and model-based techniques. Data-based techniques mainly focus on the reduction of the index, which can be downsized from different hierarchies such as the number of corpus passages, feature dimension, and storage unit per dimension. Model-based techniques try to reduce the model size while avoiding a significant drop in performance. Model pruning and knowledge distillation are commonly used techniques. ## 3.2.1 Data-Based Techniques Passage Filtering. Among the huge corpus ODQA models rely on, there are massive passages that contain little useful information and are unlikely to be evidence for answers. Thus, filtering unrelated passages is a way to reduce the memory cost of corpus without a large negative impact. For example, some researchers have designed a linear classifier to discriminate and discard unnecessary passages before evidence retrieval (Izacard et al., 2020; Yang and Seo, 2021). Dimension Reduction. Another way to reduce the memory cost is to reduce the dimension for dense passage representations. To achieve this goal, Izacard et al. (2020) learns an additional feed-forward layer to project the high-dimensional embeddings to lower ones. Principle component analysis (PCA) is another efficient technique that is commonly used to reduce the dimension of passage representations without a loss of important information (Ma et al., 2021; Zouhar et al., 2022). In work Ma et al. (2021), PCA is used to build a projection matrix to project the raw data onto the principal components using an orthonormal basis. Product Quantization. Product quantization (PQ) (Jégou et al., 2011) further reduces the index size by reducing the storage cost of each dimension of the embeddings. It divides a d-dimensional vector into n sub-vectors with d/n dimension and quantifies these sub-vectors independently using k-means (Izacard et al., 2020; Ma et al., 2021; Yang and Seo, 2021). However, PQ also results in a significant drop in accuracy while it reduces the index size. The three techniques introduced above are adopted jointly in Fusion-in-Decoder with Knowledge Distillation (FiD-KD) (Izacard et al., 2020) to reduce the memory cost of one ODQA system. It obtains competitive performance compared to the original system while compressing memory from more than 70GB to less than 6GB. ## 3.2.2 Model-Based Techniques Model Pruning. Most recent works on open domain question answering (Chen et al., 2017; Guu et al., 2020) prefer to adopt large pre-trained language models (Devlin et al., 2019; Raffel et al., 2020) as passage retriever, reader or generator due to their powerful deep semantic understanding capability. These large models have millions or even billions of parameters, requiring large storage, long training time, and leading to slow inference. To this point, some researchers have turned to adopt more lightweight language models (Yang and Seo, 2021). For example, a smaller pre-trained language model, MobileBERT (Sun et al., 2020), has been used to reduce the size of an ODQA system to 972MB (Yang and Seo, 2021). Parameter sharing is another way to constrain the model size. Skylinebuilder (Wu et al., 2020) and RePAQ downsize their model by using the parameter sharing encoders, i.e., ALBERT (Lan et al., 2019). More lightweight pre-trained language models have been proposed and verified in other natural language tasks, such as machine reading comprehension (Fan et al., 2019; Sajjad et al., 2020; Lagunas et al., 2021; Xia et al., 2022). They obtain smaller model sizes and achieve high accuracy for downstream tasks, including ODQA tasks. Knowledge Distillation. Compared to structure pruning, knowledge distillation pays more attention to effectively improving question processing speed. Knowledge distillation, which transfers knowledge from a large model into a small one, has been widely used in several NLP tasks, including ODQA and MRC tasks (Sanh et al., 2019; Sun et al., 2020; Izacard and Grave, 2020; Lewis et al., 2022; Yang and Seo, 2021). For example, Minimal R&R system (Yang and Seo, 2021) and DrBoost (Lewis et al., 2022) both integrate multiple modules into a single one via knowledge distillation. ## 4 Quantitative Analysis This section gives a quantitative analysis of the aforementioned ODQA models. We first give an overall comparison of different frameworks and further discuss the methods quantitatively from three specific aspects: memory cost, processing speed, and accuracy3. At the end of the analysis, the following subsection summarizes and concludes what has been analyzed and discussed. ## 4.1 Overall Comparison In Table 1 in Appendix B, we demonstrate a comprehensive comparison of efficiency-related ODQA models from three aspects: memory cost, processing speed, and answering quality. Specifically, total memory storage, detailed model size, and index size are listed to show details of memory cost. The number of questions that can be answered per second (Q/s) demonstrates the processing speed. Exact match (EM) scores on Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017) datasets indicate answering quality. Concerning comparison between different frameworks, we can see that two-stage methods (Retriever-Reader) generally obtain better ODQA performances than one-stage methods (i.e., Retriever-Only and Generator-Only). The best end-to-end EM performance on NQ (55.9%) and TriviaQA (74.8%) datasets are obtained by R2- D2+reranker and GAR_extractive respectively. They are both under the Retriever-Reader framework. The second-best ODQA performances on NQ (54.7%) and TriviaQA (72.1%) are obtained by UnitedQA and Fid-large+KD_DPR methods, which are also under the two-stage frameworks. In terms of total **memory cost**, i.e., the sum of model size and the index size, Generator-Only systems keep generally low memory overhead. Except GPT-3, the rest of the Generator-Only systems take less than 50GB of memory, and five methods out of the eight are less than 5GB. On the contrary, most Retriever-Only ODQA models require huge memory, normally greater than 200GB. The method DenSPI needs a 2002.69GB memory cost, which is enormous. Retriever-Reader ODQA models have a wide range in terms of memory cost, from 0.31GB to 363.26GB. Overall speaking, Minimal R&R achieves the smallest memory overhead (0.31GB) while DenSPI keeps the largest one (2002.69GB). In terms of **processing speed**, which determines how fast one ODQA system can answer a given question, one-stage methods generally achieve higher processing speed than two-stage methods, especially Retriever-Only systems. Among the eight Retriever-Only methods, five of them can process more than 20 questions per second (Q/s) and RePAQ_XL and RePQA_base can answer 800 and 1400 questions per second respectively, which is impressive. For the methods with slow processing speed, Fig-large and RAG-seq from the RetrieverReader framework are the two slowest systems, which process less than 1 question per second. To conclude, Fig. 3 gives a visual presentation for **comprehensive comparison** of efficiencyrelated ODQA models according to different frameworks. By using the NQ evaluation dataset as an example, it illustrates the detailed model size, index size, EM scores, and processing speed respectively. From Fig. 3, we can see each framework has its strengths and weaknesses. Retriever-Only systems achieve significantly high processing speeds but cost enormous memory storage. Generator-Only systems require the least memory storage. However, the main concern of them is the answering quality while the majority of these systems' EM scores are less than 30% on NQ datasets. Twostage Retriever-Reader systems relatively behave balanced. They achieve high EM scores and obtain moderate memory cost and processing speed. ![7_image_0.png](7_image_0.png) Generative-Reader ODQA Systems Model Size Index Size EM on NQ Q/s 0.1 1.0 10.0 100.0 Model Size Index Size EM on NQ Q/s 0.1 1.0 10.0 ## 4.2 Details In Memory Cost The total memory cost depends on the model size and the index size. Index Size. For the index size, the two kinds of one-stage frameworks are two extremes. GeneratorOnly methods do not require creating an index file while Retriever-Only methods generally need a huge storage space for index. Most two-stage methods have a moderate index of 65GB or less. For Retriever-Reader ODQA systems, the 65GB index set of dense passage embedding, developed by DPR (Karpukhin et al., 2020), is the most commonly adopted index set. It is adopted by 17 methods as we listed in Table 1 in Appendix B. Based on this index set, DrQA and GAR_extractive represent passages into sparse vectors, obtained a much smaller index size (26GB) (Chen et al., 2017; Mao et al., 2021). Through the product quantization (PQ) technique, DPR+PQ compresses the index size from 65GB to 2GB and the index size of RePAQ is from 220GB to 48GB. On the other side, BPR (Yamada et al., 2021) creates a small index of less than 2.1GB. It also improves the answering performance from 41.6% to 49% on the NQ dataset by replacing the BERT-based reader with the ELECTRA-large reader. Meanwhile, Minimal R&R (Yang and Seo, 2021) establishes the smallest index of less than 0.15GB with a price of a significant drop in ODQA performance has been paid. For Retriever-Only ODQA systems, DenSPI+Sparc (Lee et al., 2020) and DensePhrase (Lee et al., 2021b) smallen the phrase index by pointer sharing, phrase filtering, and PQ. However, the phrase index is still larger than 1000GB. DensePhrases further cuts down the index to 320GB by omitting sparse representations and using SpanBERT-based encoder while a relatively high performance remained. SpanBERT-based represents phrases into a lower-dimension space (Joshi et al., 2020) than that in DenSPI+Sparc. And DrBoost (Lewis et al., 2022) builds an index under 1GB where a passage is represented with a 190-dim vector through the boosting and PQ techniques. Model Size The model size involves all modules present in one ODQA system, including the retriever and the reader. It has a great range, from 0.04GB to 700GB. Among all mentioned ODQA models, a quarter of ones have model sizes less than 1GB; the model sizes of 40% systems are between 1∼2GB and 12.5% ones have sizes between 2∼3GB; 7.5% systems have model sizes between 3∼4GB; the remaining 15% models weigh larger than 4GB. Specifically, GPT-3 (Brown et al., 2020) has an extremely huge model size of 700GB. Besides it, another three systems are obtaining relatively large models: T5-1.1-XL_SSM (45.27GB) (Roberts et al., 2020), UnitedQA (8.36GB) (Cheng et al., 2021) and R2-D2+reranker (5.16GB) (Fajcik et al., 2021), while the system with the smallest model (0.04GB) is achieved by RePAQ-base (Lewis et al., 2021). Specifically, GPT-3 keeps the largest model (700GB) and achieves relatively high performance, i.e., 71.2% EM on TriviaQA (top 1) and 29.9% EM on NQ dataset (top 3), compared to other models with the same GeneratorOnly framework. Minimal R&R (Yang and Seo, 2021) cuts down the total model size to 0.17GB. DrQA (Chen et al., 2017) has a small total model size 0.27GB in that its retriever is non-parameter BM25 and the reader relies on LSTM with fewer parameters. GAR_extractive (Mao et al., 2021) maintains a small total model size and achieves the best performance on TriviaQA (74.8%) and similar performance with DPR on NQ (41.8%). RePAQ (Lewis et al., 2021) achieves the smallest model of 0.04GB. It also gains competitive performance compared to DPR. Most ODQA models are implemented with PLMs that are less than 2GB. A few ODQA models keep the total model size more than 3GB to achieve higher performance, like FiD-large+KD_DPR (Izacard and Grave, 2020), RePAQ+FiD_large (Lewis et al., 2021), UnitedQA (Cheng et al., 2021) and R2-D2_reranker (Fajcik et al., 2021). As they employ either larger or more pre-trained language models to focus on improving performance. ## 4.3 Details On Latency In terms of latency, i.e., processing speed, most ODQA models answer less than 10 questions per second. Retriever-Only ODQA models bring faster processing speed than the other three frameworks. Compared to phrase-base systems, the QA-pairbased system RePAQ (Lewis et al., 2021) and its variants win the fastest inference speed among the listed ODQA models, up to 1400 Q/s. GeneratorOnly ODQA models also achieve higher Q/s than Retriever-Reader ODQA models, as they do not need retrieving evidence from a larger corpus which is time-consuming. ## 5 Discussion In this section, we summarize and illustrate the insights and future directions from the following aspects. We first summarize the key points to improve the effectiveness of ODQA systems, from the two aspects of index and model respectively. In terms of index size, it is worth exploring deeper on generative models and the techniques of compacting embedding. In terms of model size, knowledge distillation is a promising direction to reduce model size while another direction is the application of lightweight models. In addition, one-stage ODQA models are also worthy of research. Additionally, we provide some advice on model recommendations under different requirements. For example, if we pursue real-time feedback, Retriever-Only systems should be good choices; if we are limited by computing resources, GeneratorOnly systems are suitable candidates; and if we need to trade off performance, memory cost and processing time, Retriever-Reader systems are relatively more appropriate. In general, for researchers who are interested in improving the state-of-the-art efficiency methods on ODQA tasks, this survey can serve as an entry point to find opportunities for new research directions. However, some salient challenges need to be addressed in the way of ODQA efficiency research. One of the worrisome things is that most ODQA approaches are computation-heavy and energyexpensive. How can the ODQA system be deployed in low-power devices with limited computing resources and mobile devices is still very challenging. Another thing is that it seems to be inadequate to evaluate the efficiency of ODQA models only on accuracy, memory, and processing time, due to many other factors that should be considered and traded off. For example, it is also important to establish what resource, e.g., money, time, and data for model training, power consumption, carbon emissions, etc. ## 6 Conclusion In this survey, we retrospected the typical literature according to three different frameworks of open domain question answering (ODQA) systems. Further, we provided a broad overview of existing methods to increase efficiency for ODQA models and discussed their limitations. In addition, we performed a quantitative analysis in terms of efficiency and offered certain suggestions about method selections of open domain question answering. Finally, we discussed possible open challenges and potential future directions of efficient ODQA models. ## 7 Limitations It seems to be difficult to evaluate the efficiency of ODQA models fairly and impartially due to multiple factors that should be considered and need to be traded off. On the one hand, it is not enough to only use accuracy, memory, and processing time to evaluate effectiveness. It is also important to establish what resource, e.g., money, power consumption, carbon emissions, etc., one attempt to constrain (Treviso et al., 2022). On the other hand, how to deploy models and what tools model implementation relies on contributes to inequity growth (Blodgett et al., 2020). It is extremely challenging to unify the deployment of all models and the tools they rely on and to achieve a truly fair and unbiased effectiveness comparison. ## 8 Ethics Statement Our work focuses on summarizing and discussing the accuracy, inference speed, and memory cost of open domain question answering systems. We believe that our work is helpful for researchers who are interested in improving the state-of-the-art efficiency methods on ODQA tasks. We do not anticipate any ethical concerns arising from the research presented in this paper. ## Acknowledgments This research was supported by National Natural Science Foundation of China (62206179, 92270122), Guangdong Provincial Natural Science Foundation (2022A1515010129, 2023A1515012584), University stability support program of Shenzhen (20220811121315001), Shenzhen Research Foundation for Basic Research, China (JCYJ20210324093000002). ## References Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. 2015. Conditional computation in neural networks for faster models. CoRR, abs/1511.06297. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454–5476, Online. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Yu Cao, Meng Fang, and Dacheng Tao. 2019. BAG: Bi-directional attention entity graph convolutional network for multi-hop reasoning question answering. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 357–362, Minneapolis, Minnesota. Association for Computational Linguistics. Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Philip S. Yu. 2017. Hashnet: Deep learning to hash by continuation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics. Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2021. UnitedQA: A hybrid approach for open domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3080–3090, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Romina Etezadi and Mehrnoush Shamsfard. 2022. The state of the art in open domain complex question answering: a survey. Applied Intelligence, pages 1–21. Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. 2021. R2-D2: A modular baseline for opendomain question answering. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 854–870, Punta Cana, Dominican Republic. Association for Computational Linguistics. Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing transformer depth on demand with structured dropout. CoRR, abs/1909.11556. Yoav Freund and Robert E Schapire. 1997. A decisiontheoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139. Alex Graves. 2016. Adaptive computation time for recurrent neural networks. ArXiv, abs/1603.08983. Yue Guan, Zhengyi Li, Zhouhan Lin, Yuhao Zhu, Jingwen Leng, and Minyi Guo. 2022. Blockskim: Efficient question answering for transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):10710–10719. Jiafeng Guo, Yinqiong Cai, Yixing Fan, Fei Sun, Ruqing Zhang, and Xueqi Cheng. 2022. Semantic models for the first-stage retrieval: A comprehensive review. ACM Trans. Inf. Syst., 40(4). Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. 2016. Quantization based fast inner product search. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pages 482–490, Cadiz, Spain. PMLR. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929–3938. PMLR. Yinya Huang, Meng Fang, Yu Cao, Liwei Wang, and Xiaodan Liang. 2021. DAGN: Discourse-aware graph network for logical reasoning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5848–5855, Online. Association for Computational Linguistics. Zhen Huang, Shiyi Xu, Minghao Hu, Xinyi Wang, Jinyan Qiu, Yongquan Fu, Yuncai Zhao, Yuxing Peng, and Changjian Wang. 2020. Recent trends in deep learning based open-domain textual question answering systems. IEEE Access, 8:94341–94356. Gautier Izacard and Edouard Grave. 2020. Distilling knowledge from reader to retriever for question answering. CoRR, abs/2012.04584. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics. Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Sebastian Riedel, and Edouard Grave. 2020. A memory efficient baseline for open domain question answering. CoRR, abs/2012.15156. Hervé Jégou, Matthijs Douze, and Cordelia Schmid. 2011. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33:117–128. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 7(3):535–547. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving Pre-training by Representing and Predicting Spans. Transactions of the Association for Computational Linguistics, 8:64–77. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Annual Meeting of the Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896–1907, Online. Association for Computational Linguistics. Omar Khattab, Christopher Potts, and Matei Zaharia. 2021. Relevance-guided Supervision for OpenQA with ColBERT. Transactions of the Association for Computational Linguistics, 9:929–944. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453– 466. François Lagunas, Ella Charlaix, Victor Sanh, and Alexander Rush. 2021. Block pruning for faster transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10619–10629, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for selfsupervised learning of language representations. CoRR, abs/1909.11942. Haejun Lee, Akhil Kedia, Jongwon Lee, Ashwin Paranjape, Christopher D. Manning, and Kyoung-Gu Woo. 2021a. You only need one model for open-domain question answering. CoRR, abs/2112.07381. Jinhyuk Lee, Minjoon Seo, Hannaneh Hajishirzi, and Jaewoo Kang. 2020. Contextualized sparse representations for real-time open-domain question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 912–919, Online. Association for Computational Linguistics. Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021b. Learning dense representations of phrases at scale. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6634–6647, Online. Association for Computational Linguistics. Jinhyuk Lee, Alexander Wettig, and Danqi Chen. 2021c. Phrase retrieval learns passage retrieval, too. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3661–3672, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics. Patrick Lewis, Barlas Oguz, Wenhan Xiong, Fabio Petroni, Scott Yih, and Sebastian Riedel. 2022. Boosted dense retriever. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3102–3117, Seattle, United States. Association for Computational Linguistics. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledgeintensive nlp tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc. Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them. Transactions of the Association for Computational Linguistics, 9:1098–1115. Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, and Haifeng Wang. 2022. Ernie-search: Bridging cross-encoder with dual-encoder via self on-the-fly distillation for dense passage retrieval. Xueguang Ma, Minghan Li, Kai Sun, Ji Xin, and Jimmy Lin. 2021. Simple and effective unsupervised redundancy elimination to compress dense vectors for passage retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2854–2859, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Yu A. Malkov and D. A. Yashunin. 2020. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):824–836. Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021. Generation-augmented retrieval for open-domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4089–4100, Online. Association for Computational Linguistics. Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. 2021. Neurips 2020 efficientqa competition: Systems, analyses and lessons learned. In Proceedings of the NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Research, pages 86–111. PMLR. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1725–1735, Melbourne, Australia. Association for Computational Linguistics. Behnam Neyshabur and Nathan Srebro. 2015. On symmetric and asymmetric lshs for inner product search. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, page 1926–1934. JMLR.org. Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for opendomain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835–5847, Online. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1–67. Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418–5426, Online. Association for Computational Linguistics. Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2020. Poor man's bert: Smaller and faster transformer models. ArXiv, abs/2004.03844. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108. Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4430–4441, Florence, Italy. Association for Computational Linguistics. Yeon Seonwoo, Juhee Son, Jiho Jin, Sang-Woo Lee, Ji-Hoon Kim, Jung-Woo Ha, and Alice Oh. 2022. Two-step question retrieval for open-domain QA. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1487–1492, Dublin, Ireland. Association for Computational Linguistics. Xiaoyu Shen, Svitlana Vakulenko, Marco Del Tredici, Gianni Barlacchi, Bill Byrne, and A. Gispert. 2022. Low-resource dense retrieval for open-domain question answering: A comprehensive survey. ArXiv, abs/2208.03197. Anshumali Shrivastava and Ping Li. 2014. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc. Devendra Singh, Siva Reddy, Will Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for opendomain question answering. In Advances in Neural Information Processing Systems, volume 34, pages 25968–25981. Curran Associates, Inc. Sivic and Zisserman. 2003. Video google: a text retrieval approach to object matching in videos. In Proceedings Ninth IEEE International Conference on Computer Vision, pages 1470–1477 vol.2. Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a compact task-agnostic BERT for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2158–2170, Online. Association for Computational Linguistics. Marcos Vinícius Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro Henrique Martins, André F. T. Martins, Peter Milder, Colin Raffel, Edwin Simpson, Noam Slonim, Niranjan Balasubramanian, Leon Derczynski, and Roy Schwartz. 2022. Efficient methods for natural language processing: A survey. ArXiv, abs/2209.00099. Ellen M. Voorhees and Dawn M. Tice. 2000. The TREC-8 question answering track. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00), Athens, Greece. European Language Resources Association (ELRA). Jingdong Wang, Ting Zhang, jingkuan song, Nicu Sebe, and Heng Tao Shen. 2018. A survey on learning to hash. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):769–790. Yifan Wang, Haodi Ma, and Daisy Zhe Wang. 2022. Lider: An efficient high-dimensional learned index for large-scale dense passage retrieval. ArXiv, abs/2205.00970. ## Wikipedia. 2004. Wikipedia. Pediapress. Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2021. Training adaptive computation for open-domain question answering with computational constraints. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 447–453, Online. Association for Computational Linguistics. Semiparametric Methods in NLP: Decoupling Logic from Knowledge, pages 41–53, Dublin, Ireland and Online. Association for Computational Linguistics. Yuxiang Wu, Sebastian Riedel, Pasquale Minervini, and Pontus Stenetorp. 2020. Don't read too much into it: Adaptive computation for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3029–3039, Online. Association for Computational Linguistics. Mengzhou Xia, Zexuan Zhong, and Danqi Chen. 2022. Structured pruning learns compact and accurate models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1513–1528, Dublin, Ireland. Association for Computational Linguistics. Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 979–986, Online. Association for Computational Linguistics. Sohee Yang and Minjoon Seo. 2021. Designing a minimal retrieve-and-read system for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5856–5865, Online. Association for Computational Linguistics. Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. 2021. Jointly optimizing query encoder and product quantization to improve retrieval performance. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, CIKM '21, page 2487–2496, New York, NY, USA. Association for Computing Machinery. Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021. Retrieving and reading: A comprehensive survey on open-domain question answering. CoRR, abs/2101.00774. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 19– 27. Vilém Zouhar, Marius Mosbach, Miaoran Zhang, and Dietrich Klakow. 2022. Knowledge base index compression via dimensionality and precision reduction. In Proceedings of the 1st Workshop on ![14_image_0.png](14_image_0.png) 2.Retriever-only 3.Generator-only ![14_image_1.png](14_image_1.png) √ √ √ reader √ √ √ √ √ √ reader √ √ √ √ √ √ √ √ √ Frameworks Models Accuracy | Index_size | Speed ## A **Connection To Existing Related Surveys** ODQA has been discussed and summarized with a broad overview of techniques for NLP in several survey papers. However, they more focus on deep neural models for improving ODQA performance. Specifically, the survey given by Huang et al. (2020) introduces deep learning-based ODQA models proposed in the early years, which are mainly based on LSTM or CNN. Modern Transformerbased ODQA models are not included. Work given by Zhu et al. (2021) provides a comprehensive literature review of ODQA models, with particular attention to techniques incorporating neural machine reading comprehension models. Guo et al. (2022) focuses on the semantic models of the firststage retrieval models. Shen et al. (2022) pays more attention to how to train the dense retrievers effectively with fewer annotation training data. Treviso et al. (2022) retrospects the efficient methods in natural language processing (NLP). It mainly involves the upstream generic pre-trained language models and training methods. Etezadi and Shamsfard (2022) mainly concentrates on the comparison of ODQA methods for complex question answering. As far as we know, there is no survey summarizing ODQA methods from the efficiency perspective so far, which inspires us to overview the efficient ODQA models in this paper. ## B Corpus And Metrics Normally Used Corpus. The most commonly used corpus for open domain question answering systems is the 201812-20 dump of Wikipedia corpus, which contains 21 million 100-word-long passages after removing semi-structured data (tables, information boxes, lists, and the disambiguation pages) (Karpukhin et al., 2020). Most ODQA models, such as RocketQA (Qu et al., 2021), FiD (Izacard and Grave, 2021), and R2-D2 (Fajcik et al., 2021), directly | Frameworks | Systems | Grouped by | Memory Cost(GB) | Processing | EM | | | |--------------------------|--------------|--------------|-------------------|--------------|--------|----------|-------| | score (%) | | | | | | | | | Speed | | | | | | | | | Memory Cost(GB) | Models | Index | Total | Q/s | NQ | TriviaQA | | | Minimal R&R | 0.17 | 0.15 | 0.31 | - | 32.60 | 48.75 | | | SkylineBuilder | 0.07 | 2.40 | 2.47 | - | 34.20 | - | | | BM25+BERT-base | 0.44 | 2.40 | 2.84 | 4.68* | - | - | | | GAR_extractive | 0.44 | 2.40 | 2.84 | 2.61* | 41.80 | 74.80 | | | DrBoost+PQ(8-dim) | 2.64 | 0.42 | 3.06 | - | - | - | | | DPR+PQ | 1.32 | 2.00 | 3.32 | 4.67* | 38.40 | 52.00 | | | BPR_BERT | 1.32 | 2.10 | 3.42 | 4.81* | 41.60 | 56.80 | | | DrBoost+PQ(4-dim) | 2.64 | 0.84 | 3.48 | - | - | - | | | BPR_ELECTRA-large | 2.22 | 2.10 | 4.32 | - | 49.00 | 65.60 | | | DrBoost | (0, 10] | 2.64 | 13.00 | 15.64 | - | - | - | | Extractive-Reader | ORQA | 1.32 | 18.00 | 19.32 | 8.60 | 33.30 | 45.00 | | REALM | 1.32 | 18.00 | 19.32 | 8.40 | 39.20 | - | | | DrQA | 0.27 | 26.00 | 26.27 | 1.80 | 35.70 | - | | | ColBERT-QA-base | (10, 50] | 0.88 | 65.00 | 65.88 | - | 42.30 | 64.60 | | ANCE | 1.32 | 65.00 | 66.32 | 5.51* | 46.00 | 57.50 | | | DPR | 1.32 | 65.00 | 66.32 | 1.60* | 41.50 | 56.80 | | | GAR+DPR_extractive | 1.32 | 65.00 | 66.32 | 1.25* | 43.80 | - | | | RocketQA | 1.50 | 65.00 | 66.50 | - | 42.80 | - | | | ColBERT-QA-large | 1.76 | 65.00 | 66.76 | - | 47.80 | 70.10 | | | ERNIE-Search_base | 1.76 | 65.00 | 66.76 | - | - | - | | | R2-D2_reranker | 5.16 | 65.00 | 70.16 | - | 55.90 | 69.90 | | | UnitedQA | 8.36 | 65.00 | 73.36 | - | 54.70 | 70.50 | | | DPR+HNSW | (100, 500]) | 1.32 | 151.00 | 152.32 | 5.82* | 41.20 | 56.60 | | ERNIE-Search_2.4B | 19.20 | 344.06 | 363.26 | - | - | - | | | Retriever-Reader | (50, 100] | | | | | | | | EMDR2 | (10, 50] | 1.76 | 32.00 | 33.76 | - | 52.50 | 71.40 | | YONO_retriever | 1.54 | 65.00 | 66.54 | - | 53.20 | 71.30 | | | FiD-base | 1.76 | 65.00 | 66.76 | 2.00 | 48.20 | 65.00 | | | FiD-base+KD_DPR | 1.76 | 65.00 | 66.76 | - | 49.60 | 68.80 | | | YONO_reranker | 1.76 | 65.00 | 66.76 | - | 53.20 | 71.90 | | | GAR+DPR_generative | 2.50 | 65.00 | 67.50 | 1.25* | 45.30 | - | | | RAG-seq | 2.50 | 65.00 | 67.50 | 0.80 | 44.50 | 56.80 | | | FiD-large | 3.96 | 65.00 | 68.96 | 0.50 | 51.40 | 67.60 | | | FiD-large+KD_DPR | 3.96 | 65.00 | 68.96 | - | 53.70 | 72.10 | | | RePAQ+FiD_large | (100, 500] | 3.32 | 220.00 | 223.32 | 2.30 | 52.30 | 67.30 | | Generative-Reader | (50, 100] | | | | | | | | RePAQ_base+PQ | (10, 50] | 0.04 | 48.00 | 48.04 | 100.00 | 41.20 | - | | RePAQ_base | 0.04 | 220.00 | 220.04 | 1400.00 | 40.90 | - | | | RePAQ_base+reranker_base | 0.09 | 220.00 | 220.09 | 55.00 | 45.70 | - | | | RePAQ_XL | 0.24 | 220.00 | 220.24 | 800.00 | 41.50 | - | | | RePAQ_XL+reranker_XXL | 1.18 | 220.00 | 221.18 | 6.00 | 47.60 | 52.10 | | | DensePhrases | 0.88 | 320.00 | 320.88 | 20.60 | 40.90 | 50.70 | | | DenSPI+Sparc | (1000, 2010] | 2.69 | 1547.00 | 1549.69 | 2.10 | 14.50 | 34.40 | | DenSPI | 2.69 | 2000.00 | 2002.69 | 2.90 | 8.10 | 30.70 | | | Retriever-Only | (100, 500] | 0.24 | 0.00 | 0.24 | 7.20* | 25.50 | - | | T5-1.1-small+SSM T5-base | 0.88 | 0.00 | 0.88 | 7.53* | 25.90 | 29.10 | | | BART-large | 1.62 | 0.00 | 1.62 | 5.88* | 26.50 | 26.70 | | | GAR_generative | 1.62 | 0.00 | 1.62 | 2.94* | 38.10 | 62.20 | | | T5-large | 3.08 | 0.00 | 3.08 | 3.85* | 28.50 | 35.90 | | | T5-1.1-XL+SSM | (10, 50] | 12.00 | 0.00 | 12.00 | - | 29.50 | 45.10 | | T5-1.1-XXL+SSM | 45.27 | 0.00 | 45.27 | - | 35.20 | 61.60 | | | GPT-3 | (500, 1000] | 700.00 | 0.00 | 700.00 | - | 29.90 | 71.20 | | Generator-Only | (0, 10] | | | | | | | | Wikipedia | Split | Retrieval | Length of | Number of | Encoding | Index | Relatived | |------------------------|-----------------|-------------|-----------------|----------------|----------------|--------------|-------------| | a Unit (tokens) | Units (million) | Size (GB) | ODQA models | | | | | | Corpus | Method | Unit | Methods | | | | | | 2016-12-21 | - | article | - | 5.1 | TF-IDF | 26 | DrQA | | dump of | BM25 | 2.4 | Skylinebuilder, | | | | | | English Wikipedia | GAR_extractive | | | | | | | | BERT's | passage | 288 | 13 | dense encoding | 18 | ORQA, | | | block/ | | | | | | | | | tokenizer | REALM | | | | | | | | passage | 100 | 21 | dense encoding | 65 | DPR, RocketQA, | | | | - | block/ | R2-D2, etc. | | | | | | | 2018-12-20 snapshot of | | | | | | | | | English Wikipedia | TF-IDF+ | | | | | | | | dense encoding | 2000 | DenSPI | | | | | | | - | phrase | <=20 | 60000 | dense encoding | 320 | DensePhrases | | | generator | QA-pair | - | 65 | dense encoding | 220 | RePAQ | | Table 2: The statistical information of Wikipedia corpora used in ODQA models. build the index for passages on this Wikipedia corpus. The size of the index file is 65GB. Based on this Wikipedia corpus, RePQA further generates 65 million QA pairs and indexes these QA pairs to a 220GB file. Some other methods, e.g. DrQA (Chen et al., 2017) and Skylinebuilder (Wu et al., 2020), encode and build indexes for documents from the 2016-12-21 dump of English Wikipedia which includes 5.1 million articles (Chen et al., 2017; Wu et al., 2020), and the size of this index file is 26GB. Except for the different choices of the original corpus, there are also different partition and segmentation strategies. For example, ORQA (Lee et al., 2019) and REALM (Guu et al., 2020) segment the corpus documents into 13 million blocks, each of which has 288 tokens. DenseSPI (Seo et al., 2019), Dens+Sparc (Lee et al., 2020) and DensePhrases (Lee et al., 2021b) divide corpus documents into 60 billion phrases, each phrase including 20 tokens. The rest ODQA models segment corpus documents into 21 million passages with a length of 100 tokens, leading to a 65GB index (Karpukhin et al., 2020; Lewis et al., 2021; Izacard and Grave, 2021; Qu et al., 2021). A comprehensive introduction is illustrated in Table 2. In general, the index size of the corpus is quite large, and the storage of the index is one of the main challenges for ODQA efficiency. Metrics. There are various metrics to depict efficiency in different dimensions. In terms of latency, training time (Mao et al., 2021), indexing time (Mao et al., 2021), query time (Yamada et al., 2021) and reasoning time are normally considered. The metrics Q/s (questions per second) (Seo et al., 2019) and *FLOPs* (floating point operations) (Guan et al., 2022) are popular in measuring the total processing latency, where Q/s is the number of questions one ODQA system can answer per second and *FLOPs* is the number of floating point operations of the model. In terms of memory, model parameter size, passage corpus size, index size, and training data size are important to influence factors of memory cost (Yamada et al., 2021). We measure the memory consumption for ODQA models using memory units (bytes) of corresponding data (corpus, index, and model) after loading into memory. In terms of answering quality, EM (Exact Match accuracy) (Chen et al., 2017), F1-score, MRR@k (Mean Reciprocal Rank ) (Qu et al., 2021), precision@k, Recall@k and retrieval accuarcy@k (Karpukhin et al., 2020) are normally used to measure the quality of the ODQA models. Specifically, EM is the percentage of questions for which the predicted answers can match any one of the reference answers exactly, after string normalization (Qu et al., 2021). MRR@k is the mean reciprocal of the rank at which the first relevant passage was retrieved (Qu et al., 2021). In this paper, we adopt metrics on latency, memory, and accuracy to evaluate ODQA models comprehensively. Specifically, we use Q/s to measure the processing speed, use *total memory overhead* to evaluate the memory cost, and use EM score to estimate the end-to-end answer prediction quality as shown in Table 1. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
ahmadi-anastasopoulos-2023-script
Script Normalization for Unconventional Writing of Under-Resourced Languages in Bilingual Communities
https://aclanthology.org/2023.acl-long.809
The wide accessibility of social media has provided linguistically under-represented communities with an extraordinary opportunity to create content in their native languages. This, however, comes with certain challenges in script normalization, particularly where the speakers of a language in a bilingual community rely on another script or orthography to write their native language. This paper addresses the problem of script normalization for several such languages that are mainly written in a Perso-Arabic script. Using synthetic data with various levels of noise and a transformer-based model, we demonstrate that the problem can be effectively remediated. We conduct a small-scale evaluation of real data as well. Our experiments indicate that script normalization is also beneficial to improve the performance of downstream tasks such as machine translation and language identification.
# Script Normalization For Unconventional Writing Of Under-Resourced Languages In Bilingual Communities Sina Ahmadi **Antonios Anastasopoulos** Department of Computer Science George Mason University {sahmad46,antonis}@gmu.edu ## Abstract The wide accessibility of social media has provided linguistically under-represented communities with an extraordinary opportunity to create content in their native languages. This, however, comes with certain challenges in script normalization, particularly where the speakers of a language in a bilingual community rely on another script or orthography to write their native language. This paper addresses the problem of script normalization for several such languages that are mainly written in a Perso-Arabic script. Using synthetic data with various levels of noise and a transformerbased model, we demonstrate that the problem can be effectively remediated. We conduct a small-scale evaluation of real data as well. Our experiments indicate that script normalization is also beneficial to improve the performance of downstream tasks such as machine translation and language identification.1 ## 1 Introduction With the increasing accessibility of the internet around the globe, from nomad settlements to megacities, we are living in a critical era that gives voice to different language communities to be represented online. Text as the essential material of many applications in natural language processing (NLP) may not be consistently created by under-represented communities due to the lack of a writing system and/or tradition, a widely-used orthography, and reasons related to regional and geopolitical conditions. Moreover, speakers of a language may opt for another script or prefer to use the script of an administratively dominant language other than the conventional one used in their native language (Gathercole and Thomas, 2009). This can be due to illiteracy in the native or home language, diglossia, or simply, the lack of adequate support for specific languages and scripts re- ْی ٛی ٙو ُا ِا َا 1The data and code are publicly available at https:// github.com/sinaahmadi/ScriptNormalization. ![0_image_0.png](0_image_0.png) sulting in remarkable negative effects on writing (Oriyama, 2011; Eisenchlas et al., 2013). Despite the fascinating progress in language technology in recent years, there are many fundamental tasks in NLP that are usually deemed trivial yet are ubiquitous, needed, and *still unsolved* in low-resource settings. One of these tasks is to deal with noisy text which affects the quality and performance of various applications such as text mining (Dey and Haque, 2009) and machine translation (Sperber et al., 2017). In practice, some steps are generally taken to deal with inconsistencies in the text, such as unifying character encoding and rectifying orthographic rules - a task broadly referred to as data cleaning or text preprocessing (Chu et al., 2016). However, unconventional and unsystematic writing such as using the script of a language to write in another language presents challenges beyond traditional data cleaning scenarios. In this vein, we address the task of script normalization which aims to normalize a text writؠ ٗو ٚے 14466 | Language | Unconventional script | Unconventional writing | Conventional writing | یته زوؤنٚ نؤم | | |--------------------------------------------|---------------------------------------|--------------------------------|------------------------|----------------|--------------------------------------| | هي | | | | | | | سه گه گيلکؤن اۊنٚ | | | | | | | جي گ | | | | | | | ب زنن | | | | | | | یته زون نم | | | | | | | هي | | | | | | | سه گه گيلکن اون | | | | | | | جی گ | | | | | | | ب زنن | | | | | | | | | | | | | | Persian | | | | | | | | | | | | | | Gilaki Kashmiri | Urdu | ۔"#$ & '%)(* +,"- ./ , .10"-2( | َکھ ُورٲ | | | | س | | | | | | | 4 | | | | | | | جانَور۔ | | | | | | | 3 | برٛور | | | | | | قایمقامێ ئامێدیێ بەرسڤا پارێزگارێ دهۆکێ دا | چھُ ا | | | | | | قايمقام اامدي بةرثوا پارزكار دهوك دا | | | | | | | | | | | | | | Arabic | | | | | | | | | | | | | | Kurmanji | هەر لەیەکەم شانۆوە دیارە فەهەدیاندەوێ | | | | | | ت | | | | | | | هةر لةيةكةم شانؤوة ديارة فهدياندةوي | | | | | | | ت | | | | | | | | | | | | | | Arabic | | | | | | | | | | | | | | Sorani | C D 5 | N =<PO $ Q ,$ | | | | | Sindhi | Urdu | 5 6ٔ 89 $:%;=< > '%?@",-A 5 B | 6EFHG I$JG G K-LAG ( M | R 5 6$ 8T 5 | مديني ڏانهن ھجرت وقت فقط ھيء گھرواري | | ساڻن گڏ ھئي | | | | | | | U:% | | | | | | ten in a script or orthography other than the one that is widely used in the language as the conventional one. For instance, writing Kashmiri or Sorani Kurdish in a script like that of Urdu or Persian, rather than in their own conventional scripts. Although this task has much in common with data cleaning, transliteration (Ahmadi, 2019), text normalization as defined by Sproat and Jaitly (2016) and spelling error correction (Hládek et al., 2020), it is particular in a few ways: unconventional writing does not necessarily comply with orthographic rules of the text's language; when writing borrowed or common words from the dominant language, there is an influence of the orthography of the donor language rather than writing in the recipient language's script or orthography; phonemes and graphemes may not be represented according to any specific rule, as in writing /S/ as 'ch', 'sh' or '$' in *Arabizi* (Al-Badrashiny et al., 2014) following an erratic or obscure pattern among speakers leading to a huge amount of noisy material. A few examples of unconventional writing in Kurdish, Kashmiri and Gilaki are shown in Table 1. In this paper, we focus on the script normalization of a few under-resourced languages that use variants of the Perso-Arabic script, as schematized in Figure 1, and are spoken as minority languages in bilingual communities with a dominant language using a similar but different script. These languages are Azeri Turkish (AZB), Mazanderani (MZN), Gilaki (GLK), Sorani Kurdish (CKB), Kurmanji Kurdish (KMR), Gorani (HAC), Kashmiri (KAS) and Sindhi (SND). Although these languages have their own customized scripts with more or less defined orthographies in their communities, they are oftentimes written in the script of the dominant language, notably Persian (FAS), Arabic (ARB) and Urdu (URD) scripts. Furthermore, these languages have been lexically and, to a lesser extent, typologically influenced by the administratively dominant languages. Akin to many other multilingual and pluricultural societies, speakers of these languages have faced language conflict and linguistic discrimination in different educational, political, cultural, and communicative domains, and struggle with ethnolinguistic vitality (Mohan, 1989; Shah, 1997; Bird, 2020; Sheyholislami, 2022). Appendix A presents a summary of the languages we study. تەمەن درێژ و لەش ساق بیت پیاگە سەربەرزەکە تمن دریژ و لش ساق بیت پیاگه سر برزکە Persian Contributions In this work, we aim to: 1. shed light on script normalization for underresourced languages with very limited progress in language technology to facilitate the identification and collection of relevant data in the future, 2. leverage synthetic data for script normalization by mapping scripts based on rules and sequence alignment, and 3. cast script normalization as a translation task where noisy text is "translated" into normalized one using synthetic data generated. We demonstrate that imposing different levels of noise on the synthetic data is beneficial to train more robust transformer-based models to normalize scripts and also, improve the performance of downstream tasks such as machine translation and language identification of unconventional writing. ## 2 Related Work Although some aspects of script normalization have been previously addressed in related NLP tasks, its definition is a rather subjective matter, where a set of intentional or unintentional anomalies are "normalized". Therefore, script normalization overlaps considerably with more well-defined tasks such as spelling correction, lexical normalization (van der Goot et al., 2021) where an utterance is transformed into its standard form, pattern recognition (Schenk et al., 2009; Maddouri et al., 2000), language identification and standardization (Partanen et al., 2019; Ahmadi, 2020c). Script and text normalization have been proven beneficial in various downstream applications such as dependency parsing (Zhang et al., 2013), sentiment analysis (Mandal and Nanmaran, 2018) and named-entity recognition (Baldwin and Li, 2015). Although in some contexts normalization has been used to refer to basic tasks such as stemming and lemmatization, as in (Toman et al., 2006), those are not within the scope of this paper. Text Normalization One of the most related tasks to script normalization is text normalization which broadly deals with alternative spellings, typos, abbreviations and non-canonical language and is of importance to text-to-speech systems and for handling micro-blogging data such as Tweets (Sproat and Jaitly, 2016). To this end, a wide range of techniques have been proposed using rules based on annotated corpora (Sigurðardóttir et al., 2021) or linguistic information (Xia et al., 2006), edit operations and recurrent neural networks (Chrupała, 2014), machine translation (Graliński et al., 2006), supervised learning (Yang and Eisenstein, 2013), encoder-decoders (Lusetti et al., 2018) and more recently, transformers (Zhang et al., 2019; Tan et al., 2020; Bucur et al., 2021). MoNoise (van der Goot and van Noord, 2017) is a prominent approach to text normalization where the problem is framed as a domain adaptation one and various steps are taken to generate and rank normalized candidates using spell checkers, word embeddings, dictionaries and n-grams features. Perso-Arabic Script Normalization As one of the important scripts adopted by languages spoken by over 600 million speakers (Doctor et al., 2022), the Perso-Arabic scripts are prevalent on the Web nowadays. Although script normalization in general and addressing ambiguities of writing systems, in particular, have been previously addressed in the related tasks for such languages, such as Arabic (Ayedh et al., 2016; Shaalan et al., 2019), Kashmiri (Lone et al., 2022a), Kurdish (Ahmadi, 2019) and Sindhi (Jamro, 2017), normalizing Perso-Arabic scripts has not received much attention, let alone for noisy data originated from unconventional writing in bilingual communities. In a recent study, Doctor et al. (2022) address the normalization of Perso-Arabic script for a number of languages, namely Urdu, Punjabi, Sindhi, Kashmiri, Sorani Kurdish, Uyghur and Azeri Turkish. Inspired by Johny et al. (2021)'s approach to using finite-state transducers (FSTs) to normalize Brahmic scripts, Gutkin et al. (2022) implement FSTs for Perso-Arabic scripts for Unicode normalization, visual normalization and reading normalization by focusing on normalization and unification of varieties based on regional orthographies rather than that of specific dominant scripts. Low-resource Setup Most under-resourced languages that require script normalization face the predicament of data paucity. On the other hand, data annotation is a tedious task that may not be always feasible for all languages. To tackle these, Dekker and van der Goot (2020) create synthetic data in which canonical words are replaced with non-canonical ones. Lusito et al. (2022) use a transformer-based model and employ modern data augmentation techniques for the endangered language of Ligurian; to deal with the scarcity of data, "back-normalization" is proposed where normalized text is transformed to a noisy one, analogous to back-translation (Sennrich et al., 2016). Similarly, many other studies rely on the synthetic generation of noisy data for tasks such as grammatical error correction (Foster and Andersen, 2009), creating noise-resistant word embedding (Doval et al., 2019; Malykh et al., 2018) and machine translation (Bogoychev and Sennrich, 2019). In comparison to the previous work, our work focuses on text anomalies caused by the usage of a different script in bilingual communities. In addition, we propose modeling the problem as a machine translation and generating synthetic data by script mapping. To the best of our knowledge, our approach to this problem has not been previously explored for the selected languages. ## 3 Methodology This section presents our methodology to collect data, create script mapping to generate synthetic data and implement a transformer-based model. Source and dominant in this context respectively refer to the language of the original text and that of the dominant one used unconventionally. ## 3.1 Data Collection As the first step, we collect data written in the conventional script of the selected languages. To that end, we create corpora based on Wikipedia dumps.2 Since Wikipedia is not available for Gorani, we use Ahmadi (2020b)'s corpus for Gorani. Unlike the Latin script of Kurmanji for which there are corpora and Wikipedia, such as Pewan (Esmaili et al., 2013), there is no corpus for Kurmanji written in its Perso-Arabic script. Instead of relying on unreliable transliteration tools to convert the existing Kurmanji data, we crawl data from mainly news websites in the Iraqi Kurdistan for Kurmanji using the Perso-Arabic script.3 It is worth mentioning that for Sorani Kurdish we use a large existing corpus (Ahmadi and Masoud, 2020) instead of the (smaller) Wikipedia dump. We clean raw text by removing hyperlinks, email addresses, dates, non-relevant symbols and zero-width non-joiner (ZWNJ), if not systematically used in the script. Different types of numerals, namely Eastern Arabic <٠١٢٣٤٥٦٧٨٩>, Farsi <۰۱۲۳۴۵۶۷۸۹> and Hindu-Arabic <0123456789>, are unified with the later ones for consistency. We also deal with script switching in some Wikipedia articles, particularly in Sindhi and Kashmiri, using regular expressions to only keep the Perso-Arabic data. Following this, we extract vocabularies from the corpora based on a frequency list; depending on the size and quality of the data, we select words appearing with a minimum frequency in the range of 3 to 10. In addition to the vocabulary extracted from corpora, we also collect word lists and bilingual dictionaries in the source and target languages from other sources online. We consulted Wiktionary4for Azeri Turkish, Kashmiri, Mazanderani, Sindhi and Sorani without finding any such resources for the other languages. Additionally, the lexicographical data provided by Ahmadi et al. (2019) were used for Sorani. ## 3.2 Script Mapping To simulate the process that leads to noisy data, we create script mappings that map characters in the conventional script of the source language to that of the dominant language. To do so, we define mapping rules between the scripts based on the orthographies of the languages, as in the compound characters <ئێ <in Kurdish (composed of <ئ) <U+0626) and <ێ) <U+06CE)) that appear so only at the beginning of a word and this can be mapped to either <ا) <U+0627) or the same character but with the diacritic *Kasrah* as <ِا<. In addition, we take the closest candidates in the other script into account according to Unicode normalization as in <ک) <U+06A9) and <ك) <U+0643), and visual normalization, i.e. resemblance of the graphemes as in <ڎ) <U+068E) and <ذ) <U+0630). Table 2 shows a few mapping rules. Azeri Turkish Persian چ چ | Source | Target | |----------|----------| | script | ذ / | | ض / | | | ظ / ز | | | ز | | | | | | Arabic | | | | | | Sorani | ے / | | ي / | | | ی | | | | | | ي | | | | | | Urdu | | | | | | Sindhi | | ا / اُ ٲ Urdu Kashmiri Table 2: An example of script mapping rules. In unconventional writing, we assume that a character in the source language can be mapped to one or more characters in the target script. '/' specifies different mapping possibilities. Using the rule-based script mappings, we also determine words in the word lists and bilingual dictionaries that are potential translations and written similarly in the two scripts. We also consider removing diacritics, also known as *Harakat*, as they are not always included in writing. The following words are collected this way: 'برنج') 'rice') in Kurmanji and Persian, 'سویدی') 'Swedish') written with <ی) <U+06CC) in Sorani and 'سويدي 'written with <ي) <U+064A) in Arabic, 'کیٖامریَ') 'American') in Kashmiri and 'امریکی 'in Urdu and, 'بۆرج' ('tower') in Azeri Turkish and 'برج 'in Persian by removing <ۆ) <U+06C6). As such, we refer to the set of words collected as spelling pairs. ## 3.3 Character-Alignment Matrix Although script mapping based on rules and modifications is effective, especially to retrieve common words or words borrowed by the two languages, it may lead to potentially false friends or incorrect spelling pairs as well. Hence, to capture information based on the spelling pairs, we rely on the character alignment of words. To this end, we employ Needleman and Wunsch (1970)'s algorithm for sequence alignment that maximizes the number of matches between sequences, i.e. words, a and b with respect to the length of the two sequences. Therefore, we define the alignment matrix D for each spelling pair by setting elements i in a and j in b according to the following: $$D_{i,j}=m a x\left\{\begin{array}{l l}{{D_{i-1,j-1}+d_{a_{i}b_{j}}}}\\ {{D_{i-1,j}+w}}\\ {{D_{i,j-1}+w}}\end{array}\right.$$ where Di,j is the score of character i in the sequence a and character j in the sequence b, daibj denotes the similarity score of the characters at i and j (1 if similar and -1 otherwise) and, w refers to the gap penalty which is set to -1. A gap penalty is a penalizing score for non-matching characters and is shown by '-' in our implementation. The matrix is initialized with D0,0 = 0. This algorithm is beneficial for our task as it considers the two sequences globally and allows back-tracing, hence useful to provide sequence matches. The following example shows the alignment of 'ڤیەتنام' ('Vietnam') in Sorani with the same word 'ویتنام', in Persian using this algorithm: $$\operatorname*{lim}_{r\to\infty}\left|\begin{array}{l}{{r}}\\ {{r}}\end{array}\right|\begin{array}{l}{{\phi}}\\ {{\phi}}\end{array}\right|\begin{array}{l}{{\phi}}\\ {{\phi}}\end{array}\right.$$ Finally, we merge all the alignment matrices, i.e. Ds, and create a character-alignment matrix for each pair of source and dominant languages. This matrix is normalized to have a unit norm. Furthermore, the mappings based on the rules of script mapping described in the previous subsection are appended to the matrix with a probability of 1. Alignments with a score < 0.1 are removed from the matrix due to the low probability of replacement. Depending on the score, a character can be aligned to more than one character in the dominant script. ## 3.4 Synthetic Data Generation Given that there is a limited amount of data available for the selected languages, and that identifying languages in noisy setups is a challenging task, we have to rely on synthetic data generation. To that end, we first extract sentences of minimal and maximal lengths of 5 and 20 tokens (space-delimited) from the corpora described in §3.1. Then, we replace characters in the extracted sentences (clean data) with those in the characteralignment matrix. The replacement is done randomly with uniform sampling from the character alignment matrices to increase diversity. Depending on the number of replacements, we also consider specific percentages of noise generation at 20%, 40%, 60%, 80% and 100%. We create a last dataset by merging all datasets with all levels of noise; this creates more noisy instances given the randomness of noise generation. The parallel data are then tokenized using regular expressions. The number of parallel sentences and words per noise scale is provided in Appendix C. ## 3.5 Implementation And Evaluation We use JoeyNMT (Kreutzer et al., 2019) to train transformer encoder-decoder models at the character level based on the degree of noise and language pairs. Using this model, our objective is to encode noisy data, i.e. synthetically noisy sentences, and decode normalized ones, i.e. the clean sentences in the collected datasets. We report the performance of the models by BLEU score (Papineni et al., 2002) and character n-gram F-score (chrF Popović, 2015), both calculated based on SacreBLEU (Post, 2018), along with sequence-level accuracy (Seq. Acc.), i.e. number of correct tokens in the hypothesis appearing in the same position as the reference divided by all tokens. Hyperparameter details are outlined in Appendix B. We evaluate the performance of the trained models based on the noise scale. As a naive baseline system, we calculate the evaluation metrics for the parallel data without applying any normalization technique. Finally, we evaluate the effectiveness of our models in two downstream tasks: language identification and machine translation. ## 4 Results And Analysis 4.1 Script Normalization As the first set of experiments, we evaluate the performance of the script normalization models on synthetic noisy data at all levels of noise. In Appendix, Table D.3 provides the results of all the models and compares them with the baseline, and examples of normalization by our models are provided in Table D.1. Furthermore, the performance of a few selected models is presented in Figure 2. Although our models perform competitively in comparison to the baseline, performance is not identical across the datasets. By increasing the noise level from 20% to 100% making the source data harder to be normalized, a gradual decrease in performance is expected, and we indeed observe this for both the baseline and our model, but for ![5_image_0.png](5_image_0.png) most datasets (7 out of 12) the degradation for the naive baseline is more rapid and pronounced. Our model does seem to handle various levels of noise: in Sindhi, for instance, we get 75.1 BLEU score vs. 19.7 of the baseline (see bottom right SND100 URD→SND in Figure 2). ![5_image_1.png](5_image_1.png) For five datasets, namely, AZBFAS→AZB, GLKFAS→GLK, MZNFAS→MZN, KASURD→KAS and HACCKB→HAC, the naive baseline outperforms our models. We believe that this is explained by the level of similarity of the source and dominant scripts, which in turn determines the difficulty of script normalization. To quantify our assumption, we define RA:B as the script ratio of scripts A and B, both as two sets of characters, as follows: $$R_{A:B}={\frac{A\cap B}{A\cup B}}\times{\frac{A\,{\@}\,B}{A\cap B}}$$ where A ⋒ B refers to the intersection of characters in scripts A and B which are mapped in the rule-based script mapping (see §3.2) without any other alternative in the other script while A ∩ B is the intersection of A and B regardless of the mapping. Intuitively speaking, the script ratio of two identical scripts should be closer to 1 while more different scripts with various mappings between characters should get a lower value. Table A.1 provides the script ratios. Figure 3 shows the BLEU score (left y-axis) of the baseline and our model to normalize the datasets containing 100% noise, e.g. GLK100 FAS →GLK along with the script ratio for each language (right y-axis). This indicates that the normalization model (model100) performs better where the script ratio is relatively low (<0.6, i.e. the two scripts are not that similar). On the other hand, the baseline performs better for higher script ratios, because in general the two scripts are very similar and hence there is less "noise". We leave for future work an exploration as to why our transformer models fail to even simply learn to copy their input to perform at least on par with the baseline. Real Data Evaluation Given the scarcity of real data, we resorted to generating synthetic data for training models and consequently evaluating them. However, working with real data is also crucial to evaluate the effectiveness of our approach. As such, we collected 100 sentences from social media in Sorani Kurdish written in unconventional scripts of Persian and Arabic. These sentences are then manually normalized by native and expert speakers based on the standard orthography of Kurdish. | Original | Normalized | | | | | | | | | | | |------------|------------------|------|------|------|---------|-------|------|-----|---------|-----|-----| | Sorani | (Unconventional) | | | | | | | | | | | | Eval Set | BLEU | chrF | BLEU | chrF | Noise % | Model | P@1 | R@1 | F@1 P@2 | R@2 | F@2 | | 0 | ours | 0.52 | 0.54 | 0.51 | 0.32 | 0.64 | 0.32 | | | | | | fastText | 0.69 | 0.69 | 0.69 | 0.39 | 0.78 | 0.39 | | | | | | | 20 | ours | 0.93 | 0.93 | 0.93 | 0.48 | 0.97 | 0.48 | | | | | | fastText | 0.71 | 0.71 | 0.71 | 0.44 | 0.89 | 0.44 | | | | | | | 40 | ours | 0.91 | 0.9 | 0.91 | 0.48 | 0.96 | 0.48 | | | | | | fastText | 0.56 | 0.56 | 0.56 | 0.38 | 0.76 | 0.38 | | | | | | | 60 | ours | 0.9 | 0.9 | 0.9 | 0.48 | 0.96 | 0.48 | | | | | | fastText | 0.41 | 0.41 | 0.41 | 0.31 | 0.63 | 0.31 | | | | | | | 80 | ours | 0.9 | 0.9 | 0.9 | 0.48 | 0.96 | 0.48 | | | | | | fastText | 0.37 | 0.37 | 0.37 | 0.28 | 0.57 | 0.28 | | | | | | | 100 | ours | 0.89 | 0.89 | 0.89 | 0.48 | 0.96 | 0.48 | | | | | | fastText | 0.37 | 0.37 | 0.37 | 0.28 | 0.57 | 0.28 | | | | | | | All | ours | 0.91 | 0.91 | 0.91 | 0.48 | 0.96 | 0.48 | | | | | | fastText | 0.48 | 0.48 | 0.48 | 0.34 | 0.68 | 0.34 | | | | | | | Merged | ours | 0.72 | 0.72 | 0.72 | 0.4 | 0.8 | 0.4 | | | | | | fastText | 0.69 | 0.69 | 0.69 | 0.4 | 0.79 | 0.4 | | | | | | | CKBFAS→CKB | 1.2 | 38.4 | 20.1 | 69.6 | | | | | | | | | CKBARB→CKB | 0.4 | 19.4 | 12.7 | 65.2 | | | | | | | | | Table 3: Experiments on normalization of real-world data. The source sentences in Sorani Kurdish are written in the unconventional scripts of Persian (CKBFAS) and Arabic (CKBARB). Even in this challenging setting (note how different the unconventional sentences are, as evidenced by low scores in the left column), model100 manages to decently normalize them. | | | | | | | | | | | | The results of the small-scale evaluation on the real data are provided in Table 3. In these datasets, calculating BLEU scores of the source sentences (noisy) with respect to the reference ones (clean) for CKBFAS→CKB and CKBARB→CKB gets to 1.2 and 0.4 points, respectively. Once the source sentences are normalized using model100, the corresponding BLEU scores increase to 20.1 for CKBFAS→CKB and 12.7 for CKBARB→CKB. We selected this model as it has been trained on the most diverse training set. We believe that such a boost in BLEU scores indicates the robustness of our models to effectively normalize unconventional writing. ## 4.2 Language Identification Language identification is the task of detecting the language of a text, usually a sentence. It is modeled as a probabilistic classification problem. As the first downstream task, we carry out a few experiments on language identification in three setups: 1. CLEAN: identifying languages without injecting any noise in the datasets, i.e. the target sentences in the parallel data. This is equivalent to 0% of noise in the data. 2. NOISY: identifying languages with noisy data at various levels, starting from 20% of noise and gradually increasing 20% up to 100%. We combine all data with all levels of noise in a separate dataset referred to as ALL. 3. MERGED: merging CLEAN with ALL dataset, i.e. with all noisy data. We use the Tatoeba sentence dataset5for data in Persian, Urdu and Arabic, with additional data for Urdu from the TED corpus on Opus (Tiedemann, 5https://tatoeba.org 2012). To tackle data imbalance, we downsampled all the datasets to only include 6000 sentences for each language6. In the MERGED setup where there are 12,000 sentences per language (half noisy, half clean), additional sentences (clean) in Persian, Urdu and Arabic are added to avoid data imbalance. Finally, we then split datasets into train and test sets with an 80-20% split. To train supervised language identification models, we use fastText (Bojanowski et al., 2017) with subword features with minimum and maximum character ngram sizes of 2 and 4, word vectors of size 16 and hierarchical softmax as the loss function. Table 4 presents the results of the performance of our models in comparison to fastText's language identification model trained on data from Wikipedia, Tatoeba and SETimes on 176 languages,7including, Persian, Arabic, Urdu, Sindhi, Sorani and Mazanderani. Although Azeri Turkish is supported, it is not clear which script it is trained on in fasText. The results are reported based on precision, recall and F1 score of the first and second detection of the models, respectively denoted by '@1' and '@2'. Since Gorani and Gilaki are not among the Fairseq-supported languages, we focus our analysis on the top-two predictions (@2) to en6Kashmiri had only 4700 instances. 7As of January 2023. sure fairness against the baseline. Our models outperform fastText in noisy conditions, regardless of the level of noise. The baseline performance degrades faster as the level of noise increases. Our models perform less effectively on clean data but recall that they are trained on a substantially smaller amount of data.8 ![7_image_0.png](7_image_0.png) Figure 4 illustrates the results of classification in our MERGED model where the detected languages (x-axis) are compared with the references (y-axis). Although the model performs well in detecting Arabic, Persian, Sindhi, Urdu and Kashmiri, the other languages are often confused. The misclassified languages can be categorized into two groups of [Azeri, Gilaki, Mazanderani] and [Sorani, Kurmanji, Gorani]. This can be explained by the similarity of scripts. Surprisingly, Sindhi and Kashmiri represent a less salient overlap in classification. ## 4.3 Neural Machine Translation Furthermore, we evaluate the performance of the script normalization models in neural machine translation (NMT). To do so, we use the devtest data of the FLORES 200 dataset (Costa-jussà et al., 2022) that contains 1012 parallel sentences for 204 languages including Kashmiri, Sorani and Sindhi in their Perso-Arabic scripts along with English. With English as the target language and the other available languages as sources, we carry out the evaluation in the following three setups: 1. CLEAN: translating the devtest of the source languages as it is in the FLORES dataset (with no noise), e.g. CKB→ENG for translating Sorani (CKB) to English (FAS). This is a skyline setting (best possible). 2. NOISY: translating the devtest by synthetically applying a certain amount of noise. For this setting, we specify the maximum amount of "interference" with the dominant language's script (100% following the previous notation). The noise generation is random similar to language identification (§3.4). This is the baseline setting (worst possible, no normalization). 3. NORMALIZED: we normalize the data from the NOISY setting with our text normalization models (§4.1), and then translate the normalized text to English. We use Facebook's No Language Left Behind (NLLB) model on HuggingFace9 which is trained on parallel multilingual data from a variety of sources to translate into English and evaluate using SacreBLEU (Post, 2018). Table D.2 presents results of the translation quality of the NLLB model in our intended setups, with a sample in Table 5. Except for KASURD→ENG, the translation quality of unnormalized data (S) deteriorates as noise levels increase while it remains steady when using normalization models as a preprocessing step (H). In Sorani, for example, even with a 100% of noise (CKB100 ARB), our normalization approach (H) recovers almost 20 BLEU points out of the 26 points that were lost due to the non-conventional setting. The importance of a trustworthy text normalization system is clear in the case of Kashmiri, where our normalization model fails to reduce noise, resulting in subpar performance. From a qualitative point of view, the noisy input affects the quality of translations depending on the type of noise. Although the NLLB model shows resilience to certain types of errors, particularly where a character is substituted by an incorrect one without an impact on word boundaries, i.e. no spaces being added or removed, noisy characters that can possibly affect tokenization lead to incorrect translations. For instance in Table D.4, the translation of noisy sentences in Sindhi is less different from that of the reference, while the Sorani noisy sentence is translated terribly differently and 9The nllb-200-distilled-600M variant. | Language | Noise % | Test set | BLEU | chrF | |-----------------|-----------|------------|--------|--------| | 0 | R | 30.81 | 55.96 | | | SND100 URD →ENG | 100 | S | 7.09 | 27.59 | | H | 23.97 | 49.47 | | | | 0 | R | 30.81 | 55.96 | | | CKB100 ARB →ENG | 100 | S | 4.72 | 24.33 | | H | 24.53 | 50.55 | | | | 0 | R | 29.69 | 45.79 | | | CKB100 FAS →ENG | 100 | S | 17.29 | 25.39 | | H | 26.27 | 46.7 | | | | 0 | R | 28.40 | 55.71 | | | KAS100 URD →ENG | 100 | S | 27.93 | 55.13 | | H | 11.39 | 35.97 | | | incorrectly. Needless to say, the meaning of some words varies with incorrect characters. ## 5 Conclusion And Discussion This paper discusses the challenge of script normalization of unconventional writing of languages that are spoken in bilingual communities. Under the influence of the dominant language of such communities, speakers tend to use scripts or orthographies of the dominant language rather than the one that is conventionally used in their native language. This leads to noisy data that hinder NLP applications and reduce data availability. Framing the problem as a machine translation one where noisy data is 'translated' into clean one, we address the task of script normalization for the Perso-Arabic scripts of several languages, namely Azeri Turkish, Mazanderani, Gilaki, Sorani and Kurmanji Kurdish, Gorani, Kashmiri and Sindhi, with dominant languages being Arabic, Persian and Urdu. Given that these languages are lessresourced, we rely on synthetic data to create parallel datasets by injecting noise based on script mapping. We then train transformer-based encoderdecoders and show that the models can tackle the problem effectively, even for real data, but differently based on the level of noise. We demonstrate that script normalization can be effectively addressed, particularly where there are many dissimilarities between the source and dominant languages. It can also alleviate the impact of noisy data on downstream tasks, namely language identification and machine translation. In addition to script normalization that can help retrieve texts for the selected under-resourced languages, the trained models along with the collected data, with or without noise, can pave the way for further developments for those languages. Limitations One of the limitations of the current study is the lack of annotated data for all languages. This is also the case of machine translation for which data could only be found for Kashmiri, Sorani and Sindhi, while other languages do not have much parallel data yet. On the other hand, the notion of noisy data is limited to the replacement of the missing characters in a script when compared to another one, i.e. that of the dominant language. As an ablation study, injecting other types of noise, beyond those discussed in this paper, may improve the performance of the models to tackle not only script normalization but several related tasks such as spelling error correction and may also increase the robustness of the models for morphologically rich languages or languages with versatile word boundaries using ZWNJ. Although we did our best to filter out code-switched data in the corpora, our datasets may contain data in other languages (in Perso-Arabic scripts). In future work, we would like to apply our approach to other scripts and languages in bilingual communities. We also suggest evaluating the impact of script normalization on more downstream tasks, especially transliteration and tokenization. ## Ethics Statement All corpora and datasets used in this study are publicly available, ensuring compliance with data privacy regulations. Although we did our best to remove any personally identifiable information and preserve the privacy and anonymity of individuals, it is possible that some of the selected corpora contain sensible or offensive information. Filtering such content is challenging given that all the targeted languages are low-resourced and lack proper NLP tools for this purpose. We believe that it is unlikely that the normalization models cause benefits or harm to individuals. Regarding the annotation of the data (§4.1), annotators were fairly compensated for their time and effort. By upholding these ethical principles, we aimed to conduct the study in a responsible and conscientious manner. ## Acknowledgments This work is supported by the National Science Foundation under DLI-DEL award BCS-2109578. The authors are also grateful to the anonymous reviewers, as well as the Office of Research Computing at GMU, where all computational experiments were conducted. ## References Sina Ahmadi. 2019. A rule-based Kurdish text transliteration system. *ACM Transactions on Asian* and Low-Resource Language Information Processing (TALLIP), 18(2):1–8. Sina Ahmadi. 2020a. A Tokenization System for the Kurdish Language. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 114–127. Sina Ahmadi. 2020b. Building a corpus for the Zaza– gorani language family. In *Proceedings of the* 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 70–78, Barcelona, Spain (Online). International Committee on Computational Linguistics (ICCL). Sina Ahmadi. 2020c. KLPT–Kurdish Language Processing Toolkit. In Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS), pages 72–84. Sina Ahmadi, Milind Agarwal, and Antonios Anastasopoulos. 2023. PALI: A language identification benchmark for Perso-Arabic scripts. In Tenth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2023), pages 78–90, Dubrovnik, Croatia. Association for Computational Linguistics. Sina Ahmadi, Hossein Hassani, and John P McCrae. 2019. Towards electronic lexicography for the Kurdish language. In Proceedings of the sixth biennial conference on electronic lexicography (eLex). eLex 2019. Sina Ahmadi and Maraim Masoud. 2020. Towards Machine Translation for the Kurdish Language. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 87–98. Mohamed Al-Badrashiny, Ramy Eskander, Nizar Habash, and Owen Rambow. 2014. Automatic transliteration of Romanized dialectal Arabic. In Proceedings of the eighteenth conference on computational natural language learning, pages 30–38. Abdullah Ayedh, Guanzheng Tan, Khaled Alwesabi, and Hamdi Rajeh. 2016. The effect of preprocessing on arabic document categorization. *Algorithms*, 9(2):27. Tyler Baldwin and Yunyao Li. 2015. An in-depth analysis of the effect of text normalization in social media. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 420–429, Denver, Colorado. Association for Computational Linguistics. Steven Bird. 2020. Decolonising speech and language technology. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 3504–3519. Nikolay Bogoychev and Rico Sennrich. 2019. Domain, translationese and noise in synthetic data for neural machine translation. *CoRR*, abs/1911.03362. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. *Transactions of the Association for Computational Linguistics*, 5:135–146. Habib Borjian. 2008. Two Mazandarani texts from the nineteenth century. *Studia Iranica*, 37:7–49. Maryam Borjian and Habib Borjian. 2023. At the Crossroads: Caspian Languages through a Sociolinguistic Lens. Iranian and Minority Languages at Home and in Diaspora, page 9. Ana-Maria Bucur, Adrian Cosma, and Liviu P Dinu. 2021. Sequence-to-sequence lexical normalization with multilingual transformers. *arXiv preprint* arXiv:2110.02869. Grzegorz Chrupała. 2014. Normalizing tweets with edit scripts and recurrent neural embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 680–686, Baltimore, Maryland. Association for Computational Linguistics. Xu Chu, Ihab F. Ilyas, Sanjay Krishnan, and Jiannan Wang. 2016. Data cleaning: Overview and emerging challenges. In Proceedings of the 2016 International Conference on Management of Data, SIGMOD Conference 2016, San Francisco, CA, USA, June 26 - July 01, 2016, pages 2201–2206. ACM. Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. *arXiv preprint* arXiv:2207.04672. Maya Khemlani David, Mumtaz Ali, and Gul Muhammad Baloch. 2017. Language shift or maintenance: The case of the Sindhi language in Pakistan. *Language Problems and Language Planning*, 41(1):26– 45. Kelly Dekker and Rob van der Goot. 2020. Synthetic data for English lexical normalization: How close can we get to manually annotated data? In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 6300–6309, Marseille, France. European Language Resources Association. Lipika Dey and S. K. Mirajul Haque. 2009. Studying the effects of noisy text on text mining applications. In Proceedings of the Third Workshop on Analytics for Noisy Unstructured Text Data, AND 2009, Barcelona, Spain, July 23-24, 2009 (in conjunction with ICDAR 2009), pages 107–114. ACM. Raiomond Doctor, Alexander Gutkin, Cibu Johny, Brian Roark, and Richard Sproat. 2022. Graphemic Normalization of the Perso-Arabic Script. *CoRR*, abs/2210.12273. Ehsan Doostmohammadi, Minoo Nassajian, and Adel Rahimi. 2020. Joint persian word segmentation correction and zero-width non-joiner recognition using BERT. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 4612–4618. International Committee on Computational Linguistics. Yerai Doval, Jesús Vilares, and Carlos GómezRodríguez. 2019. Towards robust word embeddings for noisy texts. *CoRR*, abs/1911.10876. Susana A Eisenchlas, Andrea C Schalley, and Diana Guillemin. 2013. The importance of literacy in the home language: The view from australia. Sage Open, 3(4):2158244013507270. Kyumars Sheykh Esmaili, Donya Eliassi, Shahin Salavati, Purya Aliabadi, Asrin Mohammadi, Somayeh Yosefi, and Shownem Hakimi. 2013. Building a test collection for Sorani Kurdish. In 2013 ACS International Conference on Computer Systems and Applications (AICCSA), pages 1–7. IEEE. Jennifer Foster and Øistein E. Andersen. 2009. Generrate: Generating errors for use in grammatical error detection. In *Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications*, EdAppsNLP '09, page 82–90, USA. Association for Computational Linguistics. Virginia C Mueller Gathercole and Enlli Môn Thomas. 2009. Bilingual first-language development: Dominant language takeover, threatened minority language take-up. *Bilingualism: language and cognition*, 12(2):213–237. Masood Ghayoomi and Saeedeh Momtazi. 2009. Challenges in developing Persian corpora from online resources. In *2009 International Conference on Asian* Language Processing, pages 108–113. IEEE. Filip Graliński, Krzysztof Jassem, Agnieszka Wagner, and Mikołaj Wypych. 2006. Text normalization as a special case of machine translation. In *Proceedings* of the International Multiconference on Computer Science and Information Technology, Wisła, Poland, pages 51–56. Alexander Gutkin, Cibu Johny, Raiomond Doctor, Brian Roark, and Richard Sproat. 2022. Beyond Arabic: Software for Perso-Arabic Script Manipulation. In *Proceedings of the 2022 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 381–389. Daniel Hládek, Ján Staš, and Matúš Pleva. 2020. Survey of automatic spelling correction. *Electronics*, 9(10). Wazir Ali Jamro. 2017. Sindhi language processing: A survey. In *2017 International Conference on Innovations in Electrical Engineering and Computational* Technologies (ICIEECT), pages 1–8. IEEE. Cibu Johny, Lawrence Wolf-Sonkin, Alexander Gutkin, and Brian Roark. 2021. Finite-state script normalization and processing utilities: The Nisaba Brahmic library. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, EACL 2021, Online, April 19-23, 2021, pages 14–23. Association for Computational Linguistics. Siavash Khoshrouz. 2021. Acquisition of the English vowel system by Farsi and Gilaki speakers. Master's thesis, UiT Norges arktiske universitet. Julia Kreutzer, Jasmijn Bastings, and Stefan Riezler. 2019. Joey NMT: A Minimalist NMT Toolkit for Novices. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 109–114. Ron Lockwood. 2011. Machine Parsing of Gilaki Verbs with Fieldworks Language Explorer. *SIL International*. Nawaz Ali Lone, Kaiser J Giri, and Rumaan Bashir. 2022a. Issues in Machine Translation—A Case Study of the Kashmiri Language. In *Machine Intelligence and Data Science Applications*, pages 117– 123. Springer. Nawaz Ali Lone, Kaiser J Giri, and Rumaan Bashir. 2022b. Natural Language Processing Resources for the Kashmiri Language. Indian Journal of Science and Technology, 15(43):2275–2281. Massimo Lusetti, Tatyana Ruzsics, Anne Göhring, Tanja Samardžić, and Elisabeth Stark. 2018. Encoder-decoder methods for text normalization. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 18–28, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Stefano Lusito, Edoardo Ferrante, and Jean Maillard. 2022. Text normalization for endangered languages: the case of Ligurian. *CoRR*, abs/2206.07861. Samia Maddouri, Hamid Amiri, and Abdel Belaïd. 2000. Local Normalization Towards Global Recognition of Arabic Handwritten Script. In *4th International Workshop on Document Analysis Systems* - DAS'2000, page 13 p, Rio de Janeiro, Brésil. Colloque avec actes et comité de lecture. internationale. Valentin Malykh, Varvara Logacheva, and Taras Khakhulin. 2018. Robust word vectors: Contextinformed embeddings for noisy texts. In *Proceedings of the 2018 EMNLP Workshop W-NUT: The* 4th Workshop on Noisy User-generated Text, pages 54–63, Brussels, Belgium. Association for Computational Linguistics. Soumil Mandal and Karthick Nanmaran. 2018. Normalization of transliterated words in code-mixed data using Seq2Seq model & Levenshtein distance. arXiv preprint arXiv:1805.08701. Seyyed-Abdolhamid Mirhosseini. 2015. Loving but not living the vernacular: A glimpse of MazandaraniFarsi linguistic culture in northern Iran. *Language* Problems and Language Planning, 39(2):154–170. Rakesh Mohan. 1989. *Language planning and language conflict: the case of Kashmiri*. Walter de Gruyter, Berlin/New York Berlin, New York. Saul B Needleman and Christian D Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. *Journal of molecular biology*, 48(3):443–453. Kaya Oriyama. 2011. The effects of the sociocultural context on heritage language literacy: Japanese– English bilingual children in Sydney. International Journal of Bilingual Education and Bilingualism, 14(6):653–681. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Niko Partanen, Mika Hämäläinen, and Khalid Alnajjar. 2019. Dialect text normalization to normative standard Finnish. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 141–146, Hong Kong, China. Association for Computational Linguistics. Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In *Proceedings of the* Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Vera Sergeevna Rastorgueva, Aza Alimovna Kerimova, Akhmed Kerimovich Mamedzade, LA Pireiko, Džoy I Edel'man, and Ronald M Lockwood. 2012. The Gilaki Language. Acta Universitatis Upsaliensis. Zobia Rehman, Waqas Anwar, and Usama Ijaz Bajwa. 2011. Challenges in Urdu text tokenization and sentence boundary disambiguation. In Proceedings of the 2nd workshop on south southeast asian natural language processing (WSSANLP), pages 40–45. Saeed Rezaei, Ashkan Latifi, and Arash Nematzadeh. 2017. Attitude towards Azeri language in Iran: a large-scale survey research. Journal of Multilingual and Multicultural Development, 38(10):931–941. Joachim Schenk, Johannes Lenz, and Gerhard Rigoll. 2009. Novel script line identification method for script normalization and feature extraction in on-line handwritten whiteboard note recognition. Pattern Recognition, 42(12):3383–3393. New Frontiers in Handwriting Recognition. A. Sedighi. 2023. Iranian and Minority Languages at Home and in Diaspora. The Companions of Iranian Languages and Linguistics [CILL]. De Gruyter. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Khaled Shaalan, Sanjeera Siddiqui, Manar Alkhatib, and Azza Abdel Monem. 2019. Challenges in Arabic natural language processing. In Computational linguistics, speech and image processing for arabic language, pages 59–83. World Scientific. Sayed Mehtab Ali Shah. 1997. Ethnic tensions in Sindh and their possible solution. Contemporary South Asia, 6(3):259–272. Jaffer Sheyholislami. 2022. Linguistic Human Rights in Kurdistan. *The Handbook of Linguistic Human* Rights, pages 357–371. Helga Svala Sigurðardóttir, Anna Björk Nikulásdóttir, and Jón Guðnason. 2021. Creating data in icelandic for text normalization. In Proceedings of the 23rd Nordic Conference on Computational Linguistics, NoDaLiDa 2021, Reykjavik, Iceland (Online), May 31 - June 2, 2021, pages 404–412. Linköping University Electronic Press, Sweden. Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward robust neural machine translation for noisy input sequences. In *Proceedings of the 14th International Conference on Spoken Language Translation, IWSLT 2017, Tokyo, Japan, December 14-15,* 2017, pages 90–96. International Workshop on Spoken Language Translation. Richard Sproat and Navdeep Jaitly. 2016. RNN approaches to text normalization: A challenge. *CoRR*, abs/1611.00068. Fei Tan, Yifan Hu, Changwei Hu, Keqian Li, and Kevin Yen. 2020. TNT: Text normalization based pretraining of transformers for content moderation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4735–4741, Online. Association for Computational Linguistics. Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012, Istanbul, Turkey, May 2325, 2012, pages 2214–2218. European Language Resources Association (ELRA). Michal Toman, Roman Tesar, and Karel Jezek. 2006. Influence of word normalization on text classification. *Proceedings of InSciT*, 4:354–358. Rob van der Goot, Alan Ramponi, Arkaitz Zubiaga, Barbara Plank, Benjamin Muller, Iñaki San Vicente Roncal, Nikola Ljubešic, Özlem Çetinoğlu, Rahmad Mahendra, Talha Çolakoglu, et al. 2021. MultiLexNorm: A shared task on multilingual lexical normalization. In *Seventh Workshop on Noisy Usergenerated Text (W-NUT 2021)*, pages 493–509. Association for Computational Linguistics. Rob van der Goot and Gertjan van Noord. 2017. MoNoise: Modeling noise using a modular normalization system. *arXiv preprint arXiv:1710.03476*. Yunqing Xia, Kam-Fai Wong, and Wenjie Li. 2006. A phonetic-based approach to Chinese chat text normalization. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 993–1000. Yi Yang and Jacob Eisenstein. 2013. A log-linear model for unsupervised text normalization. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 61–72, Seattle, Washington, USA. Association for Computational Linguistics. Congle Zhang, Tyler Baldwin, Howard Ho, Benny Kimelfeld, and Yunyao Li. 2013. Adaptive parsercentric text normalization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1159–1168. Hao Zhang, Richard Sproat, Axel H Ng, Felix Stahlberg, Xiaochang Peng, Kyle Gorman, and Brian Roark. 2019. Neural models of text normalization for speech applications. *Computational Linguistics*, 45(2):293–337. ## A Selected Languages In this work, we focus on script normalization for a few languages that are spoken in bilingual communities and use a Perso-Arabic script. Even though the Perso-Arabic script is not limited to our selected languages, we did not include languages that are spoken in countries where the administratively dominant language uses another script, such as Uyghur and Malay, or those that have historically used a Perso-Arabic script that is now obsolete, like Dogri, Turkish and Tajik Persian. This said, there are other languages that would fit into our study but due to lack of data could not be included, such as Luri (LDD), Balochi, Shina, or Burushaski. Azeri Turkish also known as Azerbaijani or Azari, is a Turkic language mainly spoken in Azerbaijan, Iranian Azerbaijan and broadly the Caucasus by 20 million speakers (Rezaei et al., 2017). The two varieties of Azeri Turkish, Northern Azeri Turkish and Southern Azeri Turkish respectively use the Latin and the Perso-Arabic scripts. In this study, we focus on the latter variety. The PersoArabic script of Azeri Turkish is similar to that of Persian, with additional graphemes such as <ؽ< (U+063D) and <ۇ) <U+06C7). Mazanderani also known as Mazandarani or Tabari (Borjian and Borjian, 2023), is an IndoAryan language spoken in regions adjoining the Caspian Sea in Iran, chiefly in Mazandaran Province, by 2.5-3 million speakers (Mirhosseini, 2015, p. 157). The Perso-Arabic script adopted for Mazandarani is almost identical to the one used for Persian, with the additional diacritic <ˇ> (U+02C7). Gilaki is an Indo-Aryan language, similar to Mazanderani spoken in regions adjoining the Caspian Sea in Iran by over 2 million speakers (Rastorgueva et al., 2012; Khoshrouz, 2021). Although Mazanderani and Gilaki have been historically written using the Persian script (Borjian, 2008) without a recognized orthography, there have been recently many efforts, particularly on Wikipedia10, to adopt a Latin-based script. This said, the Latin script did not survive and the PersoArabic scripts are used for both Gilaki and Mazanderani (Sedighi, 2023, p. 20). The Perso-Arabic script adopted for Gilaki has graphemes in addition 10https://www.wikipedia.org to those in Persian, such as <ۋ) <U+06CB) and <ۊ< (U+06CA). Kurdish is an Indo-Aryan language spoken by over 25 million speakers in Iraq, Iran, Turkey and Syria, in varieties classified as Northern Kurdish or Kurmanji, Central Kurdish or Sorani, and Southern Kurdish (Ahmadi and Masoud, 2020). Kurdish has historically employed various scripts for writing, such as Cyrillic, Armenian, Latin and PersoArabic. Among these, the two latter scripts are still widely used. While the Latin script is more popular for writing Kurmanji spoken in Turkey, speakers of other varieties prefer the Perso-Arabic script of Kurdish, mainly due to the widespread usage of those scripts by the regional administrations. This is particularly the case of Sorani and Kurmanji spoken in Iraq and to some extent, in Syria. The Kurmanji variant spoken in Kurdistan of Iraq is called Behdini (also spelled Badini). Kurdish is a phonemic alphabet which is known by its distinct characters such as <ۆ< (U+06C6) and <ێ) <U+06CE), and absence of varying phonetically-related graphemes of Arabic such as <ض) <U+0636) and <ظ) <U+0638) even for loanwords (Ahmadi, 2020a). It is worth noting that the Perso-Arabic script that is used for both Sorani and Kurmanji follow the same orthographies. However, this is less established for Kurmanji given the popularity of the Latin script for that dialect. Throughout the paper, we consider Kurmanji and Sorani differently due to the considerable lexical and morpho-syntactic differences. It is worth noting that Gorani speakers oftentimes use Sorani script without the additional characters of Gorani making Sorani a dominant language. Gorani also known as Hawrami, is an IndoAryan language spoken by 300,000 speakers in the parts of the Iranian of Iraqi Kurdistan (Ahmadi, 2020b). Due to the mutual linguistic and cultural influences of Sorani and Gorani, Gorani speakers rely on the Perso-Arabic script and the orthography used for Kurdish. However, there are a few additional graphemes that are unique to Gorani and cannot be found in Kurdish, such as <ڎ) <U+068E) and <ۋ) <U+06CB). Kashmiri also known as Koshur, is an IndoAryan language spoken by over 7 million speakers in the disputed territories administered by three countries: India, Pakistan, and China (Lone et al., 2022b). Although Kashmiri's Perso-Arabic script relies much on that of Urdu, the extensive use of diacritics and distinct characters such as <ۄ< (U+06C4) and <ؠ) <U+0620) make the adopted script discernible. Moreover, Kashmiri extensively uses diacritics to specify vowels, while diacritization is less frequent in everyday writing of the other languages, except for disambiguation. Sindhi is an Indo-Aryan language primarily spoken in Pakistan and parts of India by over 20 million speakers (David et al., 2017). Although influenced by Urdu, Sindhi uses a more diverse set of letters (62 vs. 40 in Urdu) (Doctor et al., 2022). It can be distinguished by unique letters such as <ڏ< (U+068F) and <ڻ) <U+06BB). | Language | Script ratio | |------------|----------------| | MZNFAS | 0.976 | | AZBFAS | 0.909 | | GLKFAS | 0.909 | | KASURD | 0.87 | | HACCKB | 0.857 | | SNDURD | 0.582 | | CKBFAS | 0.35 | | KMRFAS | 0.35 | | HACFAS | 0.318 | | CKBARB | 0.254 | | KMRARB | 0.254 | | HACARB | 0.232 | Table A.1: Script ratios as defined in §4.1 Similar to Persian and Urdu, Azeri Turkish, Mazanderani and Gilaki use the zero-width nonjoiner (ZWNJ, U+200C) character (Lockwood, 2011) which creates lexical variations and adds to the complexity of downstream tasks like tokenization (Ghayoomi and Momtazi, 2009; Rehman et al., 2011; Doostmohammadi et al., 2020). Furthermore, the usage of some graphemes such as <ض) <U+0636) and <ظ) <U+0638) in the adopted scripts is mainly limited to loanwords from Arabic and Persian, unless conventionally used to represent a specific phoneme in the language. Moreover, the final glyph <ھ) <U+06BE) is used in all languages except Persian, Mazanderani, Azeri Turkish and Gilaki. While the Perso-Arabic script of Kurdish, Gorani and Kashmiri are alphabets, the other selected languages are impure Abjads. Similarly, all the selected languages use Naskh style for writing, unlike Urdu that uses Nastaliq. A summary of various aspects of the selected languages is provided in Table B.1. We also provide the script ratio (as defined in §4.1) of the script of the selected languages and that of the dominant one in Table A.1. ## B Models Details We train the normalization models in different configuration of hyper-parameters as follows: - Training: - level: character - maximum number of characters: 100 - character minimal frequency: 5 - optimizer: adam - learning rate: 0.001 - learning rate min: 0.0002 - patience: 5 - number of layers: 6 - number of heads: 8 - embedding dimension: 128 - hidden size: 128 - position-wise feed-forward layer size: 512 - Testing: - number of best prediction: 1 - beam size: 4 - beam alpha: 1.0 - max output length: 100 - batch size: 10 In addition to these common hyper-parameters, we set different hyper-parameters depending on the size of the datasets. The approximate overall number of pairs in all noisy datasets are provided in parentheses. - (<10k pairs) KASURD, HACCKB epochs: 100 / dropout: 0.2 / batch size: 64 - (<50k pairs) KMRARB, HACARB, KMRFAS, HACFAS, GLKFAS, MZNFAS epochs: 80 / dropout: 0.2 / batch size: 64 - (<600k pairs) SNDURD, AZBFAS epochs: 40 / dropout: 0.1 / batch size: 64 - (<6.6M pairs) CKBARB, CKBFAS epochs: 6 / dropout: 0.1 / batch size: 10 For each configuration (72 in total), the training was carried out on a cluster using one GPU per configuration with 32GB memory. ## C Datasets The number of words and sentence pairs in the synthetic datasets generated for the selected languages is provided in Table C.1. | Language | 639-3 | WP | script type | diacritics ZWNJ | Dominant | | |-----------------|---------|-------|---------------|-------------------|------------|-------------------------| | azb | azb | Abjad | ✓ | ✓ | Persian | | | Kashmiri | kas | ks | Alphabet | ✓ | ✗ | Urdu | | Gilaki | glk | glk | Abjad | ✓ | ✓ | Persian | | Gorani | hac | - | Alphabet | ✗ | ✗ | Persian, Arabic, Sorani | | Kurmanji | kmr | - | Alphabet | ✗ | ✗ | Persian, Arabic | | Sorani | ckb | ckb | Alphabet | ✗ | ✗ | Persian, Arabic | | Mazanderani mzn | mzn | Abjad | ✓ | ✓ | Persian | | | Sindhi | snd | sd | Abjad | ✓ | ✗ | Urdu | Table C.1: Number of words and aligned sentences (pairs) in the synthetic datasets | SRC-TRG | Noise % | | | | | | | |-----------|-----------|----------|----------|----------|----------|----------|---------| | 20 | 40 | 60 | 80 | 100 | All | | | | AZBFAS | Pairs | 517860 | 517860 | 517860 | 517860 | 517860 | 584229 | | Words | 4266950 | 4266950 | 4266950 | 4266950 | 4266950 | 4987065 | | | CKBARB | Pairs | 1220386 | 1326715 | 1411998 | 1441641 | 1451201 | 6663362 | | Words | 13522986 | 14554313 | 15183038 | 15381207 | 15457911 | 72697912 | | | CKBFAS | Pairs | 1186567 | 1325960 | 1408885 | 1435023 | 1442000 | 6491240 | | Words | 12838133 | 14381425 | 15125839 | 15305707 | 15363441 | 70402972 | | | GLKFAS | Pairs | 16779 | 16779 | 16779 | 16779 | 16779 | 22215 | | Words | 176602 | 176602 | 176602 | 176602 | 176602 | 240833 | | | HACARB | Pairs | 4718 | 4767 | 4802 | 4805 | 4802 | 23398 | | Words | 49804 | 50244 | 50457 | 50476 | 50464 | 248025 | | | HACCKB | Pairs | 4668 | 4669 | 4672 | 4686 | 4687 | 6474 | | Words | 49191 | 49195 | 49218 | 49417 | 49424 | 71246 | | | HACFAS | Pairs | 4712 | 4773 | 4798 | 4803 | 4802 | 23104 | | Words | 49646 | 50232 | 50429 | 50469 | 50464 | 244911 | | | KASURD | Pairs | 4729 | 4729 | 4734 | 4761 | 4759 | 9463 | | Words | 43907 | 43907 | 43925 | 44060 | 44064 | 96159 | | | KMRARB | Pairs | 10659 | 10963 | 11334 | 11417 | 11412 | 54430 | | Words | 96441 | 98403 | 100463 | 100877 | 100874 | 490034 | | | KMRFAS | Pairs | 10631 | 10997 | 11336 | 11420 | 11417 | 53272 | | Words | 95971 | 98474 | 100454 | 100906 | 100884 | 482298 | | | MZNFAS | Pairs | 36663 | 36663 | 36663 | 36663 | 36663 | 36665 | | Words | 365428 | 365428 | 365428 | 365428 | 365428 | 365446 | | | SNDURD | Pairs | 122446 | 122537 | 122865 | 122908 | 122946 | 496239 | | Words | 1328696 | 1329684 | 1333815 | 1334417 | 1334770 | 5581407 | | | D | Experiments Results and Examples | Language | Noise % | Test set | BLEU | chrF | |-------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------|------------|-------------|------------|--------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | R | 28.40 | 55.71 | | | | | This section presents examples of script normalization and machine translation. Given the source sentence, the model trained on all noisy datasets, i.e. All, generates a hypothesis which is then translated. The selected sentences are taken from the 100% noisy dataset. The source, hypothesis and reference sentences are specified as S, H and R, respectively. | 20 | S | 28.42 | 55.70 | | | | H | 11.60 | 36.15 | | | | | | 40 | S | 27.86 | 55.22 | | | | | H | 11.38 | 36.03 | | | | | | 60 | S | 27.88 | 55.14 | | | | | H | 11.38 | 36.05 | | | | | | 80 | S | 27.94 | 55.23 | | | | | H | 11.31 | 35.96 | | | | | | 100 | S | 27.93 | 55.13 | | | | | H | 11.39 | 35.97 | | | | | | KAS100 URD→ENG | | | | | | | | Language | S/H/R | Sentence | 0 | R | 30.81 | 55.96 | | دزي حذبةدةسةلطداروبراوةكاني كوردسطانكار دةكةن | | | | | | | | S | | | | | | | | CKBARB→CKB | دژی حزبەدەسەڵتداروبڕاوەکانی کوردستانکار دەکەن | | | | | | | H | دژی حزبەدەسەڵتداروبراوەکانی کوردستانکار دەکەن | | | | | | | R | 20 | S | 21.54 | 46.63 | | | | H | 24.52 | 50.43 | | | | | | 40 | S | 15.16 | 39.49 | | | | | H | 21.76 | 47.49 | | | | | | 60 | S | 8.88 | 31.05 | | | | | H | 19.87 | 45.73 | | | | | | 80 | S | 4.75 | 24.59 | | | | | H | 24.11 | 50.06 | | | | | | 100 | S | 4.72 | 24.33 | | | | | H | 24.53 | 50.55 | | | | | | کلأحمد | | | | | | | | شنه عزيز | | | | | | | | خؤنه و نگاره | | | | | | | | خو | | | | | | | | ش همرأ بئنه | | | | | | | | S | | | | | | | | GLKFAS→GLK | کلأحمد | | | | | | | شنه عزيزˇ خؤنه و نگاره | | | | | | | | خۊ | | | | | | | | شˇ همرأ بئنه | | | | | | | | H | کلأحمد | | | | | | | شنه عزيزˇ خؤنه ؤ نگاره | | | | | | | | خۊ | | | | | | | | شˇ همرأ بئنه | | | | | | | | R | CKB100 ARB→ENG | | | | | | | یاگی ویشن ای زوانانی | | | | | | | | ب نیممرد نامیشانبنییمی | | | | | | | | S | | | | | | | | HACFAS→HAC | یاگێ وێشەنەئی زوانانی بەنیم مەردە نامێشانبنیێمێ | | | | | | | H | ٛشانبنیەیمٛی | | | | | | | R | ٛشەنەئی زوانانیەبەنیمەمەردە نامی | یاگٛی وی | 0 | R | 29.69 | 45.79 | | 20 | S | 21.04 | 39.95 | | | | | H | 23.27 | 38.07 | | | | | | 40 | S | 17.61 | 29.14 | | | | | H | 17.29 | 30.41 | | | | | | 60 | S | 21.04 | 38.56 | | | | | H | 26.31 | 44.04 | | | | | | 80 | S | 9.98 | 25.43 | | | | | H | 19.45 | 35.45 | | | | | | 100 | S | 17.29 | 25.39 | | | | | H | 26.27 | 46.7 | | | | | | رامہ ہٗون | | | | | | | | ے | | | | | | | | سنز ماد چھ | | | | | | | | ے رامہ ہٗون | | | | | | | | ے | | | | | | | | S | رامہ ہٗون | | | | | | | ! ٕسنز ماد چھ | | | | | | | | ے رامہ ہٗون | | | | | | | | ! | | | | | | | | H | | | | | | | | KASURD→KAS | رامٕہ ہٗون | | | | | | | ! ٕسنٛ ز ما ٕد | | | | | | | | ٚ | چھ | | | | | | | ے رامٕہ ہٗون | | | | | | | | ! | | | | | | | | R | CKB100 FAS →ENG | | | | | | | دلن وال ژکهرب وکينه نه دهردوکولنه ژي کهصهر | | | | | | | | S | | | | | | | | KMRFAS→KMR | دلێن ڤال ژکەربێ وکینەنەدەردوکولنەژی کەسەر | | | | | | | H | دلێن ڤال ژکەرب وکینەنەدەردوکولنەژی کەسەر | | | | | | | R | کالئائد علم کیمیا یا کیمستري م ھ | | | | | | | ک مرکب جو نالو آہي | | | | | | | | S | | | | | | | | SNDURD→SND | ڪالئائڊ علم | | | | | | | ڪیمیا یا | | | | | | | | ڪیمسٽري م ھ | | | | | | | | ڪ مرڪب جو نالو آھي | | | | | | | | H | ڪالئائڊ علم | | | | | | | ڪیمیا یا | | | | | | | | ڪیمسٽري ۾ ھ | | | | | | | | ڪ مرڪب جو نالو آھي | | | | | | | | R | 0 | R | 30.81 | 55.96 | | | | 20 | S | 23.15 | 48.29 | | | | | H | 23.57 | 49.32 | | | | | | 40 | S | 18.03 | 42.60 | | | | | H | 21.84 | 47.57 | | | | | | 60 | S | 12.42 | 35.53 | | | | | H | 19.43 | 45.23 | | | | | | 80 | S | 6.95 | 27.45 | | | | | H | 23.39 | 49.51 | | | | | | 100 | S | 7.09 | 27.59 | | | | | H | 23.97 | 49.47 | | | | | | (a) Script normalization of sentences containing 100% noise شاري اه صطرهکان، ب باشطرين گراني ره صهنپالوران Source | SND100 URD→ENG | | | | | | | شاری ئەستێرەکان، بۆباشترین گۆرانی ڕەسەنپاڵێوران | | | | | | | | Reference | City of Stars, received nominations for best original song شاری اه صطرهکان، ب باشطرین گرانی ڕه صهنپالوران | | | | | | | 20 | شاری ئەسترەکان، بۆباشترین گۆرانی ڕەسەنپاڵێوران | | | | | | | 40 | شاری ئەصطرەکان، بۆباشترین گۆرانی ڕەصەنپاڵێورانشا | | | | | | | 60 | شاری ئەستێرەکان، بۆباشترین گۆڕانی ڕەسەنپاڵێوران | | | | | | | 80 | شاری ئەستێرە 'ان، بۆباشترین گۆرانی ڕە سەنپاڵێور | | | | | | | | | | | | | | | 100 All | ڕەسەنپاڵێوران | گۆرانی | بۆباشترین ، | ئەستێرەکان | شاری | Table D.2: Evaluation of NLLB model for translation of reference data (R), noisy data (S) and normalized data (H) using normalization models trained on all levels of noise (All) | | (b) Script normalization of a Sorani sentence with varying noise | | | | | | | Table D.1: Examples of script normalization of sentences in different languages containing 100% noise (a) and in Sorani Kurdish with various levels of noise (b) using a model trained on all noisy datasets Language Noise % Baseline Our models BLEU chrF seq. acc. BLEU chrF seq. acc. AZBFAS → AZB 20 95.99 98.98 87.47 67.37 0.77 54.15 40 92.37 98.05 79.51 67 0.77 54.01 60 91.73 97.86 77.51 66.94 0.77 53.87 80 91.53 97.8 76.68 66.89 0.76 53.67 100 91.52 97.79 76.61 66.86 0.76 53.62 All 92.2 97.95 77.41 66.67 0.76 51.75 GLKFAS → GLK 20 90.36 96.59 77.77 67.33 0.77 45.17 40 82.93 93.16 65.91 67.4 0.77 44.87 60 81.38 92.37 63.29 66.94 0.77 43.92 80 80.99 92.11 61.92 66.47 0.77 42.61 100 80.95 92.11 61.8 66.76 0.77 42.55 All 80.75 92.22 58.51 66.43 0.77 38.16 HACARB → HAC 20 55.47 83.92 13.35 47.1 0.65 28.39 40 10.44 46.13 1.05 38.95 0.63 17.4 60 1.35 28.31 0 31.23 0.59 8.52 80 1.53 18.88 0 25.16 0.56 7.28 100 0.81 17.27 0 26 0.56 9.36 All 12.29 38.79 2.39 52.61 0.68 27.69 HACFAS → HAC 20 68.74 88.98 35.17 54.88 0.69 31.14 40 17.75 53.16 7.32 33.65 0.6 14.02 60 3.32 35.12 0.42 26.43 0.57 8.96 80 2.71 28.12 0.42 23.05 0.54 6.86 100 2.24 26.41 0.42 21.52 0.53 5.82 All 19.46 46.84 8.09 49.07 0.66 25.14 HACCKB → HAC 20 99.97 99.98 99.79 62.65 0.72 41.54 40 99.44 99.75 97 62.24 0.72 40.26 60 98.81 99.51 95.51 60.67 0.71 37.18 80 93.04 97.83 78.46 60.04 0.71 38.38 100 92.55 97.66 75.48 59.14 0.71 37.53 All 91.75 97.33 71.91 55.12 0.68 30.56 KASURD → KAS 20 89.49 96.91 71.25 70.69 0.81 49.89 40 77.48 93.28 53.07 69.7 0.81 49.05 60 76 92.65 48.1 69.24 0.81 46.41 80 66.39 88.28 40.04 67.17 0.8 44.23 100 67.63 88.77 41.18 67.64 0.8 46.22 All 68.03 89.28 33.05 64.23 0.77 35.16 MZNFAS → MZN 20 99.93 99.98 99.86 72.38 0.8 50.97 40 99.93 99.98 99.86 72.38 0.8 50.97 60 99.93 99.98 99.86 72.38 0.8 50.97 80 99.93 99.98 99.86 72.39 0.8 50.97 100 99.93 99.98 99.86 72.39 0.8 50.97 All 99.93 99.98 99.86 72.39 0.8 51 KMRARB → KMR 20 57.55 82.52 12.1 66.82 0.78 43.43 40 8.88 45.55 0.46 57.82 0.75 34.18 60 3.32 31.48 0 53.38 0.73 27.78 80 2.43 21.68 0 49.06 0.71 24.69 100 1.82 19.81 0 47.36 0.7 24.61 All 13.73 39.91 2.35 65.77 0.77 43.28 KMRFAS → KMR 20 69.98 88.35 27.26 68.92 0.79 45.39 40 16.02 51.11 6 54.81 0.73 34.73 60 5.5 36.06 0.18 50.11 0.72 26.1 80 3.95 30.62 0.09 49.19 0.72 23.82 100 3.52 28.72 0.18 45.11 0.7 23.73 All 18.9 46.12 5.48 65.39 0.77 42.7 CKBARB → CKB 20 55.89 84.47 9.15 54.54 0.66 30.93 40 12.92 51.51 0.35 53.85 0.66 29.44 60 3.27 31.48 0.04 53.38 0.66 28.33 80 2.17 19.3 0.04 51.92 0.66 26.46 100 2.14 17.36 0.04 50.11 0.65 26.16 All 12.37 39.31 1.72 50.92 0.65 25.17 CKBFAS → CKB 20 67.7 89.45 26.9 55.96 0.67 33.49 40 19.42 55.9 6.3 52.71 0.66 28.97 60 4.56 35.67 0.2 51.04 0.65 26.89 80 3.62 29.04 0.12 50.34 0.65 25.14 100 3.26 26.43 0.14 47.95 0.64 24.44 All 16.43 44.73 5.34 50.12 0.65 25.34 SNDURD → SND 20 79.46 89.83 41.54 77.68 0.84 51.4 40 57.96 78.51 14.91 76.56 0.83 48.57 60 29.13 57.93 4.72 75.5 0.83 46.37 80 19.8 49.49 3.52 75.32 0.83 46.24 100 19.74 49.38 3.22 75.14 0.82 46.06 All 41.02 64.43 11.15 77.52 0.83 48.79 | Language | TestNoise% | Sentence | ساڳئ | |--------------|--------------------------------------------------------------------|--------------------------------------------------------------------|--------| | ي | | | | | ھائ | | | | | ڪن | | | | | گ | | | | | رو | | | | | ٽ | | | | | وانگر ا | | | | | س | | | | | ڪائین | | | | | گ | | | | | رو | | | | | ٽ | | | | | ک | | | | | ي | | | | | سم | | | | | جھو. | | | | | | | | | | RSND RENG | Think of the skiing route as of a similar hiking route. | | | | SNDFAS→ENG | ساگئ | | | | ي ہائکن | | | | | گ | | | | | رو | | | | | ت | | | | | وانگر ا | | | | | سکائین | | | | | گ | | | | | رو | | | | | ت | | | | | ک | | | | | ي | | | | | سم | | | | | جہو. | | | | | | | | | | 100% | | | | | S T100% | Think of a skiing route as the same as a hiking route. | ساڳئ | | | ي | | | | | ھائ | | | | | ڪن | | | | | گ | | | | | رو | | | | | ٽ | | | | | وانگر ا | | | | | س | | | | | ڪائین | | | | | گ | | | | | رو | | | | | ٽ | | | | | ک | | | | | ي | | | | | سم | | | | | جھو. | | | | | | | | | | 100%Ŝ 100% T̂ | similar route on the mountain. | | | | 100% | | | | | T̂ | Think of a skiing route as the same as a hiking route. | بیر لە | | | ڕێڕەو | | | | | ی | | | | | خلی | | | | | سکێنە | | | | | ی | | | | | سەر | | | | | بەفر | | | | | بکەوە بۆ | | | | | ڕێڕەوێک | | | | | ی | | | | | هاو | | | | | شێوە | | | | | ی | | | | | سەر | | | | | شا | | | | | خ | | | | | کەوت | | | | | ن. | | | | | | | | | | RCKB RENG | Think of the skiing route as of a similar hiking route. | بير لة | | | ررة | | | | | ي | | | | | خلي | | | | | سكنة | | | | | ي | | | | | سة | | | | | ر | | | | | بة | | | | | فر | | | | | بكة ة | | | | | | | | | | ب | | | | | ررة | | | | | ك | | | | | ي | | | | | ها | | | | | شة | | | | | ي | | | | | سة | | | | | ر | | | | | شا | | | | | خ | | | | | كة | | | | | ط | | | | | ن. | | | | | | | | | | 100% | | | | | S | | | | | CKBARB→ENG | T100% | The church is divided into two parts, each of which is a mountain. | بیر لە | | ڕێڕە | | | | | ی | | | | | خولی | | | | | سکۆنە | | | | | ی | | | | | سەر | | | | | بەفر | | | | | پێکەوە بۆ | | | | | ڕێڕەک | | | | | ی | | | | | هاو | | | | | شێوە | | | | | ی | | | | | سەر | | | | | شا | | | | | خ | | | | | کەوت | | | | | ن. | | | | | | | | | | 100%Ŝ | They thought of the snow-capped roller coaster route together to a | | | Table D.4: Examples of translating sentences into English using the NLLB model given a reference (R), a noisy sentence (S) and a normalized sentence (Sˆ). Translations are shown with T. ![19_image_0.png](19_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 5 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract section and Section 1 as an introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank. ✓ B1. Did you cite the creators of artifacts you used? Section 3 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Sections 2 and 3 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 3.1 B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix ## C ✓ **Did You Run Computational Experiments?** Section 3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3 and appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
lindemann-etal-2023-compositional-generalization
Compositional Generalization without Trees using Multiset Tagging and Latent Permutations
https://aclanthology.org/2023.acl-long.810
Seq2seq models have been shown to struggle with compositional generalization in semantic parsing, i.e. generalizing to unseen compositions of phenomena that the model handles correctly in isolation. We phrase semantic parsing as a two-step process: we first tag each input token with a multiset of output tokens. Then we arrange the tokens into an output sequence using a new way of parameterizing and predicting permutations. We formulate predicting a permutation as solving a regularized linear program and we backpropagate through the solver. In contrast to prior work, our approach does not place a priori restrictions on possible permutations, making it very expressive. Our model outperforms pretrained seq2seq models and prior work on realistic semantic parsing tasks that require generalization to longer examples. We also outperform non-tree-based models on structural generalization on the COGS benchmark. For the first time, we show that a model without an inductive bias provided by trees achieves high accuracy on generalization to deeper recursion depth.
# Compositional Generalization Without Trees Using Multiset Tagging And Latent Permutations Matthias Lindemann1and **Alexander Koller**2and **Ivan Titov**1,3 1ILCC, University of Edinburgh, 2 LST, Saarland University, 3ILLC, University of Amsterdam m.m.lindemann@sms.ed.ac.uk, koller@coli.uni-saarland.de, ititov@inf.ed.ac.uk ## Abstract ![0_Image_0.Png](0_Image_0.Png) Seq2seq models have been shown to struggle with compositional generalization in semantic parsing, i.e. generalizing to unseen compositions of phenomena that the model handles correctly in isolation. We phrase semantic parsing as a two-step process: we first tag each input token with a multiset of output tokens. Then we arrange the tokens into an output sequence using a new way of parameterizing and predicting permutations. We formulate predicting a permutation as solving a regularized linear program and we backpropagate through the solver. In contrast to prior work, our approach does not place a priori restrictions on possible permutations, making it very expressive. Our model outperforms pretrained seq2seq models and prior work on realistic semantic parsing tasks that require generalization to longer examples. We also outperform non-tree-based models on structural generalization on the COGS benchmark. For the first time, we show that a model without an inductive bias provided by trees achieves high accuracy on generalization to deeper recursion depth.1 ## 1 Introduction Sequence-to-sequence models have been very successfully applied to many structural tasks in NLP such as semantic parsing. However, they have also been shown to struggle with compositional generalization (Lake and Baroni, 2018; Finegan-Dollak et al., 2018; Kim and Linzen, 2020; Hupkes et al., 2020), i.e. the model fails on examples that contain unseen compositions or deeper recursion of phenomena that it handles correctly in isolation. For example, a model which correctly parses 'Mary knew that Jim slept' should also be able to parse sentences with deeper recursion than it has seen during training such as 'Paul said that Mary knew that Jim slept'. This sort of generalization is easy for humans but challenging for neural models. 1https://github.com/namednil/multiset-perm In order for a model to generalize compositionally in semantic parsing, it has to identify reusable 'fragments' and be able to systematically combine them in novel ways. One way to make a model sensitive to fragments is to make it rely on a tree that makes the compositional structure explicit. However, this complicates the training because these trees need to be obtained or induced. This is computationally demanding or at least requires structural preprocessing informed by domain knowledge. In this paper, we propose a two-step sequencebased approach with a structural inductive bias that does not rely on trees: the 'fragments' are multisets of output tokens that we predict for every input token in a first step. A second step then arranges the tokens we predicted in the previous step into a single sequence using a permutation model. In contrast to prior work (Wang et al., 2021; Lindemann et al., 2023), there are no hard constraints on the permutations that our model can predict. We show that this enables structural generalization on tasks that go beyond the class of synchronous context-free languages. 14488 We overcome two key technical challenges in this work: Firstly, we do not have supervision for the correspondence between input tokens and output tokens. Therefore, we induce the correspondence during training. Secondly, predicting permutations without restrictions is computationally challenging. For this, we develop a differentiable GPU-friendly algorithm. On realistic semantic parsing tasks our approach outperforms previous work on generalization to longer examples than seen at training. We also outperform all other non-tree models on the structural generalization tasks in semantic parsing on COGS (Kim and Linzen, 2020). For the first time, we also show that a model without an inductive bias towards trees can obtain high accuracy on generalization to deeper recursion on COGS. To summarize, our main contributions are: - a flexible seq2seq model that performs well on structural generalization in semantic parsing without assuming that input and output are related to each other via a tree structure. - a differentiable algorithm to parameterize and predict permutations without a priori restrictions on what permutations are possible. ## 2 Overview And Motivation Our approach consists of two stages. In the **first** stage (multiset tagging), we predict a multiset zi of tokens for every input token xi from the contextualized representation of xi. This is motivated by the observation that input tokens often systematically correspond to a fragment of the output (like *slept* corresponding to sleep and agent and a variable in Fig. 1). Importantly, we expect this systematic relationship to be largely invariant to phrases being used in new contexts or deeper recursion. We refer to the elements of the multisets as *multiset tokens*. In the **second stage** (permutation), we order the multiset tokens to arrive at a sequential output. Conceptually, we do this by going from left to right over the *output* and determining which multiset token to put in every position. Consider the example in Fig. 1. For the first output position, we simply select a multiset token (* in the example). All subsequent tokens are put into position by 'jumping' from the token that was last placed into the output to a new multiset token. In Fig. 1, we jump from * to girl (shown by the outgoing red edge from *). This indicates that girl is the successor of * in the output and hence the *second* output token. From girl we jump to one of the x1 tokens to determine what the *third* output token is and so on. Since we predict a permutation, we must visit every multiset token exactly once in this process. The jumps are inspired by reordering in phrasebased machine translation (Koehn et al., 2007) and methods from dependency parsing, where directly modeling bi-lexical relationships on hidden states has proven successful (Kiperwasser and Goldberg, 2016). Note also that any permutation can be represented with jumps. In contrast, prior work (Wang et al., 2021; Lindemann et al., 2023) has put strong restrictions on the possible permutations that can be predicted. Our approach is more flexible and empirically it also scales better to longer inputs, which opens up new applications and datasets. Setup. We assume we are given a dataset D = {(x 1, y 1)*, . . .*} of input utterances x and target sequences y. If we had gold alignments, it would be straightforward to train our model. Since we do not have this supervision, we have to discover during training which tokens of the output y belong into which multiset zi. We describe the model and the training objective of the multiset tagging in Section 3. After the model is trained, we can annotate our training set with the most likely z, and then train the permutation model. For predicting a permutation, we associate a score with each possible jump and search for the highest-scoring sequence of jumps. We ensure that the jumps correspond to a permutation by means of constraints, which results in a combinatorial optimization problem. The flexibility of our model and its parametrization come with the challenge that this problem is NP-hard. We approximate it with a regularized linear program which also ensures differentiability. Our permutation model and its training are described in Section 4. In Section 5, we discuss how to solve the regularized linear program and how to backpropagate through the solution. ## 3 Learning Multisets For the multiset tagging, we need to train a model that predicts the multisets z1*, . . . ,* zn of tokens by conditioning on the input. We represent a multiset zi as an integer-valued vector that contains for every vocabulary item v the multiplicity of v in zi, i.e. zi,v = k means that input token i contributes k occurrences of output type v. If v is not present in multiset zi, then zi,v = 0. For example, in Fig. 1, z3,sleep = 1 and z2,x1 = 2. As discussed in Section 2, we do not have supervision for z1*, . . . ,* zn and treat them as latent variables. To allow for efficient exact training, we assume that all zi,v are independent of each other conditioned on the input: $$P(\mathbf{z}\mid\mathbf{x})=\prod_{i,v}P(\mathbf{z}_{i,v}\mid\mathbf{x})\qquad\qquad(1)$$ where v ranges over the *entire* vocabulary. Parametrization. We parameterize P(zi,v | x) as follows. We first pass the input x through a pretrained RoBERTa encoder (Liu et al., 2019), where ENCODER(x) is the output of the final layer. We then add RoBERTa's static word embeddings from the first, non-contextualized, layer to that: hi = ENCODER(x)i + EMBED(xi) (2) We then pass hithrough a feedforward network obtaining h˜i = FF(hi) and define a distribution over the multiplicity of v in the multiset zi: $$P(\mathbf{z}_{i,v}=k|\mathbf{x}_{i})={\frac{\exp\left({\hat{\mathbf{h}}}_{i}^{T}\mathbf{w}^{v,k}+b^{v,k}\right)}{\sum_{l}\exp\left({\hat{\mathbf{h}}}_{i}^{T}\mathbf{w}^{v,l}+b^{v,l}\right)}}\quad(3)$$ where the weights w and biases b are specific to v and the multiplicity k. In contrast to standard sequence-to-sequence models, this softmax is not normalized over the vocabulary but over the multiplicity, and we have distinct distributions for every vocabulary item v. Despite the independence assumptions in Eq. (1), the model can still be strong because hitakes the entire input x into account. Training. The probability of generating the overall multiset m as the union of all ziis the probability that for every vocabulary item v, the total number of occurrences of v across all input positions i sums to mv: $$P(\mathbf{m}|\mathbf{x})=\prod_{v}P(\mathbf{z}_{1,v}+\ldots+\mathbf{z}_{n,v}=\mathbf{m}_{v}|\mathbf{x})$$ This can be computed recursively: $P(\mathbf{z}_{1,v}+\ldots+\mathbf{z}_{n,v}=\mathbf{m}_{v}\mid\mathbf{x})=$ $\sum P(\mathbf{z}_{1,v}=k|\mathbf{x})P(\mathbf{z}_{2,v}+\ldots\mathbf{z}_{n,v}=\mathbf{m}_{v}-k|\mathbf{x})$ Let m(y) be the multiset of tokens in the gold sequence y. We train our model with gradient ascent to maximize the marginal log-likelihood of m(y): $$\sum_{(\mathbf{x},\mathbf{y})\in{\mathcal{D}}}\log P(\mathbf{m}(\mathbf{y})\mid\mathbf{x})$$ $${\mathrm{}}(4)$$ Like Lindemann et al. (2023), we found it helpful to initialize the training of our model with high-confidence alignments from an IBM-1 model (Brown et al., 1993) (see Appendix C.3 for details). Preparation for permutation. The scoring function of the permutation model expects a sequence as input. There is no a priori obvious order for the elements within the individual multisets zi. We handle this by imposing a canonical order ORDER(zi) on the elements of zi by sorting the multiset tokens by their id in the vocabulary. They are then concatenated to form the input z′ = ORDER(z1)*. . .* ORDER(zn) to the permutation model. $$\mathbf{D}(\mathbf{x}_{i})$$ ## 4 Relaxed Permutations After predicting a multiset for every input token and arranging the elements within each multiset to form a sequence z′, we predict a permutation of z′. We represent a permutation as a matrix V that contains exactly one 1 in every row and column and zeros otherwise. We write Vi,j = 1 if position i in z′is mapped to position j in the output y. Let P be the set of all permutation matrices. Now we formalize the parametrization of permutations as discussed in Section 2. We associate a score predicted by our neural network with each permutation V and search for the permutation with the highest score. The score of a permutation decomposes into a sum of scores for binary 'features' of V. We use two types of features. The first type of feature is active if the permutation V maps input position i to output position j (i.e. Vij = 1). We associate this feature with the score si7→j and use these scores only to model what the first and the last token in the output should be. That is, si7→1 models the preference to map position i in the input to the first position in the output, and analogously si7→n models the preference to put i into the last position in the output. For all output positions j that are neither the first nor the last position, we simply set si7→j = 0. The second type of feature models the jumps we introduced in Section 2. We introduce a feature that is 1 iff V contains a jump from k to i, and associate this with a score sk↷i. In order for there to be a jump from k to i, the permutation V must map input i to some output position j (Vij = 1) and it must also map input position k to output position j − 1 (Vk,j−1 = 1). Hence, the product Vk,j−1Vij is 1 iff there is a jump from k to i at output position j. Based on this, the sum Pj Vk,j−1Vij equals 1 if there is any output position j at which there is a jump from k to i and 0 otherwise. This constitutes the second type of feature. Multiplying the features with their respective scores, we want to find the highest-scoring permutation under the following overall scoring function: $$\operatorname*{arg\,max}_{\mathbf{V}\in\mathcal{P}}\sum_{i,j}\mathbf{V}_{i j}s_{i\to j}+$$ $$\sum_{i,k}s_{k\cap i}\left(\sum_{j}\mathbf{V}_{k,j-1}\mathbf{V}_{i j}\right)\quad\quad(5)$$ Let V∗(s) be the solution to Eq. (5) as a function of the scores. Unfortunately, V∗(s) does not have sensible derivatives because P is discrete. This makes V∗(s) unsuitable as a neural network layer. In addition, Eq. (5) is NP-hard (see Appendix A.1). We now formulate an optimization problem that approximates Eq. (5) and which has useful derivatives. Firstly, we relax the permutation matrix V to a bistochastic matrix U, i.e. U has non-negative elements and every row and every column each sum to 1. Secondly, note that Eq. (5) contains quadratic terms. As we will discuss in the next section, our solver assumes a linear objective, so we replace Vk,j−1Vij with an auxiliary variable Wijk. The variable Wijk is designed to take the value 1 if and only if Ui,j = 1 and Uk,j−1 = 1. We achieve this by coupling W and U using constraints. Then, the optimization problem becomes: $$\begin{array}{c}\arg\max\limits_{\mathbf{U},\mathbf{W}}\sum\limits_{i,j}\mathbf{U}_{ij}s_{i\to j}+\sum\limits_{i,j,k}\mathbf{W}_{ijk}s_{k\cap i}\\ \text{subject to}\sum\limits_{i}\mathbf{U}_{ij}=1\qquad\qquad\forall j\qquad\qquad\text{(6a)}\\ \sum\limits_{j}\mathbf{U}_{ij}=1\qquad\qquad\forall i\qquad\qquad\text{(6b)}\\ \sum\limits_{k}\mathbf{W}_{ijk}=\mathbf{U}_{ij}\qquad\forall j>1,i\quad\text{(6c)}\\ \sum\limits_{i}\mathbf{W}_{ijk}=\mathbf{U}_{k(j-1)}\;\forall j>1,k\quad\text{(6d)}\\ \mathbf{U},\mathbf{W}\geq0\qquad\qquad\qquad\qquad\qquad\text{(6e)}\end{array}$$ Finally, in combination with the linear objective, the argmax operation still causes the solution U∗(s) of Eq. (6) as a function of s to have no useful derivatives. This is because an infinitesimal change in s has no effect on the solution U∗(s) for almost all s. To address this, we add an entropy regularization term τ (H(U) + H(W)) to the objective Eq. (6), where H(U) = −Pij Uij (log Uij−1), and τ > 0 determines the strength of the regularization. The entropy regularization 'smooths' the solution U∗(s) in an analogous way to softmax being a smoothed version of argmax. The parameter τ behaves analogously to the softmax temperature: the smaller τ , the sharper U∗(s) will be. We discuss how to solve the regularized linear program in Section 5. Predicting permutations. At test time, we want to find the highest scoring permutation, i.e. we want to solve Eq. (5). We approximate this by using U∗(s) instead, the solution to the entropy regularized version of Eq. (6). Despite using a low temperature τ , there is no guarantee that U∗(s) can be trivially rounded to a permutation matrix. Therefore, we solve the linear assignment problem with U∗(s) as scores using the Hungarian Algorithm (Kuhn, 1955). The linear assignment problem asks for the permutation matrix V that maximizes Pij VijU∗(s)ij . ## 4.1 Parametrization We now describe how we parameterize the scores s to permute the tokens into the right order. We first encode the original input utterance x like in Eq. (2) to obtain a hidden representation hi for input token xi. Let a be the function that maps a(i) 7→ j if the token in position i in z′came from the multiset that was generated by token xj . For example, in Fig. 1, a(6) = 3 since sleep was predicted from input token *slept*. We then define the hidden representation h′i as the concatenation of ha(i) and an embedding of z′i : $$\mathbf{h}_{i}^{\prime}=\left[\mathbf{h}_{a(i)};\mathrm{EMBED}(z_{i}^{\prime})\right]$$ $$(7)^{\frac{1}{2}}$$ (7) We parameterize the scores for starting the output with token i as $$s_{i\mapsto1}=\mathbf{w}_{\mathrm{start}}^{T}\mathrm{FF}_{\mathrm{start}}(\mathbf{h}_{i}^{\prime})$$ and analogously for ending it with token i: $$s_{i\to n}=\mathbf{w}_{\mathrm{end}}^{T}\mathrm{FF}_{\mathrm{end}}(\mathbf{h}_{i}^{\prime})$$ We set si7→j = 0 for all other *i, j*. We parameterize the jump scores sk↷i using Geometric Attention (Csordás et al., 2022) from h′k to h′i . Intuitively, Geometric Attention favours selecting the 'matching' element h′i that is closest to h′k in terms of distance |i − k| in the string. We refer to Csordás et al. (2022) for details. ![4_image_0.png](4_image_0.png) ## 4.2 Learning Permutations We now turn to training the permutation model. At training time, we have access to the gold output y and a sequence z′from the output of the multiset tagging (see the end of Section 3). We note that whenever y (or z′) contains one vocabulary item at least twice, there are multiple permutations that can be applied to z′to yield y. Many of these permutations will give the right result for the wrong reasons and the permutation that is desirable for generalization is latent. For example, consider Fig. 2. The token agent is followed by the entity who performs the action, whereas theme is followed by the one affected by the action. The permutation indicated by dashed arrows generalizes poorly to a sentence like *Emma knew that James slid* since *slid* will introduce a theme role rather than an agent role (as the sliding is happening to James). Thus, this permutation would then lead to the incorrect output know theme Emma ... slide agent James, in which Emma is affected by the knowing event and James is the one who slides something. In order to train the permutation model in this setting, we use a method that is similar to EM for structured prediction.2 During training, the model output U∗(s) and W∗(s) often represents a soft permutation that does not permute z′into y. Our goal is to push the model output into the space of (soft) permutations that lead to y. More formally, let Q ∈ Q(y, z′) be the set of bistochastic matrices such that Qi,j = 0 iff z′i̸= yj . That is, any permutation included in Q(y, z′) leads to the gold output y when applied to z′. First, we project the current prediction U∗(s) and W∗(s) into Q(y, z′) using the KL divergence as a measure of distance (E-step): $$\hat{\mathbf{U}},\hat{\mathbf{W}}=\operatorname*{arg\,min}_{\mathbf{U}\in\mathcal{Q}(\mathbf{y},\mathbf{x}^{\prime}),\mathbf{W}}\mathbf{KL}(\mathbf{U}\mid\mid\mathbf{U}^{*}(s))+\tag{8}$$ $^\star\left(s\right)$) subject to Eq. (6a) to Eq. (6e). Similar to the M-step of EM, we then treat Uˆ and Wˆ as soft gold labels and train our model to minimize the KL divergence between labels and model: ## Kl(Uˆ || U ∗(S)) + Kl(Wˆ || W∗(S)) Eq. (8) can be solved in the same way as the entropy regularized version of Eq. (6) because expanding the definition of KL-divergence leads to a regularized linear program with a similar feasible region (see Appendix A.3 for details). ## 5 Inference For Relaxed Permutations Now we describe how to solve the entropy regularized form of Eq. (6) and how to backpropagate through it. This section may be skipped on the first reading as it is not required to understand the experiments; we note that the resulting algorithm (Algorithm 1) is conceptually relatively simple. Before describing our method, we explain the general principle. ## 5.1 Bregman'S Method Bregman's method (Bregman, 1967) is a method for constrained convex optimization. In particular, it can be used to solve problems of the form $$\mathbf{x}^{*}=\operatorname*{arg\,max}_{\mathbf{x}\in C_{0}\cap C_{1}\ldots\cap C_{n},\mathbf{x}\geq\mathbf{0}}\mathbf{s}^{T}\mathbf{x}+\underbrace{\tau H(\mathbf{x})}_{\mathrm{regularizer}}\quad(9)$$ where C0*, . . . , C*n are linear equality constraints, H(x) = −Pi xi(log xi − 1) is a form of entropy regularization, and τ determines the strength of the regularization. Note that our parameterization of permutations (Eq. (6)) has this form. Bregman's method is a simple iterative process. We start with the scores s and then cyclically iterate over the constraints and project the current estimate x i onto the chosen constraint until convergence: $$\begin{array}{c}{{\bf x}^{0}=\exp\frac{{\bf s}}{\tau}}\\ {{\bf x}^{i+1}=\begin{array}{c}{{\rm arg\,min}}\\ {{\bf x}\in C_{i\,{\rm mod}\,(n-1)}}\end{array}\quad{\bf KL}({\bf x}\,||\,{\bf x}^{i})}\end{array}\tag{10}$$ where $\mbox{KL}(\mathbf{x}\mid\mathbf{y})=\sum_{i}\mathbf{x}_{i}\log\frac{\mathbf{x}_{i}}{\mathbf{y}_{i}}-\mathbf{x}_{i}+\mathbf{y}_{i}$ is the generalized KL divergence. We call Algorithm 1 Bregman's method for Eq. (6) with entropic regularization function BREGMAN(s, τ ) Uij = exp(τ−1si7→j ) Wijk = exp(τ−1sk↷i) while within budget and not converged do U, W = KL project(U, W; 6a,6c) with Prop. 2 U, W = KL project(U, W; 6a,6d) with Prop. 2 U = KL project (U, 6b) with Prop. 1 end while return U, W end function arg minx∈C KL(x | x i) a KL projection. In order to apply Bregman's method, we need to be able to compute the KL projection arg minx∈C KL(x | x i) for all C0*, . . . , C*n in closed-form. We discuss how to do this for Eq. (6) in the next section. As an example, consider a problem of the form Eq. (9) with a single linear constraint C P ∆ = {x | i xi = 1}. In this case, Bregman's method coincides with the softmax function. This is because the KL projection x∗ = arg minx∈C∆ KL(x || y) for y > 0 has the closed-form solution x∗ i =y P i i yi . If we have a closed-form expression for a KL projection (such as normalizing a vector), we can use automatic differentiation to backpropagate through it. To backpropagate through the entire solver, we apply automatic differentiation to the composition of all projection steps. ## 5.2 Bregman'S Method For Eq. (6) In order to apply Bregmans' method to solve the entropy regularized version of Eq. (6), we need to decompose the constraints into sets which we can efficiently project onto. We choose the following three sets here: (i), containing Eq. (6a) and Eq. (6c), and (ii), containing Eq. (6a) and Eq. (6d), and finally (iii), containing only Eq. (6b). We now need to establish what the KL projections are for our chosen sets. For (iii), the projection is simple: Proposition 1. (Benamou et al. (2015), Prop. 1) For A, m > 0, the KL projection arg minU KL(U || A) subject to Pj Uij = miis given by U∗ ij = mi P Aij j′ Aij′ . Let us now turn to (i) and (ii). The constraints Eq. (6d) and Eq. (6c) are structurally essentially the same, meaning that we can project onto (ii) in basically the same manner as onto (i). We project onto (i), with the following proposition: Proposition 2. For A, B > 0, the KL projection arg min U, W KL(U || A) + KL(W || B) subject to X i Uij = 1 ∀j (11) X k Wijk = Uij ∀j, i is given by: U ∗ ij = P Tij i′ Ti′j W∗ ijk = U ∗ ij P Bijk k′ Bijk′ where Tij = pAij ·Pk Bijk. The proof can be found in Appendix A.2. We present the overall algorithm in Algorithm 1, and note that it is easy to implement for GPUs. In practice, we implement all projections in log space for numerical stability. ## 6 Evaluation 6.1 Doubling Task Our permutation model is very expressive and is not limited to synchronous context-free languages. This is in contrast to the formalisms that other approaches rely on (Wang et al., 2021; Lindemann et al., 2023). To evaluate if our model can structurally generalize beyond the synchronous contextfree languages in practice, we consider the function F = {(*w, ww*) | w ∈ Σ∗}. This function is related to processing challenging natural language phenomena such as reduplication and cross-serial dependencies. We compare our model with an LSTM-based seq2seq model with attention and a Transformer in the style of Csordás et al. (2021) that uses a relative positional encoding. Since the input is a sequence of symbols rather than English, we replace RoBERTa with a bidirectional LSTM and use randomly initialized embeddings. The models are trained on inputs of lengths 5 to 10 and evaluated on longer examples. The results can be found in Fig. 3. All models get perfect or close to perfect accuracy on inputs of length 11 but accuracy quickly deteriorates for the LSTM and the Transformer. In contrast, our model extrapolates very well to longer sequences. ## 6.2 Cogs COGS is a synthetic benchmark for compositional generalization introduced by Kim and Linzen 14493 ![6_image_0.png](6_image_0.png) (2020). Models are tested for 21 different cases of generalization, 18 of which focus on using a lexical item in new contexts (Lex). There are 1000 instances per generalization case. Seq2seq models struggle in particular with the *structural* generalization tasks (Yao and Koller, 2022), and we focus on those: (i) generalization to deeper PP recursion than seen during training ("Emma saw a hedgehog on a chair in the garden beside the tree..."), (ii) deeper CP recursion ("Olivia mentioned that James saw that Emma admired that the dog slept"), and (iii) PPs modifying subjects when PPs modified only objects in the training data (OS). We follow previous work and use a lexicon (Akyurek and Andreas, 2021) to map some input tokens to output tokens (see Appendix C.2 for details). We also use this mechanism to handle the variable symbols in COGS. We report the means and standard deviations for 10 random seeds in Table 1. Our approach obtains high accuracy on CP and PP recursion but exact match accuracy is low for OS. This is in part because our model sometimes predicts semantic representations for OS that are equivalent to the gold standard but use a different order for the conjuncts. Therefore, we report accuracy that accounts for this in Table 2. In both tables, we also report the impact of using a simple copy mechanism instead of the more complex lexicon induction mechanism (-Lex). Our model outperforms all other non-treebased models by a considerable margin. Structural generalization without trees. All previous methods that obtain high accuracy on recursion generalization on COGS use trees. Some approaches directly predict a tree over the input (Liu et al., 2021; Weißenhorn et al., 2022), while others use derivations from a grammar for data augmentation (Qiu et al., 2022) or decompose the input along a task-specific parse tree (Drozdov et al., 2022). Our results show that trees are not as important for compositional generalization as their success in the literature may suggest, and that weaker Model OS CP PP Lex Total Q'22⋄♣ - - - - 99 D'22⋄♣ - - - - 99 LexLSTM 0 0 1 96 82 Dangle 0 14 14 98 85 Dangle⋄ 0 25 35 99 88 T5-base⋄ 0 13 18 99 86 Ours⋄ 9±13 79±11 85±14 97±2 92±2 Ours⋄- Lex 5±6 71±26 82±20 79±1 75±2 Model OS CP PP Lex Total Lear♣ **93 100 99 99 99** AM parser⋄♣ 72 100 97 76 78 Dangle 8 14 14 99 87 Ours⋄ 33±24 82±11 91±5 97±2 93±2 Ours⋄- Lex 35±22 73±27 92±5 79±1 77±2 structural assumptions already reap some of the benefits. Logical forms with variables. COGS uses logical forms with variables, which were removed in conversion to variable-free formats for evaluation of some approaches (Zheng and Lapata, 2021; Qiu et al., 2022; Drozdov et al., 2022). Recently, Wu et al. (2023) have argued for keeping the variable symbols because they are important for some semantic distinctions; we keep the variables. ## 6.3 Atis While COGS is a good benchmark for compositional generalization, the data is synthetic and does not contain many phenomena that are frequent in semantic parsing on real data, such as paraphrases that map to the same logical form. ATIS (Dahl et al., 1994) is a realistic English semantic parsing dataset with executable logical forms. We follow the setup of our previous work (Lindemann et al., 2023) (L'23) and use the variable-free FunQL rep- | Model | iid | Length | |--------------|------------|-------------| | LSTM seq2seq | 75.98±1.30 | 4.95±2.16 | | Transformer | 75.76±1.43 | 1.15±1.41 | | BART-base⋄ | 86.96±1.26 | 19.03±4.57 | | L'23 ♣ | 68.26±1.53 | 29.91±2.91 | | L'23 † ♣ | 74.15±1.35 | 35.41±4.09 | | Ours⋄ | 76.65±1.67 | 41.39±13.47 | | Ours | 73.93±1.43 | 38.79±7.11 | resentation (Guo et al., 2020). Apart from the usual iid split, we evaluate on a length split, where a model is trained on examples with few conjuncts and has to generalize to longer logical forms with more conjuncts. For a fair comparison with previous work, we do not use the lexicon/copying. We also evaluate a version of our model without RoBERTa that uses a bidirectional LSTM and GloVe embeddings instead. This mirrors the model of L'23. Table 3 shows mean accuracy and standard deviations over 5 runs. Our model is competitive with non-pretrained models in-distribution, and outperforms all other models on the length generalization. The high standard deviation on the length split stems from an outlier run with 18% accuracy – the second worst-performing run achieved an accuracy of 44%. Even without pretraining, our model performs very well. In particular, without grammarbased decoding our model performs on par or outperforms L'23 *with* grammar-based decoding. The runtime of the model in L'23 is dominated by the permutation model and it takes up to 12 hours to train on ATIS. Training the model presented here only takes around 2 hours for both stages. Performance breakdown. In order for our approach to be accurate, both the multiset tagging model and the permutation model have to be accurate. Table 4 explores which model acts as the bottleneck in terms of accuracy on ATIS and COGS. The answer depends on the dataset: for the synthetic COGS dataset, predicting the multisets correctly is easy except for OS, and the model struggles more with getting the permutation right. In contrast, for ATIS, the vast majority of errors can be attributed to the first stage. | Freq | Seq | Seq/Freq | | | |--------|-----------|------------|----------|----------| | OS | 50±34 | 9±13 | 11±15 | | | COGS | CP | 97±5 | 79±11 | 82±13 | | PP | 99±0 | 85±14 | 85±15 | | | ATIS | iid | 77.6±1.4 | 76.7±1.7 | 98.7±0.5 | | Length | 42.2±13.6 | 41.4±13.5 | 97.8±0.8 | | | Model | Calendar | Doc | Email | |------------|------------|----------|----------| | BART-base⋄ | 36.7±3.0 | 0.6±0.3 | 20.5±9.8 | | L'23 ♣ | 57.2±19.9 | 36.1±5.6 | 43.9±3.8 | | L'23 † ♣ | 69.5±13.9 | 42.4±5.7 | 55.6±2.7 | | Ours⋄ | 74.3±3.5 | 57.8±5.5 | 60.6±4.8 | | Ours | 65.6±2.8 | 41.4±4.9 | 47.6±4.5 | Table 5: Accuracy on length splits by domain on **Okapi**. ## 6.4 Okapi Finally, we consider the recent Okapi (Hosseini et al., 2021) semantic parsing dataset, in which an English utterance from one of three domains (Calendar, Document, Email) has to be mapped to an API request. We again follow the setup of L'23 and evaluate on their length split, where a model has to generalize to longer logical forms. In contrast to all other datasets we consider, Okapi is quite noisy because it was collected with crowd workers. This presents a realistic additional challenge on top of the challenge of structural generalization. The results of 5 runs can be found in Table 5. Our model outperforms both BART (Lewis et al., 2020) and the model of L'23. In the comparison without pretraining, our model also consistently achieves higher accuracy than the comparable model of L'23 without grammar-based decoding. ## 7 Related Work Predicting permutations. Mena et al. (2018) and Lyu and Titov (2018) use variational autoencoders based on the Sinkhorn algorithm to learn latent permutations. The Sinkhorn algorithm (Sinkhorn, 1964) is also an instance of Bregman's method and solves the entropy regularized version of Eq. (6) without the W-term. This parameterization is considerably weaker than ours since it cannot capture our notion of 'jumps'. Wang et al. (2021) compute soft permutations as an expected value by marginalizing over the permutations representable by ITGs (Wu, 1997). This approach is exact but excludes some permutations. In particular, it excludes permutations needed for COGS.3In addition, the algorithm they describe takes a lot of resources as it is both O(n 5) in memory and compute. Devatine et al. (2022) investigate sentence reordering methods. They use bigram scores, which results in a similar computational problem to ours. However, they deal with it by restricting what permutations are possible to enable tractable dynamic programs. Eisner and Tromble (2006) propose local search methods for decoding permutations for machine translation. Outside of NLP, Kushinsky et al. (2019) have applied Bregman's method to the quadratic assignment problem, which Eq. (5) is a special case of. Since they solve a more general problem, using their approach for Eq. (6) would require O(n 4) rather than O(n 3) variables in the linear program. Compositional generalization. Much research on compositional generalization has focused on lexical generalization with notable success (Andreas, 2020; Akyurek and Andreas, 2021; Conklin et al., 2021; Csordás et al., 2021). Structural generalization remains more challenging for seq2seq models (Yao and Koller, 2022). Zheng and Lapata (2022) modify the transformer architecture and re-encode the input and partially generated output for every decoding step to disentangle the information in the representations. Structure has also been introduced in models by means of grammars: Qiu et al. (2022) heuristically induce a quasi-synchronous grammar (QCFG, Smith and Eisner (2006)) and use it for data augmentation for a seq2seq model. Kim (2021) introduces neural QCFGs which perform well on compositional generalization tasks but are very compute-intensive. Other works directly parse into trees or graphs inspired by methods from syntactic parsing (Liu et al., 2021; Herzig and Berant, 2021; Weißenhorn et al., 2022; Jambor and Bahdanau, 2022; Petit and Corro, 2023). Several approaches, including ours, have decoupled the presence or absence of output tokens from their order: Wang et al. (2021) train a model endto-to-end to permute the input (as discussed above) and then monotonically translate it into an output sequence. Lindemann et al. (2023) also present an end-to-end differentiable model that first applies a 'fertility step' which predicts for every word how many copies to make of its representation, and then uses the permutation method of Wang et al. (2021) to reorder the representation before translating them. Cazzaro et al. (2023) first translate the input monotonically and feed it into a second model. They use alignments from an external aligner to train the first model. The second model is a tagger or a pretrained seq2seq model and predicts the output as a permutation of its input. We compare against such a baseline for permutations in Appendix B, finding that it does not work as well as ours in the compositional generalization setups we consider. ## 8 Conclusion In this paper, we have presented a flexible new seq2seq model for semantic parsing. Our approach consists of two steps: We first tag each input token with a multiset of output tokens. Then we arrange those tokens into a sequence using a permutation model. We introduce a new method to predict and learn permutations based on a regularized linear program that does not restrict what permutations can be learned. The model we present has a strong ability to generalize compositionally on synthetic and natural semantic parsing datasets. Our results also show that trees are not necessarily required to generalize well to deeper recursion than seen at training time. ## Limitations The conditional independence assumptions are a limitation for the applicability of our multiset tagging model. For example, the independence assumptions are too strong to apply it to natural language generation tasks such as summarization. From a technical point of view, the independence assumptions are important to be able to induce the latent assignment of output tokens to multisets efficiently. Future work may design multiset tagging methods that make fewer independence assumptions. While our method for predicting permutations is comparatively fast and only has a memory requirement of O(n 3), inference on long sequences, e.g. with more than 100 tokens, remains somewhat estimation. *Computational Linguistics*, 19(2):263– 311. slow. In future work, we plan to investigate other approximate inference techniques like local search and dual decomposition. Regarding the importance of trees for compositional generalization, our model has no explicit structural inductive bias towards trees. However, we do not exclude that the pretrained RoBERTa model that we use as a component *implicitly* captures trees or tree-like structures to a certain degree. ## Acknowledgements We thank Bailin Wang and Jonas Groschwitz for technical discussions; we thank Hao Zheng for discussions and for providing system outputs for further analysis. We also say thank you to Christine Schäfer and Agostina Calabrese for their comments on this paper. ML is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1), the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences, and a grant from Huawei Technologies. IT is supported by the Dutch National Science Foundation (NWO Vici VI.C.212.053). ## References Ekin Akyurek and Jacob Andreas. 2021. Lexicon learning for few shot sequence modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4934–4946, Online. Association for Computational Linguistics. Jacob Andreas. 2020. Good-enough compositional data augmentation. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 7556–7566, Online. Association for Computational Linguistics. Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyré. 2015. Iterative bregman projections for regularized transportation problems. *SIAM Journal on Scientific Computing*, 37(2):A1111–A1138. Lev M Bregman. 1967. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. *USSR computational mathematics and* mathematical physics, 7(3):200–217. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter Francesco Cazzaro, Davide Locatelli, Ariadna Quattoni, and Xavier Carreras. 2023. Translate first reorder later: Leveraging monotonicity in semantic parsing. In *Findings of the Association for Computational* Linguistics: EACL 2023, pages 227–238, Dubrovnik, Croatia. Association for Computational Linguistics. Henry Conklin, Bailin Wang, Kenny Smith, and Ivan Titov. 2021. Meta-learning to compositionally generalize. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3322–3335, Online. Association for Computational Linguistics. Róbert Csordás, Kazuki Irie, and Juergen Schmidhuber. 2021. The devil is in the detail: Simple tricks improve systematic generalization of transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 619– 634, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. 2022. The neural data router: Adaptive control flow in transformers improves systematic generalization. In *International Conference on Learning Representations*. Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In *Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994*. John M. Danskin. 1967. *The Theory of Max-Min* and its Application to Weapons Allocation Problems. Springer Berlin Heidelberg. Nicolas Devatine, Caio Corro, and François Yvon. 2022. Ré-ordonnancement via programmation dynamique pour l'adaptation cross-lingue d'un analyseur en dépendances (sentence reordering via dynamic programming for cross-lingual dependency parsing ). In *Actes de la 29e Conférence sur le Traitement Automatique des Langues Naturelles. Volume 1 : conférence principale*, pages 183–197, Avignon, France. ATALA. Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou. 2022. Compositional semantic parsing with large language models. *arXiv* preprint arXiv:2209.15003. Jason Eisner and Roy W Tromble. 2006. Local search with very large-scale neighborhoods for optimal permutations in machine translation. In Proceedings of the HLT-NAACL Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing, pages 57–75. Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving textto-SQL evaluation methodology. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351–360, Melbourne, Australia. Association for Computational Linguistics. Kuzman Ganchev, Joao Graça, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. The Journal of Machine Learning Research, 11:2001–2049. Jonas Groschwitz, Matthias Lindemann, Meaghan Fowlie, Mark Johnson, and Alexander Koller. 2018. AMR dependency parsing with a typed semantic algebra. In *Proceedings of the 56th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1831–1841, Melbourne, Australia. Association for Computational Linguistics. Jiaqi Guo, Qian Liu, Jian-Guang Lou, Zhenwen Li, Xueqing Liu, Tao Xie, and Ting Liu. 2020. Benchmarking meaning representations in neural semantic parsing. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 1520–1540, Online. Association for Computational Linguistics. Jonathan Herzig and Jonathan Berant. 2021. Spanbased semantic parsing for compositional generalization. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 908–921, Online. Association for Computational Linguistics. Saghar Hosseini, Ahmed Hassan Awadallah, and Yu Su. 2021. Compositional generalization for natural language interfaces to web apis. *arXiv preprint* arXiv:2112.05209. Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. 2020. Compositionality decomposed: how do neural networks generalise? Journal of Artificial Intelligence Research, 67:757–795. Dora Jambor and Dzmitry Bahdanau. 2022. LAGr: Label aligned graphs for better systematic generalization in semantic parsing. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3295–3308, Dublin, Ireland. Association for Computational Linguistics. Najoung Kim and Tal Linzen. 2020. COGS: A compositional generalization challenge based on semantic interpretation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9087–9105, Online. Association for Computational Linguistics. Yoon Kim. 2021. Sequence-to-sequence learning with latent neural grammars. In *Advances in Neural Information Processing Systems*, volume 34, pages 26302– 26317. Curran Associates, Inc. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. *Transactions of the* Association for Computational Linguistics, 4:313– 327. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Harold W Kuhn. 1955. The hungarian method for the assignment problem. *Naval research logistics quarterly*, 2(1-2):83–97. Yam Kushinsky, Haggai Maron, Nadav Dym, and Yaron Lipman. 2019. Sinkhorn algorithm for lifted assignment problems. *SIAM Journal on Imaging Sciences*, 12(2):716–735. Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In *International Conference on Machine Learning*, pages 2873–2882. PMLR. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Matthias Lindemann, Alexander Koller, and Ivan Titov. 2023. Compositional generalisation with structured reordering and fertility layers. In *Proceedings of the* 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2172– 2186, Dubrovnik, Croatia. Association for Computational Linguistics. Chenyao Liu, Shengnan An, Zeqi Lin, Qian Liu, Bei Chen, Jian-Guang Lou, Lijie Wen, Nanning Zheng, and Dongmei Zhang. 2021. Learning algebraic recombination for compositional generalization. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1129–1144, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692. Chunchuan Lyu and Ivan Titov. 2018. AMR parsing as graph prediction with latent alignment. In *Proceedings of the 56th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers), pages 397–407, Melbourne, Australia. Association for Computational Linguistics. Gonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. 2018. Learning latent permutations with gumbel-sinkhorn networks. In International Conference on Learning Representations. Alban Petit and Caio Corro. 2023. On graph-based reentrancy-free semantic parsing. *Transactions of* the Association for Computational Linguistics. Linlu Qiu, Peter Shaw, Panupong Pasupat, Pawel Nowak, Tal Linzen, Fei Sha, and Kristina Toutanova. 2022. Improving compositional generalization with latent structure and data augmentation. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 4341–4362, Seattle, United States. Association for Computational Linguistics. Richard Sinkhorn. 1964. A relationship between arbitrary positive matrices and doubly stochastic matrices. The Annals of Mathematical Statistics, 35(2):876– 879. David Smith and Jason Eisner. 2006. Quasisynchronous grammars: Alignment by soft projection of syntactic dependencies. In Proceedings on the Workshop on Statistical Machine Translation, pages 23–30, New York City. Association for Computational Linguistics. Bailin Wang, Mirella Lapata, and Ivan Titov. 2021. Structured reordering for modeling latent alignments in sequence transduction. In Thirty-Fifth Conference on Neural Information Processing Systems. Pia Weißenhorn, Lucia Donatelli, and Alexander Koller. 2022. Compositional generalization with a broadcoverage semantic parser. In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, pages 44–54, Seattle, Washington. Association for Computational Linguistics. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Zhengxuan Wu, Christopher D Manning, and Christopher Potts. 2023. Recogs: How incidental details of a logical form overshadow an evaluation of semantic interpretation. *arXiv preprint arXiv:2303.13716*. Yuekun Yao and Alexander Koller. 2022. Structural generalization is hard for sequence-to-sequence models. In *Proceedings of the 2022 Conference on Empirical* Methods in Natural Language Processing. Association for Computational Linguistics. Hao Zheng and Mirella Lapata. 2021. Compositional generalization via semantic tagging. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1022–1032, Punta Cana, Dominican Republic. Association for Computational Linguistics. Hao Zheng and Mirella Lapata. 2022. Disentangled sequence to sequence learning for compositional generalization. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 4256–4268, Dublin, Ireland. Association for Computational Linguistics. ## A Math Details A.1 Np-Hardness We show that Eq. (5) can be used to decide the Hamiltonian Path problem. Let G = (*V, E*) be a graph with nodes V = {1, 2*, . . . , n*}. A Hamiltonian path P = v1, v2*, . . . , v*n is a path in G (i.e. (vi, vi+1) ∈ E for all i) such that each node of G appears exactly once. Deciding if a graph has a Hamiltonian path is NP-complete. Reduction of Hamiltonian path to Eq. (5). Note that a necessary but not sufficient condition for P to be a Hamiltonian path is that P is a permutation of V . This will be ensured by the constraints on the solution in Eq. (5). We construct a score function $$s_{k\cap i}={\begin{cases}1&{\mathrm{if}}\ (k,i)\in E\\ 0&{\mathrm{else}}\end{cases}}\qquad(12)$$ and let si7→j = 0 for all *i, j*. If we find the solution of Eq. (5) for the score function Eq. (12), we obtain a permutation P of V , which may or may not be a path in G. In a path of n nodes, there are n − 1 edges that are crossed. If the score of the solution is n − 1, then all node pairs (vi, vi+1) that are adjacent in P must have had a score of 1, indicating an edge (vi, vi+1) ∈ E. Therefore, P must be a Hamiltonian path. If the score of the solution is less than n − 1, then there is no permutation of V that is also a path, and hence G has no Hamiltonian path. ## A.2 Proof Of Proposition 2 We now prove Prop. 2 using a very similar technique as Kushinsky et al. (2019). As the constraints KL(x || z) + KL(Y ∗(x) || W) = KL(x || z) +X i,j P xiWi,j j′ Wi,j′ (log Pxi✟W✟i,j j′ Wi,j′ ✟W✟i,j✟ − 1) xi✘ P✘✘✘ j Wi,j ✘ P✘✘✘✘ j′ Wi,j′ (log P xi j′ Wi,j′ − 1) = KL(x || z) +X i i xi(log(xi zi ) − 1 + log P xi j′ Wi,j′ − 1) = X i xi(2 log xi − log zi − log X j′ Wi,j′ = X − 2) = 2X i xi(log xi − 1 − 1 2 (log zi + log X j′ Wi,j′ ) ) | {z } log qi = 2 · KL(x || q) P Figure 4: Rewriting the objective function of Eq. (14). We note that the generalized KL divergence KL(x | y) = i xilog xi yi− xi + yi simplifies to KL(x | y) = Pi xi(log xi yi− 1) because we want to find the argmin wrt to x. in Prop. 2 for any value of j do not interact with constraints for other values of j, we can assume w.l.o.g. that j takes a single value only and drop it in the notation. We want to solve: $$\begin{array}{ll}\mbox{arg min}&\mbox{KL}(\mbox{\bf x}\,||\,\,\mbox{\bf z})+\mbox{KL}(\mbox{\bf Y}\,||\,\,\mbox{\bf W})\\ \mbox{\bf x},\mbox{\bf Y}&\\ \mbox{subject to}&\sum_{i}\mbox{\bf x}_{i}=1\\ &\sum_{k}\mbox{\bf Y}_{ik}=\mbox{\bf x}_{i}\,\,\,\forall i\end{array}\tag{13}$$ We can find Y∗ = arg minY KL(Y || W) subject to Pj Yi,j = xi based on Prop. 1: Y∗ i,j = PxiWi,j j Wi,j . That is, we can express Y∗as a function of x (which we write Y∗(x)), and therefore our overall problem is now a problem in one variable (x): $$\begin{array}{ll}\mbox{arg min}&\mbox{KL}(\mbox{\bf x}\,||\,\mbox{\bf z})+\mbox{KL}(\mbox{\bf Y}^{*}(\mbox{\bf x})\,||\,\mbox{\bf W})\\ \mbox{\bf x}&\\ \mbox{subject to}&\sum_{i}\mbox{\bf x}_{i}=1\\ \end{array}\tag{14}$$ We can now rewrite the objective function as $$\begin{array}{r l}{\arg\operatorname*{min}_{\mathbf{x}}}&{{}\quad\operatorname{KL}(\mathbf{x}\mid\mid\mathbf{q})}\\ {\mathrm{subject~to}}&{{}\quad\sum_{i}\mathbf{x}_{i}=1}\\ {\quad}&{{}}\end{array}$$ $$(15)$$ where qi = qzi·Pj′ Wi,j′. This step is justified in detail in Fig. 4. The rewritten optimization problem has the right form to apply Prop. 1 a second time. We obtain: $$\mathbf{x}_{i}^{*}={\frac{\mathbf{q}_{i}}{\sum_{i^{\prime}}\mathbf{q}_{i^{\prime}}}}$$ By plugging this into Y∗(x), we obtain the solution to the overall optimization problem. ## A.3 Reduction Of Eq. (8) To Eq. (6) In this section, we show how Eq. (8) can be reduced to a problem of the entropy regularized version of Eq. (6). This is useful because it means we can use Algorithm 1 to solve Eq. (8). First, we show that computing a KL projection is equivalent to solving an entropy-regularized linear program. Let C be the feasible region of the linear constraints. $$\operatorname*{arg}_{\mathbf{x}\in C}\mathbf{x}^{T}\mathbf{x}-\tau\sum_{i}\mathbf{x}_{i}(\log\mathbf{x}_{i}-1)$$ $$=\operatorname*{arg}_{\mathbf{x}\in C}\tau\sum_{i}\mathbf{x}_{i}(\log\mathbf{x}_{i}-{\frac{\mathbf{s}_{i}}{\tau}}-1)$$ $$=\operatorname*{arg}_{\mathbf{x}\in C}\operatorname*{min}_{\mathbf{x}\in C}\operatorname{KL}(\mathbf{x}\mid\mid\operatorname*{exp}({\frac{\mathbf{s}}{\tau}}))$$ $$14500$$ Due to this, Eq. (8) is equivalent to a linear program that has the same feasible region as Eq. (6) except for the additional constraint U ∈ Q(y, z′). Note that U ∈ Q(y, z′) essentially rules out certain correspondences. Therefore we can approximately enforce U ∈ Q(y, z′) by masking U∗(s) such that any forbidden correspondence receives a very low score. ## A.4 Derivation Of Loss Function As Elbo We now show how the training procedure we use to train our permutation model can be derived from a form of evidence lower bound (ELBO). Ideally, our permutation model would be a distribution Pθ(R|x, z′) over permutation matrices R and we would maximize the marginal likelihood, i.e. marginalizing over all permutations: $$P(\mathbf{y}|\mathbf{x},\mathbf{z}^{\prime})=\sum_{\mathbf{R}\in\mathcal{P}}P_{\theta}(\mathbf{R}|\mathbf{x},\mathbf{z}^{\prime})P(\mathbf{y}|\mathbf{z}^{\prime},\mathbf{R})\tag{16}$$ where $P(\mathbf{y}|\mathbf{z}^{\prime},\mathbf{R})=\prod_{j}\sum_{i}R_{ij}\cdot\mathbbm{1}(y_{j}=z_{i})$ with 1 being the indicator function. P(y|z′, R) returns 1 iff applying the permutation R to z′results in y. Unfortunately, computing Eq. (16) exactly is intractable in general due to the sum over permutation matrices. We instead use techniques from variational inference and consider the following evidence lower bound (ELBO): $$\log P({\bf y}|{\bf x},{\bf z^{\prime}})\geq\max_{Q}{\rm I\!E}_{{\bf R}\sim Q({\bf R}|{\bf x},{\bf z^{\prime}},{\bf y})}\log P({\bf y}|{\bf z^{\prime}},{\bf R})$$ $$-{\rm KL}(Q({\bf R}|{\bf x},{\bf z^{\prime}},{\bf y})\ ||\ P_{\theta}({\bf R}|{\bf x},{\bf z^{\prime}}))\,\tag{17}$$ where Q(R|x, z′, y) is an approximate variational posterior. We now relax the restriction that P(R|x, z′) places non-zero mass only on permutation matrices and use the following definition of Pθ(R|x, z′): $$P_{\theta}(\mathbf{R}_{i j}=1|\mathbf{x},\mathbf{z}^{\prime})=\mathbf{U}^{*}(s)_{i j}$$ lutien to Eq. ($\boldsymbol{\Theta}$) w. where U∗(s) is the solution to Eq. (6) with added entropy regularization. It turns out, in our case, we can easily construct a variational posterior Q that has zero reconstruction loss (the first term on the right side in Eq. (17)): we can choose any Q(R|x, z′, y) ∈ Q(y, z′) where Q(y, z′) is the set of bistochastic matrices such that Q(R|x, z′, y)i,j = 0 iff z′i̸= yj . To see that this gives zero reconstruction error, consider position j in the output: The probability mass is distributed across precisely those positions i in z′ where the ![13_image_0.png](13_image_0.png) | BART perm | Ours | | |-------------|------------|-------------| | OS | 16±8 | 33±24 | | CP | 17±3 | 82±11 | | PP | 16±5 | 91±5 | | Length | 23.81±9.45 | 41.39±13.47 | COGS OS 16±8 33±24 CP 17±3 82±11 PP 16±5 91±5 ATIS iid 77.10±1.61 76.65±1.67 Length 23.81±9.45 41.39±13.47 right kind of token lives. In other words, any alignment with non-zero probability will reconstruct the output token at position j. Therefore we can use the following lower bound to the log-likelihood: $$\begin{array}{l}{{\log\ P(y|{\bf x},{\bf z^{\prime}})\geq}}\\ {{-\operatorname*{min}_{Q\in{\cal Q}({\bf y},{\bf z})}\ {\mathrm{KL}}(Q({\bf R}|{\bf x},{\bf z^{\prime}},{\bf y})\ ||\ P_{\theta}({\bf R}|{\bf x},{\bf z^{\prime}}))}}\end{array}$$ During training, we need to compute the gradient of Eq. (18). By Danskin's theorem (Danskin, 1967), this is: $$-\nabla_{\theta}{\bf K}{\bf L}(Q^{*}\mid\mid P_{\theta}({\bf R}|{\bf x},{\bf z}^{\prime}))\qquad\quad(19)$$ where Q∗ ∈ Q(y, z′) is the minimizer of Eq. (18). Note that Q∗can equivalently be characterised as Uˆ (Eq. (8)). In practice, we also add −KL(Wˆ |W∗(s)) to our objective in Eq. (18) to speed up convergence; this does not change the fact that we use a lower bound. ## B Additional Results And Analysis Okapi. In Fig. 5 we show the accuracy of our model on the document domain in comparison with previous work by number of conjuncts in the logical form. Permutation baseline. A simpler approach for predicting a permutation of the output z′from the multiset tagging is to use a seq2seq model. In order to compare our approach to such a baseline, we concatenate the original input x with a separator token and z′. We then feed this as input to a BART-base model which is trained to predict the output sequence y. At inference time, we use beam search and enforce the output to be a permutation of the input. As detailed in Table 6, this approach works well in-distribution and it also shows a small improvement over finetuning BART directly on the length split of ATIS. However, it does not perform as well as our approach. On COGS, our model outperforms the permutation baseline by an even bigger margin. Unseen variable symbols could be a challenge for BART on COGS which might explain part of the gap in performance. This approach towards predicting permutations is similar to that of Cazzaro et al. (2023) except that they do not constrain the beam search to permutations. We found that not constraining the output to be a permutation worked worse in the compositional generalization setups. ## C Further Model Details C.1 Parametrization Of Permutation Model We do not share parameters between the multiset tagging model and the permutation model. Tokens that appear more than once in the same multiset have the same representation h′i in Eq. (7). In order to distinguish them, we concatenate another embedding to h′i : if the token z′i is the k-th instance of its type in its multiset, we concatenate an embedding for k to h′i . For example, in Fig. 1, z′5 = 'x1' and it is the second instance of 'x1' in its multiset, so we use the embedding for 2. We found it helpful to make the temperature τ of the scores for Algorithm 1 dependent on the number of elements in the permutation, setting τ = (log n)−1, so that longer sequences have slightly sharper distributions. Since the permutation model is designed to model exactly permutations, during training, z and y must have the same elements. This is not guaranteed because z is the prediction of the multiset model which may not have perfect accuracy on the training data. For simplicity, we disregard instances where z and y do not have the same elements. In practice, this leads to a very small loss in training data for the permutation model. ## C.2 Lexicon Mechanism The lexicon L is a lookup table that deterministically maps an input token xito an output token L(xi), and we modify the distribution for multiset tagging as follows: $$P^{\prime}(\mathbf{z}_{i,v}=k|\mathbf{x})={\begin{cases}P(\mathbf{z}_{i,\mathcal{L}}=k|\mathbf{x}_{i})&{{\mathrm{if~}}v=L(\mathbf{x}_{i})}\\ P(\mathbf{z}_{i,v}=k|\mathbf{x})&{{\mathrm{else}}}\end{cases}}$$ where P(zi,v = k|x) is as defined in Eq. (3) and L is a special lexicon symbol in the vocabulary. P(zi,L |xi) is a distribution over the multiplicity of L(xi), independent of the identity of L(xi). We use the 'simple' lexicon induction method by Akyurek and Andreas (2021). Unless otherwise specified during learning, L(xi) = xilike in a copy mechanism. Handling of variables in COGS. For the COGS dataset, a model has to predict variable symbols. The variables are numbered (0-based) by the input token that introduced it (e.g. in Fig. 1, slept, the third token, introduces a variable symbol x2). In order to robustly predict variable symbols for sentences with unseen length, we use a similar mechanism as the lexicon look up table: we introduce another special symbol in the vocabulary, Var. If Var is predicted with a multiplicity of k at i-th input token, it adds the token xi−1 to its multiset k times. ## C.3 Initialization Of Multiset Tagging Model If there are l alignments with a posterior probability of at least χ that an input token i produces token v, we add the term λ log P(zi,v ≥ l | x) to Eq. (4). λ is the hyperparameter determining the strength. This additional loss is only used during the first g epochs. ## D Datasets And Preprocessing We show basic statistics about the data we use in Table 7. Except for the doubling task, all our datasets are in English. COGS uses a small fragment of English generated by a grammar, see Kim and Linzen (2020) for details. Doubling task. For the doubling task, we use an alphabet of size |Σ| = 11. To generate inputs with a specific range of lengths, we first draw a length from the range uniformly at random. The symbols in the input are also drawn uniformly at random and then concatenated into a sequence. Examples of lengths 5 - 10 are used as training, examples of | Dataset | Split/Version | Train | Dev | Test | |-----------|-----------------|---------|--------|--------| | Doubling | 4,000 | 500 | 1,000 | | | COGS | 24,155 | 3,000 | 21,000 | | | ATIS | iid | 4,465 | 497 | 448 | | length | 4,017 | 942 | 331 | | | Calendar | 1,145 | 200 | 1061 | | | Okapi | Document | 2,328 | 412 | 514 | | Email | 2,343 | 200 | 991 | | Table 7: Number of examples per dataset/split. length 11 are used as development data (e.g. for hyperparameter selection), and examples of length 11 - 20 are used as test data. ## D.1 Preprocessing COGS. Unlike Zheng and Lapata (2022); Qiu et al. (2022); Drozdov et al. (2022) we do not apply structural preprocessing to the original COGS meaning representation and keep the variable symbols: all our preprocessing is local and aimed at reducing the length of the logical form (to keep runtimes low). We delete any token in {",","_","(",")","x",".",";","AND"} as these do not contribute to the semantics and can be reconstructed easily in post-processing. The tokens {"agent", "theme", "recipient", "ccomp", "xcomp", "nmod", "in", "on", "beside"} are always preceded by a "." and we merge "." and any of those tokens into a single token. Example: * cookie ( x _ 3 ) ; * table ( x _ 6 ) ; lend . agent ( x _ 1 , Dylan ) AND lend . theme ( x _ 1 , x _ 3 ) AND lend . recipient ( x _ 1 , x _ 9 ) AND cookie . nmod . beside ( x _ 3 , x _ 6 ) AND girl ( x _ 9 ) Becomes * cookie 3 * table 6 lend .agent 1 Dylan lend .theme 1 3 lend .recipient 1 9 cookie .nmod .beside 3 6 girl 9 ATIS. We follow the pre-procressing by Lindemann et al. (2023) and use the variable-free FunQL representation as annotated by Guo et al. (2020). We use spacy 3.0.5 (model en_core_web_sm) to tokenize the input. Okapi. Again, we follow the preprocessing of Lindemann et al. (2023). We use spacy 3.0.5 (model en_core_web_sm) to tokenize both the input utterances and the output logical forms. ## E Details On Evaluation Metrics We provide code for all evaluation metrics in our repository. Doubling. We use exact match accuracy on the string. COGS. For COGS we use exact match accuracy on the sequence in one evaluation setup. The other evaluation setup disregards the order of conjuncts: we first remove the 'preamble' (which contains all the definite descriptions) from the conjunctions. We count a prediction as correct if the set of definite descriptions in the preamble matches the set of definite descriptions in the gold logical form and the set of clauses in the prediction match the set of clauses in the gold logical form. ATIS. We allow for different order of conjuncts between system output and gold parse in computing accuracy. We do this by sorting conjuncts before comparing two trees node by node. This is the same evaluation metric as used by Lindemann et al. (2023). Okapi. We follow Hosseini et al. (2021); Lindemann et al. (2023) and disregard the order of the parameters for computing accuracy. We use a case-insensitive string comparison. ## F Hyperparameters We use the same hyperparameters for all splits of a dataset. For our model, we only tune the hyperparameters of the multiset tagging model; the permutation model is fixed, and we use the same configuration for all tasks where we use RoBERTa. For model ablations where we use an LSTM instead of RoBERTa, we use the same hyperparameters for Okapi and ATIS, and a smaller model for the doubling task. These configurations were determined by hand without tuning. For BART, we use the same hyperparameter as Lindemann et al. (2023). We follow the random hyperparameter search procedure of Lindemann et al. (2023) for the multiset tagging models and the LSTM/transformer we train from scratch: we sample 20 configurations and evaluate them on the development set. We run the two best-performing configurations again with a different random seed and pick the one with the highest accuracy (comparing the union of the predicted multisets with the gold multiset). We then train and evaluate our model with entirely different random seeds. Dataset Model First stage Second stage Doubling LSTM 5 - Transformer 3 - Ours 3 14∗ COGS Ours 20 80 ATIS Ours 30 50 Ours/LSTM 30 110 Okapi/Calendar Ours 9 35 Ours/LSTM 5 30 Okapi/Email Ours 12 40 Ours/LSTM 9 40 Okapi/Document Ours 12 70 Ours/LSTM 10 55 Table 8: Average runtime for train/evaluate on dev and test in **minutes**. For the doubling task, we note that our model has converged usually after 14 of the time in the table. Dataset Model First stage Second stage Doubling LSTM 2.462 Transformer 10.424 Ours 0.273 0.463 COGS Ours 125.091 127.16 ATIS Ours 125.506 127.119 Ours/LSTM 3.345 1.816 Okapi/Calendar Ours 124.927 127.03 Ours/LSTM 1.493 1.876 Table 9: Number of parameters in millions in the models we train. This includes the 124.646m params of RoBERTa when we finetune it. The chosen hyperparameters along with the search space are provided in the github repository. ## G Number Of Parameters, Computing Infrastructure And Runtime We show the number of parameters in the models we train in Table 9. All experiments were run on GeForce GTX 1080 Ti or GeForce GTX 2080 Ti with 12GB RAM and Intel Xeon Silver or Xeon E5 CPUs. The runtime of one run contains the time for training, evaluation on the devset after each epoch and running the model on the test set. We show runtimes of the model we train in Table 8. Since we evaluate on 5 random seeds (10 for COGS due to high variance of results), our experiments overall took around 64 hours of compute time on our computing infrastructure. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations, at the end of the paper ✗ A2. Did you discuss any potential risks of your work? No apparent societal risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 6, Appendix D ✓ B1. Did you cite the creators of artifacts you used? 6 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The data we create programmatically is likely too simple to be protected by copyright. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 6 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 6, Appendix D ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 6, Appendix D ## C ✓ **Did You Run Computational Experiments?** 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix H The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 6, Appendix G, code submission. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 6, Appendix B ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix D D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
xu-etal-2023-managertower
{M}anager{T}ower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning
https://aclanthology.org/2023.acl-long.811
Two-Tower Vision-Language (VL) models have shown promising improvements on various downstream VL tasks. Although the most advanced work improves performance by building bridges between encoders, it suffers from ineffective layer-by-layer utilization of uni-modal representations and cannot flexibly exploit different levels of uni-modal semantic knowledge. In this work, we propose ManagerTower, a novel VL model architecture that gathers and combines the insights of pre-trained uni-modal experts at different levels. The managers introduced in each cross-modal layer can adaptively aggregate uni-modal semantic knowledge to facilitate more comprehensive cross-modal alignment and fusion. ManagerTower outperforms previous strong baselines both with and without Vision-Language Pre-training (VLP). With only 4M VLP data, ManagerTower achieves superior performances on various downstream VL tasks, especially 79.15{\%} accuracy on VQAv2 Test-Std, 86.56{\%} IR@1 and 95.64{\%} TR@1 on Flickr30K. Code and checkpoints are available at \url{https://github.com/LooperXX/ManagerTower}.
## Managertower: Aggregating The Insights Of Uni-Modal Experts For Vision-Language Representation Learning Xiao Xu1, 3∗ , Bei Li2, 3, Chenfei Wu3, Shao-Yen Tseng4**, Anahita Bhiwandiwalla**4, Shachar Rosenman4, Vasudev Lal4, Wanxiang Che1†**, Nan Duan**3† 1Harbin Institute of Technology, Harbin, China, 2Northeastern University, Shenyang, China 3Microsoft Research Asia, 4Intel Labs, Cognitive Computing Research {xxu,car}@ir.hit.edu.cn, libei_neu@outlook.com {chenfei.wu,nanduan}@microsoft.com, shao-yen.tseng@intel.com {anahita.bhiwandiwalla,shachar.rosenman,vasudev.lal}@intel.com ## Abstract Two-Tower Vision-Language (VL) models have shown promising improvements on various downstream VL tasks. Although the most advanced work improves performance by building bridges between encoders, it suffers from ineffective layer-by-layer utilization of uni-modal representations and cannot flexibly exploit different levels of uni-modal semantic knowledge. In this work, we propose ManagerTower, a novel VL model architecture that gathers and combines the insights of pre-trained uni-modal experts at different levels. The managers introduced in each cross-modal layer can adaptively aggregate uni-modal semantic knowledge to facilitate more comprehensive cross-modal alignment and fusion. ManagerTower outperforms previous strong baselines both with and without Vision-Language Pre-training (VLP). With only 4M VLP data, ManagerTower achieves superior performances on various downstream VL tasks, especially 79.15% accuracy on VQAv2 Test-Std, 86.56% IR@1 and 95.64% TR@1 on Flickr30K. Code and checkpoints are available at https://github.com/ LooperXX/ManagerTower. ## 1 Introduction In recent years, there has been a growing interest in the field of Vision-Language (VL) representation learning due to the development of VisionLanguage Pre-training (VLP) techniques. VLP aims to learn transferable multi-modal knowledge from large-scale image-text pairs, which can further improve the performance of various downstream VL tasks, such as visual question answering (Goyal et al., 2017), visual entailment (Xie et al., 2019), visual reasoning (Suhr et al., 2019), and image-text retrieval (Young et al., 2014). Visual and textual modalities in VL models are typically processed by uni-modal encoders and subsequently fused in a cross-modal encoder. This ∗Contribution during internship at Microsoft. †Contact Person ![0_image_0.png](0_image_0.png) general architecture can be referred to as the TwoTower architecture. METER (Dou et al., 2022) and BridgeTower (Xu et al., 2022) are two representative Two-Tower VL models. METER uses CLIPViT (Radford et al., 2021) and RoBERTa (Liu et al., 2019b) as pre-trained uni-modal encoders, but it ignores different levels of uni-modal semantic knowledge in them and only feeds the last-layer outputs of each uni-modal encoder into the cross-modal encoder. In an effort to address this issue, as illustrated in Figure 1(a), BridgeTower connects multiple top uni-modal layers with each cross-modal layer in a layer-by-layer fashion to exploit unimodal semantic knowledge at different levels. In this work, we build upon the research of BridgeTower and advance it in two aspects. Specifically, we address the limitations of BridgeTower: (i) its layer-by-layer utilization of different unimodal layer representations is ineffective. Each cross-modal layer can only utilize an artificiallyconnected uni-modal layer representation, thus restricting the exploitation of different levels of unimodal semantic knowledge. (ii) the number of cross-modal layers is tied to the number of uni14507 modal layer representations it used, thus limiting its scalability and capability. For example, increasing the number of uni-modal layer representations used requires a corresponding increase in the number of cross-modal layers. This leads to an increase in the number of parameters and computation cost, while does not always result in performance improvements as demonstrated by Xu et al. (2022). As shown in Figure 1(b), we propose a novel VL model architecture, ManagerTower, that aggregates multi-layer uni-modal representations via managers in each cross-modal layer. Each manager takes multi-layer uni-modal representations as the **insights** of pre-trained uni-modal **experts** at different levels, and then **adaptively** aggregates them to facilitate more comprehensive cross-modal alignment and fusion. More concretely, inspired by the linear combination of layers (Wang et al., 2019) method, we adapt it as the Static Aggregation of Experts (SAE) manager and then remove redundant information to design the Static Aggregation of Uni-modal Experts (SAUE) manager, which focuses on aggregating uni-modal semantic knowledge. We further propose the Adaptive Aggregation of Uni-modal Experts (AAUE) manager to adaptively aggregate multi-layer uni-modal representations for each token in different cross-modal layers. Moreover, in principle, managers can be easily integrated into any cross-modal encoders and work well with any uni-modal encoders, making ManagerTower scalable and flexible. We first explore the feasibility of various designs of managers by evaluating and analyzing the performance on VQAv2 and Flickr30K datasets. Then, we pre-train ManagerTower with commonly used 4M VLP data and evaluate it on various downstream VL tasks. With the same pre-training and fine-tuning settings and uni-modal backbones as previous strong baselines such as METER and BridgeTower, ManagerTower achieves superior performances on various downstream VL tasks, especially 79.15% accuracy on VQAv2 Test-Std, 86.56% IR@1 and 95.64% TR@1 on Flickr30K. It outperforms not only many base-size models pre-trained on 4M data but also some models pretrained on more data and/or with larger size. ## 2 Preliminary In this work, for a fair comparison with METER and BridgeTower, we use the same cross-modal encoder and pre-trained uni-modal encoders. ## 2.1 Visual Encoder CLIP-ViT, the visual encoder of CLIP (Radford et al., 2021), has been widely used in VL models (Shen et al., 2021; Dou et al., 2022). It reshapes each input image into a flattened patch sequence and prepends a [class] token to the sequence. After a linear projection, position embeddings are added to the sequence to get the input visual representation V0. The ` th visual layer representation can be computed as: V` = EncoderV ` (V`−1), `= 1 *. . . L*V, where ` is the layer index and LV is the number of layers of the visual encoder. ## 2.2 Textual Encoder RoBERTa (Liu et al., 2019b) is widely used in the field of VL (Dou et al., 2022; Li et al., 2022b) due to its robust performance. It tokenizes the input text with the byte-level Byte-Pair Encoding (BPE) (Sennrich et al., 2016; Radford et al., 2019) and adds [<s>] and [</s>] tokens to the start and end of the sequence, respectively. Then, it applies word embeddings and positional embeddings to the tokenized sequence to get the input textual representation T0. Similarly, the ` th textual layer representation can be computed as: T` =EncoderT ` (T`−1), `= 1 *. . . L*T, where LT is the number of layers of the textual encoder. ## 2.3 Cross-Modal Encoder We adopt the transformer encoder (Vaswani et al., 2017) with the co-attention mechanism as the cross-modal encoder (Lu et al., 2019). For each cross-modal layer, each modality has a multi-head self-attention (MSA) block, a multi-head crossattention (MCA) block, and a feed-forward (FFN) block. The MCA block allows the visual part of the cross-modal encoder to attend to the textual part and vice versa. Each cross-modal layer is denoted as EncoderC ` , `= 1 *. . . L*C, where LC is the number of cross-modal layers. For brevity, the ` th cross-modal layer computes as: $$\begin{array}{c c}{{\tilde{\mathbf{C}}_{\ell}^{\mathrm{V}}=\mathbf{C}_{\ell-1}^{\mathrm{V}},}}&{{\quad\quad\quad\quad(1)}}\\ {{\tilde{\mathbf{C}}_{\ell}^{\mathrm{T}}=\mathbf{C}_{\ell-1}^{\mathrm{T}},}}&{{\quad\quad\quad(2)}}\\ {{\mathbf{C}_{\ell}^{\mathrm{V}},\mathbf{C}_{\ell}^{\mathrm{T}}=\mathrm{Encoder}_{\ell}^{\mathrm{C}}(\tilde{\mathbf{C}}_{\ell}^{\mathrm{V}},\tilde{\mathbf{C}}_{\ell}^{\mathrm{T}}),}}&{{\quad\quad(3)}}\end{array}$$ where CV ` , CT ` are the output representations of the visual and textual part at the ` th layer, C˜ V ` , C˜ T ` are inputs of each part. CV 0 , CT 0 are initialized with the last-layer representations from uni-modal encoders: CV 0 =VLVWV, CT 0 =TLTWT, where WV,WT 14508 ![2_image_0.png](2_image_0.png) are linear cross-modal projections. In this work, we use the same default setting as BridgeTower for a fair comparison: LV =LT = 12, LC = 6, and only top N= 6 uni-modal layer representations are used. ## 2.4 Utilization Of Uni-Modal Experts Different layers of uni-modal encoders encoding different levels of semantic information are well demonstrated in vision (Dosovitskiy et al., 2020; Raghu et al., 2021; Naseer et al., 2021) and language (Peters et al., 2018b; Liu et al., 2019a; Jawahar et al., 2019). According to Dosovitskiy et al. (2020) and Raghu et al. (2021), lower layers of ViT tend to attend both locally and globally, while higher layers primarily focus on global information. Similarly, Jawahar et al. (2019) found that the intermediate layers of BERT (Devlin et al., 2019) encode a hierarchy of linguistic information, with surface features at the bottom, syntactic features in the middle, and semantic features at the top. In the field of VL, some works have explored the usage of pre-trained multi-layer uni-modal representations (Dou et al., 2022; Xu et al., 2022). They simply feed the weighted sum of uni-modal layer representations into the first cross-modal layer, or layer-by-layer exploit multiple top uni-modal layer representations in each cross-modal layer. In this work, we take each layer of the pre-trained unimodal encoder as a uni-modal **expert**, and the output representation of each layer as the **insight** of the uni-modal expert into the current input. ## 3 Manager Design Figure 2 depicts the overall framework of ManagerTower. It introduces managers in each cross-modal layer to adaptively aggregate the insights of pretrained uni-modal experts at different levels. In the subsequent subsections, we will elaborate on the detailed design schema for the three types of managers, and conclude with the cross-modal encoder with our well-designed managers.1 ## 3.1 Static Aggregation Of Experts The effectiveness of layer fusion in learning comprehensive representations has been well demonstrated in machine translation (Wang et al., 2018, 2019; Wei et al., 2020). Motivated by this, we decide to apply this technique in the context of VL. As a preliminary approach, we choose to utilize the linear combination of layers method (Wang et al., 2019), which is a simple yet effective way to aggregate the representations of previous layers through the use of learned weights in each encoder layer. A natural idea is to adapt it to aggregate unimodal and cross-modal output representations of all previous layers. We name it the Static Aggregation of Experts (SAE) manager. The calculation of the ` th visual manager is: $$\begin{array}{l}{{\mathcal{M}_{\ell}^{\mathrm{V}}(\mathbf{V}_{7},\ldots,\mathbf{V}_{12},\mathbf{C}_{1}^{\mathrm{V}},\ldots,\mathbf{C}_{\ell-1}^{\mathrm{V}})=}}\\ {{\sum_{i=1}^{\ell-1}\mathbf{W}_{i+6}^{\mathrm{V},\ell}\odot\mathrm{LN}(\mathbf{C}_{i}^{\mathrm{V}})+\sum_{i=1}^{6}\mathbf{W}_{i}^{\mathrm{V},\ell}\odot\mathrm{LN}(\mathbf{V}_{i+6}),}}\end{array}$$ where MV ` denotes the manager for the visual part of the ` th cross-modal layer, WV,` ∈R (6+`−1)×D is a learnable parameter matrix, denotes the element-wise product operation and LN(·) denotes Layer Normalization (Ba et al., 2016). The 1More details on pre-training objectives and downstream fine-tuning are described in Appendix A. softmax with a learnable temperature is used to normalize WV,`. We then omit the superscript V,` of W for brevity. The learned aggregation weight W is initialized with 1 6+`−1 on average in order to assign equal weights to the output representation of all previous layers. However, directly applying SAE to VL models is non-trivial, since it does not bring a desired performance improvement compared to BridgeTower but led to a significant performance decrease. We posit that this decrease may be due to the average initialization of W not being suitable for cross-modal and pre-trained uni-modal output representations as they have different scales. To investigate this hypothesis, we propose dividing the parameter matrix W into uni-modal and cross-modal parts and initializing them with 16 and 1 `−1 , respectively,2and also learn the softmax temperature separately. The experimental result yield a significant improvement compared to the direct application of SAE, but a limited improvement compared to BridgeTower. These observations provide a compelling argument for re-examining how to aggregate multi-layer pretrained uni-modal representations. ## 3.2 Static Aggregation Of Uni-Modal Experts Since Equation (4) can be divided into uni-modal and cross-modal parts, by computing the cosine similarity of aggregated uni-modal/cross-modal representations between every two consecutive textual/visual managers, we further analyze the insights aggregated by different SAE managers. As shown in Figure 3, for SAE managers, the uni-modal similarity is always similar to 1, while the cross-modal similarity increases with depth and gets closer to 1. This indicates that, the uni-modal representations aggregated by different SAE managers are almost identical, and the aggregated crossmodal representations get similar with depth. We hypothesize that, since different SAE managers provide similar aggregated uni-modal representations to each cross-modal layer, output representation of more preceding cross-modal layers may bring redundant information to confuse the managers. This leads to aggregated cross-modal representations converging to indistinguishable vectors as the depth increases. Hence, we propose focusing on aggregating the insights of pre-trained uni-modal experts and keep-2We also try some different initialization methods: one, progressive, exponential moving average, BridgeTower-like, etc., but the results are similar to or lower than the average. ![3_image_0.png](3_image_0.png) ing only the output representation of the previous cross-modal layer. We name it the Static Aggregation of Uni-modal Experts (SAUE) manager. The calculation of the ` th visual manager becomes: $$\begin{array}{l}{{{\mathcal{M}}_{\ell}^{\mathrm{V}}(\mathbf{V}_{7},\ldots,\mathbf{V}_{12},\mathbf{C}_{\ell-1}^{\mathrm{V}})=}}\\ {{\mathbf{W}_{\mathrm{C}}\odot\mathrm{LN}(\mathbf{C}_{\ell-1}^{\mathrm{V}})+\sum_{i=1}^{6}\mathbf{W}_{i}\odot\mathrm{LN}\left(\mathbf{V}_{i+6}\right),}}\end{array}\tag{5}$$ $\mathbb{D}\times\mathbb{D}$, $\alpha$. $\tau=\mathrm{Re}$. where W∈ R 6×D and WC ∈ R 1×D are learnable parameter matrices and initialized with 16 and 1 on average, respectively. The softmax with a learnable temperature only normalizes W. The significant improvement compared to BridgeTower empirically support our hypothesis. Moreover, in Figure 3, the cross-modal similarity of SAUE decreases with depth, which indicates that comprehensive and distinguishable cross-modal representations are learned as depth increases. ## 3.3 Adaptive Aggregation Of Uni-Modal Experts Although the SAUE manager achieves a significant performance improvement, it still has two limitations: (i) W, the learned aggregation weight of uni-modal expert insights, is almost identical between managers in different cross-modal layers, as shown in Figure 3 & 7, which is inconsistent with the intuition that the need for uni-modal semantic knowledge varies among cross-modal layers; (ii) in the inference phase, managers in different crossmodal layers use the same aggregation weight of uni-modal expert insights for all tokens in different samples, which does not match the intuition that the need for uni-modal semantic knowledge varies among tokens and samples. Name | $\mathsf{CA}(\mathsf{C}^\mathsf{V}_{\ell-1},\mathsf{C}^\mathsf{T}_{\ell-1})$ | Shape | LxD ![4_image_0.png](4_image_0.png) Figure 4: An illustration of the calculation of aggregated uni-modal representations AV ∈R L×D in the visual AAUE manager. CA denotes the cross-attention mechanism. N= 6. We omit LN and softmax for brevity. To address the above limitations, we propose the Adaptive Aggregation of Uni-Modal Experts (AAUE) manager. During training and inference phases, AAUE managers can adaptively exploit different levels of uni-modal semantic knowledge from pre-trained uni-modal experts, for different tokens in different samples. Take the visual AAUE manager for example, the calculation of the ` th visual manager becomes: $$\mathcal{M}_{\ell}^{\mathrm{V}}(\mathbf{V}_{7},\ldots,\mathbf{V}_{12},\mathbf{C}_{\ell-1}^{\mathrm{V}})=$$ $$\mathbf{W}_{\mathrm{C}}\odot\mathrm{LN}(\mathbf{C}_{\ell-1}^{\mathrm{V}})+\sum_{i=1}^{6}\mathbf{W}_{\mathrm{A},i}\odot\mathrm{LN}\,(\mathbf{V}_{i+6}),\tag{6}$$ $$\mathbf{W}_{\mathrm{A}}=\mathrm{softmax}(\mathrm{LN}(\mathbf{C}_{\ell-1}^{\mathrm{V}})\times\mathbf{W}_{\mathrm{M}}+\epsilon),\tag{7}$$ where WM ∈ R D×6is a linear projection layer. The generated aggregation weights WA ∈R 6×L×D can adaptively aggregate uni-modal representations of each token from different levels of pre-trained uni-modal experts. The softmax has a learnable temperature and ∼ N (0, 1 6 2 ) is a Gaussian noise for exploration of aggregation (Xue et al., 2022). Furthermore, to better help managers to exploit uni-modal semantic knowledge for the current cross-modal layer, we propose replacing the visual query CV `−1 in Equation (7) with the cross-modal fused query CA(CV `−1 , CT `−1 ) to further improve performance, where CA is a cross-attention mechanism. We visualize WA in Section 4.4. ## 3.4 Cross-Modal Encoder With Managers Since the 1 st cross-modal layer lacks the output representations of the previous cross-modal layer as the query, we introduce the SAUE managers in the 1 st cross-modal layer and the AAUE managers in the subsequent cross-modal layers. Hence, Equation (1) & (2) of the 1 st cross-modal layer with SAUE managers becomes: $$\begin{array}{l}{{\tilde{\mathbf{C}}_{1}^{\mathrm{V}}={\mathcal{M}}_{1}^{\mathrm{V}}(\mathbf{V}_{7},\ldots,\mathbf{V}_{12}),}}\\ {{\tilde{\mathbf{C}}_{1}^{\mathrm{T}}={\mathcal{M}}_{1}^{\mathrm{T}}(\mathbf{T}_{7},\ldots,\mathbf{T}_{12}).}}\end{array}$$ For the 2 nd and subsequent cross-modal layers with AAUE managers: $$\tilde{\bf C}_{\ell}^{\rm V}={\cal M}_{\ell}^{\rm V}({\bf V}_{7},\ldots,{\bf V}_{12},{\bf C}_{\ell-1}^{\rm V},{\bf C}_{\ell-1}^{\rm T}),\tag{10}$$ $$\tilde{\bf C}_{\ell}^{\rm T}={\cal M}_{\ell}^{\rm T}({\bf T}_{7},\ldots,{\bf T}_{12},{\bf C}_{\ell-1}^{\rm T},{\bf C}_{\ell-1}^{\rm V}),\tag{11}$$ where we omit the modality type and layer index embeddings added to uni-modal layer representations V, T in the above equations for simplicity. Figure 4 shows adaptive aggregation of the insights of pre-trained visual experts in AAUE manages, which is the uni-modal (right) part of Equation (6). As for SAUE managers, they directly broadcast the learned weights W∈R 6×D to WA and then aggregate the insights. ## 4 Experiments 4.1 Implementation Details ManagerTower consists of a pre-trained textual encoder, RoBERTaBASE with 124M parameters, a pre-trained visual encoder, CLIP-ViT B-224/16 with 86M parameters, and a randomly-initialized 6-layer cross-modal encoder with managers which has 113M+12M parameters. The detailed setting of the cross-modal encoder is the same as BridgeTower. The maximum length of the text sequence is set to 50, and the image patch size is 16× 16. We use an image resolution of 384 × 384 for Flickr30K and 576×576 for VQAv2 for a fair comparison with BridgeTower. AdamW (Loshchilov and Hutter, 2019) optimizer with a base learning rate of 2e−5and warmup ratio of 0.1 is used. ## 4.2 Investigation And Analysis In this section, we investigate various designs of managers and evaluate the performance by directly fine-tuning on VQAv2 and Flickr30K without VLP. Experimental settings are the same as BridgeTower for a fair comparison. Note that uni-modal encoders are initialized with their pre-trained weights. $$\begin{array}{l}{(8)}\\ {\qquad(9)}\end{array}$$ 4.2.1 Type of Manager We first investigate the performance of different types of managers and different queries. Take the $$14511$$ Type Visual Query Weight Test-Dev RMEAN BT - N × 1 75.91 93.33 SAE - N × 1 76.19 93.57 ![5_image_1.png](5_image_1.png) SAUE - N × 1 76.38 93.75 ![5_image_0.png](5_image_0.png) visual manager for example, based on the top N= 6 visual layer representations V ∈ R N×L×D from CLIP-ViT, different managers provide the aggregation weights that can be broadcast to WA for aggregating the insights of pre-trained visual experts. From the perspective of aggregation weights WA, the SAE and SAUE managers are **static** sentencelevel managers that share the same aggregation weights for all tokens in different samples. Correspondingly, the AAUE manager is an **adaptive** token-level manager that adaptively **generates** different aggregation weights for different tokens in different samples. Besides, we also implement Equation (7) with commonly used cross-attention and concat-attention mechanisms for comparison. Results are shown in Table 1. By focusing on aggregating the insights of pre-trained uni-modal experts, the SAUE manager outperforms the SAE manager on both datasets. Furthermore, with the help of the cross-modal fused query, the AAUE manager achieves substantially better performance than other managers. This demonstrates the effectiveness of adaptive token-level aggregation with the cross-modal fused query compared to static sentence-level aggregation. Notably, the crossmodal fused query incorporates output representations of both visual and textual parts of the previous cross-modal layer, which can better help managers to correctly aggregate uni-modal semantic knowledge required by the current cross-modal layer. ## 4.2.2 Number Of Cross-Modal Layers We compare ManagerTower to BridgeTower with different numbers of cross-modal layers in Table 2 to further evaluate the effectiveness of ManagerTower. Regardless of the number of cross-modal layers, ManagerTower consistently and signifi- | LC | VQAv2 Test-Dev | Flickr30K RMEAN | | | |------|------------------|-------------------|-------|----------------| | BT | Ours | BT | Ours | | | 2 | 74.86 | 75.47 (↑ 0.61) | 92.45 | 93.31 (↑ 0.86) | | 3 | 75.33 | 76.04 (↑ 0.71) | 92.50 | 93.41 (↑ 0.91) | | 4 | 75.74 | 76.26 (↑ 0.52) | 92.76 | 93.59 (↑ 0.83) | | 6 | 75.91 | 76.65 (↑ 0.74) | 93.33 | 93.97 (↑ 0.64) | | 8 | 75.89 | 76.47 (↑ 0.58) | 93.03 | 93.65 (↑ 0.62) | ![5_image_2.png](5_image_2.png) ![5_image_3.png](5_image_3.png) cantly outperforms BridgeTower on both datasets. More interestingly, the performance of ManagerTower with LC = 3 (76.04) is even better than that of BridgeTower with LC = 6 (75.91). Unlike BridgeTower, the number of uni-modal layer representations used N in ManagerTower is not tied to the number of cross-modal layers LC and can be flexibly adjusted. We fix N = 6 as the default setting. Therefore, ManagerTower actually uses the same number of uni-modal layer representations as BridgeTower, but achieves even better performance using half the number of cross-modal layers. This further demonstrates the flexibility and effectiveness of ManagerTower to adaptively aggregate uni-modal semantic knowledge, compared to layerby-layer exploitation in BridgeTower. ## 4.2.3 Number Of Uni-Modal Experts. We further investigate the effect of varying N in ManagerTower with LC = 3. As shown in Figure 5, there exist two interesting observations: (i) ManagerTower (LC = 3, N = 3) is still better than BridgeTower (LC = 3, N = 3). This indicates that when the same number of uni-modal layer representations are introduced, ManagerTower allows more effective aggregation of uni-modal semantic knowledge, thus facilitating cross-modal alignment and fusion in each cross-modal layer. (ii) the performance of ManagerTower first increases | Model | # Pre-train | VQAv2 | SNLI-VE | NLVR2 | Flickr30K | | | | | |----------------------------------------------------------------------------------------------------|---------------|----------|-----------|---------|-------------|--------|-------|-------|-------| | Images | Test-Dev | Test-Std | Dev | Test | Dev | Test-P | IR@1 | TR@1 | | | Base-size models pre-trained on 4M public data ViLTBASE (Kim et al., 2021) 4M | 71.26 | - | - | - | 75.70 | 76.13 | 64.4 | 83.5 | | | UNITERBASE (Chen et al., 2020) ∗ | 4M | 72.70 | 72.91 | 78.59 | 78.28 | 77.18 | 77.85 | 72.52 | 85.90 | | UNIMOBASE (Li et al., 2021b) | 4M | 73.79 | 74.02 | 80.00 | 79.10 | - | - | 74.66 | 89.70 | | ALBEFBASE (Li et al., 2021a) ∗ | 4M | 74.54 | 74.70 | 80.14 | 80.30 | 80.24 | 80.50 | 82.8 | 94.3 | | METER-SwinBASE(Dou et al., 2022) | 4M | 76.43 | 76.42 | 80.61 | 80.45 | 82.23 | 82.47 | 79.02 | 92.40 | | VLMOBASE (Wang et al., 2021a) | 4M | 76.64 | 76.89 | - | - | 82.77 | 83.34 | 79.3 | 92.3 | | METER-CLIPBASE (Dou et al., 2022) | 4M | 77.68 | 77.64 | 80.86 | 81.19 | 82.33 | 83.05 | 82.22 | 94.30 | | BridgeTowerBASE (Xu et al., 2022) | 4M | 78.66 | 78.73 | 81.11 | 81.19 | 81.85 | 83.09 | 85.83 | 94.73 | | ManagerTowerBASE (Ours) | 4M | 79.39 | 79.15 | 81.26 | 81.44 | 82.81 | 83.34 | 86.56 | 95.64 | | Models pre-trained on more data and/or with larger size UNITERLARGE (Chen et al., 2020) ∗ 4M 73.82 | 74.02 | 79.39 | 79.38 | 79.12 | 79.98 | 75.56 | 87.30 | | | | UNIMOLARGE (Li et al., 2021b) | 4M | 75.06 | 75.27 | 81.11 | 80.63 | - | - | 78.04 | 89.40 | | ALBEFBASE (Li et al., 2021a) ∗ | 14M | 75.84 | 76.04 | 80.80 | 80.91 | 82.55 | 83.14 | 85.6 | 95.9 | | SimVLMBASE (Wang et al., 2021b) | 1.8B | 77.87 | 78.14 | 84.20 | 84.15 | 81.72 | 81.77 | - | - | | BLIPBASE (Li et al., 2022a) ∗ | 129M | 78.24 | 78.17 | - | - | 82.48 | 83.08 | 87.3 | 97.3 | | SimVLMLARGE (Wang et al., 2021b) | 1.8B | 79.32 | 79.56 | 85.68 | 85.62 | 84.13 | 84.84 | - | - | Table 3: Comparisons with previous models on downstream VL tasks. The best score is bolded. ∗ indicates that the model also uses VG-QA data to fine-tune on VQAv2. gradually, but decreases after N > 6. We assume that lower-layer uni-modal representations may not help ManagerTower learn cross-modal fusion and also increases the computational cost, which is also consistent with the observation in Xu et al. (2022). ## 4.3 Comparison With Previous Arts Pre-train Settings. We pre-train ManagerTower with two standard VLP objectives, masked language modeling (MLM) and image-text matching (ITM), on the commonly used 4M public data: Conceptual Captions (CC) (Sharma et al., 2018), SBU Captions (Ordonez et al., 2011), MSCOCO Captions (Chen et al., 2015), and Visual Genome (VG) (Krishna et al., 2017). The pre-train settings are the same as BridgeTower and METER for a fair comparison. ManagerTower is pre-trained for 100k steps with a batch size of 4096 and a learning rate of 1e−5. The image resolution for VLP is 288 × 288 and only center-crop (Radford et al., 2021) is used without any data augmentation. Main Results. Table 3 shows the performance of ManagerTower compared with other previous works on various downstream VL tasks. ManagerTower achieves superior performances on these datasets with only 4M VLP data. With the same pre-training and fine-tuning settings and uni-modal backbones as previous strong baselines METER and BridgeTower, ManagerTower significantly improves performances on various downstream VL tasks, especially 79.15% accuracy on VQAv2 Test-Std, 86.56% IR@1 and 95.64% TR@1 on Flickr30K. This further demonstrates that with all other factors fixed, compared to BridgeTower that introduces bridges to METER, ManagerTower allows more effective aggregation of multi-layer unimodal representations via well-designed managers. Managers can adaptively aggregate more accurate uni-modal semantic knowledge to facilitate comprehensive cross-modal alignment and fusion in each cross-modal layer. Notably, ManagerTower not only outperforms many base-size models pretrained on 4M data, but also surpasses some models pre-trained on more data and/or with larger size. ## 4.4 Visualization Of Aggregation Weights We delve into managers by visualizing the average aggregation weights they generate for each cross-modal layer over all samples in VQAv2 Valid in Figure 6. For each row, the first column shows the learned aggregation weights of SAUE managers. The other five columns show the aggregation weights generated by AAUE managers and share the Y-axis to provide easy horizontal comparison. Interestingly, the aggregation weight distributions provided by managers are completely different from the one-hot distributions specified in BridgeTower, and there are two distinct trends: (i) For SAUE managers in the 1 st cross-modal layer, vertically: textual manager exhibits increasing and then decreasing weights, most favoring T10, unlike T12 and T7 used in METER and BridgeTower, ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) respectively; visual manager exhibits increasing weights, most favoring V12, the same as METER and BridgeTower. (ii) For AAUE managers in the 2 nd to 6 th cross-modal layers, horizontally: whether textual or visual managers, they exhibit diverse aggregation weight distributions in different layers. Overall, comparing the aggregation weight distributions horizontally and vertically, ManagerTower learns diverse distributions in different cross-modal layers. This provides strong evidence that the introduced managers can adaptively aggregate unimodal semantic knowledge for comprehensively cross-modal representation learning. ## 4.5 Intuitive Comparison Between Bt&Mt We provide brief illustrations in Figure 8 to intuitively compare BridgeTower (BT) and ManagerTower (MT) with different type of managers. BT vs**. MT with SAUE Managers.** In Table 2 & 5, we provide the performance comparison between BridgeTower and ManagerTower.3In fact, BridgeTower can be seen as an approximate special case of ManagerTower with SAUE managers if we replace the learned weights W in each manager with layer-by-layer one-hot distributions4 used in BridgeTower. However, as shown in Figure 7, the aggregation weight of textual and visual SAUE managers share a similar progressive trend 3The re-implemented BridgeTower obtained higher experimental results than the original paper due to the better fine-tuning settings we used for all experiments in Section 4.2. 4It means that, for each cross-modal layer, only one unimodal expert is activated at a time in the bottom-up direction. across cross-modal layers, which is completely different from the distributions in BridgeTower. This allows ManagerTower with SAUE managers to achieve significant performance gains (from 75.91 to 76.55) compared to BridgeTower. Besides, the similar trend of aggregation weights is consistent with the observations in Figure 3, that is, the cosine similarity of aggregated uni-modal representations between managers is always similar to 1. SAUE Manager vs**. AAUE Manager.** When we compare Figure 6 & 7, their respective aggregation weight distributions are completely different. This further demonstrates that compared with SAUE managers, AAUE managers can adaptively **generates** different aggregation weights for different tokens in different samples. Interestingly, the first column of two figures both comes from the SAUE managers, but the distributions are still clearly different. We presume that high-layer AAUE managers may help low-layer SAUE managers **rectify** their management of experts. We also provide the visualizations of aggregation weights of SAE and AAUE managers without VLP in Figure 9 & 10. Comparing the visualization of three types of managers without VLP, we can find that (i) the learned aggregation weights of SAE and SAUE managers are still a little close to the average initialization we used and they all share a similar progressive trend across cross-modal layers; (ii) for each AAUE manager, its generated aggregation weights vary significantly across 6 uni-modal experts; comparing different cross-modal layers, ![8_image_0.png](8_image_0.png) the distribution of aggregation weights generated by the AAUE manager is also very different. ## 5 Related Work Vision-Language Models. Although VL models differ in model architecture, most of them use unimodal encoders to extract visual and textual representations, and then fuse them in a cross-modal encoder, which can be unified into the Two-Tower architecture (Lu et al., 2019; Su et al., 2020; Chen et al., 2020; Li et al., 2020a,b; Zhou et al., 2020; Kim et al., 2021; Radford et al., 2021; Jia et al., 2021; Li et al., 2021a,b, 2022a; Dou et al., 2022; Wang et al., 2021a,b, 2022a,b; Yu et al., 2022). As a representative model, METER (Dou et al., 2022) adopts pre-trained uni-modal encoders and feeds their last-layer representations into the cross-modal encoder. BridgeTower (Xu et al., 2022) proposes building layer-by-layer connections between the top uni-modal layers and each cross-modal layer to utilize different uni-modal layer representations. However, they still cannot provide adaptive and effective aggregation of multi-layer pre-trained unimodal representations in each cross-modal layer. Multi-Layer Representation Aggregation. The effectiveness of layer representation aggregation in learning comprehensive representations has been well demonstrated in vision (Lin et al., 2017; Huang et al., 2017; Yu et al., 2018; Xie et al., 2021) and language (Peters et al., 2018a; Wang et al., 2018, 2019; Wei et al., 2020). Recent VL models also explore utilization of multi-layer uni-modal representations for better cross-modal representation learning. METER feeds the weighted sum of uni-modal representations into the first crossmodal layer. BridgeTower introduces bridges into METER so that different uni-modal layer representation are fed layer by layer into each cross-modal layer. In this work, ManagerTower explores adaptive and effective aggregation of multi-layer unimodal representations via well-designed managers. ## 6 Conclusion We propose ManagerTower, a novel VL model architecture that gathers and combines the insights of pre-trained uni-modal experts at different levels via the introduced managers in each cross-modal layer. The feasibility of various designs of managers is well explored, and the effectiveness of ManagerTower on various downstream VL tasks is well demonstrated. More comprehensive cross-modal alignment and fusion in each cross-modal layer is achieved by adaptive aggregation of different levels of uni-modal semantic knowledge. We hope that our work can inspire more research on how to better exploit multi-layer pre-trained uni-modal representations for cross-modal representation learning. ## Limitations In this work, we propose managers that allow adaptive aggregation of uni-modal layer representations in each cross-modal layer. Inevitably, AAUE managers significantly improve performance which slightly increasing the computational budget, as we detailed discussed in Appendix C. This needs to be further optimized in the future. Analysis and optimization are also needed for the other types of managers as shown in Appendix D. Moreover, as shown in Figure 5, the performance of ManagerTower first increases gradually with the number of uni-modal representations, but then stops increasing and even decreases when the number of uni-modal representations exceeds 6. How to obtain better ManagerTower performance using a lower computational budget while utilizing more insights of uni-modal experts, especially when scaling the model, *e.g.*, 24-layer CLIP-ViT L-224/16 and 24-layer RoBERTaLARGE, is a question worth further exploration. For example, designing reasonable sparse activation functions for managers in ManagerTower, instead of simple top-N or top-p sampling (which did not work well in our preliminary experiments). ## Acknowledgements This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grant 62236004 and 61976072. ## References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. *ArXiv preprint*, abs/1607.06450. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. *ArXiv preprint*, abs/1504.00325. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In *Proc. of ECCV*. Ekin Dogus Cubuk, Barret Zoph, Jon Shlens, and Quoc Le. 2020. Randaugment: Practical automated data augmentation with a reduced search space. In *Advances in Neural Information Processing Systems 33:* Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. In *Proc. of ICLR*. Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, Zicheng Liu, and Michael Zeng. 2022. An empirical study of training end-to-end vision-and-language transformers. Conference on Computer Vision and Pattern Recognition (CVPR). Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 6325–6334. IEEE Computer Society. Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. 2017. Densely connected convolutional networks. In *2017 IEEE Conference on* Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 2261–2269. IEEE Computer Society. Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3651–3657, Florence, Italy. Association for Computational Linguistics. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In *Proc. of ICML*. Andrej Karpathy and Fei-Fei Li. 2015. Deep visualsemantic alignments for generating image descriptions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3128–3137. IEEE Computer Society. Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In *Proc. of ICML*. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision. Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. 2020a. Unicoder-vl: A universal encoder for vision and language by cross-modal pretraining. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 11336–11344. AAAI Press. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022a. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. *ArXiv preprint*, abs/2201.12086. Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021a. Align before fuse: Vision and language representation learning with momentum distillation. Proc. of NeurIPS. Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. 2021b. UNIMO: Towards unified-modal understanding and generation via cross-modal contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2592– 2607, Online. Association for Computational Linguistics. Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. 2022b. Unimo-2: End-to-end unified vision-language grounded learning. *ArXiv preprint*, abs/2203.09067. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020b. Oscar: Objectsemantics aligned pre-training for vision-language tasks. In *Proc. of ECCV*. Tsung-Yi Lin, Piotr Dollár, Ross B. Girshick, Kaiming He, Bharath Hariharan, and Serge J. Belongie. 2017. Feature pyramid networks for object detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 936–944. IEEE Computer Society. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. *ArXiv preprint*, abs/1907.11692. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 814, 2019, Vancouver, BC, Canada, pages 13–23. Muhammad Muzammal Naseer, Kanchana Ranasinghe, Salman H Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. 2021. Intriguing properties of vision transformers. *Proc. of NeurIPS*. Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011. Proceedings of a meeting held 12-14 December 2011, Granada, Spain, pages 1143–1151. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 1499– 1509, Brussels, Belgium. Association for Computational Linguistics. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *Proc. of ICML*. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog. Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy. 2021. Do vision transformers see like convolutional neural networks? *Proc. of NeurIPS*. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565, Melbourne, Australia. Association for Computational Linguistics. Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How much can clip benefit vision-and-language tasks? *ArXiv preprint*, abs/2107.06383. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: pretraining of generic visual-linguistic representations. In *8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April* 26-30, 2020. OpenReview.net. Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 6418–6428, Florence, Italy. Association for Computational Linguistics. Damien Teney, Peter Anderson, Xiaodong He, and Anton van den Hengel. 2018. Tips and tricks for visual question answering: Learnings from the 2017 challenge. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 4223–4232. IEEE Computer Society. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022a. Unifying architectures, tasks, and modalities through a simple sequenceto-sequence learning framework. *ArXiv preprint*, abs/2202.03052. Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 1810–1822, Florence, Italy. Association for Computational Linguistics. Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li, and Jingbo Zhu. 2018. Multi-layer representation fusion for neural machine translation. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 3015–3026, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. 2022b. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. ArXiv preprint, abs/2208.10442. Wenhui Wang, Hangbo Bao, Li Dong, and Furu Wei. 2021a. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. *ArXiv preprint*, abs/2111.02358. Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021b. Simvlm: Simple visual language model pretraining with weak supervision. *ArXiv preprint*, abs/2108.10904. Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, and Weihua Luo. 2020. Multiscale collaborative deep models for neural machine translation. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 414–426, Online. Association for Computational Linguistics. Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. 2021. Segformer: Simple and efficient design for semantic segmentation with transformers. *Proc. of NeurIPS*. Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fine-grained image understanding. *ArXiv preprint*, abs/1901.06706. Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, and Nan Duan. 2022. Bridge-tower: Building bridges between encoders in vision-language representation learning. *ArXiv preprint*, abs/2206.08657. Fuzhao Xue, Ziji Shi, Futao Wei, Yuxuan Lou, Yong Liu, and Yang You. 2022. Go wider instead of deeper. In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 36, pages 8779–8787. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. *Transactions of the* Association for Computational Linguistics, 2:67–78. Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. 2018. Deep layer aggregation. In *2018* IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 2403–2412. IEEE Computer Society. Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. 2022. Coca: Contrastive captioners are image-text foundation models. *ArXiv preprint*, abs/2205.01917. Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Revisiting visual representations in vision-language models. In *Proc. of CVPR*. Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. 2020. Unified visionlanguage pre-training for image captioning and vqa. In *Proc. of AAAI*. ## A Implementation Details A.2 Fine-Tuning On Downstream Tasks A.1 Vision-Language Pre-Training | COCO | VG | CC | SBU | | |------------|------|------|-------|------| | # Images | 113K | 108K | 2.9M | 860K | | # Captions | 567K | 4.8M | 2.9M | 860K | Table 4: Statistics of the pre-train datasets. We remove duplicate image-caption pairs in VG (Kim et al., 2021; Dou et al., 2022) and only 2.9M image-caption pairs can be downloaded in CC. Pre-training Settings. Table 4 shows the statistics of the pre-train datasets. Following previous work (Kim et al., 2021; Chen et al., 2020; Li et al., 2021a; Dou et al., 2022), we adopt four public image-caption datasets for pre-training, including Conceptual Captions (CC) (Sharma et al., 2018), SBU Captions (SBU) (Ordonez et al., 2011), MSCOCO Captions (COCO) (Chen et al., 2015), and Visual Genome (VG) (Krishna et al., 2017). The total numbers of the unique images and imagecaption pairs in the combined training data are 4M and 9M. Table 8 describes the hyperparameters for pre-training the ManagerTower. The learning rate of the cross-modal encoder is five times higher than that of uni-modal encoders (Dou et al., 2022). Dataset Setting. Standard settings and splits are used for all datasets. For Flickr30K dataset (Young et al., 2014), we follow the standard Karpathy Split (Karpathy and Li, 2015). For VQAv2 (Goyal et al., 2017) dataset, we follow the common practice (Goyal et al., 2017; Teney et al., 2018): convert VQAv2 to a classification task with 3, 129 answer classes; train the model with training data and validation data, and evaluate the model on the Test-Dev and Test-Std data. We use two commonly used VLP objectives. Masked Language Modeling. For MLM, we follow the conditional masking approach used in UNITER (Chen et al., 2020) that randomly masks 15% of the tokens in the text token sequence while keeping the image patch sequence unchanged. The model is then trained to predict the original masked tokens given the incomplete text sequence and the complete image patch sequence. The masking strategy and MLM task head we use are the same as RoBERTa. The output top-layer representation of the textual part of the cross-modal encoder is used as input for the MLM task head. Image Augmentation. We follow previous works (Li et al., 2021a, 2022a) to use RandomResizedCrop, RandomHorizontalFlip, and RandAugment (Cubuk et al., 2020) to augment the images. Image-Text Matching. For ITM, both matched and mismatched image-text pairs are fed into the model with equal probability. The model is trained to predict whether a given image-text pair is a matched (positive) or a mismatched (negative) pair. The output top-layer representations of [class] and [<s>] tokens are activated by the non-linear function Tanh. Then the concatenation of the above output representations is fed into a linear classifier with cross-entropy loss for binary classification. Fine-Tuning Strategy. For visual question answering, visual entailment and visual reasoning, the fine-tuning strategy is similar to the strategy we used in ITM. For image-text retrieval, we follow the approach used in ALBEF (Li et al., 2021a) to optimize our model with both image-text contrastive (ITC) and ITM objectives. In the training phase, we first add two linear projections on top of the uni-modal encoders and calculate the contrastive similarity of uni-modal representations of image-text pairs by dot product to compute the ![13_image_0.png](13_image_0.png) Figure 9: A visualization of aggregation weights of textual and visual SAE managers in each cross-modal layer. The X-axis is the index of the uni-modal expert, and the legend shows the index of the cross-modal layer. ![13_image_1.png](13_image_1.png) | Visual | Textual | VQAv2 Test-Dev | Flickr30K RMEAN | | | |-------------------|-----------|------------------|-------------------|-------------|---------------| | Backbone | Backbone | BridgeTower | ManagerTower | BridgeTower | ManagerTower | | DeiT B-224/16 | RoBERTa | 71.22 | 72.20 (↑ 0.98) | 87.63 | 88.72(↑ 1.09) | | ViT B-224/16 | RoBERTa | 72.82 | 73.67 (↑ 0.85) | 90.48 | 90.92(↑ 0.44) | | ViT B-384/16 | RoBERTa | 72.94 | 73.80 (↑ 0.86) | 90.51 | 90.96(↑ 0.45) | | CLIP-ViT B-224/32 | RoBERTa | 73.73 | 74.79 (↑ 1.06) | 91.33 | 91.76(↑ 0.43) | | CLIP-ViT B-224/16 | BERT | 75.74 | 76.36 (↑ 0.62) | 92.84 | 93.42(↑ 0.58) | | CLIP-ViT B-224/16 | RoBERTa | 75.91 | 76.65 (↑ 0.74) | 93.33 | 93.97(↑ 0.64) | Table 5: Performance of BridgeTower and ManagerTower with different visual and textual backbones. B, N and M in "ViT B-N/M" denote the model size, image resolution and patch size, respectively. ITC loss. Formerly, negative image-text pairs in ITM loss are sampled randomly. However, after computing the ITC loss, we can use contrastive similarity distribution to sample one hard in-batch negative text (image) for each image (text) in a mini-batch. In the inference phase, we first compute the contrastive similarity for all images and texts, and then select the top-k candidates based on their contrastive similarity. We then calculate their ITM scores for these candidates to determine the final ranking. Fine-Tuning Settings. Similar to the image-text matching (ITM) pre-training objective, we pass the final representation of [class] token and [<s>] token to the non-linear layer activated by Tanh, and feed the concatenation of the output into a linear classifier (Flickr30K) or an MLP classifier(VQAv2, SNLI-VE and NLVR2). We apply cross-entropy loss for SNLI-VE, NLVR2 and Flickr30K and binary cross-entropy loss for VQAv2 (Kim et al., 2021; Dou et al., 2022). Finetuning hyperparameters for VQAv2, SNLI-VE, NLVR2, and Flickr30K are given in Table 9. ## B Switch Visual And Textual Backbones We experiment with different pre-trained visual and textual backbones as uni-modal encoders to further investigate the impact on performance of the managers of ManagerTower compared to the bridges of BridgeTower. As shown in Table 5, regardless of the visual and textual backbones we apply, ManagerTower significantly and consistently outperforms BridgeTower on both datasets. This further proves the effectiveness and generalization of our proposed ManagerTower architecture and managers, which can provide adaptive and effective aggregation of multi-layer uni-modal representations for vision-language representation learning. ## C Computational Budget Table 6 shows the computational budget and downstream task performance without VLP for | Model | Manager | Manager | # Params # FLOPs Inference Time | VQAv2 | Flickr30K | | | |-------------------|--------------|-----------|-----------------------------------|---------|-------------|-------------------------------|-------------------------------| | Type | Visual Query | (M) | (G) | (ms) | Test-Dev | RMEAN | | | BridgeTowerBASE * | - | - | 326.58 | 101.25 | 39.43±1.55 | 75.91 | 93.33 | | ManagerTowerBASE | SAUE | - | 326.77 | 101.34 | 41.12±1.41 | 76.55 (↑ 0.64) 93.73 (↑ 0.40) | | | ManagerTowerBASE | AAUE | CV `−1 | 326.77 | 101.35 | 41.80±1.05 | 76.52 (↑ 0.61) 93.84 (↑ 0.51) | | | ManagerTowerBASE | AAUE | CV | , CT `−1 | 338.64 | 105.52 | 43.20±1.37 | 76.65 (↑ 0.74) 93.97 (↑ 0.64) | | `−1 | | | | | | | | Table 6: Computational budget and downstream task performance without VLP for BridgeTower and ManagerTower. * denotes our re-implementation. BridgeTower and ManagerTower, including the number of parameters, the number of FLoatingPoint operations (FLOPs)5. We measure the average inference time of processing 1 VQA instance over 10K runs on 1 NVIDIA TITAN V GPU. The sequence length is 50, and the image resolution is 384 × 384. Compared with BridgeTower (1 st row), ManagerTower (4 th row) uses an acceptable additional computational budget (3.69% parameters, 4.22% FLOPs, and 3.77ms inference time) and achieves significant performance improvements of 0.74% and 3.1% on VQAv2 and Flickr30K, respectively. We further analyze other well-performed variants of ManagerTower in the 2 nd and 3 rd rows. It is worth noting that the two variants share a similar computational budget as BridgeTower, but achieve better performance. This not only demonstrates the efficiency and effectiveness of our ManagerTower architecture, but also reminds us that the cross-modal fused query via the cross-attention mechanism is the main reason for the additional computational budget of ManagerTower (4 th row), as it is the only difference between the 3 rd and 4 th row models. This inspires us to explore a more efficient method to fuse CV `−1 and CT `−1 to get the cross-modal fused query in the future. ## D Details On Cross-Attention And Concat-Attention Managers Cross-Attention Managers. We implement the standard cross-attention mechanism (Vaswani et al., 2017) and reduce the linear projection layer for value to save computational budget.6 Take the visual manager for example, it takes CV `−1 ∈ R L×D as the query, and the first token of multi-layer unimodal representations, *i.e.*, V[:, 0] ∈ R N×D, as the key. Hence, the shape of generated aggregation weights is N × L, which can be broadcast to the aggregation weights WA ∈R N×L×D. The following calculation is the same as AAUE managers in Figure 4. The results in Table 1 show a significant decrease compared to other managers on Flickr30K. We leave the detailed analysis of this phenomenon to the future work. Concat-Attention Managers. Take the visual manager as an example, it broadcasts CV `−1 ∈ R L×D to R N×L×D, and concatenates it with V ∈ R N×L×D along the last dimension as the concatenated query. It then directly projects the query to WA ∈ R N×L×D. The following calculation is the same as AAUE managers in Figure 4. In fact, this type of manager is different from all other managers from the perspectives of the generated aggregation weights. Although its aggregation weights delve into the feature dimension of CV `−1 and V, the substantially increased number of parameters and computational cost do not result in a significant performance gain, making it impractical and inefficient. More efficient variants of this type of manager should be investigated in the future. ## E Detailed Comparison With Previous Arts Due to the space limitations, we omit some baselines and details in Table 3. Here we provide more details on the comparison with previous arts in Table 7. | Model | # Pre-train | Visual | VQAv2 | SNLI-VE | NLVR2 | Flickr30K | | | | |-----------------------------------------------------------------------------------------------------------|----------------------|-----------------------|-------------------------------|-------------------------------|-------------------------------|-------------|-------------|-------|-------| | Images | Backbone | Test-Dev Test-Std Dev | Test | Dev | Test-P IR@1 TR@1 | | | | | | Base-size models pre-trained on 4M public data ViLTBASE (Kim et al., 2021) 4M | ViT B-384/32 | 71.26 | - | - | - | 75.70 76.13 | 64.4 | 83.5 | | | UNITERBASE (Chen et al., 2020) ∗ | 4M | Faster R-CNN | 72.70 | 72.91 | 78.59 78.28 77.18 77.85 72.52 | 85.90 | | | | | VILLABASE (Gan et al., 2020) ∗ | 4M | Faster R-CNN | 73.59 | 73.67 | 79.47 79.03 78.39 79.30 74.74 | 86.60 | | | | | UNIMOBASE (Li et al., 2021b) | 4M | Faster R-CNN | 73.79 | 74.02 | 80.00 79.10 | - | - | 74.66 | 89.70 | | ALBEFBASE (Li et al., 2021a) ∗ | 4M | DeiT B-224/16 | 74.54 | 74.70 | 80.14 80.30 80.24 80.50 | 82.8 | 94.3 | | | | VinVLBASE (Zhang et al., 2021) | 5.7M | ResNeXt-152 | 75.95 | 76.12 | - | - | 82.05 83.08 | - | - | | METER-SwinBASE (Dou et al., 2022) | 4M | Swin B-384/32 | 76.43 | 76.42 | 80.61 80.45 82.23 82.47 79.02 | 92.40 | | | | | VLMOBASE (Wang et al., 2021a) | 4M | BEiT B-224/16 | 76.64 | 76.89 | - | - | 82.77 83.34 | 79.3 | 92.3 | | METER-CLIPBASE (Dou et al., 2022) | 4M CLIP-ViT B-224/16 | 77.68 | 77.64 | 80.86 81.19 82.33 83.05 82.22 | 94.30 | | | | | | BridgeTowerBASE (Xu et al., 2022) | 4M CLIP-ViT B-224/16 | 78.66 | 78.73 | 81.11 81.19 81.85 83.09 85.83 | 94.73 | | | | | | ManagerTowerBASE (Ours) | 4M CLIP-ViT B-224/16 | 79.39 | 79.15 | 81.26 81.44 82.81 83.34 86.56 | 95.64 | | | | | | Models pre-trained on more data and/or with larger size UNITERLARGE (Chen et al., 2020) ∗ 4M Faster R-CNN | 73.82 | 74.02 | 79.39 79.38 79.12 79.98 75.56 | 87.30 | | | | | | | VILLALARGE (Gan et al., 2020) ∗ | 4M | Faster R-CNN | 74.69 | 74.87 | 80.18 80.02 79.76 81.47 76.26 | 87.90 | | | | | UNIMOLARGE (Li et al., 2021b) | 4M | Faster R-CNN | 75.06 | 75.27 | 81.11 80.63 | - | - | 78.04 | 89.40 | | ALBEFBASE (Li et al., 2021a) ∗ | 14M | DeiT B-224/16 | 75.84 | 76.04 | 80.80 80.91 82.55 83.14 | 85.6 | 95.9 | | | | VinVLLARGE (Zhang et al., 2021) | 5.7M | ResNeXt-152 | 76.52 | 76.63 | - | - | 82.67 83.98 | - | - | | BLIPBASE (Li et al., 2022a) ∗ | 14M | DeiT B-224/16 | 77.54 | 77.62 | - | - | 82.67 82.30 | 87.2 | 96.6 | | SimVLMBASE (Wang et al., 2021b) ? | 1.8B | ResNet-101 | 77.87 | 78.14 | 84.20 84.15 81.72 81.77 | - | - | | | | BLIPBASE (Li et al., 2022a) ∗ | 129M | DeiT B-224/16 | 78.24 | 78.17 | - | - | 82.48 83.08 | 87.3 | 97.3 | | SimVLMLARGE (Wang et al., 2021b) ? | 1.8B | ResNet-152 | 79.32 | 79.56 | 85.68 85.62 84.13 84.84 | - | - | | | | VLMOLARGE (Wang et al., 2021a) | 4M | BEiT L-224/16 | 79.94 | 79.98 | - | - | 85.64 86.86 | 84.5 | 95.3 | | SimVLMHUGE (Wang et al., 2021b) ? | 1.8B | Larger ResNet-152 | 80.03 | 80.34 | 86.21 86.32 84.53 85.15 | - | - | | | Table 7: Comparisons with previous models on various downstream VL tasks. The best score is bolded. B, N and M in "ViT B-N/M" denote the model size, image resolution and patch size, respectively. ∗ indicates that the model also uses VG-QA data to fine-tune on VQAv2. ? denotes the model is trained from scratch. "\# Pre-train Images" denotes the number of unique images used in VLP. | Hyperparameters | ManagerTower | |----------------------------|-------------------| | Number of Layers | 6 | | Hidden size | 768 | | FFN inner hidden size | 3, 072 | | Number of Attention heads | 12 | | Dropout Ratio | 0.1 | | Attention dropout | 0.1 | | Total Steps | 100k | | Batch Size | 4, 096 | | Optimizer | AdamW | | Learning Rate | 1e −5 | | Learning Rate Decay | Linear | | Weight Decay | 0.01 | | Warmup Steps | 10k | | Adam | 1e −8 | | Adam β1 | 0.9 | | Adam β2 | 0.98 | | Center-Crop | ✓ | | Random Resized Crop | ✗ | | Random Augmentation | ✗ | | Random Horizontal Flipping | ✗ | | Textual Encoder | RoBERTaBASE | | Visual Encoder | CLIP-ViT B-224/16 | | Patch Size | 16 | | Image Resolution for VLP | 288 | | Hyperparameters | VQAv2 | SNLI-VE | NLVR2 | Flickr30K | |----------------------------|-------------------|-------------------|-------------------|-------------------| | Total Epochs | 10 | 4 | 5 | 20 | | Batch Size | 576 | 64 | 256 | 512 | | Optimizer | AdamW | AdamW | AdamW | AdamW | | Learning Rate | 9e −6 | 3e −6 | 1.4e −5 | 6e −6 | | Learning Rate Decay | Linear | Linear | Linear | Linear | | Weight Decay | 0.06 | 0.01 | 0.01 | 0.01 | | Warmup Ratio | 0.06 | 0.06 | 0.1 | 0.1 | | Adam | 1e −8 | 1e −8 | 1e −8 | 1e −8 | | Adam β1 | 0.9 | 0.9 | 0.9 | 0.9 | | Adam β2 | 0.98 | 0.98 | 0.98 | 0.98 | | Center-Crop | ✗ | ✗ | ✗ | ✗ | | Random Resized Crop | ✓ | ✓ | ✓ | ✓ | | Random Augmentation | ✓ | ✓ | ✓ | ✓ | | Random Horizontal Flipping | ✗ | ✓ | ✓ | ✓ | | Textual Encoder | RoBERTaBASE | RoBERTaBASE | RoBERTaBASE | RoBERTaBASE | | Visual Encoder | CLIP-ViT B-224/16 | CLIP-ViT B-224/16 | CLIP-ViT B-224/16 | CLIP-ViT B-224/16 | | Patch Size | 16 | 16 | 16 | 16 | | Image Resolution for FT | 576 | 384 | 384 | 384 | | Loss Function | BCE | CE | CE | CE | Table 9: Hyperparameters for fine-tuning ManagerTower on various downstream VL tasks. FT denotes fine-tuning. CE and BCE are short for cross-entropy loss and binary cross-entropy loss, respectively. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations. ✗ A2. Did you discuss any potential risks of your work? This paper proposes a novel Vision-Language model architecture, ManagerTower, which may not contains potential malicious or unintended harmful effects and uses. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section Abstract, and Section 1. Introduction. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2. Preliminary, and Section 3. Manager Design. ✓ B1. Did you cite the creators of artifacts you used? Section 2. Preliminary, and Section 3. Manager Design. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? All the pre-train data and pre-trained uni-modal backbones are publicly available, we just use them for pre-training and initialization, and we don't repackage or release them. Our code and pre-trained models will be released, and the name of the license for each asset will be stated in our code repo. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Our code and pre-trained models will be released, and the name of the license for each asset will be stated in our code repo. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? All the data we used are publicly available and used safely by previous works. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We will provide the documentation of our code and pre-trained models in our code repo. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section Appendix A. Implementation Details. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Section Appendix C. Computational Budget. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4. Experiments. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4. Experiments. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4. Experiments. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
ni-etal-2023-finding
Finding the Pillars of Strength for Multi-Head Attention
https://aclanthology.org/2023.acl-long.812
Recent studies have revealed some issues of Multi-Head Attention (MHA), e.g., redundancy and over-parameterization. Specifically, the heads of MHA were originally designed to attend to information from different representation subspaces, whereas prior studies found that some attention heads likely learn similar features and can be pruned without harming performance. Inspired by the minimum-redundancy feature selection, we assume that focusing on the most representative and distinctive features with minimum resources can mitigate the above issues and lead to more effective and efficient MHAs. In particular, we propose Grouped Head Attention, trained with a self-supervised group constraint that group attention heads, where each group focuses on an essential but distinctive feature subset. We additionally propose a Voting-to-Stay procedure to remove redundant heads, thus achieving a transformer with lighter weights. Extensive experiments are consistent with our hypothesis. Moreover, our method achieves significant performance gains on three well-established tasks while considerably compressing parameters.
# Finding The Pillars Of Strength For Multi-Head Attention Jinjie Ni and **Rui Mao** and **Zonglin Yang** and **Han Lei** and **Erik Cambria** Nanyang Technological University, Singapore {jinjie001, leih0003}@e.ntu.edu.sg, {rui.mao, zonglin.yang, cambria}@ntu.edu.sg ## Abstract Recent studies have revealed some issues of Multi-Head Attention (MHA), e.g., redundancy and over-parameterization. Specifically, the heads of MHA were originally designed to attend to information from different representation subspaces, whereas prior studies found that some attention heads likely learn similar features and can be pruned without harming performance. Inspired by the minimumredundancy feature selection, we assume that focusing on the most representative and distinctive features with minimum resources can mitigate the above issues and lead to more effective and efficient MHAs. In particular, we propose Grouped Head Attention, trained with a self-supervised group constraint that group attention heads, where each group focuses on an essential but distinctive feature subset. We additionally propose a Voting-to-Stay procedure to remove redundant heads, thus achieving a transformer with lighter weights. Moreover, our method achieves significant performance gains on three well-established tasks while considerably compressing parameters. ## 1 Introduction Transformers have shown promising performance across various tasks . However, they have some issues, e.g., redundancy and over-parameterization, which is mainly caused by Multi-Head Attention (MHA) (Michel et al., 2019; Voita et al., 2019) and Feed-Forward Network (FFN) (Sukhbaatar et al., 2019; Wu et al., 2019, 2020) of transformer. We aim to mitigate the redundancy and over-parameterization issues by optimizing the MHA module. The multi-heads were originally designed to attend to different representation subspaces of input (Vaswani et al., 2017). However, prior works (Michel et al., 2019; Voita et al., 2019) have shown that the attention heads are highly redundant and over-parameterized after training because some heads can be switched off with a negligible performance drop. Such an issue is probably caused by their parallel design: the heads naturally work in the same way and likely attend to similar features (Cordonnier et al., 2020). The existing redundancy optimization methods are mainly based on homogenization, diversification, and head significance. However, they all have some limits. (1) The homogenization-based methods mitigate redundancy and over-parameterization by making heads similar and removing unnecessary parameters. Cordonnier et al. (2020) homogenized attention heads by sharing most weights between all heads, which reduced the redundant parameters but sacrificed the performance somewhat because of the lack of diversity. (2) The diversificationbased methods diversify the heads to enrich features and reduce the inter-head redundancy. Li et al. (2018) found that diversifying attention heads by adding a regularization could force MHA to reduce inter-head information redundancy, yielding performance gains in Machine Translation. However, such strategy that retains all feature subsets is suboptimal, because it does not address the issue that MHA is over-parameterized. (3) The significancebased methods (Michel et al., 2019; Voita et al., 2019; Li et al., 2021) learn significance scores for the heads to prune unimportant ones. However, the retained important heads still remain inter-head redundancy without diversifying them. Considering the issues of the above-mentioned methods, we hypothesize that attending to the most representative and distinctive feature subsets with minimum resources leads to more effective and efficient MHAs, which is inspired by the minimum-redundancy feature selection (Cordonnier et al., 2020). Accordingly, we propose a divide-and-conquer strategy, including GroupConstrained Training (GCT) and Voting-to-Stay (V2S), to achieve the setting of our assumption and mitigate the above-mentioned issues. We illustrate them below. 14526 We first propose a strategy to group and distinguish attention heads, where a Grouped Head Attention (GHA) is obtained via the self-supervised GCT. By encouraging homogenization within a group and diversification between groups, the MHA is forced to divide its heads to work in several separate groups, where each group focuses on an essential but unique feature subset, being in line with the setting of our assumption. Note that the redundancy exists when the resources deployed by the model are more than enough to process current information (Cordonnier et al., 2020). GHA reduces the redundancy in two aspects: - The intra-group homogenization reduces redundancy by encouraging similar intra-group heads and only retaining the most representative one later to lower the resource deployment. - The inter-group diversification reduces redundancy by forcing heads to attend to more diversified features (with less overlap between heads) so that the unique information to process increases and matches the resources deployed. Next, we show that GHA-PS (GHA with the Pillar of Strength), a lighter-weight GHA, can be achieved by excluding the redundant heads of GHA via the V2S procedure. V2S culls the redundant heads that share similar patterns with the most representative head (PS head) of a group, which is selected by voting on different training batches. Note that upon the convergence of the GCT, the heads are highly homogenized within a group, thus being redundant because they process similar information. As a result, once the redundant heads are culled, the PS heads can still achieve the essential utility of the original attention layer and yield comparable performance to the unculled model. The Lottery Ticket hypothesis (Frankle and Carbin, 2019) argues that subnetworks in an over-parameterized neural network can converge faster and achieve comparable or better performance than the original network. Our GHA-PS achieving better results is also in line with this hypothesis. Such a divide-and-conquer combination resolves the issues of previous redundancy optimization methods: (1) Our model achieves better parameter efficiency, resolving the issue of previous diversification-based methods; (2) The feature diversity is guaranteed and the inter-head redundancy is reduced, resolving the problems of previous homogenization- and significance-based methods. We evaluate our method on three benchmarking tasks. We denote the corresponding transformer architectures of GHA and GHA-PS as Grouped Head Transformers (GHT) and Grouped Head Transformers with the Pillars of Strength (GHT-PS), respectively. GHT and GHT-PS achieve significant improvements over the strong baselines in Machine Translation (MT) BLEU scores (+3.8% and +4.4% averaged on 7 datasets), Language Modeling (LM) perplexity (-2.8% and -2.9%), and Abstractive summarization (AS) F1-Rouge (+6.7% and +7.0% on average). GHT-PS exhibits higher efficiency in model size, inference speed, and floating-point operations (FLOPs). The light architecture of GHTPS reduces 63.6% parameters of the vanilla transformer and yields comparable performance. The key contributions of our work1are threefold: - We find that, in a certain range, higher compactness of attention heads (i.e., the intra-group heads become closer to each other and the intergroup ones become farther) improves MHA's performance, forcing MHA to focus on the most representative and distinctive features. It provides guidance for future architectural designs of MHA. - We propose a divide-and-conquer strategy that consists of GCT and V2S. It mitigates the redundancy and over-parameterization issues of MHA. Our method uses fewer parameters and achieves better performance, outperforming the existing MHA redundancy/parameter reduction methods. - We verify our methods on three well-established NLP tasks. The superior results on datasets with multiple languages, domains, and data sizes demonstrate the effectiveness of our method. ## 2 Related Work Parameter efficiency. Different methods were proposed to achieve lightweight transformers: (1) replacing attention with lightweight modules, e.g., convolution modules, such as Dynamic Conv (Wu et al., 2019) and Lite Transformer (Wu et al., 2020); (2) removing or replacing the feed-forward layers, such as Sukhbaatar et al. (2019) and Wu et al. (2020); (3) pruning the model, such as Michel et al. (2019), Voita et al. (2019), and Li et al. (2021). Modified multi-head mechanism. Ahmed et al. (2017) learned to weight the projected output of different heads, performing weighted sum over them. Li et al. (2019) aggregated the output of different heads by dynamic routing; Cui et al. (2019) used different attention mechanisms, e.g., global/local and forward/backward attention for different heads; Shazeer et al. (2020) mixed different heads before and after the softmax operation in an attention function to achieve communication between heads. Head redundancy optimization. Michel et al. (2019) and Voita et al. (2019) found that only a subset of the attention heads have significant utilities in transformer, where the important heads could be identified by Expected Sensitivity and Layer-wise Relevance Propagation (LRP) (Ding et al., 2017). Upon this, Li et al. (2021) learned per-head importance scores and pruned the heads. Cordonnier et al. (2020) homogenized the attention heads by sharing a part of the weights between heads, which lowered the number of parameters but sacrificed performance. Li et al. (2018) found that diversifying attention heads by adding a regularization can force MHA to reduce inter-head redundancy, yielding performance gains for Machine Translation. However, previous methods either traded performance for efficiency or retained extra parameters. ## 3 Methodology There are two core components in our method, namely the Group-Constrained Training (GCT) and the Voting-to-Stay (V2S) procedure. GHA (Figure 1) is developed with GCT that removes head redundancy; GHA-PS is developed by removing the redundant parameters of GHA in V2S. In this section, we detail the process of developing the GHA and finding its Pillars of Strength (PS). ## 3.1 Grouped Head Attention With Hidden Units First, we detail the core module of GHT, the GHA with hidden units, which is built based on MHA via the GCT. The GCT divides the attention heads of MHA into several groups and makes heads within a group become more similar, whereas heads between groups become more different. Thus, MHA is forced to divide its heads to work in several separate groups, where each group focuses on an essential but unique feature subset to reduce head redundancy. We will show the effectiveness in § 5. ![2_image_0.png](2_image_0.png) Given a transformer model f(x; θ) with n attention layers, the set of heads at attention layer l is denoted as Hl = {h1,l, ..., hk,l}, where k is the number of heads. The outputs of the attention heads are concatenated and projected with Wout, where the i-th head output oi,l in layer l results from the computation of the projection matrices W Q i,l , W K i,l , and WV i,l of this head: $$MHA_{l}(\mathbf{Q},\mathbf{K},\mathbf{V})=Concat\space(\mathbf{o}_{1,l},...,\mathbf{o}_{k,l})W^{out}\tag{1}$$ $$\mathbf{o}_{i,l}=softmax(\frac{(\mathbf{Q}W_{i,l}^{\mathbf{Q}})(\mathbf{K}W_{i,l}^{\mathbf{K}})^{T}}{\sqrt{d_{k}}})(\mathbf{V}W_{i,l}^{\mathbf{V}}).\tag{2}$$ Three feature maps (FMs) of GHA are extracted for the self-supervised GCT: (1) the result of VWV l, denoted as Vˆl = {v1,l, ..., vk,l} (the value FM); (2) the attention weights of the l-th layer, denoted as Al= {a1,l, ..., ak,l} (the attention FM); (3) the output of the l-th layer before the output projection Wout, denoted as Ol= {o1,l, ..., ok,l} (the head output FM). Moreover, Vˆ = {Vˆ1*, ...,* Vˆl}, A = {A1, ..., Al}, O = {O1*, ...,* Ol}. Given the FMs, a Hidden Unit Discovery System (HUDS) Ω assigns a hidden unit z j i,l for each head to represent its group property, where i denotes the i-th head and j denotes the j-th group hidden unit. z j i,l ∈ Zˆl, where Zˆl = {z 1 l , ..., z C l} represents the hidden unit candidates, and the hidden units assigned to the heads are denoted as Zl = {zi,l, ..., zi,l}. 14528 Zlis discovered by the HUDS Ω: Zl = Ω(El), where El denotes either one of the Vˆl, Al, or Ol. Here Ω(·) is an unsupervised algorithm that divides the heads into C groups given their FMs, such as K-means2: $$\Omega(\mathbf{E}_{l})=\operatorname{arg\,min}_{\mathbf{Z}_{l}}\sum_{i=1}^{C}\sum_{\mathbf{x}\in\mathbf{E}_{l}^{i}}||\mathbf{x}-\mu_{i}||^{2},\quad(3)$$ where Eˆ i l is the set of feature maps of the i-th head group in the l-th attention layer. Then, the feature map groups of the l-th attention layer are denoted as Eˆl = {Eˆ 1 l , ..., Eˆ i l , ..., Eˆ C l}. µiis the mean of the feature map vectors in Eˆ i l . The hidden units Z = {Z1*, ...,* Zl} are C-class categorical variables (Eq.4(A)) or continuous vectors (Eq.4(B)) to supervise the GCT. The objective of the self-supervised GCT is termed as: Lz(f; A, Vˆ , O, Z) = − 1 knα Xn l=1 Xk i=1 log pz(zi,l|vi,l, ai,l, oi,l) + 1 (C − 1)knβ Xn l=1 Xk i=1 X j2̸=j1 log pz(z j2 l|v j1 i,l, a j1 i,l, o j1 i,l) (A) 1 knα Xn l=1 Xk i=1 φ(vi,l, ai,l, oi,l; zi,l) − 1 C 2 n β Xn l=1 CX−1 j1=1 XC j2=j1+1 φ(z j1 l ; z j2 l ) (B) (4) Either when Z are categorical variables (Eq.4(A)) or continuous vectors (Eq.4(B)), the objective is composed of a homogenization term and a diversification term3. v j i,l, a j i,l, and o j i,l denote the feature maps of the i-th head belonging to the j-th group. pz(zi,l|vi,l, ai,l, oi,l) denotes the predicted probability of the assigned group hidden variable zi,l, given vi,l, ai,l, and oi,l. φ(x; y) denotes a cosine similarity measurement between x and y (following Li et al. (2018)). φ(vi,l, ai,l, oi,l; zi,l) = τ1φ(vi,l; zi,l) + τ2φ(ai,l; zi,l) + τ3φ(oi,l; zi,l), where τ is a coefficient, determined by the specific settings for each dataset & task. When Z are categorical variables, the grouping is a classification task whose classification heads project the output into C classes; when Z are continuous vectors, the grouping process is a metric learning task whose 2K-means fixes the group numbers for fair comparisons in §5. Other clustering algorithms may also be applicable. 3The coefficients α and β of Eq.4 respectively control the intra-group homology and inter-group diversity degrees to achieve different group intensities in different tasks/datasets. similarity computations are conducted between Z and the projected FM representations. In both conditions, GHA is supervised by Z to make the heads in the same group yield similar patterns, whereas those in different groups repulse from each other. The overall objective is given by L = Lt + Lz, where Ltis the task-specific objective. ## 3.2 The Pillars Of Strength Being consistent with Lottery Ticket hypothesis (Frankle and Carbin, 2019), we establish the GHT-PS from GHT as its subnetwork by removing redundant heads from GHA to achieve higher parameter efficiency. We propose the V2S procedure to find the Pillars of Strength (PS) heads that constitute the core of the GHA and remove other heads. We first describe the V2S roughly. In GHA, the heads within each group exhibit similar patterns upon the convergence of the Group-Constrained Training (GCT). Then, we only keep the heads with the most explicit group patterns (the PS heads), and switch off the other ones within the same group via V2S. The main idea of V2S is to vote on all heads of the GHA, and only retain one head for each group - the head receiving the most votes. Specifically, it takes an entire epoch to collect the layer-wise votes mb l ∈ {0, 1} kfrom the whole training set (each data batch b creates one layer-wise vote mb l per attention layer), where k denotes the head number; 0 indicates that the corresponding head should be switched off and 1 indicates that a head is retained. We assume that there are B mini-batches in the training set. Then, each attention layer receives B layer-wise votes within which each head-wise vote is denoted by either 0 or 1. For each group, the head receiving the most '1's are assigned a '1' in the final head mask ml ∈ {0, 1} kfor attention layer l, indicating that this head will be retained. Following Michel et al. (2019) and Voita et al. (2019), we mask out the output of heads as the equivalent operation of head removal4. The V2S procedure is outlined in Algorithm 1. We detail some of its definitions below. (1) ρ indicates the full convergence of GHT, i.e., the hidden units found by Ω have a center shift less than a threshold. (2) In Step 7-9, given feature maps Vˆl, Al, and Ol of the lth attention layer, the vote vectors mb l,v, mb l,a, and mb l,o ∈ {0, 1} kare determined by the group pattern scores ηi,l of each head, indicating the explicitness of group patterns. 4We perform real head removal when test inference speed. ## Algorithm 1 The Voting-To-Stay (V2S) Algorithm 1: **Procedure** Voting-to-Stay(f, Vˆ , A, O, Z) 2: if satisfy ρ, and m is none **then** 3: Start voting epoch; Freeze f. 4: Γl ← [ ] ▷ Creat Γlto store votes 5: for batch b in B training batches do 6: for layer l in L layers do 7: for Elin {Vˆl, Al, Ol} do 8: Based on ηl = {η1,l*, ..., η*1,k}, 9: create mb l,v, mb l,a, mb l,o. $\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\mathbf{1}$,$\mathbf{0}$,$\(\mathbf{1} 12: ml ← *V OT E*(Γl) 13: m ← [m1*, ...,* mn] ▷ Stack layer votes 14: Unfreeze f; end voting epoch. 15: f = f ⊙ m ▷ Mask GHT attn outputs with m We set the corresponding digit in the vote vectors as 1 for the head achieving the highest ηi,l in its group, indicating the most representative head of the group. Here ηi,l = pz(zi,l|ei,l) if z is categorical; otherwise ηi,l = −φ(ei,l; zi,l). ei,l denotes the i-th head feature map (either one of the vi,l, ai,l, or oi,l). (3) *V OT E* means counting the '1's for each head based on the 0-1 votes in Γl and only keeping the heads with the highest counts5. After V2S, a finetuning is applied to adapt the pruned network. GHT-PS compresses considerable parameters. In the case of two head groups, GHT-PS reduces 75% parameters for an attention layer and 32.1% for the entire model6. We will show that V2S removing non-PS heads does not sacrifice model performance. Instead, it brings accuracy gains in some cases and improves inference speed. ## 4 Experimental Setup In this section, we detail the key architectural configurations. Further training, model, dataset & evaluation setups are detailed in A.1, A.2, & A.3. We follow the transformer of Vaswani et al. (2017) as a backbone architecture for all datasets and tasks in our experiments. Following Wu et al. (2019, 2020), for Machine Translation and Abstractive Summarization, we adopt the same 8-head encoderdecoder architecture with 6 layers for both encoder and decoder, where the model dimension d*model* = 512 and feed-forward dimension df = 2048. For LM, we adopt the 16-head decoder-only architecture with 16 layers, where the model dimension d*model* = 1024 and feed-forward dimension df = 4096. The layer normalization is applied before the residual connection of each layer. The parameters of decoder input and output projections are shared. Our models are based on fairseq (Ott et al., 2019) implementations. We perform the GCT as a metric learning task because it does not introduce additional projection layers when the shapes of similarity inputs are identical (Eq.4(B)), which makes GHT weight-lighter. In addition, it performs better in our experiments compared to the classification-based grouping. $\mathbf{a}=\mathbf{a}\cdot\mathbf{a}$. ## 5 Results And Analysis 5.1 Machine Translation Ours vs. vanilla transformer. We first report results by comparing GHT and GHT-PS with the vanilla transformer (Vaswani et al., 2017) which is the backbone of our model. As shown in Table 1, the models are compared at different parameter levels7. GHT does not have weight reduction, keeping the same parameter size as the vanilla transformer (44M, the same setting as transformer base (Vaswani et al., 2017)). In contrast, GHT-PS is compressed to 30M parameters via V2S. For a fair comparison, we first compare GHT-PS with two lite architectures, Transformer-Lite1 and TransformerLite2, whose parameter numbers are 30M as well. Keeping other settings unchanged, the encoder and decoder of Transformer-Lite1 are reduced to 4 layers, respectively. Transformer-Lite2 reduces the model dimension d*model* to 424, and df to 1696. GHT and GHT-PS consistently and significantly outperform their backbone models at the same parameter level across all datasets. On average, the GHT surpasses 44M vanilla transformer by 3.8% in BLEU (Papineni et al., 2002); GHT-PS surpasses Lite1 and Lite2 by 4.9% and 4.4%, respectively. Although GHT-PS reduces 32.1% parameters, it significantly outperforms both 44M and 30M vanilla transformers, which is comparable to GHT on all datasets. It shows that V2S reduces the parameter size of the original transformer without sacrificing accuracy on MT. Efficiency is analyzed later. 7The parameters analyzed in this paper exclude the embedding layer since they vary a lot between different datasets when the vocabulary sizes are different. Table 1: Benchmark with vanilla transformer (backbone) on IWSLT and WMT Machine Translation datasets, measured by BLEU. All improvements are statistically significant with p < 0.05 under t-test. | BLEU ↑ | IWSLT | WMT | | | | | | | | | |---------------------|---------|---------------|-------|-------|-------|-------|-------|-------|------|------| | Speed ↑ | FLOPs ↓ | de-en | it-en | en-de | en-it | en-fr | en-de | en-fr | | | | Vanilla Transformer | 44M | 1012.1 sent/s | 1996M | 34.4 | 32.3 | 28.0 | 30.8 | 40.1 | 27.3 | 38.1 | | GHT (ours) | 44M | 1016.4 sent/s | 1996M | 35.4 | 32.8 | 29.1 | 31.6 | 41.5 | 28.6 | 40.7 | | Transformer-Lite1 | 30M | 1175.4 sent/s | 1549M | 33.8 | 31.9 | 27.9 | 29.3 | 39.9 | 26.9 | 37.7 | | Transformer-Lite2 | 30M | 1108.7 sent/s | 1465M | 34.0 | 32.2 | 28.2 | 29.5 | 40.0 | 26.7 | 37.8 | | GHT-PS (ours) | 30M | 1122.1 sent/s | 1558M | 35.2 | 32.7 | 28.9 | 31.6 | 41.4 | 28.2 | 40.5 | | Model | Param ↓ | Inference | BLEU ↑ | IWSLT | WMT | | | | | | |--------------------------|-----------|---------------|----------|---------|-------|-------|-------|-------|-------|-------| | Speed ↑ | FLOPs ↓ | de-en | it-en | en-de | en-it | en-fr | en-de | en-fr | | | | Cordonnier et al. (2020) | 44M | 416.6 sent/s | 2054M | 34.4 | 31.8 | 28.2 | 31.0 | 40.7 | 27.6 | 38.5 | | Li et al. (2018) | 44M | 1011.2 sent/s | 1996M | 34.7 | 31.8 | 28.5 | 30.7 | 40.7 | 27.3 | 39.3 | | GHT (ours) | 44M | 1016.4 sent/s | 1996M | 35.4* | 32.8* | 29.1* | 31.6* | 41.5* | 28.6* | 40.7* | | Voita et al. (2019) | 30M | 1099.1 sent/s | 1558M | 32.2 | 30.8 | 26.5 | 30.3 | 39.8 | 22.0 | 34.0 | | Li et al. (2021) | 30M | 1116.9 sent/s | 1558M | 33.2 | 31.3 | 27.5 | 30.0 | 39.7 | 20.5 | 33.6 | | Dynamic conv | 30M | 1050.2 sent/s | 1615M | 34.8 | 32.7 | 28.7 | 31.1 | 40.6 | 24.0 | 36.5 | | Lite Transformer | 30M | 1096.6 sent/s | 1809M | 33.3 | 31.4 | 27.5 | 29.8 | 39.4 | 24.9 | 37.4 | | GHT-PS (ours) | 30M | 1122.1 sent/s | 1558M | 35.2* | 32.7 | 28.9* | 31.6* | 41.4* | 28.2* | 40.5* | Ours vs. efficient attention models. We compare GHT with two state-of-the-art (SOTA) MHA redundancy optimization baselines. Cordonnier et al. (2020) and Li et al. (2018) are respectively homogenization- and diversification-based methods. In addition, we compare GHT-PS with four SOTA baselines that made major contributions to attention parameter compression and redundancy optimization8. Voita et al. (2019) and Li et al. (2021) are significance-based pruning methods. Dynamic Conv (Wu et al., 2019) and Lite Transformer (Wu et al., 2020) modify the MHA arch to reduce parameters. Table 2 shows that GHT outperforms all its baselines on all datasets, exceeding the strongest baseline by 2.9% in averaged BLEU scores. GHT-PS outperforms all its baselines on 6 out of 7 datasets, exceeding the strongest baseline by 4.4% on average. Model compression of the baselines may sacrifice performance (especially on large datasets, e.g., WMT en-de and en-fr), while GHT-PS is al-8Works optimizing parameters of transformer modules rather than the MHA are not compared. In addition, we do not compare to Michel et al. (2019) (post-pruning), because their method performs extremely bad when the parameter level is low, e.g., 30M (Li et al., 2021). most not affected by the parameter reduction, even surpassing GHT's baselines with 44M parameters. Table 3: Ablation study on IWSLT'14. The results are statistically significant with p < 0.05 under t-test. | Model | BLEU ↑ | | | | | |--------------------|----------|-------|-------|-------|------| | de-en | it-en | en-de | en-it | en-fr | | | GHT | 35.4 | 32.8 | 29.1 | 31.6 | 41.5 | | - w/o Diversifying | 34.7 | 31.8 | 28.5 | 30.7 | 40.7 | | - w/o Homologizing | 34.3 | 32.0 | 28.2 | 30.9 | 40.2 | | GHT-PS | 35.2 | 32.7 | 28.9 | 31.6 | 41.4 | | - w/o GCT | 33.8 | 31.9 | 28.1 | 30.5 | 39.8 | | - w/o GC | 34.0 | 32.0 | 28.4 | 31.0 | 40.2 | | - w/o HUDS | 33.7 | 32.0 | 28.1 | 30.9 | 40.3 | | - w/o PS stay | 33.6 | 31.7 | 27.9 | 30.7 | 40.2 | | - w/ stage 2 GC | 33.2 | 31.8 | 28.1 | 30.8 | 40.3 | | - w/ stage 1& 2 GC | 33.4 | 31.9 | 27.7 | 30.6 | 40.2 | Ablation Study. We evaluate the impacts of the features we choose for GHT and GHT-PS (Table 3). We first ablate the diversification/homogenization term of GCT (see Eq.4), which lowers the BLEU scores. Next, we show the importance of GCT for V2S. **w/o GCT** denotes that we directly perform V2S at the very beginning without GCT. **w/o GC** ![6_image_0.png](6_image_0.png) denotes that V2S is employed after normal training without Group Constrain (GC). Both ablation models yield lower BLEU, because they do not homogenize unnecessary heads and prepare them for pruning. Next, we validate the power of Pillars of Strength. **w/o HUDS** denotes we replace HUDS with randomly switching off heads after GCT; w/o PS stay denotes we keep random group members instead of the Pillars of Strength after GCT. We observe lower BLEU in **w/o HUDS** and **w/o PS stay**. Finally, we find that GC only needs to be added before V2S. We denote the training stages before and after V2S as stages 1 and 2. We compare the proposed Stage 1-based GHT-PS with models that perform GCT at Stage 2 (**w/ stage 2 GC**) and at both stages (**w/ stage 1& 2 GC**). BLEU scores of both ablation models decrease. Effect of group compactness. We hypothesize that more compact group patterns bring performance gains to the GHT. Figure 2 shows the correlation between the compactness of the final group patterns and the final BLEU scores GHT achieved on 5 IWSLT'14 development sets when the GHT is fully converged in GCT. One data point corresponds to an independent run. We choose Silhouette Coefficient (SC) (Rousseeuw, 1987) and Dunn's Index (DI) (Bezdek and Pal, 1995) as the measurements of group pattern compactness, both of which increase as the intra-group samples become more similar and the inter-group ones become more separated. The SC and DI are computed with the FMs of GHA (§ 3.1) and controlled by tuning the α and β (Eq.4). ![6_image_1.png](6_image_1.png) Figure 2 shows that, within the normal range, the BLEU scores rise with higher SC/DI scores, which is in line with our assumption. The BLEUs start to drop after the peak as the SC/DI scores increase, because the very heavy group constraint prohibits the model from learning useful task-specific knowledge. Effect of group number. Figure 3 shows the performance trends of 16-head GHT and GHT-PS by different numbers of group hidden units. For GHT, different datasets have different optimal hidden unit quantities, while a similar trend is observed. The optimal group number is between 2 and 8, which is in line with the claim that our group strategy is superior to sole homogenization (1 group) or diversification (16 groups) strategies. For GHT-PS, when the group number is larger than 1, it shows comparable performance to GHT on most datasets. This also verifies that non-PS heads can be switched off without sacrificing performance. ![6_image_2.png](6_image_2.png) | Model | Param ↓ | Inference Speed ↑ | FLOPs ↓ | Rouge-1 ↑ | Rouge-2 ↑ | Rouge-L ↑ | |------------------------------------------|-----------|---------------------|-----------|-------------|-------------|-------------| | LSTM (Paulus et al., 2018) | - | - | - | 38.30 | 14.81 | 35.49 | | CNN (Fan et al., 2018) | - | - | - | 39.06 | 15.38 | 35.77 | | Light Conv (Wu et al., 2019) | 86M | - | - | 39.52 | 15.97 | 36.51 | | Dynamic Conv (Wu et al., 2019) | 87M | - | - | 39.84 | 16.25 | 36.73 | | Vanilla Transformer (Voita et al., 2019) | 44M | 208.77 sent/s | 1996M | 38.45 | 17.97 | 36.03 | | GHT (ours) | 44M | 208.77 sent/s | 1996M | 40.00 | 21.10 | 37.51 | | GHT-PS (ours) | 30M | 257.62 sent/s | 1558M | 40.01 | 21.31 | 37.62 | Group pattern trends. Figure 4 shows the trends of intra-group homogeneity (given by the 1st term of Eq.4(B)) and inter-group diversity (given by the 2nd term of Eq.4(B)) of GHT and vanilla transformer in the training process on five IWSLT datasets. By training, GHT yields higher intragroup homogeneity and inter-group diversity absolute values, leading to more compact groups, while the vanilla transformer shows flattened trends. It shows that GCT can effectively homogenize intragroup heads and diversify inter-group heads. | Model | BLEU↑ Param↓ Infer Speed↑ FLOPs↓ | | | | |--------------------|------------------------------------|------|---------------------|-------| | Transformer base | 25.8 | 44M | 1016.4 sent/s 1996M | | | Transformer big | 26.4 | 176M | 707.1 sent/s | 6635M | | Lite Conv | 26.6 | 166M | 722.1 sent/s | 6184M | | Dynamic Conv | 26.9 | 176M | 710.0 sent/s | 6315M | | GHT-PS-LITE (ours) | 26.6 | 16M | 1170.2 sent/s 1181M | | Efficiency analysis. In Tables 1 and 2, the efficiency metrics are controlled to be identical. Our models with higher inference speed and lower FLOPs show efficiency by culling redundant parameters. We also compare the efficiency metrics by controlling BLEU scores. In Table 5, we select models from the works in Table 1 and 2 that are reported to achieve close BLEU scores on newstest2013 as the baselines. The GHT-PS-LITE is a light version of GHT-PS that has a df of 1024. Given BLEU ranges from 25.8 to 26.9, GHT-PSLITE is much more efficient than the baselines. Noticeably, GHT-PS-LITE achieves 90.36% fewer parameters, 62.05% faster inference speed, and 80.90% fewer FLOPs against Lite Conv which yields the same BLEU as it. ## 5.2 Abstractive Summarization We evaluate the ability of our model to process longer inputs via Abstractive Summarization on the CNN-DailyMail dataset. We take vanilla transformer as the backbone. Table 4 shows that both GHT and GHT-PS achieve higher F1-Rouge scores (Lin, 2004a) on this task. GHT-PS achieves 4.1% higher Rouge-1, 18.6% higher Rouge-2, and 4.4% higher Rouge-L against vanilla transformer. It also achieves 0.4% higher Rouge-1, 31.1% higher Rouge-2 and 2.4% higher Rouge-L against the best-performing baseline (Dynamic Conv). Meanwhile, GHT-PS only takes 68.18% parameters of the vanilla transformer and exhibits higher inference speed and fewer FLOPs. | Model | Param↓ Infer Spd↑ FLOPs↓ Valid↓ Test↓ | | | | | |--------------------|-----------------------------------------|-----------|-------|-------|-------| | S4 | 249M | - | - | 19.69 | 20.95 | | BERT-L-CAS | 395M | - | - | 19.67 | 20.42 | | GPT-2 Large | 762M | - | - | - | 22.05 | | VT w/ AI | 201M | 9.9 tok/s | 6106M | 19.03 | 19.14 | | GHT (ours) | 201M | 9.9 tok/s | 6106M | 18.57 | 18.60 | | GHT-PS (ours) 167M | 19.0 tok/s | 4573M | 18.58 | 18.59 | | ## 5.3 Language Modeling We evaluate LM performance on WIKITTEXT103 dataset. The backbone is a decoder-only transformer with 16 layers and adaptive inputs (Baevski and Auli, 2019). We compare with the backbone model, as well as comparable SOTA LM models, including S4 (Gu et al., 2022), BERT-LargeCAS (Wang et al., 2019), and GPT-2 Large (Radford et al., 2018). Table 6 shows that both GHT and GHT-PS achieve lower perplexity (Vajapeyam, 2014) than the baselines on both validation and test sets (2.9% and 9.0% less perplexity against the backbone and the best performing LM baseline, respectively). Meanwhile, GHT-PS achieves 16.92% parameter reduction, 2 times faster inference speed, and 75% FLOPs compared with the backbone. ## 6 Conclusion In this paper, we assume that only focusing on the most representative and distinctive features with minimum resources may mitigate the redundancy and over-parameterization issues of MHA. Accordingly, we propose a divide-and-conquer strategy, including GCT and V2S to mitigate the issues. The improvements on three tasks and the extensive analysis verify our hypothesis and the effectiveness of our redundancy optimization methods. Our study may inspire future MHA design and training to achieve higher accuracy and efficiency. ## Limitations In this work, we evaluate the proposed models for NLP tasks only. However, tasks in other fields such as Computer Vision may present a very different input inductive bias, thus affecting the performance. Moreover, our models are trained from scratch, hence it is unknown whether the same divide-andconquer strategy works for pre-trained models. We will study these limitations in the future to give a more extensive exploration. ## Ethics Statement This article follows the ACL Code of Ethics. The annotations are based on public datasets that do not contain private data. The algorithm we developed is an architectural optimization technique for improving model performance. To our best knowledge, there are no foreseeable potential risks to using this technique. ## Acknowledgments This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project \#A18A2b0046). ## References Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. 2017. Weighted transformer network for machine translation. *CoRR*, abs/1711.02132. Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. James C Bezdek and Nikhil R Pal. 1995. Cluster validation with generalized dunn's indices. In Proceedings 1995 second New Zealand international two-stream conference on artificial neural networks and expert systems, pages 190–190. IEEE Computer Society. Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. 2017. Massive exploration of neural machine translation architectures. *CoRR*, abs/1703.03906. Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. 2020. Multi-head attention: Collaborate instead of concatenate. *CoRR*, abs/2006.16362. Hongyi Cui, Shohei Iida, Po-Hsuan Hung, Takehito Utsuro, and Masaaki Nagata. 2019. Mixed multihead self-attention for neural machine translation. In EMNLP-IJCNLP 2019, Hong Kong, November 4, 2019, pages 206–214. Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150–1159, Vancouver, Canada. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 355–364. Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In *Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, NMT@ACL 2018, Melbourne,* Australia, July 20, 2018, pages 45–54. Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Albert Gu, Karan Goel, and Christopher Ré. 2022. Efficiently modeling long sequences with structured state spaces. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPs 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693–1701. Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu, and Tong Zhang. 2018. Multi-head attention with disagreement regularization. In *EMNLP 2018, Brussels, Belgium, October 31 - November 4, 2018*, pages 2897–2903. Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, and Zhaopeng Tu. 2019. Information aggregation for multi-head attention with routing-by-agreement. In NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3566–3575. Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2021. Differentiable subset pruning of transformer heads. Trans. Assoc. Comput. Linguistics, 9:1442–1459. Chin-Yew Lin. 2004a. Rouge: A package for automatic evaluation of summaries. In *Text summarization branches out*, pages 74–81. Chin-Yew Lin. 2004b. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Ilya Loshchilov and Frank Hutter. 2017. SGDR: stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In *5th International Conference on Learning* Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In *Advances in Neural Information Processing Systems 32:* Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 14014–14024. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311–318. ACL. Razvan Pascanu, Tomás Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, volume 28 of JMLR Workshop and Conference Proceedings, pages 1310–1318. JMLR.org. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In *6th International Conference on* Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. In *5th International Conference on Learning* Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners. Peter J Rousseeuw. 1987. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of computational and applied mathematics, 20:53–65. Noam Shazeer, Zhenzhong Lan, Youlong Cheng, Nan Ding, and Le Hou. 2020. Talking-heads attention. CoRR, abs/2003.02436. Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Hervé Jégou, and Armand Joulin. 2019. Augmenting self-attention with persistent memory. CoRR, abs/1907.01470. Ilya Sutskever, James Martens, George E. Dahl, and Geoffrey E. Hinton. 2013. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, volume 28 of JMLR Workshop and Conference Proceedings, pages 1139–1147. JMLR.org. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In *2016 IEEE Conference on Computer Vision* and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 2818–2826. IEEE Computer Society. Sriram Vajapeyam. 2014. Understanding shannon's entropy metric for information. *CoRR*, abs/1405.2061. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5797–5808. Chenguang Wang, Mu Li, and Alexander J. Smola. 2019. Language models with transformers. *CoRR*, abs/1904.09408. Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Zhanghao Wu, Zhijian Liu, Ji Lin, Yujun Lin, and Song Han. 2020. Lite transformer with long-short range attention. In *8th International Conference on Learning* Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. ## A Appendix A.1 Trainig Settings A.1.1 Machine Translation We use Adam to optimize the MT models and set the β1 = 0.9, β2 = 0.98. We use the Inverse Square Root Schedule (Vaswani et al., 2017) where it first warms up for 4K steps until the learning rate reaches 5 × 10−4, and then it exponentially decays the learning rate. We apply early stop as a termination condition. We apply a 0.3 dropout rate for all Machine Translation models. A weight decay of 10−4is used for all IWSLT 2014 models, whereas for WMT models we use a weight decay of 0. We apply a 0.1 label smoothing (Szegedy et al., 2016; Pereyra et al., 2017) for the uniform prior distribution over the vocabulary. ## A.1.2 Language Modeling Following Baevski and Auli (2019), we use Nesterov's accelerated gradient method (Sutskever et al., 2013) with a momentum of 0.99. We clip the gradient norm if it exceeds 0.1 (Pascanu et al., 2013). The learning rate is linearly warmed up from 10−7to 1 for 16K steps and then annealed using a cosine learning rate schedule (Loshchilov and Hutter, 2017) with multiple cycles. Each cycle doubles the number of updates than the previous cycle and we shrink the maximum and minimum learning rates by 0.75 compared to the previous cycle. The initial minimum learning rate is 10−4 and the maximum is 1. We apply 0.2 adaptive softmax dropout rate, 0.1 attention dropout rate, and 0.1 activation dropout rate. ## A.1.3 Abstractive Summarization We use the same training setup with IWSLT 2014 models. We apply 0.1 clip norm and 0.2 attention dropout. The model is warmed up for 10K updates. ## A.2 Further Model Settings Different α, β, and head feature maps (Vˆ , A, and O) are preferred for different datasets to achieve optimal performance. The Machine Translation configurations are detailed in Table 7; Language Modeling and Abstractive Summarization configurations are detailed in Table 8. Note that φ(vi,l, ai,l, oi,l; zi,l) = τ1φ(vi,l; zi,l) + τ2φ(ai,l; zi,l) + τ3φ(oi,l; zi,l), we set one of the {τ1, τ2, τ3} to be 1, the other to be 0. ## A.3 Datasets And Evaluation9 A.3.1 Efficiency Metrics Settings Inference speed. All inference speed results are generated with beam size 5, batch size 256, maximum decoding length 10 on a single NVIDIA Quadro RTX A6000. FLOPs. We use the fvcore10 to calculate the FLOPs, with a fixed input length of 30. ## A.3.2 Machine Translation To fully evaluate the effectiveness of our methods, we evaluate seven MT datasets of IWSLT'14 and WMT 2014 benchmarks. We closely follow the setup of Vaswani et al. (2017) for data preparation. WMT 2014 English-German dataset consists of about 4.5M sentence pairs. It is encoded with byte-pair encoding (Britz et al., 2017), having a shared source-target vocabulary of about 40K tokens. Following the standard setting (Vaswani et al., 2017), we validate on newstest2013 and test on newstest2014 for experiments on this dataset. The WMT 2014 English-French dataset consists of 36M sentence pairs and is encoded with a joint source-target BPE of about 43K vocabularies. Following the standard split, we validate on a concatenation of newstest2012 and newstest2013 and test on newstest2014. For IWSLT'14 German to English, IWSLT'14 English to German, IWSLT'14 English to French, IWSLT'14 English to Italian and IWSLT'14 Italian to English, we encode the sentence pairs with joint source-target BPE. Following Edunov et al. (2018), the validation set is randomly splited from the training set with a ratio of 1:23. The testset consists TED.tst2010, TED.tst2011, TED.tst2012 and TED.dev2010, TEDX.dev2012 for IWSLT'14 German to English, IWSLT'14 English to German, and IWSLT'14 English to French; the TEDX.dev2012 is replaced by TEDX.dev2014 for IWSLT'14 English to Italian and IWSLT'14 Italian to English. For all Machine Translation datasets, we use detokenized BLEU. WMT 2014 English-German and WMT 2014 English-French are evaluated with a beam size 4 and length penalty 0.6; IWSLT'14 datasets are evaluated with a beam size 5 and without length penalty. | Model | IWSLT (α/β/FM) | WMT (α/β/FM) | | | | | | |---------|------------------|----------------|-----------|------------|------------|------------|------------| | de-en | it-en | en-de | en-it | en-fr | en-de | en-fr | | | GHT | 0.7/0.5/Vˆ | 0.3/0.5/Vˆ | 0.3/0.1/A | 0.3/0.3/Vˆ | 0.7/0.7/Vˆ | 0.5/0.5/Vˆ | 0.3/0.3/Vˆ | | GHT-PS | 0.5/0.7/O | 0.3/0.3/A | 0.3/0.7/O | 0.3/1/Vˆ | 0.5/0.3/A | 0.5/0.5/Vˆ | 0.3/0.3/Vˆ | Table 7: The configuration of α, β, and Feature Maps (FM, including Vˆ , A, and O) for GHT and GHT-PS on different Machine Translation datasets. ![12_image_0.png](12_image_0.png) Table 8: The configuration of α, β, and Feature Maps (FM, including Vˆ , A, and O) for GHT and GHT-PS in Abstractive Summarization and Language Modeling. ## A.3.3 Language Modeling We evaluate LM on WIKITEXT-103 (Merity et al., 2017) which has about 100M tokens and a 260K BPE vocabulary. Following Baevski and Auli (2019), we use perplexity as an evaluation metric and a context window of 2047 at the inference stage. ## A.3.4 Abstractive Summarization We also evaluate on CNN-DailyMail (Hermann et al., 2015) for AS to test the ability of GHT in hard tasks with long inputs. The dataset comprises over 280K news articles paired with multi-sentence summaries. Following Wu et al. (2019), articles are truncated to 512 tokens and encoded with 50K BPE. We use F1-Rouge (Lin, 2004b) to evaluate the performance, including Rouge-1, Rouge-2 and Rouge-L. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✗ A2. Did you discuss any potential risks of your work? The study is theoretical. It does not involve human in the experiment or involve ethic issues. It will not result in any potential risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4; Appendix A.3 ✓ B1. Did you cite the creators of artifacts you used? Section 4; Appendix A.3 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The tools are for public use. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No specific intended use is specified by the artifact creators. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? They does not contain any information that names or uniquely identifies individual people or offensive content. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4; Appendix A.3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A.3 ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 5; Appendix A.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4, 5; Appendix A.1, 2, 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A.1, 2, 3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zheng-etal-2023-jointprop
Jointprop: Joint Semi-supervised Learning for Entity and Relation Extraction with Heterogeneous Graph-based Propagation
https://aclanthology.org/2023.acl-long.813
Semi-supervised learning has been an important approach to address challenges in extracting entities and relations from limited data. However, current semi-supervised works handle the two tasks (i.e., Named Entity Recognition and Relation Extraction) separately and ignore the cross-correlation of entity and relation instances as well as the existence of similar instances across unlabeled data. To alleviate the issues, we propose Jointprop, a Heterogeneous Graph-based Propagation framework for joint semi-supervised entity and relation extraction, which captures the global structure information between individual tasks and exploits interactions within unlabeled data. Specifically, we construct a unified span-based heterogeneous graph from entity and relation candidates and propagate class labels based on confidence scores. We then employ a propagation learning scheme to leverage the affinities between labelled and unlabeled samples. Experiments on benchmark datasets show that our framework outperforms the state-of-the-art semi-supervised approaches on NER and RE tasks. We show that the joint semi-supervised learning of the two tasks benefits from their codependency and validates the importance of utilizing the shared information between unlabeled data.
# Jointprop: Joint Semi-Supervised Learning For Entity And Relation Extraction With Heterogeneous Graph-Based Propagation Zheng Yandan1,2, Hao Anran1 **and Luu Anh Tuan**1 1School of Computer Science and Engineering 2Interdisciplinary Graduate Program-HealthTech Nanyang Technological University, Singapore {yandan002, s190003}@e.ntu.edu.sg anhtuan.luu@ntu.edu.sg ## Abstract Semi-supervised learning has been an important approach to address challenges in extracting entities and relations from limited data. However, current semi-supervised works handle the two tasks (i.e., Named Entity Recognition and Relation Extraction) separately and ignore the cross-correlation of entity and relation instances as well as the existence of similar instances across unlabeled data. To alleviate the issues, we propose *Jointprop*, a Heterogeneous Graph-based Propagation framework for joint semi-supervised entity and relation extraction, which captures the global structure information between individual tasks and exploits interactions within unlabeled data. Specifically, we construct a unified span-based heterogeneous graph from entity and relation candidates and propagate class labels based on confidence scores. We then employ a propagation learning scheme to leverage the affinities between labelled and unlabeled samples. Experiments on benchmark datasets show that our framework outperforms the state-of-the-art semi-supervised approaches on NER and RE tasks. We show that the joint semi-supervised learning of the two tasks benefits from their codependency and validates the importance of utilizing the shared information between unlabeled data. ## 1 Introduction Named Entity Recognition (NER) and Relation Extraction (RE) are two crucial tasks in Information Extraction. Supervised learning schemes have made significant progress in NER and RE research by leveraging rich annotated data (e.g., Lin et al. (2020); Yamada et al. (2020); Baldini Soares et al. (2019). However, high-quality data annotation still involves extensive and expensive labor. Moreover, training NER and RE models in various domains and applications require diverse annotated data. Semi-supervised learning approaches (SSL) employ a small amount of annotated data as a source ![0_image_0.png](0_image_0.png) Figure 1: An example of label propagation. We represent the sentence as a triplet (*entity*1, relation, *entity*2) which consists of an entity pair (circle) and a relation (triangle) in a graph structure. The colored nodes indicate labeled semantic units (entity or relation candidates), while the uncolored nodes represent the unlabeled semantic units. Purple denotes relation label Used-for, blue denotes for entity label Method, and orange denotes another entity label label Task. of supervision for learning powerful models at a lower cost. SSL in NER and RE have performed very well in recent years by employing bootstrapping, distant supervision or graph-based approach (Batista et al., 2015; Zeng et al., 2015; Delalleau et al., 2005). However, they train a NER model (Yang et al., 2018; Chen et al., 2018; Lakshmi Narayan et al., 2019) or a RE model (Lin et al., 2019; Hu et al., 2021a; Li et al., 2021) separately. Therefore, they neglect the underlying connections between entity recognition and relation extraction under a semi-supervised learning scheme, making it harder for the model to assign accurate annotation to unla14541 beled data. For instance, in Figure 1, the annotated entity "*generative model*" in sentence S1 and the unannotated "*probabilistic model*" in sentence S2 are syntactically similar. Likewise, the context phrases "*uses... to*" and "*used in*" are also similar. If such similarities are ignored, the model may fail to draw a syntactic analogy between "*dependency parsing*" and "*alignment*", and thereby miss labeling the latter as an entity that shares the same type with the former. To the best of our knowledge, there is no universal framework to integrate semi-supervised learning for different tasks in IE, despite evidence of the effectiveness of a joint or multi-task learning approach (Luan et al., 2018a, 2019; Ye et al., 2021; Luan et al., 2018a, 2019; Lin et al., 2020). In addition, existing semi-supervised approaches devote considerable effort to aligning labeled and unlabeled data but do not exploit similarities between instance pairs that are structurally parallel, which exist across unlabeled data. Consequently, they do not perform classification from the perspective of global consistency. For example, given the sentences S1 to S3 in Figure 1, we expect a model to recognize the entities and relations as (Method, Used-for, Task) in triplet form. However, it is hard to infer the correct pseudo label to the unlabeled entities "*alignment*" or "*NLI alignment*" from the annotated entity "*dependency parsing*". Because they are not semantically or lexically similar. Likewise, the affinity between "*uses to*" and "*apply*" is not obvious; and hence it would be difficult to extract the relation Used-for between entities. Nonetheless, the "*alignment*" and "*NLI alignment*" pair are alike, and so are the pair "probabilistic model" and "*probabilistic model*". Exploiting the relationships between unlabeled data would integrate the information hidden in the text and make use of the large quantity of unlabeled data for semisupervised learning. To address the above limitations, we propose a semi-supervised method based on label propagation over a heterogeneous candidate graph to populate labels for the two tasks (see Figure 3). More specifically, we introduce a joint semi-supervised algorithm for the two tasks, where unannotated and annotated candidates (entities and relations) are treated as nodes in a heterogeneous graph, and labels are propagated across the graph through similarity-scored edges. Our framework *Jointprop* considers the interactions among the unlabeled data by constructing the graph using the union of labeled and unlabeled data into one learning diagram. We evaluate *Jointprop* on multiple benchmark datasets and our proposed framework achieve state-of-theart results on both semi-supervised NER and RE tasks. To the best of our knowledge, this is the first work that performs semi-supervised learning for entity and relation extraction in a unified framework to leverage unannotated data for both tasks. Our contributions are summarized as following: - We propose a joint learning scheme using heterogeneous graph-based label propagation for semi-supervised NER and RE. The model exploits the interrelations between labeled and unlabeled data and the similarity between unlabeled examples from both tasks by propagating the information across a joint heterogeneous graph. To the best of our knowledge, this is the first work that combines semisupervised NER and RE. - We propose a unified semi-supervised framework for both entity and relation extraction. The framework generates candidate spans from the unlabeled data, automatically constructs a semantic similarity-based graph for all the candidates, and performs label propagation across the graph. - We show that our proposed method can reliably generate labels for unlabeled data and achieve good performance under a limited data scenario. Our model outperforms strong baselines in two- and single-task settings and establishes new state-of-the-art F1 on benchmark datasets. ## 2 Related Work Joint Entity and Relation Extraction Name Entity Recognition, and Relation Extractions are two essential problems in information extraction (Grishman, 1997). Exploiting their interrelationships, models that combine the identification of entities and relations have attracted attention. Conventional joint extraction systems combine the tasks in a pipelined approach (e.g., Ratinov and Roth (2009); Chan and Roth (2011); Luu et al. (2014, 2015); Tuan et al. (2016)): first identifying entities and employing the detected entity for relation extraction. However, they overlook their inherent correlation. Recent works have proposed coupling various IE tasks to avoid error propagation issues. For example, joint extract entities and relations (Miwa and Sasaki, 2014; Li and Ji, 2014; Luu et al., 2016) or end-to-end multi-task learning (Luan et al., 2018a, 2019; Wadden et al., 2019; Lin et al., 2020; Zhang et al., 2017). Despite evidence of the efficiency of joint or multi-task learning, there is currently no framework that integrates semi-supervised learning for both tasks in a joint entity and relation extraction system. Semi-supervised learning The Semi-Supervised learning seeks to enhance limited labeled data by leveraging vast volumes of unlabeled data (Søgaard, 2013) which mitigate data-hungry bottleneck and supervision cost. SSL has a rich history (Scudder, 1965). There have been substantial works in semisupervised settings in NLP, such as bootstrapping (Gupta and Manning, 2014, 2015; Batista et al., 2015), co-training (Blum and Mitchell, 1998), distant supervision (Zeng et al., 2015; Yang et al., 2018), and graph-based methods (Delalleau et al., 2005; Subramanya and Bilmes, 2011; Subramanya et al., 2010; Luan et al., 2017). In particular, graph-based SSL algorithms have gained considerable attention (Zhu and Ghahramani, 2002; Seeger, 2001; Delalleau et al., 2005). There are two underlying assumptions for the label propagation (Zhou et al., 2004). First, similar training samples are more likely to belong to the same class. Second, nodes in similar structures are likely to have the same label. Label propagation exploits these assumptions by propagating label information to surrounding nodes based on their proximity. The metric-based method had been applied in a graph-based SSL setting for its ability to infer labels for unseen classes directly during inference. For example, Luan et al. (2017) propagates the label based on estimating the posterior probabilities of unlabeled data. Meanwhile, Liu et al. (2019) sought to exploit the manifold structure of novel class space in a transitive setting. ## 3 Methodology Problem Definition The input of the problem is a sentence X = {x1*, ..., x*n} consisting of n tokens, from which we derive S = {s1*, ..., s*d}, the set of all possible within-sentence word sequence spans (up to length L) in the sentence. Let START(i) and END(i) denote the start and end indices of span si, E denote a set of predefined entity types, and R denote the set of relational types. The full data is defined as D = (*X, Y* ). In *Jointprop*, the goal is to learn from the small portion of labelled data Dl and generalize to the unlabelled portion of data Du. The labelled data Dl and unlabelled data Du are originally split from the training set D*train*, where Dl ∩ Du = ∅. The purpose of this work is to predict a possible entity type ye(si) ∈ E for each span si ∈ S while predicting a possible relation types yr(si, sj ) ∈ R for every pair of spans si ∈ *S, s*j ∈ S under SSL settings. The label can also be a 'null' label for a span (i.e. ye(si) = ϵ) or a span pair (i.e. yr(si, sj ) = ϵ). The output of the task are Ye = {(si, e) : si ∈ S, e *∈ E}* and Yr = {(si, sj , r) : si, sj ∈ S, r *∈ R}*. Model Overview Figure 2 illustrates an overview architecture of the proposed *Jointprop* framework. Our framework consists of 1) SPAN FEATURE GEN-ERATION that learns the discriminative contextualized features for labelled data Dl and unlabeled span Du; 2) HETEROGENEOUS GRAPH CON-STRUCTION which maps both labelled-unlabeled, labelled-labelled and unlabeled-unlabeled relationships for both entities and relations; 3) JOINT LA-BEL PROPAGATION which disseminates labels over the whole heterogeneous graph is produced by unlabeled nodes, and 4) MODEL DECODE AND FINE-TUNE MODULE that decodes and select the refined propagated pseudo labels to perform fine-tuning. ## 3.1 Span Feature Generation Our feature extractor is a standard span-based model following prior work (Wadden et al., 2019; Luan et al., 2018a,b). For each input token xk, we obtain contextualized representations xk using a pre-trained language model (e.g., BERT (Devlin et al., 2019)). For the i-th span sk ∈ S, the span representation he(si) is as follows: he(si) = [xST ART(i); xEND(i); ϕ(si)] (1) where ϕ(si) ∈ R 1×dF denotes the learned embeddings of span width features. For each pair of spans input si, sj ∈ S, the span pair representation is defined as: hr(si, sj ) = [he(si); he(sj ); Faf ] (2) where Faf = he(si) · he(sj ) refers to the entity affinity function of e(si) and e(sj ). Both pairwise span feature hr(si, sj ) and span feature he(si) will be fed into feedforward neural networks (FFNNs) respectively. The probability distribution of entity is denoted as ![3_image_0.png](3_image_0.png) Pe(e|si),(e *∈ E ∪* ϵ) and entity pairs is denoted as Pr(r|si, sj ),(r *∈ R ∪* ϵ). The classification loss will be defined as: The classification loss will be defined as: $$\mathcal{L}=\sum_{t\in T}w_{t}\mathcal{L}_{t}\tag{3}$$ where $w_{t}$ is the predefined weight of a task $t$ and T is the total number of tasks. We then use labelled data Dlto train the classifier Cl. The Cl generates contextualized span or span pair feature from Equation 1 and Equation 2 which converts unlabeled data Du into unlabeled (query) entity presentation hu,e or query entity pair representation hu,r. For labelled data Dl, we denote the Cl generated labelled (support) entity presentation as hl,e and labelled entity pair representation as hl,r. ## 3.2 Joint Semi-Supervised Learning Heterogeneous Graph Construction We construct the heterogeneous graph to exploit the manifold structure of the class space and exploit the combination of labelled data Dl and unlabeled data Du. Specifically, we examine the similarity relations among pairs of unlabeled data as well as the similarity relationships between the labelled data in order to take advantage of the smoothest constraints among neighbouring unlabelled data in our semi-supervised joint entity and relation extraction task. For computational efficiency, we construct a k Nearest Neighbor (kNN) graph instead of a fullyconnected graph. Let N be the number of labelled entity representations and let M be the number of unlabelled entity representations. Specifically, we take N entity representations and M unlabelled entity representations as nodes of an entity graph with size Te = N + M. For the relation graph, we take span pair representation as nodes with size Tr = ((N+M)×(N+M)). We construct a sparse affinity matrix, denoted as A ∈ R T×T, where Te, Tr ∈ T by computing the Gaussian similarity function between each node: $${\bf A}_{ab}=exp(-\frac{||({\bf h}_{a},{\bf h}_{b})||_{2}^{2}}{2\sigma^{2}})\tag{4}$$ where ${\bf h}_{a}$ denotes the a-th entity represent where ha denotes the a-th entity representation or pairwise entity representation (i.e. {hr(si, sj ), he(si), he(sj )} ∈ ha). The σ is the length scale parameter. Subsequently, we symmetrically normalize the non-negative and symmetric matrix O = A + AT by applying Normalized Graph Laplacian on O: S = H(−1/2)OH(−1/2) (5) where H is a diagonal matrix with its (*i, i*)-value to be the sum of the i-th row of O. For pairwise span representation hr(si, sj ) is essentially a function of he(si) and he(sij). The entity nodes and the relation nodes are automatically associated via their representation. Label propagation Based on the embedding space, we propose the use of transductive label propagation to construct a graph from the labelled support set and unlabeled set, and then propogate the labels based on random walks to reason about relationships in labelled and unlabeled sets. Figure 3 illustrates the whole process of heterogeneous graph-based propagation G. The circle node is the entity span representation and the triangle node is the relation representation. We define a label matrix Z ∈ R V ×U where U is either the size of entity ![4_image_0.png](4_image_0.png) types or relation types U = {E; R}. For label matrix, Z, the corresponding labelled data are one-hot ground truth labels and the rest are 0. Additionally, we denote Yt as a representation of the predicted label distributions at iteration t. Initially, we set the rows in Y0 = Z. Starting from Y0, message passing via label propagation in an iterative manner selects the type of the span or span pairs in the unlabeled set Du according to the graph structure according to the following operation: Yt+1 = cSYt + (1 − c)Z (6) where c ∈ (0, 1) controls the probability of information being obtained from a node's adjacency nodes or its initial label. Yt refers to the predicted labels at time t. Given $Y_{0}=Z$, and equation (6), we have: $$Y_{t}=(cS)^{t-1}Z+(1-c)\sum_{i=0}^{t-1}(cS)^{i}Y\tag{7}$$ As the parameter $c\in(0,1)$, taking the limit of equation (7) $(t->\infty)$ we have: $$\lim_{t\to\infty}Y_{t}=(1-c)(1-cS)^{-1}=Y_{converge}\tag{8}$$ The label propagation will converge to $Y_{converge}$. $$Y_{t+1}=c S Y_{t}+(1-c)Z$$ ## 3.3 Model Optimization After we obtain the Y*converge*, we use the *sof tmax* function followed by a standard argmax operation to determine the pseudo labels {yˆ} for all the instances in the unlabeled set based on the final label probability matrix Y*converge*. After generating the pseudo labels {yˆ} for all the labelled data Dl, we filter those of lower quality with a confidence threshold of g and combine the rest (of confidence above the threshold) with the labelled data Dlto retrain the classification model: $\{\hat{y}\}=\{\hat{y}\,|\,\text{confidence}(y)\geq g\}$ $(X,Y)=(X,Y)_{D_{l}}+\{(x,\hat{y})|x\in D_{u}\}$ As shown in the Figure 2, the final step in our proposed joint semi-supervised learning framework is re-training. The retraining model remains the same as the baseline model, as does the joint NERRE classification function. ## 4 Experiments We evaluate the effectiveness of *Jointprop* against models from two lines of work: semi-supervsied NER and semi-supervsied RE. We also provided a detailed analysis to demonstrate the benefits of our framework. For implementation details and dataset descriptions please refer to Appendix A and Appendix B. Datasets We perform experiments to assess the efficacy of our framework on four public datasets: SciERC (Luan et al., 2018b), ACE05 (Walker et al., 2006), SemEval (Hendrickx et al., 2010) and ConLL (Tjong Kim Sang and De Meulder, 2003). ## 4.1 Main Results Tables 1 to 4 provide the framework performance on the joint entity and relation extraction task, the NER task, and the RE task, respectively. Note that Beforeprop only trains using the labelled corpus. (i.e., The *Beforeprop* only trains with 5%, 10% and 30% training data.) As no unlabeled data are used in the training, this indicates the lower bound performance and establishes a new baseline. Settings% labeled Data Task **5% 10% 20% 30%** P R F1 P R F1 P R F1 P R F1 Beforeprop(baseline) NER 46.78 47.25 47.01 52.44 59.80 55.94 55.80 62.37 58.90 60.42 67.56 63.79 RE 20.89 15.40 17.73 35.75 16.74 22.80 38.68 23.51 29.25 43.41 29.77 35.32 Jointprop NER **52.67 48.46 51.02 60.15 61.95 61.04 62.03 64.52 63.25 66.55 65.73 66.19** RE **40.82 33.78 36.97 44.42 26.34 39.98 44.55 45.28 44.91 57.94 39.32 46.85** Settings% labeled Data Task **5% 10% 20% 30%** P R F1 P R F1 P R F1 P R F1 Beforeprop (baseline) NER 78.32 76.88 77.59 80.81 81.68 81.24 81.01 85.17 83.04 84.51 86.98 85.72 RE 46.33 20.85 28.76 49.10 30.76 37.82 46.71 46.31 46.51 57.59 48.78 52.83 Jointprop NER **81.91 78.39 80.11 83.38 82.76 83.07 86.82 83.76 85.27 87.69 86.40 87.04** RE **48.10 29.63 36.67 48.89 36.23 42.00 60.26 44.67 51.30 61.54 48.65 54.34** Methods / % labeled Data **5% 10% 30%** P R F1 P R F1 P R F1 Mean-Teacher 70.33 68.55 69.05 74.01 72.08 73.37 79.09 82.23 80.61 Self-Training 73.10 70.01 71.34 75.54 73.00 74.25 80.92 82.39 81.71 DualRE 73.32 77.01 74.35 75.51 78.81 77.13 81.30 84.55 82.88 MRefG 73.04 78.29 75.48 76.32 79.76 77.96 81.75 84.91 83.24 MetaSRE 75.59 81.40 78.33 78.05 82.29 80.09 82.01 87.95 84.81 GradLRE 75.96 83.72 79.65 78.90 82.94 81.69 82.74 88.49 85.52 Jointprop† **76.09 86.35 80.89 79.10 88.64 83.60 83.62 89.35 86.39** Gold labels - - 84.64 - - 84.40 - - 87.08 Methods / % labeled Data **5% 10% 30%** P R F1 P R F1 P R F1 VSL-GG-Hier 84.13 82.64 83.38 84.90 84.52 84.71 85.37 85.67 85.52 MT + Noise 83.74 81.49 82.60 84.32 82.64 83.47 84.98 84.78 84.88 Semi-LADA 86.93 85.74 86.33 88.61 88.95 88.78 89.98 90.52 90.25 Jointprop† **89.88 85.98 87.68 88.76 90.25 88.89 91.16 90.58 90.87** Table 2: Performance on ACE05 with various amounts of labelled data. Table 4: Performance on CoNLL 2003 with various labelled data. († indicates our framework.) Results on SciERC Table 1 illustrate our main results on semi-supervised joint learning on the SciERC dataset. We observed *Jointprop* improve significantly on both entity recognition and relation extraction. *Jointprop* achieves 3.97% and 15.89% F1 improvements, respectively, comparing to *Beforeprop*. This improvement validates the robustness of *Jointprop* by performing joint learning on NER and RE. tency of the framework for multitask datasets. Results on SemEval Table 3 summarizes the experimental results on the SemEval dataset using various labelled data and 50% unlabeled data. *Jointprop* improves on the *Beforeprop* by 5.47% on average. We can observe that *Jointprop* attains 1.24%, 1.91% and 0.81% F1 improvements over the stateof-the-art model GradLRE (Hu et al., 2021b) with 5%, 10% and 30% training data. Moreover, the model's performance consistently improves while narrowing down the gap towards the upper bound as the proportion of labelled data increases. *Jointprop* establishes a new state-of-the-art result, indicating that our framework is relatively robust even when performing a single task: semi-supervised Results on ACE05 Table 2 we summarize the results of comparing to the baseline performance. As can be seen from the table, *Jointprop* improves by around 2% and 5% on F1 for entity recognition and relation extraction task respectively. The results of this study provide further evidence of the consis- Table 1: Performance on SciERC with various amount of labeled data. Table 3: Performance on SemEval with various labelled data and 50% unlabeled data. We provide the *Gold labels* serves as the upper bound of the model. († indicates our framework.) ## Re. Results on CoNLL Experimental results on CoNLL dataset are shown in Table 4. Semi-LADA (Chen et al., 2020) is the current state-of-the-art semi-supervised NER model. In multiple training data settings, *Jointprop* achieves an average improvement of 0.9% over Semi-LADA. Semi-LADA reports a 91.83% F1 score in a fully supervised setting, as the upper bound of the semi-supervised model. *Jointprop* achieves 90.87% in F1 score with 30% of training data. The difference between the upper bound and the model performance narrows to less than 1%. Moreover, *Jointprop* surpasses the current state-of-the-art semi-supervised NER model, showing our model's effectiveness on another single task: semi-supervised NER. ## 4.2 Analysis 4.2.1 Ablation Studies This section provides comprehensive ablation studies to show the efficacy of *Jointprop* frameworks. Tables 5 and 7 show the effect of joint label propagation on single-task (NER or RE) prediction accuracy. *w/o REprop* denotes ablating the relation propagation while *w/o NERprop* denotes ablating the entity propagation. As a lower bound to the framework, we provide the *Beforeprop* result, which is the base model without any propagation. As shown in Table 5, although *w/o REprop* achieved an average 0.85% improvement on F1 compared to *Beforeprop*. The *Jointprop* further improve the performance significantly by 4.01%, 4.98%, 3.65% and 2.19% across 5%, 10%, 20% and 30% training data, respectively. From Table 7, we observed that *w/o REprop* attain an average of 2.94% performance gain in F1 compared to *Jointprop*. Though *w/o REprop* shows its effectiveness, w/o NERprop has 7.03% further overall across different proportions of training data. In general, we observe that joint label propagation is very helpful to *Jointprop* performance, especially for relation extraction tasks. We investigate a real and illustrative example in Figure 1. Given sentences S1 to S3. *w/o REprop* is unable to identify the label of "alignment" in S2 and "NLI alignment" in S3. Moreover, w/o NERprop tends to miss predict the pseudo label as no_relation. More specifically, in annotated S1, the entity "dependency parsing" has no direct link to the entity "alignment" in S2 and entity "NLI alignment" in S3. Consequently, *w/o REprop* makes the wrong prediction. Similar to *w/o REprop*, the relation indicator "uses..to" in annotated S1 is semantically similar to "used in" in S2 but not akin to "apply..for.." in S3, hence *w/o NERprop* miss identify the label of r′′′. Whereas *Jointprop* can assign the correct pseudo label to entities and relations in all three sentences for it benefits from the shared information from NER and RE. The results indicate that our framework *Jointprop* could leverage the interactions across the two tasks and derive useful information from a broader context. Therefore achieve significant improvement across NER and RE tasks. ## 4.2.2 Case Study We perform a case study examining our framework's performance on four sentences (i.e., S1, S2, S3, and S4) in comparison to the benchmark models Semi-LADA and GradLRE. Semi-LADA performs semi-supervised NER task while GradLRE performs semi-supervised RE task. Meanwhile Jointprop performs the semi-supervised style joint for NER and RE. S1 has a simple structure, and all three models correctly classify the label for relation and entity. For S2, the GradLRE misclassifies the "Statistical machine translation" entity as Task. Most of the labelled samples with given entity pair are likely as in (e1: Method, e2: Task), plus there is a relation indicator "in order to," which misguides the GradLRE into the wrong prediction. Similarly, in S4, Semi-LADA predicts the entity as Generic, the dominant class in the training set. *Jointprop* can assign the correct label without being sensitive to the label distribution in the training data. Moreover, Semi-LADA fails to recognize the entity "correlation of dependency relation paths" in S3, while GradLRE cannot identify the relation Used-for. One possible reason is that there were not many similar long sequences in the training data. Consequently, Semi-LADA is insufficient in entity learning, especially for long lines, while the GradLRE fails to establish edges with samples in the training set. *Jointprop* not only builds a connection between labelled and unlabeled data but also within labelled/unlabeled data. The extra connections hence help our model to make the correct prediction. ## 4.2.3 Qualitative Analysis Table 8 shows the qualitative results of our proposed method Joint Semi-supervised Learning for | Name Entity Recognition | |---------------------------| Model / Task 5% 10% 20% 30% P R F1 P R F1 P R F1 P R F1 Beforeprop 46.78 47.25 47.01 52.44 59.80 55.94 55.80 62.37 58.90 60.42 67.56 63.79 w/o REprop 51.82 45.10 48.23 58.92 53.46 56.06 61.55 57.77 59.60 64.71 63.32 64.01 Jointprop **52.67 48.46 51.02 60.15 61.95 61.04 62.03 64.52 63.25 66.55 65.73 66.19** Table 5: Ablation study on pure NER task on SciERC dataset. Model / Task 5% 10% 20% 30% P R F1 P R F1 P R F1 P R F1 Beforeprop 20.89 15.40 17.73 35.75 16.74 22.80 38.68 23.51 29.25 43.41 29.77 35.32 w/o NERprop 38.92 13.35 19.88 19.20 44.97 26.91 22.27 62.53 32.74 32.12 44.56 37.33 Jointprop **40.82 33.78 36.97 44.42 26.34 39.98 44.55 45.28 44.91 57.94 39.32 46.85** | Relation Extraction | |-----------------------| Table 6: Ablation study on pure RE task on SciERC dataset. Table 7: Case study of *Jointprop*. The red marked span denotes the head (e1) entity while the blue marked span represents the tail (e2) entity. Semi-LADA performs OtherScientificTerm abbreviated as OST. (x) indicates the wrong prediction and - means the model does not have certain predictions (i.e., The model does not predict entity type or relation type). Entity and Relation Extraction with Heterogeneous Graph-based Propagation. We show the performance of the propagated pseudo labels with the ground truths under 10% split training set on ACE05 dataset. As we can see from the performance Table 8, in both NER and RE, the recall of the predictions indicates that most of the positive candidates have been propagated a positive label.Meanwhile, the precision of the predictions for the NER task is also high. However, the precision for the RE task is low, showing that almost half of the null candidates have been assigned a positive label. The propagation of RE tasks is still quite challenging. $$\begin{array}{r l r l r}{{\hline}{\ \%}&{\quad}&{\mathrm{P}}&{\quad}&{\mathrm{R}}&{\quad}&{\mathrm{Fl}}\\ {\hline\ \mathrm{NER}&{\quad}&{86.23}&{\quad}&{92.78}&{\quad}&{89.34}\\ {\mathrm{RE}}&{\quad}&{52.17}&{\quad}&{98.82}&{\quad}&{68.57}\end{array}$$ Table 8: Qualitative results of our method in 10% split on ACE05 dataset. (Average F1) In spite of this, our method still generally produces more accurate predictions. Given a sentence in ACE05: 'Although the Russian government...'. Our model prediction for the phrase "Russian government" is "Organization", which is more accurate than the ground truth GPE-Geographic Entities. ## 5 Conclusion In this paper, we propose a novel heterogeneous graph-based propagation mechanism for joint semisupervised learning of entity and relation extraction. For the first time, we explore the interrelation between different tasks in a semi-supervised learning setting. We show that the joint semi-supervised learning of two tasks benefits from their codependency and validates the importance of utilizing the shared information between unlabeled data. Our experiments show that combining the two tasks boost the model performance. We also evaluate two public datasets over competitive baselines and achieve | Sentence | Semi-LADA | GradLRE | Jointprop | |-------------------------------------------------------------------------------------------------------------------------------|----------------------------------------|----------------------------------------------------------|-------------------------------| | S1: We propose a Cooperative Model for natural | e1: Method | | | | language understanding in a dialogue system. | e2: Task R: - | e1: Method e2: Task R: Used-for | | | S2: We address appropriate user modelling in order to generate cooperative responses to each user in spoken dialogue systems. | e1: - e2: - R: Used-for | e1: Method e2: Task R: Part-of | | | S3: We explore correlation of dependency relation | e1: - (x) | | | | paths to rank candidate answers in answer extraction. | e2: Task R: - e1: Method e2: Task R: - | e1: - e2: - R: Used-for (x) | e1: OST e2: Task R: Used-for | | S4: We present a syntax-based constraint for word | e1: Generic (x) | | | | alignment, known as the cohesion constrain. | e2: Generic (x) R: - | e1: - e2: - R: no_relation (x) e1: - e2: - R: Hyponym-of | e1: OST e2: OST R: Hyponym-of | state-of-the-art performance. We also conduct ablation studies of our proposed framework, which demonstrate the effectiveness of our model. We further present case studies of our model output. ## 6 Limitations May extend to other domains In this paper, we present a generic framework and evaluate the effectiveness of our proposed model *Jointprop* on three public datasets. We may further extend the framework to various datasets in different domains. For example, ACE05 (Walker et al., 2006) in social networks, journalism, and broadcasting, as well as GENIA corpus (Ohta et al., 2002) in biomedical research. May extend to other NLP tasks Our proposed model focus on two tasks, namely NER and RE. We may extend our framework to include more information extraction tasks, such as coreference resolution and event extraction. Moreover, we may contract knowledge graphs from extracted structural information. ## Acknowledgment This research is supported by Nanyang Technological University, under SUG Grant (020724-00001) ## References Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905, Florence, Italy. Association for Computational Linguistics. David S. Batista, Bruno Martins, and Mário J. Silva. 2015. Semi-supervised bootstrapping of relationship extractors with distributional semantics. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 499–504, Lisbon, Portugal. Association for Computational Linguistics. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615–3620, Hong Kong, China. Association for Computational Linguistics. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT' 98, page 92–100, New York, NY, USA. Association for Computing Machinery. Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 551–560, Portland, Oregon, USA. Association for Computational Linguistics. Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, and Diyi Yang. 2020. Local additivity based data augmentation for semi-supervised NER. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1241–1251, Online. Association for Computational Linguistics. Mingda Chen, Qingming Tang, Karen Livescu, and Kevin Gimpel. 2018. Variational sequential labelers for semi-supervised learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 215–226, Brussels, Belgium. Association for Computational Linguistics. Olivier Delalleau, Yoshua Bengio, and Nicolas Le Roux. 2005. Efficient non-parametric function induction in semi-supervised learning. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, volume R5 of Proceedings of Machine Learning Research, pages 96–103. PMLR. Reissued by PMLR on 30 March 2021. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Ralph Grishman. 1997. Information extraction: Techniques and challenges. In Information Extraction, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 11–27. Springer Verlag. International Summer School on Information Extraction, SCIE 1997 ; Conference date: 14-071997 Through 18-07-1997. Sonal Gupta and Christopher Manning. 2014. Improved pattern learning for bootstrapped entity extraction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 98–108, Ann Arbor, Michigan. Association for Computational Linguistics. Sonal Gupta and Christopher D. Manning. 2015. Distributed representations of words to guide bootstrapped entity classifiers. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1215–1220, Denver, Colorado. Association for Computational Linguistics. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: Multiway classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33–38, Uppsala, Sweden. Association for Computational Linguistics. Xuming Hu, Chenwei Zhang, Fukun Ma, Chenyao Liu, Lijie Wen, and Philip S. Yu. 2021a. Semi-supervised relation extraction via incremental meta self-training. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 487–496, Punta Cana, Dominican Republic. Association for Computational Linguistics. Xuming Hu, Chenwei Zhang, Yawen Yang, Xiaohe Li, Li Lin, Lijie Wen, and Philip S. Yu. 2021b. Gradient imitation reinforcement learning for low resource relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Pooja Lakshmi Narayan, Ajay Nagesh, and Mihai Surdeanu. 2019. Exploration of noise strategies in semi-supervised named entity classification. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 186–191, Minneapolis, Minnesota. Association for Computational Linguistics. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 402–412, Baltimore, Maryland. Association for Computational Linguistics. Wanli Li, Tieyun Qian, Xu Chen, Kejian Tang, Shaohui Zhan, and Tao Zhan. 2021. Exploit a multihead reference graph for semi-supervised relation extraction. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–7. Hongtao Lin, Jun Yan, Meng Qu, and Xiang Ren. 2019. Learning dual retrieval module for semisupervised relation extraction. In The World Wide Web Conference, WWW '19, page 1073–1083, New York, NY, USA. Association for Computing Machinery. Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999–8009, Online. Association for Computational Linguistics. Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sungju Hwang, and Yi Yang. 2019. Learning to propagate labels: Transductive propagation network for few-shot learning. In International Conference on Learning Representations. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018a. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219–3232, Brussels, Belgium. Association for Computational Linguistics. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018b. Multi-task identification of entities, relations, and coreferencefor scientific knowledge graph construction. In Proc. Conf. Empirical Methods Natural Language Process. (EMNLP). Yi Luan, Mari Ostendorf, and Hannaneh Hajishirzi. 2017. Scientific information extraction with semisupervised neural tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2641–2651, Copenhagen, Denmark. Association for Computational Linguistics. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036–3046, Minneapolis, Minnesota. Association for Computational Linguistics. Anh Tuan Luu, Jung-jae Kim, and See Kiong Ng. 2014. Taxonomy construction using syntactic contextual evidence. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 810–819. Anh Tuan Luu, Jung-jae Kim, and See Kiong Ng. 2015. Incorporating trustiness and collective synonym/contrastive evidence into taxonomy construction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1013–1022. Anh Tuan Luu, Yi Tay, Siu Cheung Hui, and See Kiong Ng. 2016. Learning term embeddings for taxonomic relation identification using dynamic weighting neural network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 403–413. Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858–1869, Doha, Qatar. Association for Computational Linguistics. Tomoko Ohta, Yuka Tateisi, and Jin-Dong Kim. 2002. The genia corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of the Second International Conference on Human Language Technology Research, HLT '02, page 82–86, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147–155, Boulder, Colorado. Association for Computational Linguistics. H. Scudder. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Transactions on Information Theory, 11(3):363–371. Matthias Seeger. 2001. Learning with labeled and unlabeled data. Amarnag Subramanya and Jeff Bilmes. 2011. Semi-supervised learning with measure propagation. Journal of Machine Learning Research, 12(102):3311–3370. Amarnag Subramanya, Slav Petrov, and Fernando Pereira. 2010. Efficient graph-based semi-supervised learning of structured tagging models. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, page 167–176, USA. Association for Computational Linguistics. Anders Søgaard. 2013. Semi-supervised learning and domain adaptation in natural language processing. Synthesis Lectures on Human Language Technologies, 6:1–103. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Luu Anh Tuan, Siu Cheung Hui, and See Kiong Ng. 2016. Utilizing temporal information for taxonomy construction. Transactions of the Association for Computational Linguistics, 4:551–564. David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784–5789, Hong Kong, China. Association for Computational Linguistics. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. In Linguistic Data Consortium. Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442–6454, Online. Association for Computational Linguistics. Yaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, and Min Zhang. 2018. Distantly supervised NER with partial annotation learning and reinforcement learning. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2159–2169, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Deming Ye, Yankai Lin, and Maosong Sun. 2021. Pack together: Entity and relation extraction with levitated marker. CoRR, abs/2109.06067. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1753–1762, Lisbon, Portugal. Association for Computational Linguistics. Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1730–1740, Copenhagen, Denmark. Association for Computational Linguistics. Dengyong Zhou, Olivier Bousquet, Thomas Lal, Jason Weston, and Bernhard Schölkopf. 2004. Learning with local and global consistency. In Advances in Neural Information Processing Systems, volume 16. MIT Press. Xiaojin Zhu and Zoubin Ghahramani. 2002. Learning from labeled and unlabeled data with label propagation. ## A Experimental Settings Framework We show our overall framework Jointprop in Figure 2. Following (Liu et al., 2019), the hyper-parameter c in Equation 6 is set to 0.99. According to our empirical findings, the best values for the settings of k and σ in graph construction in Section 3.2 are varied in datasets. We select σ as two and the k as 50. Meanwhile, We adopt the affinity function Faf with all the generated spans between relation spans. Moreover, we perform average pooling for them. The optimal hyperparameters and settings are selected based on the model's performance. Training We employ the BERT-cased as an encoder for SemEval and ConLL datasets and adopt the SciBERT-SCIVOCAB-cased (Beltagy et al., 2019) encoder for the SciERC dataset as suggested in (Luan et al., 2019). The rest will be treated as an unlabeled set. To maximize the loss, we use BERTAdam with a 1e-3 learning rate. The maximum span width is set at 8. ## B Datasets And Baselines B.1 Dataset Implementation For semi-supervised joint task, we consider SciERC (Luan et al., 2018b) and ACE05(Walker et al., 2006) datasets and follow the pre-processing steps in (Wadden et al., 2019). For a single task, we conduct experiments against models from two types of work: semi-supervised NER, and semi-supervised RE. For the semi-supervised NER task, we consider ConLL 2003 (ConLL) (Tjong Kim Sang and De Meulder, 2003) and adopt the pre-processing in (Chen et al., 2020). For semi-supervised RE we evaluate our approach on SemEval 2010 Task 8 (SemEval) (Hendrickx et al., 2010) dataset and adopt the pre-processing in (Hu et al., 2021b). Note that the entity mentioned in the sentences in the **SemEval** has been identified and marked in advance. Table 9 shows the statistics of each dataset. | Dataset | Sentences | Types | | | | |-----------|-------------|---------|------|-----|----| | Train | Dev | Test | # E | # R | | | SciERC | 1861 | 275 | 551 | 6 | 7 | | ACE05 | 10051 | 2424 | 2050 | 7 | 6 | | ConLL | 14,987 | 3466 | 3684 | 4 | - | | SemEval | 7199 | 800 | 1864 | - | 19 | Table 9: **Statistics for the SciERC, ACE05, ConLL** and SemEval datasets. \#E: Number of entity classes. \#R: Number of relation classes. Data split for semi-supervised settings We follow split settings in (Chen et al., 2020), (Hendrickx et al., 2010) respectively for ConLL and SemEval and generate different proportions (5%, 10% and 30%) of training data to investigate how training set size impacts performance and to retain the original development set and test set for evaluation purposes. Noted that we sample 50% of the training set as the unlabeled set as (Hendrickx et al., 2010) for fair comparisons. For ACE05 and SciERC datasets, we split the training data based on documents and generate 5%, 10%l 20% and 30% of training data. In particular, we endeavour to ensure that each proportion of data contains as many types of entity types and relation types possible. ## B.2 Evaluation Metrics We consider the same criteria to apply as previous works (Hu et al., 2021b,a; Li et al., 2021; Lin et al., 2019) where precision and recall serve as supplementary metrics, while F1 score serves as the primary evaluation metrics. Note that the evaluation excludes the accurate prediction for no_relation. ## B.3 Compared Baselines Semi-supervised joint learning For joint learning, because there is no prior study on semisupervise joint learning, we use DYGIE++ (Wadden et al., 2019) (i.e. *Beforeprop*) as our baseline model to train. Semi-supervised NER In order to show that our Jointprop framework works with unlabeled data, we compared it to three recent state-of-the-art semisupervised NER models that were already in use: - **VSL-GG-Hier** (Chen et al., 2018) introduced a hierarchical latent variables models into semi-supervised NER learning. - **MT + Noise** (Lakshmi Narayan et al., 2019) explored different noise strategies including word-dropout, synonym-replace, Gaussian noise and network-dropout in a mean-teacher framework. - **Semi-LADA** (Chen et al., 2020) proposes a local additivity based data augmentation method which uses the back-translation technique. Semi-supervised RE We compared our *Jointprop* framework with the following 6 representative semi-supervised relation models: - **Mean-Teacher** promotes the model's variants to generate consistent predictions for comparable inputs. - **DualRE** (Lin et al., 2019) trains a prediction and retrieval module in conjunction to choose samples from unlabeled data. - **MRefG** (Li et al., 2021) constructs reference graphs to semantically relate unlabeled data to labelled data. - **MetaSRE** (Hu et al., 2021a) constructs pseudo labels on unlabeled data using metalearning from the successfulness of the classifier module as an extra meta-objective. - **GradLRE** (Hu et al., 2021b) is the stateof-the-art approach that encourages pseudolabeled data to mimic the gradient descent direction on labelled data and boost its optimization capabilities via trial and error. - **Gold labels** train annotated (i.e., sampled 5%, 10% or 30% training data) and unlabeled data with their gold labels indicating the model upper bound. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Left blank. ✓ A2. Did you discuss any potential risks of your work? Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhang-etal-2023-reasoning
Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering
https://aclanthology.org/2023.acl-long.814
Explainable question answering (XQA) aims to answer a given question and provide an explanation why the answer is selected. Existing XQA methods focus on reasoning on a single knowledge source, e.g., structured knowledge bases, unstructured corpora, etc. However, integrating information from heterogeneous knowledge sources is essential to answer complex questions. In this paper, we propose to leverage question decomposing for heterogeneous knowledge integration, by breaking down a complex question into simpler ones, and selecting the appropriate knowledge source for each sub-question. To facilitate reasoning, we propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT). First, we build the Hierarchical Question Decomposition Tree (HQDT) to understand the semantics of a complex question; then, we conduct probabilistic reasoning over HQDT from root to leaves recursively, to aggregate heterogeneous knowledge at different tree levels and search for a best solution considering the decomposing and answering probabilities. The experiments on complex QA datasets KQA Pro and Musique show that our framework outperforms SOTA methods significantly, demonstrating the effectiveness of leveraging question decomposing for knowledge integration and our RoHT framework.
# Reasoning Over Hierarchical Question Decomposition Tree For Explainable Question Answering Jiajie Zhang1∗, Shulin Cao2∗, Tingjian Zhang2, Xin Lv2, Jiaxin Shi3**, Qi Tian**3, Juanzi Li2† , Lei Hou2 1Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China 2Department of Computer Science and Technology, Tsinghua University, Beijing, China 3Cloud BU, Huawei Technologies {jiajie-z19, caosl19}@mails.tsinghua.edu.cn {houlei, lijuanzi}@tsinghua.edu.cn ## Abstract Explainable question answering (XQA) aims to answer a given question and provide an explanation why the answer is selected. Existing XQA methods focus on reasoning on a single knowledge source, e.g., structured knowledge bases, unstructured corpora, etc. However, integrating information from heterogeneous knowledge sources is essential to answer complex questions. In this paper, we propose to leverage question decomposing for heterogeneous knowledge integration, by breaking down a complex question into simpler ones, and selecting the appropriate knowledge source for each sub-question. To facilitate reasoning, we propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT). First, we build the *Hierarchical Question Decomposition Tree* (HQDT) to understand the semantics of a complex question; then, we conduct probabilistic reasoning over HQDT from root to leaves recursively, to aggregate heterogeneous knowledge at different tree levels and search for a best solution considering the decomposing and answering probabilities. The experiments on complex QA datasets KQA Pro and Musique show that our framework outperforms SOTA methods significantly, demonstrating the effectiveness of leveraging question decomposing for knowledge integration and our RoHT framework. ## 1 Introduction Explainable question answering (XQA) is the task of (i) answering a question and (ii) providing an explanation that enables the user to understand why the answer is selected (Neches et al., 1985; Schuff et al., 2020). It provides a qualified way to test the reasoning ability and interpretability of intelligent systems, and plays an important role in artificial intelligence (Lu et al., 2022). ![0_image_0.png](0_image_0.png) Figure 1: An example of Hierarchical Question Decomposition Tree (HQDT). q irepresents the index of node in its BFS ordering enumeration. Recent work in XQA can be grouped into two directions: 1) neuro-symbolic methods (Berant et al., 2013; Liang et al., 2017; Cao et al., 2022b) translate natural language questions into formal representations (*e.g.*, SPARQL (Sun et al., 2020), KoPL (Cao et al., 2022a), lambda-DCS (Liang, 2013), *etc.*), whose execution on structured knowledge bases (KBs) gives the answer. Here, the formal representation acts as an explanation of the final answer. 2) Decompose-based models generate natural language intermediate steps that lead to the final answer (e.g., question decomposing which decomposes a complex question into sub-questions (Min et al., 2019; Perez et al., 2020; Deng et al., 2022), chain-of-thought prompting (Wei et al., 2022; Dua et al., 2022; Khot et al., 2022), etc.). Here, the intermediate steps shows the rationale of reasoning. Although achieving significant results, both directions have key limitations. For neuro-symbolic methods, the formal representation can only be executed on KBs. However, even the largest KBs are incomplete, thus limits the recall of model. For decompose-based methods, they employ free-text corpora as the knowledge source, and the diversity of natural language makes XQA difficult. In fact, integrating knowledge from heterogeneous sources is of great importance to QA (Wolfson et al., 2020), especially for answering complex questions. Several attempts have been made for knowledge integration (e.g., KBs, text corpora) (Sun et al., 2018, 14556 2019; Shi et al., 2021). Although promising, these graph-based methods suffer from lacking explainability or are constrained to limited reasoning capability. Intuitively, leveraging question decomposing to integrate heterogeneous knowledge sources is a promising direction, since we can flexibly select the appropriate knowledge source for each subquestion. The challenges lie in: 1) How to determine the granularity of question decomposing, since certain complex questions can be directly answered with a knowledge source, and further decomposition increases the possibility of error. For example, in Figure 1, q 1can be answered with the Wikipedia corpus without further decomposition. 2) How to find the optimal solution among various possible ones, since question decomposing and answering are both uncertain. For example, q 0can also be decomposed as "Which mountains are in North America or Afirica", "What's the height of \#1", "*[SelectAmong] [largest] \#2*". To this end, we propose a novel two-stage XQA framework *Reasoning over Hierarchical Question* Decomspotion Tree, dubbed RoHT. First, we propose to understand the complex question by building its *hierarchical question decomposition tree* (**HQDT**). In this tree, the root node is the original complex question, and each non-root node is a subquestion of its parent. The leaf nodes are atomic questions that cannot be further decomposed. Compared with existing representations that directly decompose a question into the atomic ones, e.g., QDMR (Wolfson et al., 2020), our tree structure provides the flexibility to determine solving a question whether by directly answering or further decomposing. Second, we propose **probabilistic reasoning over HQDT**, to fuse the knowledge from KB and text at different levels of the tree, and take into consideration the probability score of both tree generation and answering. The reasoning process is recursive, from the root to leaves, and constitues three steps: 1) a scheduler determines the appropriate knowledge sources for a particular question (from KB, text, or solving its children sequentially); 2) the corresponding executors output the answers with probabilities; 3) an aggregator aggregates the candidate answers from all the knowledge sources and outputs the best ones. In evaluation, we instantiate our RoHT framework on two complex QA datasets: KQA Pro (Cao et al., 2022a), where we remove half of the triples in its KB and supplement it with Wikipedia corpus, and Musique (Trivedi et al., 2022), where we take Wikidata (Vrandecic and Krötzsch, 2014) as additional KB besides the given text paragraphs. Experimental results show that, RoHT improves the performance significantly under the KB+Text setting, by 29.7% and 45.8% EM score on KQA Pro and Musique compared with existing SOTA model. In addition, compared with the decompose-based methods, RoHT improves the SOTA by 11.3% F1 score on Musique. Our contributions include: 1) proposing to leverage question decomposing to integrate heterogeneous knowledge sources for the first time; 2) designing a novel two-stage XQA famework RoHT by first building HQDT and then reasoning over HQDT; 3) demonstrating the effectiveness of our RoHT framework through extensive experiments and careful ablation studies on two benchmark datasets. ## 2 Related Work 2.1 Qa Over Text And Kb Over time, the QA task has evolved into two main streams: 1) QA over unstructured data (e.g., freetext corpora like Wikipedia); 2) QA over structured data (e.g., large structured KBs like DBpedia (Lehmann et al., 2015), Wikidata (Vrandecic and Krötzsch, 2014)). As structured and unstructured data are intuitively complementary information sources (Oguz et al., 2022), several attempts have been made to combines the best of both worlds. An early approach IBM Watson (Ferrucci, 2012) combines multiple expert systems and re-ranks them to produce the answer. (Xu et al., 2016) maps relational phrases to KB and text simultaneously, and use an integer linear program model to provide a globally optimal solution. Universal schema based method (Das et al., 2017) reasons over both KBs and text by aligning them in a common embedded space. GraftNet (Sun et al., 2018) and its successor PullNet (Sun et al., 2019) incorporate free text into graph nodes to make texts amenable to KBQA methods. TransferNet (Shi et al., 2021) proposes the relation graph to model the label-form relation from KBs and text-form relation from corpora uniformly. Although achieving promising results, these methods lack interpretability or are constrained to limited question type, *i.e.*, TransferNet shows interpretability with transparent step transfering, however, it can only answer multi-hop questions, and cannot deal with questions that require attribute comparison or value verification. In contrast, our proposed framework shows great interpretability with HQDT and cover more question types. ## 2.2 Question Decomposing For datasets, KQA Pro (Cao et al., 2022a) proposes to decompose a complex question into a multi-step program KoPL, which can be executed on KBs. BREAK (Wolfson et al., 2020) proposes to decompose questions into QDMR, which constitutes the ordered list of steps, expressed through natural language. Musique (Trivedi et al., 2022) is a QA dataset constructed by composing single-hop questions obtained from existing datasets, and thus naturally provides question decompositions. For models, several attempts have been made for learning to decompose with weak-supervision, such as span prediction based method (Min et al., 2019), unsupervised sequence transduction method ONUS (Perez et al., 2020), AMR-based method QDAMR (Deng et al., 2022). Another line of work is to employ large language models with in-context learning, such as Least-to-most Prompting (Zhou et al., 2022), decomposed prompting (Khot et al., 2022), successive prompting (Dua et al., 2022). Compared with existing works, we are the first to design a hierarchical question decomposition tree for integrating information from multiple knowledge sources. ## 3 Definition Of Hqdt Formally, given a complex question, its HQDT is a tree T. Each node q i ∈ T represents a question. For root node, it represents the given complex question, and for non-root nodes, it represents a sub-question of its parent node. The leaf nodes are simple ("atomic") questions that cannot be decomposed. Note that HQDT is a 3-ary ordered tree. As shown in Figure 1, we enumerate the nodes of T with BFS ordering, and q 0is the root question. A question q i = w1, · · · , wj , · · · , w|q i| can be categorized into one of the three types according to the token vocabulary: 1) natural language question (e.g., q 4: "*Which mountain is the highest in North America?*"), here, wj ∈ V, and V is the word vocabulary; 2) bridge question (e.g., q 5: "*How high is* \#4?"), here, wj *∈ V ∪ R*, and R is the reference token vocabulary. In this question, "\#4" refers to the answer of q 4, which is the sibling question of q 5; 3) symbolic operation question (e.g., q 3: "*[SelectBetween][greater]* \#1 \#2"), here, wj *∈ V ∪ R ∪ O*, and O is the vocabulary of pre-defined symbolic operations, which are designed for supporting various reasoning capacity (e.g., attribute comparison and set operation) and are shown in appendix A in details. Note that all the bridge questions and symbolic operation questions are atomic questions and can only appear in leaf nodes. For every non-leaf question q i, we define two ordered lists: - q i*.children* = q sti, · · · , qedi, which are children of q i, successively indexed from sti to edi. For example, for question q 1in Figure 1, q 1*.children* is q 4, q5. - q i*.atoms* = a i1 , · · · , aini , which is a list of atomic questions deduced from the ni leaf nodes of the sub-tree rooted by q i, by rearranging the reference tokens. For example, for q 0in Figure 1, its leaf nodes is q 4, q5, q6, q7, q3, and correspondingly, q 0*.atoms* is q 4, q˜ 5, q6, q˜ 7, q˜ 3, with q˜ 5as "*How high is \#1?*", q˜ 7as "*How high is \#3*", and q˜ 3as "*[SelectBetween][greater] \#2 \#4*". The detailed deduction algorithm is in appendix B due to space limit. We also call q i*.atoms* the atomic representation of q i. Specially, among q i*.children*, q sti*, . . . , q*edi−1are all natural language questions, and q ediis either a bridge question or a symbolic operation question. Answering q iis semantically equivalent to answering sub-questions in q i*.children* or in q i*.atoms* sequentially. The last question in q i*.children* or q i*.atoms* returns the answer of q i. ## 4 Methodology Our framework RoHT is composed of two stages: 1) Building HQDT. We understand the hierarchical compositional structure of a complex question q 0 by generating its HQDT T with probability, where each question q i ∈ T has a score p ig that represents the certainty of its generation. 2) Probabilistic Reasoning over HQDT. We conduct recursive probabilistic reasoning over the HQDT from root to leaves to solve q 0. For each question q i, we will utilize KBs, text and its child questions together to get a list Ri, which contains answers of q i with probabilistic scores. Finally the answer with the highest score in R0 will be picked out as the final answer of q 0. The details are introduced as follows. ## 4.1 Building Hqdt To build the HQDT for a complex question, we first generate its atomic representation, which corresponds the leaf nodes of HQDT, then generate every non-leaf nodes based on this atomic representation. We compute certainty score of each node based on the likelihood of each step of generation. Building Leaf Nodes Given a complex question q 0, we first use a BART (Lewis et al., 2020)-based question decomposer Mθ to generate its atomic representation and output the likelihood of generation: $$L^{0},l_{d}=M_{0}(q^{0}).\tag{1}$$ Here, $L^{0}=a_{1}^{0}\ \langle sep\rangle\ a_{2}^{0}\ \langle sep\rangle\ldots\langle sep\rangle\ a_{n_{0}}^{0}$ is the serialization of q 0*.atoms*, where ⟨sep⟩ is a separating token. ld = Pr(L 0|q 0; θ) is the likelihood of generation. Since q 0is the root of T, each atomic question in q 0*.atoms* corresponds to a leaf node in T (with the deterministic algorithm in Appendix C), and the certainty score of each leaf node in T is ld. Building Non-leaf Nodes Based on q 0*.atoms*, we can generate all the non-leaf questions in HQDT. The root question is just q 0and thus has certainty score p 0 g = 1. For every other non-leaf question q i, its atomic representation q i*.atoms* = ⟨a i1 , . . . , aini⟩ can be translated from a specific subset of q 0*.atoms* by rearranging the reference tokens. The subset can be determined by considering the reference relations of a bridge or symbolic operation question a 0 j ∈ q 0*.atoms*, which corresponds to the leaf node q edi, with other questions in q 0*.atoms*. We show the details in Appendix C. For example, q 2*.atoms* in Figure 1 is ("*Which mountain is the highest in Africa?*", "*How high is* \#1?"), and it can be obtained from (a 03 , a04 ) in q 0*.atoms*. Then we can use a BART-based question generator Mϕ to generate q ifrom q i*.atoms*: $$q^{i},l_{g}^{i}=M_{\phi}(L^{i}),$$ i), (2) where $L^{i}=a_{1}^{i}$$\langle$_sep_$\rangle$$a_{2}^{i}$$\langle$_sep_$\rangle$$\ldots$$\langle$_sep_$\rangle$$a_{n_{i}}^{i}$ is the serialized q i*.atoms*, and l ig = Pr(q i|L i; ϕ) is the likelihood of q i given L i. The certainty score of q i is computed as: Learning of Question Decomposer and Generator The question decomposer Mθ can be trained with paired (q 0, q 0*.atoms*) data, where the atomic representation can be from either given annotation or unsupervised construction. The question generator Mϕ can also be trained with the same data by exchanging the input and output. The details are shown in Section 5.2. ## 4.2 Probabilistic Reasoning Over Hqdt $$f(q^{i},p^{i}_{g},G,C)\to R^{i}:\{(ans^{i}_{j},p^{i}_{j})\},\tag{4}$$ where $ans^{i}_{j}$ is an answer of $q^{i}$, and score $p^{i}_{j}$ represents the certainty of $ans^{i}_{j}$. As shown in Figure 3, the implementation of f contains tree steps: 1) a scheduler determines the suitable knowledge sources for a particular question, i.e., whether the question can be answered from KB, text, or by solving its child questions sequentially; 2) according to the suitable sources output by the scheduler, executors aim to get the answers with probabilities via executing on KB (KB executor) or retrieving from text (text executor), or answering the child questions (call f recursively); 3) an aggregator aggregates candidate answers from all the knowledge sources and outputs the top-k answers according to their probabilities. In the following, we will introduce their details when answering q i. #### ler We formalize the scheduler as: suitkb, suittext, suitchild = Scheduler(q i, G, C), (5) Where suitkb, *suit*text and *suit*child are 0/1 variables, respectively representing whether the answers of q iare suitable to get from the KB G, the corpus C, or by solving q i*.children* sequentially. Specifically, to check whether G is suitable, the scheduler employs a semantic parser (Cao et al., 2022a) Msp to parse q iinto a program K with probability pparse: $$K,p_{\mathrm{parse}}=M_{\mathrm{sp}}(q^{i}).\qquad\qquad(6)$$ $\mathbf{a},$ Then it classifies the type of q iaccording to the function skeleton of K. For example, the function skeleton of K in Figure 2 is "*Find-RelateFilterConcept-SelectAmong*". If the precision of G on the questions that have the same function skeleton with K is larger than a predefined threshold γ 1, the scheduler will set *suit*kb to be 1. 1The precision of KB is calculated with questions in training set $$p_{g}^{i}=l_{d}\cdot l_{g}^{i}.$$ g. (3) $$(3)$$ $1\mathcal{A}$. ![4_image_0.png](4_image_0.png) To check whether the corpus C is suitable, the scheduler tries to find a set of evidence paragraphs for q i. If C is too large, the scheduler will first use BM25 (Robertson and Zaragoza, 2009) to recall dozens of most relevant paragraphs. For each paragraphs, we train a RoBERTa (Liu et al., 2019)- based selector Msl to classify whether it is an evidence paragraph for q i. Suppose the set of selected evidence paragraphs, Ce is not empty, the scheduler will set *suit*text as 1. To make best use of knowledge from all levels, the scheduler simply set *suit*child to be 1 if q iis a non-leaf question otherwise 0. Executors For the KB executor, it takes the program K in Equation 6 on KB G to get the answers, and takes the parsing score pparse in Equation 6 to calculate the probability score for each answer: $$R_{\mathrm{kb}}^{i}=\{(a n s_{\mathrm{kb},j}^{i},\,p_{g}^{i}\cdot p_{\mathrm{parse}})\}.$$ For the text executor, it takes the selected paragraph set Ce as described above, and employs a Transformer-based reading comprehension model Mrc to extract answers from Ce: $$\begin{array}{l}{{\{(a n s_{\mathrm{text},j}^{i},p_{\mathrm{ex},j}^{i})\}=M_{\mathrm{rc}}(q^{i},C_{e}),}}\\ {{R_{\mathrm{text}}^{i}=\{(a n s_{\mathrm{text},j}^{i},\ p_{g}^{i}\cdot p_{\mathrm{ex},j}^{i})\}.}}\end{array}\quad\quad(8)$$ where p iex,j is the extraction probability of ansi text,j given by Mrc. For solving q i by answering its children, f will recursively call itself to solve q sti*, . . . , q*ediin order: $$R^{st_{i}}=f(q^{st_{i}},p_{g}^{st_{i}},G,C),$$ $$R^{st_{i}+1}=f(q^{st_{i}+1},p_{g}^{st_{i}+1},G,C),\tag{9}$$ $$\cdots$$ $$R^{ed_{i}}=f_{\rm ref}(q^{ed_{i}},p_{g}^{ed_{i}},G,C,[R^{st_{i}},\ldots,R^{ed_{i}-1}]),$$ and let $$R^{i}_{\rm child}=R^{ed_{i}}.\tag{10}$$ Here, $f_{\rm ref}$ is a variant of $f$ to solve bridge and symmet Here, fref is a variant of f to solve bridge and symbolic questions, which refer to the answers of their sibling questions. Suppose q edirefers to the answers of its siblings q r1*, . . . , q*rhi in order. If q ediis a bridge question, fref will 1) convert q ediinto several possible natural language question q 1 nl*, . . . , q*K nl by replacing the reference tokens with every combination ((x k 1 , vk 1 )*, . . . ,*(x k hi , vkhi )) ∈ Rr1 *× · · · ×* R rhi , 2) call f to solve each q k nl and 3) fuse the answers from each Rk nl and select the top-k answers with the highest scores: $$\left(7\right)$$ $$\{(ans^{k}_{\rm nl,j},\ p^{k}_{\rm nl,j})\}=f(q^{j}_{\rm nl},p^{i}_{g},G,C),$$ $$R^{k}_{\rm nl}=\{(ans^{k}_{\rm nl,j},\ {\rm Avg}(p^{k}_{\rm nl,j},v^{k}_{1},\ldots,v^{k}_{h_{i}}))\},$$ $$R^{ed_{i}}={\rm Select}(R^{1}_{\rm nl},\ldots,R^{K}_{\rm nl})\tag{11}$$ Note that the score of answer ansk nl,j is computed by averaging p k nl,j and v k 1 , . . . , vkhi , instead of multiplying them, to avoid exponential shrink during recursion. If q ediis a symbolic operation question with operation op and arguments, fref will execute simple program to apply the operation op over Rr1*, . . . , R*rhi to get Redi. The score of each answer ans edi jis computed as the average of p edi g and the scores of answers in Rr1*, . . . , R*rhi used by the program to get ans edi j. 14560 Aggregator The aggregator fuses Rikb, Ri text and Richild by selecting the top-k answers with the highest scores from them. If several answers have the same surface form, only the one with the highest score will be preserved. $$R^{i}=\mbox{Aggregator}(R^{i}_{\bf kb},R^{i}_{\bf text},R^{i}_{\bf child}).\tag{12}$$ ## 5 Experiments 5.1 Datasets Currently, there are few high-quality complex QA datasets based on both KBs and text. Previous methods (Sun et al., 2018, 2019; Shi et al., 2021) evaluated their models on MetaQA (Zhang et al., 2018) by pairing its KB with the text corpus of WikiMovies (Miller et al., 2016). However, the questions in MetaQA are too simple since there are only 9 relations in its KB. Therefore, we conduct our experiments on two more challenging complex QA datasets: KQA Pro and Musique, and their details are as follows. KQA Pro (Cao et al., 2022a) is a large scale complex QA dataset, including 120k diverse natural language questions up to 5 hops over KB. Its KB is a subset of Wikidata (Vrandecic and Krötzsch, 2014), and consists of 16k entities, 363 predicates, 794 concepts and 890k triple facts. For each question, KQA Pro also provides the corresponding KoPL program. To simulate the realistic case where KB is incomplete, following (Sun et al., 2019; Shi et al., 2021), we randomly discard 50% triples in the KB and take Wikipedia as supplementary text corpus. Musique (Trivedi et al., 2022) is a multi-hop QA dataset over text, including 25k 2-4 hop questions. We evaluate our framework under Musique-Ans setting where all the questions are answerable. Its questions are carefully constructed from several single-hop QA datasets via manually composition and paraphrase, and are hard to cheat via reasoning shortcut. For each complex question, Musique gives 20 paragraphs (including annotated evidence paragraphs and distractor paragraphs) as the corpus. Specially, for each question in the training set, Musique also provides a golden atomic representation, together with the answer and the evidence paragraph of each atomic question. In addition to the given paragraphs, we choose Wikidata as the KB to acquire additional knowledge. ## 5.2 Implementations KQA Pro For the experiments of KQA Pro, a key challenge is that there are no annotations for atomic representation, which are required for training the question decomposer and generator in RoHT. Because the KoPL program of a complex question follows context free grammar, every atomic question will correspond to a specific span of the program. Therefore we first split the KoPL program into subprograms according to the grammar, then use each sub-program to generate the atomic question by applying BART model fintuned with the (KoPL, question) pairs from the original dataset. For the answers for each atomic question, we execute the corresponding sub-programs on the KB to get corresponding answers. Using these constructed atomic representations, we train two BART-base models as the question decomposer and generator, respectively. For the scheduler, we directly use the semantic parser trained by (Cao et al., 2022a) on KQAPro, and set the precision threshold γ to be 0.7. We train a RoBERTa-large as the evidence selector via weak supervised method: for each question in the training set and constructed atomic representations, we first use BM25 to recall 10 related paragraphs from wikipedia, then take the paragraphs that contain the answer as positive samples and take other recalled paragraphs as negative samples. For the text executor, we also train a BART-large reading comprehension model on these positive samples. Musique Since Musique provides golden atomic representation for every complex question in the training set, we directly use them to train BARTbase models as question decomposer and generator. For the scheduler, we adapt semantic parser trained by (Cao et al., 2022a) on Wikidata. The KB precision threshold γ is set to be 0.4, which is determined by the top-10 types of questions with the highest precision. We train the RoBERTa selector model on complex and atomic questions in the training set together, taking annotated evidence paragraphs as positive samples and distractor paragraphs as negative samples. For the text executor, we pre-train a Longformer-large (Beltagy et al., 2020) reading comprehension model on SQUAD (Rajpurkar et al., 2016), then finetune it on complex questions and atomic questions of Musique. | Model | Overall | Multihop | Qualifier | Comparison | Logical | Count | Verify | Zero-shot | |--------------------------|-----------|------------|-------------|--------------|-----------|---------|----------|-------------| | 50% KB KVMemNN | 17.72 | 17.63 | 18.53 | 1.39 | 15.48 | 28.38 | 59.30 | 0.06 | | RGCN | 34.77 | 33.71 | 28.44 | 31.46 | 35.39 | 39.76 | 64.27 | 0.06 | | BART KoPL | 38.04 | 33.10 | 29.40 | 51.81 | 29.92 | 33.69 | 60.12 | 29.03 | | RoHTKB | 38.94 | 34.16 | 31.54 | 50.91 | 31.61 | 33.69 | 60.4 | 30.52 | | 50%KB + Text TransferNet | 16.80 | 15.94 | 17.93 | 45.35 | 14.84 | 10.47 | 0.00 | 8.43 | | RoHTmix | 46.45 | 41.76 | 41.73 | 52.21 | 41.95 | 31.26 | 65.45 | 38.76 | ## 5.3 Baselines we compare RoHT with several representative methods for complex QA, including memory-based methods, graph-based methods, and XQA methods. KVMemNN (Miller et al., 2016) stores encoded knowledge in key-value memory and iteratively reads the memory to update the query vector to conduct multi-hop reasoning. RGCN (Schlichtkrull et al., 2018) is a variant of graph convolutional network and utilizes the graph structure of KB to tackle complex questions. BART KoPL (Cao et al., 2022a) is a BART-based semantic parser which can convert complex question into KoPL program. It achieves over 90% accuracy on KQA Pro on the complete KB. SA (Trivedi et al., 2022) is a two-stage model that first uses a RoBERTa-large selector to rank and select the K most relevant paragraphs with the question and then uses a Longformer-large answerer to predict answer based on selected paragraphs. EX(SA) (Trivedi et al., 2022) is the state-of-the-art model on Musique. It first explicitly decomposes the complex question into atomic representation and then calling SA model repeatedly to answer each atomic question in order. TransferNet (Shi et al., 2021) iteratively transfer entity scores via activated path on the relation graph that consists of both text-form relations and KB-form relations. It is existing state-of-the-art model that utilizes both KBs and text as knowledge soruces, and nearly solves MetaQA. We reimplement it on both KQA Pro and Musique, and the details are shown in Appendix D. RoHT: RoHTKB, RoHTtext and RoHTmix denote the RoHT models that only use KB, only use text and use both KB and text, respectively. ## 5.4 Main Results 5.4.1 Results on KQA Pro The experimental results for KQA Pro are shown in Table 1. When using only the incomplete KB, RoHTKB model respectively improves EM by 21.22, 4.17 and 0.90 compared to KVMemNN, RGCN and BART KoPL, showing the benefit of integrating the answers of sub-questions of different levels. After adding Wikipedia as supplementary text corpus, RoHTmix yields substantial improvement compared with RoHTKB (7.51 on EM), demonstrating the effectiveness of utilizing knowledge from KB and text together. RoHTmix also outperforms TransferNet, which is end-to-endly trained with a mixed relation graph, by a large margin (29.65 on EM). This is because unlike graphbased methods, RoHT explicitly shows the compositional structure of a complex question in natural language form via HQDT generation, and thus can retrieve answers from the KB and text with more advanced and flexible sub-modules (e.g., semantic parser and reading comprehension model). Moreover, our designed atomic operations in the HQDT also enable RoHT to solve a wide variety of complex questions: we can see that RoHTmix achieves the best results on 6 types of questions among 7 types, showing comprehensive reasoning capacity. 5.4.2 Results on Musique Table 2 presents the results on the dev set of Musique dataset. As expected, our RoHT models show significant improvement over all the baselines. With only given paragraphs, RoHTtext improves EM/F1 by 13.8/14.3 and 11.6/11.9 compared with SA and EX(SA), respectively; With both text and KB, the performance of RoHTmix is also remarkably better than TransferNet (62.3 v.s. 10.9 on F1). Comparing RoHTtext and RoHTmix, Model EM F1 Text SA 39.3 47.3 EX(SA) 41.5 49.7 RoHTtext 53.1 61.6 Text+KB TransferNet 8.6 10.9 RoHTmix **54.4 62.3** we can also see some benefits of supplementing the text information with KB information, though the improvement is smaller than supplementing the KB with text on KQA Pro because KBs have lower coverage than text and the semantic parser is not specially finetuned for questions of Musique. We submit the predictions of RoHTmix on the test set and achieve 63.6 F1 score, which significantly outperforms the best public result 52.3. | Model | KQA Pro | Musique | |---------------|-----------|-----------| | RoHTmix | 46.5 | 54.4 | | w/o scheduler | 40.7 | 47.0 | | RoATmix | 32.3 | 47.6 | ## 5.5 Further Analysis 5.5.1 Effect Of Scheduler To show the effect of the scheduler module, we remove it from the RoHTmix model, i.e, default that the KB and recalled/given text paragraphs are suitable for all questions in the HQDT, and evaluate the performance again on the dev set of KQA Pro and Musique. The results are shown in Table 3. We can see that after discarding the scheduler, the EM performance on KQA Pro and Musique drops by 5.8 and 7.4, respectively. Therefore, it is important to use the scheduler to select suitable knowledge sources for each question. ## 5.5.2 Effect Of Hierarchical Decomposition Many existing methods generate non-hierarchical decomposition of complex questions, similar to the atomic representation, to assist reasoning (Min et al., 2019; Wolfson et al., 2020; Deng et al., 2022). To demonstrate the superiority of hierarchical decomposition, we compare our RoHTmix model with ![7_image_0.png](7_image_0.png) RoATmix model, which uses the same scheduler, executors, and aggregator as RoHTmix, but solves the complex question by directly answering the atomic questions in its atomic representation in order. As shown in Table 3, RoHTmix outperforms RoATmix by a large margin on both KQA Pro and Musique. This is because the hierarchical structure of HQDT enables RoHT model to fuse the knowledge from KBs and text at different question levels, and to discard wrong answers via comparing the problisitic scores of answers. To further understand the reason, we show a case from Musique in Figure 3. We can see that both RoHTmix and RoATmix fail to answer the question "*Where did (Titian) die?*" (q 4in the left, a 02 in the right). However, RoHTmix directly extracts the correct answer of q 1from text and finally gets the correct answer of q 0 with the highest score, while RoHTmix fails to solve a 03 because it must rely on the wrong answer from a 02 . ## 6 Conclusion In this paper, we propose RoHT, an understandingreasoning XQA framework that uses both a KB and a text corpus to derive answers of complex questions. RoHT first builds the HQDT for a complex question to understand its hierarchical compositional structure, then conducts recursive probabilistic reasoning over the HQDT to solve the question, integrating answers from the KB, text, and sub-questions. Experiments show that RoHT significantly outperforms previous methods. We also demonstrate the superiority of HQDT compared with non-hierarchical decomposition. ## 7 Limitation Currently, RoHT framework is restricted to incorporating KBs and text. However, since RoHT retrieves answers from each knowledge source in a separate way, it could in principle utilize knowledge from more heterogeneous sources such as tables, and we will study this in future work. In addition, a device with large storage space and memory is needed for the storage and usage of Wikipeida and Wikidata. ## 8 Ethics Statement The data used in this paper are drawn from publicly published datasets, encyclopedias and knowledge bases. Most of them do not involve sensitive data. ## 9 Acknowledgement This work is supported by grants from the Institute for Guo Qiang, Tsinghua University (2019GQB0003) and Cloud BU, Huawei Technologies. ## References Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. *CoRR*, abs/2004.05150. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In *Proceedings of the 2013* Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1533–1544. ACL. Shulin Cao, Jiaxin Shi, Liangming Pan, Lunyiu Nie, Yutong Xiang, Lei Hou, Juanzi Li, Bin He, and Hanwang Zhang. 2022a. KQA pro: A dataset with explicit compositional programs for complex question answering over knowledge base. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 6101– 6119. Association for Computational Linguistics. Shulin Cao, Jiaxin Shi, Zijun Yao, Xin Lv, Jifan Yu, Lei Hou, Juanzi Li, Zhiyuan Liu, and Jinghui Xiao. 2022b. Program transfer for answering complex questions over knowledge bases. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8128–8140. Association for Computational Linguistics. Rajarshi Das, Manzil Zaheer, Siva Reddy, and Andrew McCallum. 2017. Question answering on knowledge bases and text using universal schema and memory networks. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics,* ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 2: Short Papers, pages 358–365. Association for Computational Linguistics. Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, and Patricia Riddle. 2022. Interpretable amrbased question decomposition for multi-hop question answering. pages 4093–4099. Dheeru Dua, Shivanshu Gupta, Sameer Singh, and Matt Gardner. 2022. Successive prompting for decomposing complex questions. *CoRR*, abs/2212.04092. David A. Ferrucci. 2012. Introduction to "this is watson". *IBM J. Res. Dev.*, 56(3):1. Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spacy: Industrialstrength natural language processing in python. Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2022. Decomposed prompting: A modular approach for solving complex tasks. *CoRR*, abs/2210.02406. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, and Christian Bizer. 2015. Dbpedia - A large-scale, multilingual knowledge base extracted from wikipedia. *Semantic Web*, 6(2):167–195. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. pages 7871–7880. Chen Liang, Jonathan Berant, Quoc V. Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 23–33. Association for Computational Linguistics. Percy Liang. 2013. Lambda dependency-based compositional semantics. *CoRR*, abs/1309.4408. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, KaiWei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. volume abs/2209.09513. Alexander H. Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016*, pages 1400–1409. The Association for Computational Linguistics. Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2019. Multi-hop reading comprehension through question decomposition and rescoring. pages 6097–6109. Robert Neches, William R. Swartout, and Johanna D. Moore. 1985. Explainable (and maintainable) expert systems. In *Proceedings of the 9th International* Joint Conference on Artificial Intelligence. Los Angeles, CA, USA, August 1985, pages 382–389. Morgan Kaufmann. Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Sejr Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2022. Unik-qa: Unified representations of structured and unstructured knowledge for opendomain question answering. In Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 1535–1546. Association for Computational Linguistics. Ethan Perez, Patrick S. H. Lewis, Wen-tau Yih, Kyunghyun Cho, and Douwe Kiela. 2020. Unsupervised question decomposition for question answering. pages 8864–8880. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. pages 2383–2392. Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. *Found. Trends Inf. Retr.*, 3(4):333–389. Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In *The Semantic Web - 15th* International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings, volume 10843 of *Lecture Notes in Computer Science*, pages 593–607. Springer. Hendrik Schuff, Heike Adel, and Ngoc Thang Vu. 2020. F1 is not enough! models and evaluation towards user-centered explainable question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7076– 7095. Association for Computational Linguistics. Jiaxin Shi, Shulin Cao, Lei Hou, Juanzi Li, and Hanwang Zhang. 2021. Transfernet: An effective and transparent framework for multi-hop question answering over relation graph. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 4149–4158. Association for Computational Linguistics. Haitian Sun, Tania Bedrax-Weiss, and William W. Cohen. 2019. Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2380–2390. Association for Computational Linguistics. Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William W. Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. pages 4231–4242. Yawei Sun, Lingling Zhang, Gong Cheng, and Yuzhong Qu. 2020. SPARQA: skeleton-based semantic parsing for complex questions over knowledge bases. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8952–8959. AAAI Press. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Musique: Multihop questions via single-hop question composition. Trans. Assoc. Comput. Linguistics, 10:539–554. Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. *Commun.* ACM, 57(10):78–85. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *CoRR*, abs/2201.11903. Tomer Wolfson, Mor Geva, Ankit Gupta, Yoav Goldberg, Matt Gardner, Daniel Deutch, and Jonathan Berant. 2020. Break it down: A question understanding benchmark. *Trans. Assoc. Comput. Linguistics*, 8:183–198. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016. Hybrid question answering over knowledge base and free text. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 2397– 2407. ACL. Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J. Smola, and Le Song. 2018. Variational reasoning for question answering with knowledge graph. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 6069–6076. AAAI Press. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed H. Chi. 2022. Least-to-most prompting enables complex reasoning in large language models. *CoRR*, abs/2205.10625. ## A Atomic Operations We design 6 atomic operations: Verify, SelectBetween, SelectAmong, Count, Intersection, *Union*, to support various reasoning capacity. We show their input, output, and examples in Table 4. ## B Get Atomic Representation From Leaf Nodes Algorithm 1 describes that how to get the atomic representation of a question q i ∈ T from the leaf nodes in the sub-tree rooted by q i. Algorithm 1 Get Atomic Representation from Leaf ![11_image_0.png](11_image_0.png) Nodes Input: An HQDT T and a index i. Output: q i*.atoms* 1: **function** DFS(j, atoms, ids, n) ![11_image_1.png](11_image_1.png) 2: if q jis a leaf question **then** 3: n ← n + 1 4: *ids[j*] ← n 5: a ← q j 6: for k in **GetRefTokens**(q ![11_image_2.png](11_image_2.png) 7: if q 8: a ← **ModifyRefTokens**(*a, k, ids[k*]) 9: **else** 10: a ← **ModifyRefTokens**(*a, k, ids[ed*k]) 11: *atoms.*append(a) 12: **return** 13: for k ← stj *, . . . , ed*j do 14: Dfs(k) 15: 16: q i*.atoms* ← [] 17: ids ← empty dict 18: Dfs(i, qi*.atoms, ids,* 0) ## C Pseudocode For Building Hqdt ![11_Image_3.Png](11_Image_3.Png) Algorithm 2 shows the pseudocode for generating the HQDT of a complex question with probability. ## D Reimplementation Of Transfernet To reimplemente TransferNet, we build the mixed relation graphs that consist of both label-form relations (i.e., KB triples) and text-form relations for KQA Pro and Musique, respectively, and train the models with the open source code. We show the details of graph building as follows. KQA Pro We follow the method used by the original paper on MetaQA to build the relation graph of KQA Pro. As mentioned in Section 5.2, we use half of its KB triples as the label form. We constructe the text form by extracting sentences from Wikipedia. Following the original paper, we use exact match of surface forms for entity recognition Algorithm 2 Generation of HQDT Input: a complex question q 0, a question decomposer Mθ, a question generator Mϕ. Output: a list T representing the HQDT, where element (q i, pig*, f a*i) in T denote a sub-question q i, certainty score of q iand the father of q i, respectively. 1: **function** REARRANGEREFTOKENS(ar) 2: *atoms* ← [] 3: ids ← empty dict 4: h ← 0 5: for (*i, a*i) in ar do 6: h ← h + 1 7: *ids[i*] ← h 8: for k in **GetRefTokens**(ai) do 9: ai ← **ModifyRefTokens**(ai*, k, ids[k*]) 10: *atoms.*append(ai) 11: **return** *atoms* 12: 13: ([a 0 1*, . . . , a*0n0 ], ld) ← Mθ(q 0) 14: n ← n0 15: T ← [] 16: for i ← 1, 2*, . . . , n*0 do 17: (q i, pig) ← (a 0 i, ld) 18: ari ← [(*i, a*0 i )] 19: if a 0 i contains referring tokens **then** 20: r1*, . . . , r*h ← **GetRefTokens**(a 0 i ) 21: n ← n + 1 22: arn ← [] 23: for j ← r1, . . . , rh, i do 24: if f aj has been identified **then** 25: q i ← **ModifyRefTokens**(q i*, j, f a*j ) 26: j ← f aj 27: f aj ← n 28: T.append((q j, pjg*, f a*j )) 29: arn.extend(arj) 30: q n*.atoms* ← **RearrangeRefTokens**(arn) 31: L n = **Serialize**(q n*.atoms*) 32: (q n, ln g ) ← Mϕ(L n) 33: p n g ← ld · l n g 34: T.append((q 0, 1, 0)) ▷ directly use q 0as root 35: T ← **ReIndexByBFS**(T) 36: **return** T and linking. For every entity in the KB, we recall all the paragraphs in Wikipedia titled by it, then take the entity as subject and other relevant entities appeared in these paragraphs as objects. The sentences that contain the objects are selected as the relation texts. The recall of answer is 51%, i.e, for 51% questions, there exist a complete path from the topic entity to the answer in the relation graph, and this is a upper bound for the performance of TransferNet. Musique For each question in Musique, we utilize the 20 given paragraphs to build individual relation graph. Specifically, we first identify entities mentioned in these paragraphs via Spacy (Honnibal et al., 2020) and exact match of surface forms with Wikidata entities. Then we take the co-occuring | Operation | Argument | Input → Output | Example | |--------------------------------------------------------------------|-----------------------|-----------------------------------|---------------------------------------| | Verify | Value, > / < / = / != | (Value) → (Bool) | [Verify] [2005] [<] #3 (1998) → (yes) | | SelectBetween | greater / smaller | (Value, Ent) (Value, Ent) → (ENT) | [SelectBetween] [smaller] #3 #4 | | (6670 km, Nile River) (6440 km, Amazon River) → (Amazon River) | | | | | SelectAmong | largest / smallest | [(Value, Ent)] → (Ent) | [SelectAmong] [largest] #1 | | [(8848m, Everest) (8611m, K2) (8516m, Makalu)] → (Everest) | | | | | Count | / | [(Ent)] → (Value) | [Count] #2 | | [(Bronny James) (Bryce James) (Zhuri James)] → (3) | | | | | Intersection | / | [(Ent)] [(Ent)] → [(Ent)] | [Intersection] #1 #2 | | [(apple) (orange) (peach)] [(orange)] → [(orange)] | | | | | Union | / | [(Ent)] [(Ent)] → [(Ent)] | [Union] #1 #2 | | [(apple) (orange)] [(orange) (peach)] → [(apple) (orange) (peach)] | | | | sentences of two entities as the text-form, and take the triples in Wikidata whose subject or object is one of these entities as the label-form. The recall of answer is 72%. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✓ A2. Did you discuss any potential risks of your work? Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We follow the same license B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The data used in this work are drawn from publicly published encyclopedias, knowledge bases and datasets. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5 ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
huang-etal-2023-faking
Faking Fake News for Real Fake News Detection: Propaganda-Loaded Training Data Generation
https://aclanthology.org/2023.acl-long.815
Despite recent advances in detecting fake news generated by neural models, their results are not readily applicable to effective detection of human-written disinformation. What limits the successful transfer between them is the sizable gap between machine-generated fake news and human-authored ones, including the notable differences in terms of style and underlying intent. With this in mind, we propose a novel framework for generating training examples that are informed by the known styles and strategies of human-authored propaganda. Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles, while also incorporating propaganda techniques, such as appeal to authority and loaded language. In particular, we create a new training dataset, PropaNews, with 2,256 examples, which we release for future use. Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62{--}7.69{\%} F1 score on two public datasets.
# Faking Fake News For Real Fake News Detection: Propaganda-Loaded Training Data Generation Kung-Hsiang Huang♠ **Kathleen McKeown**♣ Preslav Nakov♢ Yejin Choi♡♦ **Heng Ji**♠ ♠ UIUC ♣Columbia University ♡ University of Washington ♢ MBZUAI ♦AI2 {khhuang3, hengji}@illinois.edu kathy@columbia.edu preslav.nakov@mbzuai.ac.ae yejinc@allenai.org ## Abstract Despite recent advances in detecting fake news generated by neural models, their results are not readily applicable to effective detection of human-written disinformation. What limits the successful transfer between them is the sizable gap between machine-generated fake news and human-authored ones, including the notable differences in terms of style and underlying intent. With this in mind, we propose a novel framework for generating training examples that are informed by the known styles and strategies of human-authored propaganda. Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles, while also incorporating propaganda techniques, such as *appeal to authority* and *loaded* language. In particular, we create a new training dataset, PROPANEWS, with 2,256 examples, which we release for future use. Our experimental results show that fake news detectors trained on PROPANEWS are better at detecting human-written disinformation by 3.62– 7.69% F1 score on two public datasets.1 ## 1 Introduction The dissemination of false information online can cause chaos, hatred, and trust issues, and can eventually hinder the development of society as a whole (Dewatana and Adillah, 2021; Wasserman and Madrid-Morales, 2019). In particular, humanwritten disinformation2is often used to manipulate certain populations and reportedly already had a catastrophic impact on multiple events, such as Brexit (Bastos and Mercea, 2019), the COVID-19 pandemic (van Der Linden et al., 2020), and the 2022 Russian assault on Ukraine. 1The code and data are released on GitHub: https:// github.com/khuangaf/FakingFakeNews 2There are many types and definitions of *fake news*, but here we focus on text-only *disinformation*. Yet, we will also use the less accurate term *fake news* as it is more common. Hence, there is an urgent need for a defense mechanism against human-written disinformation.3 For this, we need a substantial amount of training data to build detectors. A naïve solution is to collect human-written news articles that contain inaccurate information by crawling untrustworthy news media. However, news articles published by suspicious sources do not necessarily contain false information, which means that annotators are required to fact-check every claim in each untrustworthy article. Moreover, articles containing false claims are often removed shortly after posting. While some work collected human-written fake news from factchecking websites (Shu et al., 2018; Nguyen et al., 2020), the size of these datasets is limited. The curation process of these websites also requires a lot of manual efforts. Hence, such a solution is neither scalable nor reliable. Thus, an alternative direction complementing the existing efforts would be to generate training data automatically in a way that avoids these issues. Our goal is to enhance disinformation detection by generating training examples that are better informed by the known styles and strategies of human-authored disinformation. We started by collecting human-written articles from untrustworthy sites4, and we analyzed around 40 of them that spread false claims. Throughout our analysis, we found two characteristics of this human-written disinformation. First, about 33% of the articles used propaganda techniques to convince the audience that the fake information was actually authentic, and these techniques often involve the use of emotion-triggering language or logical fallacies (Da San Martino et al., 2019) to increase the impact on the reader. Statistics about the propaganda techniques are given in Appendix A. AJDABIYAH , Libya | Thu Apr 7 , 2011 6:34 pm EDT AJDABIYAH , Libya -LRB- Reuters -RRB- - Rebels fighting to overthrow Muammar Gaddafi said five of their fighters were killed ... "In rebel-held eastern Libya, wounded rebels being brought to a hospital Ajdabiyah said their trucks and tanks were hit on Thursday by a NATO air strike outside Brega. NATO said it was investigating an attack by its aircraft on a tank column in the area along the Mediterranean coast on Thursday , saying the situation was " unclear and fluid . " Rebels said at least five of their fighters were killed when NATO planes mistakenly bombed a rebel tank column near the contested port. "A number of vehicles were hit by a NATO strike ", officers from UN concluded. The fighting for Brega , the only active front , has dragged on for a week ... Table 1: An example of our generated fake news. Given an authentic news article, our approach first identifies a salient sentence, which it then replaces with a plausible but disinformative sentence that is coherent to the context. Finally, it generates a propaganda sentence to make the article resemble human-written fake news. Second, more than 55% of the articles that we analyzed contained inaccurate information mixed with correct information: in fact, all claims, except for one or two, in these disinformation articles were factual, which makes the few false claims in these articles even more believable. Prior work has made significant progress in generating fake news using large pre-trained sequenceto-sequence (seq2seq) models (Zellers et al., 2019; Fung et al., 2021; Shu et al., 2021). However, the articles generated by these approaches contain an overwhelmingly large proportion of false information and do not explicitly use propaganda. To address these issues, here we propose a novel generation method. Given an authentic news article, we replace a salient sentence with a plausible but fake piece of information using a seq2seq model. As the generated texts can often be entailed by the original contexts, we incorporate a self-critical sequence training objective (Rennie et al., 2017) that incorporates a natural language inference (NLI) model into the loss function. Additionally, we use the NLI model to filter out generated sentences that can be inferred from the replaced ones. Then, we add propaganda techniques to mimic how humans craft disinformation. In particular, we automate two commonly used propaganda techniques, *appeal to authority* and *loaded* language, (Da San Martino et al., 2019) to add propaganda into the faked sentences. Subsequently, we use the silver-standard training data generated from these two steps to train a detector. An example is shown in Table 1. We further recruited crowdsourcing workers to validate that some generated texts were indeed fake, so that we could construct a gold-standard training dataset. Comparing our method to state-of-the-art fake news generation approaches, the evaluation results on two human-written fake news datasets show that detectors are substantially better at spotting human-written disinformation when trained on our generated fake news dataset. Our ablation studies confirm the effectiveness of incorporating propaganda into the generated articles for producing better training data. Our contributions can be summarized as follows: - We propose an effective method to automatically generate more realistic disinformation compared to previous work. - We develop the first automatic methods to generate specific propaganda techniques such that the generated articles are closer to disinformation written by humans. - We demonstrate that detectors trained on our generated data, compared to generated articles using other methods, are better at detecting human-written disinformation. - We release PROPANEWS, a dataset for disinformation detection containing 2.2K articles generated by our approach and validated by humans. ## 2 Training Data Generation Our process of generating training data for propaganda-loaded disinformation consists of two main steps: disinformation generation (§2.1) and propaganda generation (§2.2). Below, we describe each of these steps in detail. ## 2.1 Disinformation Generation Our disinformation generation approach aims at two sub-goals: (i) replacing a salient sentence in the given article with a sequence of generated coherent texts that looks plausible, and (ii) ensuring that the generated information cannot be entailed by the original masked-out sentence; otherwise, the generated texts will not be disinformative. To achieve the first sub-goal, we first identify salient sentences using extractive summarization, and we then perform mask-infilling with BART (Lewis et al., 2020). We achieve the second sub-goal using self-critical sequence training (Rennie et al., 2017) with an NLI component, which we use as a reward function for generation. ![2_image_0.png](2_image_0.png) Salient Sentence Identification A salient sentence is critical for the overall semantics of the article. When it is manipulated or replaced, the complex events described in the article may be drastically changed. Yet, there is no salient sentence identification dataset publicly available. Motivated by the fact that sentences included in an extractive summary are often of higher importance, we take the scores computed by an extractive summarization model (Liu and Lapata, 2019), which predicts how likely each sentence is to belong to the summary, to estimate its saliency. We found that this yields reasonably good sentence saliency estimation. For each news outlet, we replaced one sentence that had the highest extractive summarization score with our generated disinformation. Mask Infilling with BART To perform infilling, we took an approach similar to that of Donahue et al. (2020), but we used BART (Lewis et al., 2020). At training time, we randomly masked out a sentence y ∗from a given article x. The bidirectional encoder first produces contextualized representations he = Encoder(x˜) given the article with a masked-out sentence x˜ = x − y ∗. Then, the autoregressive decoder learns a maximum likelihood estimation that aims to maximize the probability of generating the next token y ∗ t at time step t given all tokens in previous time steps {y ∗ 0*, ..., y* ∗ t−1} and the encoder hidden states he by minimizing the negative log probability of generating y ∗ t as follows: $${\mathcal{L}}_{m}=-\sum_{t=1}^{T}\log P(y_{t}^{*}|y_{0}^{*},...,y_{t-1}^{*},h_{e}).\quad\quad(1)$$ During inference time, rather than using random masking, x˜ is formed by masking out the sentence with the highest score computed by the extractive summarization model given the original document x, as discussed in the previous paragraph. ![2_image_1.png](2_image_1.png) Self-critical Sequence Training BART optimized via maximum likelihood estimation alone is capable of generating coherent texts. However, although the generated texts y ′ may be very different from the originally masked out sentence y ∗, there is no guarantee that y ′contains incorrect information. If the generated texts y ′can be entailed by the masked out sentence y ∗, then y ′is actually not disinformative. An example is shown in Figure 2. Here, except for the lack of details, the generated sentence y ′delivers the same message as the masked out sentence y ∗. To reduce the probability that y ′can be entailed by y ∗, we leverage self-critical sequence training (Rennie et al., 2017; Bosselut et al., 2018) that rewards the model for generating sequences that cannot be entailed by the masked-out sentences. Self-critical sequence training (SCST) is a form of the REINFORCE algorithm (Williams, 1992) that allows direct optimization on non-differentiable functions. Using a baseline output y ′′ of the model to normalize the rewards, SCST avoids the challenge of directly estimating the reward signal or estimating normalization (Rennie et al., 2017). Since our goal is to avoid entailment from y ∗to y ′, we define the reward as the negative entailment probability computed by a ROBERTA-based (Liu et al., 2019) NLI model fine-tuned on Multi-NLI (Williams et al., 2018) 5, $$r(y^{'})=-P_{n l i}(y^{*},y^{'}),$$ ), (2) where r(y ′ ) is the reward of the sequence sampled from the current policy y ′, and Pnli(y ∗, y ′ ) is the probability that y ∗entails y ′. To generate y ′, we use Nucleus Sampling (Holtzman et al., 2020) with p = 0.96, as this sampling method has shown advantages in *open-ended* generation (Holtzman et al., 2020; Zellers et al., 2019). We generate the baseline output y ′′ using greedy decoding, then obtain the entailment probabilities between y ′and y ′′ from the NLI model. We then compute the self-critical sequence training loss: $$\mathcal{L}_{s}=-(r(y^{\prime})-r(y^{n}))\sum_{t=1}^{T}\log P(y^{\prime}_{t}|y^{\prime}_{0},..,y^{\prime}_{t-1},h_{e}).\tag{3}$$ Here r(y ′′) is a baseline reward, and r(y ′ )−r(y ′′) is a normalized reward. This loss function encourages BART to generate y ′ when r(y ′ ) > r(y ′′), whereas it suppresses the probability of decoding y ′ when r(y ′ ) < r(y ′′). An overview of SCST is shown in Figure 1. The final objective function to minimize is a weighted sum of Equation (1) and Equation (3), $${\mathcal{L}}_{f i n a l}=\alpha{\mathcal{L}}_{m}+\beta{\mathcal{L}}_{s},$$ L*f inal* = αLm + βLs, (4) where $\alpha$ and $\beta$ are the weights for each loss. Post-processing To further ensure the quality of the disinformation generated, we reuse the NLI model discussed in the previous paragraph to filter out invalid outputs y ′that can be entailed from the masked-out sentence y ∗, as demonstrated in Figure 2. We found that incorporating the SCST loss (Equation (3)) into the training objective successfully reduces the invalid rate from 7.8% to 3.2%. ## 2.2 Propaganda Generation After generating inaccurate information, we incorporate propaganda into each generated article. We chose two representative propaganda techniques of each type: emotional versus non-emotional. Loaded language is an emotional technique and it is also by far the most frequent propaganda technique as shown in Table 5 of (Da San Martino et al., 2019) and Table 2 of (Dimitrov et al., 2021). 5We use the fine-tuned NLI model from https:// huggingface.co/roberta-large-mnli. Its accuracy is 90.2% on the dev set of MNLI, which is on par with state-of-the-art methods. 6Empirically, we set α = 1 and β = 0.01. Based on these two tables, we also see that *appeal to authority* is among the most frequent nonemotional techniques. Appeal to Authority *Appeal to authority* is a propaganda technique that aims to strengthen or to invalidate an argument by referring to a statement made by authorities or experts (Da San Martino et al., 2019). We first collect experts from various domains, such as economics and immunology, from Wikidata.7In particular, we specify the *occupation* (P108) of each expert and we filter out entities that were born before 1940 to ensure recency. To consider only impactful entities, we rank all candidates based on the number of corresponding outcoming *statements* (i.e., connected concepts in Wikidata), inspired by PageRank (Page et al., 1999), and we add the top 100 entities for each occupation into the candidate list Z. Then, we include the person named entities extracted by a name tagger,8 which are more relevant to the local context. This makes sense as we found that more than 73% of the news articles contain authorities. More details on how authority candidates Z are collected can be found in Appendix E. Once we collect a candidate list Z, we then generate fake arguments made by each zi ∈ Z with the BART model that has already been fine-tuned in §2.1. In particular, a <mask> token is inserted right after the filled-in sentence y ′in the input article to BART so that it knows where to perform infilling. To inform BART that it should generate a statement made by an authority, we prefix the decoder with the template [zi *confirmed that "]*, where zi ∈ Z is the name of the authority. The prefix ends with an opening quotation mark to indicate that it should be followed by a statement by authority zi. To increase the diversity of the generated statements, we devise a variety of templates, as detailed in Appendix E. Finally, the best sequence s ∗is selected with the lowest perplexity s ∗= argminsi Perplexity(si), where si denotes the generated sequence using zi as the authority. Loaded Language *Loaded language* is another propaganda technique that uses emotion-triggering terms or phrases to influence the opinions of the audience (Da San Martino et al., 2019; Dimitrov et al., 2021). Often, *loaded language* uses sensational adverbs or adjectives to exaggerate a statement. | Technique | Generated Disinformation and Propaganda | |---------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Appeal to Authority | Cairo's Tahrir Square was the scene of clashes between protesters and police on Wednesday. " At least three people were killed and more than 600 were injured in the clashes," said Egypt's President. | | Loaded Language | Cairo's Tahrir Square was the scene of deadly clashes between protesters and police on Wednesday. | Table 2: Examples of the two generated propaganda techniques, as shown by texts in blue. The first row shows how the argument is strengthened by appealing to an authority's statement, while the second row demonstrates how loaded language is introduced with an emotion-triggering term. Based on this observation, we utilize the propaganda dataset released by Da San Martino et al. (2019) where propaganda techniques are annotated at the fragment level (i.e. span level). The dataset contains 2,547 *loaded language* instances. Yet, not every instance contains adjectives or adverbs that are emotion-triggering. To create valid training data for *loaded language* generation, we first use SpaCy to perform part-of-speech tagging and dependency parsing, and then keep the examples where there exists an adverb pointing to a verb or an adjective pointing to a noun through dependency parsing edges. This results in 1,017 samples of valid *loaded language* instances. Examples of the generated *appeal to authority* and *loaded language* are shown in Table 2. Upon collecting the training data to generate loaded language, we fine-tune another BART on this dataset. Naïvely, we can take the articles with emotion-triggering adverbs or adjectives removed as input to BART and using the original article as the decoding target. However, we found that around 25% of the time BART does not exactly reproduce the unmasked texts due to hallucination. This observation is consistent with Donahue et al. (2020)'s findings. To this end, we propose a twostep generation approach. First, we train BART to insert a <mask> token into the target sentence in the input document marked with special tokens. Then, BART learns to infill the <mask> with an approach similar to what is discussed in §2.1 but without the SCST objective. Empirically, we found that this approach successfully reduces the chance of failure in generating the exact unmasked contexts to around 2%. ## 2.3 Intermediate Pre-Training As the size of TIMELINE17 (Tran et al., 2013) and the propaganda dataset (Da San Martino et al., 2019) are relatively small, we perform intermediate pre-training (IPT) on the news articles from CNN/DM, a large news summarization dataset (Hermann et al., 2015), for domain adaptation. Details of IPT can be found in Appendix F. ## 3 Our Propanews **Dataset** 3.1 Data Source When selecting the source of data, we considered two criteria. First, the news articles must have high trustworthiness. This ensures that, except for our manipulated sentences, the rest is genuine. Second, the news events described in the articles must be important. Motivated by these two criteria, we repurposed the TIMELINE17 dataset (Tran et al., 2013) as our source of data. It contains 17 timelines, each of which corresponds to a news event. Each timeline is associated with a series of news articles that span across a wide time span, implying the high importance and impact of these events. Moreover, the articles come from trustworthy media. In total, there are 4,535 news articles in TIMELINE17. ## 3.2 Crowdsourcing For Data Curation We use Amazon's Mechanical Turk (AMT) to verify the quality and the correctness of the generated disinformation. In total, there are around 400 unique crowdsourcing workers contributing to approximately 2,000 Human Intelligence Tasks (HITs). For each HIT, the annotators were tasked to look for supporting evidence from trustworthy news media to determine whether the sentences generated are indeed *inaccurate*. Only those labeled as *inaccurate* were included in PROPANEWS, while the *accurate* counterparts were discarded. Appendix H gives additional details. To measure the inter-annotator agreement (IAA), we use the Worker Agreement With Aggregate (WAWA) score (Ning et al., 2020; Sheng et al., 2021), which compares each annotator's answer to the aggregated answer obtained via majority votes and micro-averages the results across all samples.9 The resulting WAWA precision, recall, and F1 are 80.01%, 78.94%, and 79.47%, which indicates moderate to high agreement. 9We did not use other IAA metrics, such as Cohen's Kappa (Cohen, 1960), as we expect the vast majority of our generated disinformation to be inaccurate. WAWA provides a better approximation for inter-annotator agreement in our scenario. ## 4 Disinformation Detection The disinformation detection task challenges detectors to determine whether a given input article contains inaccurate information or not. We experiment on four detectors, including HDSF (Karimi and Tang, 2019), GROVER (Zellers et al., 2019), BERT (Devlin et al., 2019), and ROBERTA (Liu et al., 2019). HDSF leverages the hierarchical structures of discourse-level features, such as dependency trees, to predict the veracity of a news article. GROVER is an unidirectional seq2seq model pre-trained on news documents. We use the discriminative version for detection which is adapted from its generative version by feeding the [CLS] token representations to a multi-layer perceptron. Similarly, BERT and ROBERTA take in the entire article as input and feed the representations of the first token to a classification head to determine the veracity of each article. In addition, all models are optimized using cross entropy. For fair comparison, we set the maximum sequence length to 512 and we use the LARGE variants for all models. More details can be found in Appendix J. ## 5 Experiments In our experiments, we aim (1) to analyze the performance of different models on our new PROPANEWS dataset, (2) to examine the effect of various training data sets, and (3) to investigate how much silver-standard data is equivalent to goldstandard data. ## 5.1 Data PROPANEWS The PROPANEWS dataset consists of 2,256 distinct articles, with a balanced number of fake and real documents. Within the fake articles, 30% use *appeal to authority*, another 30% include *loaded language*, and the remaining 40% simply contains inaccurate information. We split the data into 1,256:500:500 for training, validation, and testing. Evaluation Data We use two sets of humanwritten articles released by Nguyen et al. (2020) and Shu et al. (2018) to evaluate the effectiveness of our approach. The articles in each dataset are collected from two fact-checking websites, SNOPES and POLITIFACT, respectively. Articles no longer accessible via the given URL are removed. Statistics about both datasets are shown in Appendix I. Other generated training data We compare PROPANEWS to the following approaches. GROVER-GEN (Zellers et al., 2019) generates headlines which condition on the original body texts, followed by body text generation conditioning on the generated headlines. FACTGEN (Shu et al., 2021) enhances the factual consistency of the generated article with a fact retriever that fetches supporting information from external corpora. FA-KEE**VENT** (Wu et al., 2022) generates sentences sequentially with condition on the manipulated knowledge elements of each sentence. Also, we form the PN-**SILVER** dataset by resampling our generated data but disregarding the annotator validation. Furthermore, we construct additional training sets by replacing the salient sentence in each article with one sentence generated by each baseline method, as indicated by -1SENT. To ensure fair comparisons, all generators take in the same set of authentic articles as inputs. ## 5.2 Results And Discussion Human-written disinformation detection To study the effectiveness of human-written disinformation detection, we train GROVER-LARGE and ROBERTA-LARGE on different training datasets and evaluate them on the SNOPES and POLITIFACT datasets, as shown in Table 3. Both models perform best when trained on PROPANEWS, compared to training on other datasets. Consider ablating human validation, detectors trained on PN-SILVER still outperform their counterparts trained on other datasets. This shows that our generative method produces articles that are more similar to humanwritten disinformation. To further verify this finding, we measure the similarity between articles generated by different approaches and disinformative articles in the POLITIFACT dataset using the MAUVE metric (Pillutla et al., 2021). MAUVE computes the similarity between two text distributions by adding the areas under a divergence curve, and has been shown to produce better approximations than other metrics such as JS divergence (Martins et al., 2020). We find that the MAUVE score with POLITIFACT for PROPANEWS and GROVER-GEN is 17.1% and 13.7%, respectively, suggesting that the generated documents in PROPANEWS are closer to human-written disinformation. These results confirm that the advantage of our generated articles in defending against human-written disinformation is resulted from the closer gap between them. | Test Data → | POLITIFACT | SNOPES | | | | | | | |------------------------------|-----------------------------------|--------------|---------------|--------------|---------|--------|---------|--------| | Detectors → | ROBERTA-LARGE | GROVER-LARGE | ROBERTA-LARGE | GROVER-LARGE | | | | | | Training Data ↓ | Without human validation (silver) | | | | | | | | | GROVER-GEN | 57.65 | (±7.6) | 52.77 | (±2.1) | 48.42 | (±2.2) | 49.53 | (±0.1) | | GROVER-GEN-1SENT | 49.65 | (±5.2) | 47.48 | (±1.8) | 44.44 | (±3.2) | 50.10 | (±2.1) | | FAKEEVENT | 46.33 | (±2.6) | 50.27 | (±5.9) | 45.36 | (±1.2) | 47.40 | (±1.3) | | FAKEEVENT-1SENT | 47.32 | (±3.2) | 50.12 | (±3.2) | 46.62 | (±2.9) | 47.29 | (±2.7) | | FACTGEN | 48.46 | (±2.2) | 51.79 | (±3.6) | 41.98 | (±5.4) | 50.47 | (±4.9) | | FACTGEN-1SENT | 41.19 | (±3.5) | 40.92 | (±4.1) | 40.01 | (±3.8) | 45.52 | (±3.7) | | PN-SILVER | 60.39∗ | (±3.9) | 55.23∗ | (±5.8) | 51.52∗∗ | (±3.4) | 52.39∗∗ | (±4.1) | | With human validation (gold) | | | | | | | | | | PROPANEWS | 65.34∗∗ | (±4.5) | 60.43∗∗ | (±6.2) | 53.03∗∗ | (±3.7) | 54.09∗∗ | (±2.8) | | w/o AA | 63.21∗∗ | (±3.2) | 58.28∗∗ | (±4.2) | 50.78∗ | (±1.8) | 53.22∗∗ | (±3.7) | | w/o LL | 64.65∗∗ | (±1.8) | 56.93∗∗ | (±5.3) | 51.92∗∗ | (±3.4) | 51.68∗ | (±1.4) | | w/o AA & LL | 61.83∗ | (±4.9) | 52.82 | (±3.3) | 52.77∗∗ | (±2.7) | 50.93 | (±2.7) | Comparing each baseline method and its counterpart that only generates one sentence to be substituted for the salient sentence (i.e., -1SENT), we found significant performance drops on GROVER-GEN and FACTGEN when only generating one sentence. This is likely caused by the incoherence between the right context and the sentence generated by these approaches due to the left-to-right fashion of text generation. While FAKEEVENT does not see the right context, it additionally conditions on knowledge elements corresponding to the sentence, which discourages it from producing topically irrelevant content and thus does not lead to huge performance drop. In Table 4, we show two examples of disinformative articles from POLITIFACT where ROBERTA is able to classify them as inaccurate when trained on PN-SILVER, but fails when trained on GROVER-GEN. Both articles contain propaganda, which is incorporated into PN-SILVER but not into GROVER-GEN. This demonstrates that detectors trained on our generated data perform better at detecting human-written disinformation that has such properties. Is propaganda generation helpful for disinformation detection? We further conduct an ablation study to analyze the contributions of each propaganda technique. As shown in the bottom of Table 3, both *appeal to authority* and *loaded language* prove beneficial in enhancing models' abilities to detect human-written disinformation. We can further see in Table 3, when comparing PROPANEWS WITHOUT AA& LL to other generation approaches, that both models trained on our generated data, even without the incorporation of propaganda techniques, still outperform their counterparts trained on other datasets. This illustrates that our generated disinformation texts are closer to news articles written by humans. How good is the generation quality? To evaluate the quality of our generation approach, we asked Amazon Mechanical Turk (AMT) workers to rate the plausibility of 100 generated articles from PROPANEWS and to determine the degree by which their answer to this question is influenced by the generated propaganda. Each article was rated by three different AMT workers. For comparison, we also asked the AMT workers to rate the plausibility of 100 generated articles from GROVER-GEN. The average plausibility scores for PROPANEWS and GROVER-GEN were 2.25 and 2.15 (out of 3), respectively. indicating that our generation approach has a slight advantage over GROVER-GEN in terms of plausibility. Moreover, among the articles in PROPANEWS that are rated highly plausible, 29.2% of the workers think that the generated propaganda highly affects their response (i.e. rated 3 out of 3) that the generated article is plausible. This demonstrates the effectiveness of our propaganda techniques in increasing the plausibility of generated articles. Survey details and score distributions are discussed in Appendix K. Article and Analysis Article: ... Statement from FDA Commissioner Scott Gottlieb, M.D., on FDA's ongoing efforts to help improve effectiveness of influenza vaccinesFor Immediate Release: ... Analysis: *Appealing to authority* is common in human-written fake news. Article: ... Regardless of how much we hate Nacy Pelosi, she represents a Congressional District that saw a million fraudulent votes from illegal immigrants... Analysis: The use of *loaded language* often indicates disinformation. Table 4: Examples from POLITIFACT where ROBERTA-LARGE successfully predicts the veracity when trained on PN-SILVER, but classifies incorrectly when trained on GROVER-GEN. ## 6 Related Work Fake News Generation and Detection There has been a focus in prior research on using neural networks to automatically generate fake news as a means of defending against the proliferation of machine-generated fake news. Zellers et al. (2019) pre-trained a generator with the GPT-2 architecture (Radford et al., 2019) on a large-scale news corpus and demonstrated that it was effective in detecting neural fake news. More recently, Fung et al. (2021) improved the controllability of the generated fake news by conditioning the generator on knowledge elements, such as entities, relations and events, extracted from the original news article. Shu et al. (2021) enhanced the factuality of the generated article by introducing a fact retriever that fetches relevant information from external corpora. Mosallanezhad et al. (2021) used adversarial reinforcement learning to generate topic-preserving articles. These studies developed methods for generating fake news that is hard to distinguish from real news to humans. Nevertheless, due to the overwhelming amount of inaccurate information introduced and the lack of propaganda techniques in the generated texts, these approaches are suboptimal for detecting human-written fake news, as shown in §5.2. In contrast, we generate fake news by incorporating propaganda techniques and preserving the majority of the correct information. Hence, our approach is more suitable for studying defense against human-written fake news. Also, since our dataset is annotated with the exact offset of the disinformative passages, it enables research on interpretable detection of fake news. Propaganda Generation and Detection There is little previous work on propaganda generation. Zellers et al. (2019) is the only relevant work, and it studied the generation of propaganda to communicate targeted disinformation. In contrast, we generate propaganda techniques to bring the generated articles closer to human-written fake news. To the best of our knowledge, we are the first to study the incorporation of specific propaganda techniques into generated articles. Prior work on propaganda detection mainly focused on documentlevel detection. Early work collected propaganda datasets using distant supervision (Rashkin et al., 2017) by assigning the same propaganda label to each news outlet under the same source based on the news-media-level label of corresponding news source listed on trustworthy sites. However, classifiers trained on such datasets may only learn to recognize the bias of each news source instead of propaganda (Martino et al., 2020). Our dataset avoids such issues by explicitly incorporating propaganda into each generated article. Furthermore, Da San Martino et al. (2019) presented a fragmentlevel propaganda detection dataset, where specific propaganda techniques were labeled onto spans of text instead of each document. Recent approaches for detecting these propaganda techniques rely on pre-trained transformers (Morishita et al., 2020; Feng et al., 2021). In contrast, we focus on detecting disinformative articles with propaganda signals. ## 7 Conclusions And Future Work We have proposed a novel method for generating disinformation that is closer to human-written fake news. Evaluation on two human-written fake news datasets, POLITIFACT and SNOPES, demonstrated the effectiveness of our generated data PROPANEWS in enabling better detection performance on human-written fake news. We hope that the dataset presented in this work, PROPANEWS, can serve as an enabling resource for detecting human-written fake news and encouraging future research in this direction. In future work, we plan to extend our approach to other languages and to cover more propaganda techniques. We are also interested in studying other aspects of fake news generation, such as novelty and elaboration, as well as engaging linguistic style. ![8_image_0.png](8_image_0.png) ## 8 Limitations To understand the gap between our automatic data generation method and fake news written by humans, we expanded PN-SILVER to different sizes and compared the performance of ROBERTALARGE when trained on these generated datasets and the human-written fake news dataset, SNOPES. Note that since the TIMELINE17 dataset only contains around 4K samples, we additionally crawled New York Times news articles as an input to our generator for the "5 times" to "10 times" experiments. The results are shown in Figure 3. Although the detector performance at first improves as we add more *silver* training data, it reaches a plateau after the size is increased five-fold. This illustrates that while our approach is more effective compared to baseline generation methods, there is still a clear gap between our generated articles and human-crafted fake news, likely in aspects such as style (as discussed in §5.2), intent (i.e., limited modeling of propaganda techniques), and falsehood (i.e., the generated content is 100% false). Despite the advantages of our generation approach, as compared to previous methods, it is uncapable of generating other propaganda techniques covered in (Da San Martino et al., 2019), such as straw man. Thus, our method is not generic enough to handle all types of propaganda techniques within a unified framework. Moreover, our approach is limited to generating English-only news articles, and cannot be applied to other languages. ## 9 Ethical Statement And Broader Impact Our objective for developing a generative approach that produces more realistic news articles is to advance the field of disinformation detection and to bring awareness that the current approaches for generating training data for fake news detection are sub-optimal. We acknowledge that our generator may produce toxic text as it was fine-tuned on propagandistic datasets. We also understand the dual-use concerns for such a generation framework. One potential concern is the possibility of using the generator to produce fake news for political gain or to sow social discord. Another concern is the potential for the generator to be used to generate fake news that could cause harm, such as false medical information or misleading financial advice. Additionally, the generator might be used to create false evidence or to fabricate information to support false allegations in legal or regulatory proceedings. Therefore, to contribute to future studies on human-written disinformation detection, we decided to release the codebase for only the detectors used in the experiments as well as the generated data but not the generator. We highlight some scenarios that illustrate appropriate and inappropriate uses of our generator: - **Appropriate:** Researchers can use our framework to produce more challenging training data for learning stronger detectors. - **Inappropriate:** The method should not be used to intentionally create or propagate false information. - **Inappropriate:** The propaganda generation technique should not be used for political campaigns or any malicious purposes. Both inappropriate uses could lead to harmful consequences, such as undermining trust in the media and causing social unrest. ## Acknowledgement This research is based upon work supported by U.S. DARPA SemaFor Program No. HR001120C0123 and DARPA MIPs Program No. HR00112290105. The views and the conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and to distribute reprints for governmental purposes notwithstanding any copyright annotation therein. ## References Marco T Bastos and Dan Mercea. 2019. The Brexit botnet and user-generated hyperpartisan news. *Social science computer review*, 37(1):38–54. Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995–1005, Jeju Island, Korea. Association for Computational Linguistics. Antoine Bosselut, Asli Celikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, and Yejin Choi. 2018. Discourse-aware neural rewards for coherent text generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 173– 184, New Orleans, Louisiana. Association for Computational Linguistics. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. *Educational and psychological measurement*, 20(1):37–46. Giovanni Da San Martino, Seunghak Yu, Alberto Barrón-Cedeño, Rostislav Petrov, and Preslav Nakov. 2019. Fine-grained analysis of propaganda in news article. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 5636–5646, Hong Kong, China. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Hernawan Dewatana and Siti Ummu Adillah. 2021. The effectiveness of criminal eradication on hoax information and fake news. *Law Development Journal*, 3(3):513–520. Dimitar Dimitrov, Bishr Bin Ali, Shaden Shaar, Firoj Alam, Fabrizio Silvestri, Hamed Firooz, Preslav Nakov, and Giovanni Da San Martino. 2021. Detecting propaganda techniques in memes. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6603–6617, Online. Association for Computational Linguistics. Chris Donahue, Mina Lee, and Percy Liang. 2020. Enabling language models to fill in the blanks. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 2492– 2501, Online. Association for Computational Linguistics. Zhida Feng, Jiji Tang, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun, and Li Chen. 2021. Alpha at SemEval-2021 task 6: Transformer based propaganda classification. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 99–104, Online. Association for Computational Linguistics. Yi Fung, Christopher Thomas, Revanth Gangi Reddy, Sandeep Polisetty, Heng Ji, Shih-Fu Chang, Kathleen McKeown, Mohit Bansal, and Avi Sil. 2021. InfoSurgeon: Cross-media fine-grained information consistency checking for fake news detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1683–1698, Online. Association for Computational Linguistics. Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, pages 1693–1701, Montreal, Quebec, Canada. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In *Proceedings of the 8th International* Conference on Learning Representations, ICLR '20, Addis Ababa, Ethiopia. OpenReview.net. Hamid Karimi and Jiliang Tang. 2019. Learning hierarchical discourse-level structure for fake news detection. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3432–3442, Minneapolis, Minnesota. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR '15, San Diego, CA, USA. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 7871–7880, Online. Association for Computational Linguistics. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730–3740, Hong Kong, China. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv:1907.11692*. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *Proceedings of the* 7th International Conference on Learning Representations, ICLR '19, New Orleans, LA, USA. OpenReview.net. Giovanni Da San Martino, Stefano Cresci, Alberto Barrón-Cedeño, Seunghak Yu, Roberto Di Pietro, and Preslav Nakov. 2020. A survey on computational propaganda detection. In *Proceedings of the* Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 4826–4832. ijcai.org. Pedro Henrique Martins, Zita Marinho, and André F. T. Martins. 2020. Sparse text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4252–4273, Online. Association for Computational Linguistics. Terufumi Morishita, Gaku Morio, Hiroaki Ozaki, and Toshinori Miyoshi. 2020. Hitachi at SemEval-2020 task 3: Exploring the representation spaces of transformers for human sense word similarity. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 286–291, Barcelona (online). International Committee for Computational Linguistics. Ahmadreza Mosallanezhad, Kai Shu, and Huan Liu. 2021. Generating topic-preserving synthetic news. In *Proceedings of the 2021 IEEE International Conference on Big Data*, Big Data '21, pages 490–499. Van-Hoang Nguyen, Kazunari Sugiyama, Preslav Nakov, and Min-Yen Kan. 2020. FANG: leveraging social context for fake news detection using graph representation. In *Proceedings of the 29th ACM International Conference on Information and Knowledge Management*, CIKM '20, pages 1165–1174, Ireland (Virtual Event). ACM. Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, and Dan Roth. 2020. TORQUE: A reading comprehension dataset of temporal ordering questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1158–1172, Online. Association for Computational Linguistics. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank citation ranking: Bringing order to the web. Technical Report 1999-66, Stanford InfoLab. Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaïd Harchaoui. 2021. MAUVE: measuring the gap between neural text and human text using divergence frontiers. In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural* Information Processing Systems 2021, NeurIPS '21, pages 4816–4828. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In *Proceedings of the 2017* Conference on Empirical Methods in Natural Language Processing, pages 2931–2937, Copenhagen, Denmark. Association for Computational Linguistics. Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR '17, pages 1179–1195, Honolulu, HI, USA. IEEE Computer Society. Niloufar Salehi, Lilly Irani, Michael S. Bernstein, Ali Alkhatib, Eva Ogbe, Kristy Milland, and Clickhappier. 2015. We are dynamo: Overcoming stalling and friction in collective action for crowd workers. In *Proceedings of the 33rd Annual ACM Conference* on Human Factors in Computing Systems, CHI '15, pages 1621–1630, Seoul, Republic of Korea. ACM. Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2021. "nice try, kiddo": Investigating ad hominems in dialogue responses. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 750–767, Online. Association for Computational Linguistics. Kai Shu, Yichuan Li, Kaize Ding, and Huan Liu. 2021. Fact-enhanced synthetic news generation. In *Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence*, AAAI '21', pages 13825–13833. AAAI Press. Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, and Huan Liu. 2018. FakeNewsNet: A data repository with news content, social context and dynamic information for studying fake news on social media. *ArXiv:1809.01286*. Giang Binh Tran, Tuan Tran, Nam Khanh Tran, Mohammad Alrifai, and Nattiya Kanhabua. 2013. Leveraging learning to rank in an optimization framework for timeline summarization. In *Proceedings of the SIGIR 2013 Workshop on Time-aware Information Access*, TAIA '13. Sander van Der Linden, Jon Roozenbeek, and Josh Compton. 2020. Inoculating against fake news about COVID-19. *Frontiers in psychology*, 11:2928. Herman Wasserman and Dani Madrid-Morales. 2019. An exploratory study of "fake news" and media trust in Kenya, Nigeria and South Africa. *African Journalism Studies*, 40(1):107–123. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. *Machine learning*, 8(3):229–256. Xueqing Wu, Kung-Hsiang Huang, Yi Fung, and Heng Ji. 2022. Cross-document misinformation detection based on event graph reasoning. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 543– 558, Seattle, United States. Association for Computational Linguistics. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019*, NeurIPS '19, pages 9051–9062, Vancouver, BC, Canada. ## A Distribution Of Propaganda Figure 4 shows the distribution of the propaganda techniques used in the human-written fake news we collected and analyzed in §1. Note that one article may contain multiple propaganda techniques. ![12_image_0.png](12_image_0.png) ## B Additional Research Questions Q1: Is the detector learning to distinguish between fake/real news articles or simply learning to detect the use of propaganda techniques? In Table 3, PROPANEWS **w/o AA & LL** is the variant of our proposed dataset with both propaganda techniques removed. By training detectors on this version of the proposed dataset, the model is still effective in identifying human-written articles containing false information. Therefore, the detectors trained on our generated data have learned to distinguish between fake and real articles instead of exploiting propaganda information only. On the other hand, comparing the detectors trained on PROPANEWS and their counterparts trained on PROPANEWS **w/o AA & LL** in Table 3, we see that propaganda can help improve the detection of real human-written fake news. We further want to emphasize that fake news detection is an extremely challenging task that requires both factual and stylistic analysis as demonstrated by our experiments and by the relatively low performance of prior SOTA models. Q2: Do real articles make use of propaganda techniques, such as *appeal to authority* and loaded language? The similarity between our generated text and the real articles in PolitiFact is 7.3% as per the MAUVE measure, which is much lower than the similarity between the generated text and the fake news articles, as discussed in §5.2. It is possible that some real news articles can contain propaganda. However, according to MAUVE, the real articles in POLITIFACT do not contain much loaded language or appeal to authority. ## C Further Analysis C.1 Remaining Challenges To better understand the remaining disinformative articles that the detectors failed to identify, we conducted additional analysis by comparing the ROBERTA predictions and the labels. As a result, we identified the following three major modeling capabilities required for successful detection: Static knowledge enrichment About 30% of misclassification is due to the lack of static knowledge that can be found in public databases, such as law dictionaries. For example, in this article,10 Alexandria Ocasio-Cortez falsely states that the U.S. Immigration Customs Enforcement (ICE) is required to fill 34,000 beds every day. According to the Appropriations Act of 2016,11, ICE is only required to detain 34,000 available beds. Therefore, to detect such kind of misinformation, the detector needs to be enriched with static knowledge bases. Dynamic knowledge acquisition Around 48% of the misclassified human-written disinformation is due to the inability to acquire dynamic knowledge from new news sources. For instance, COVID19-related articles are usually published after 2020, while ROBERTA was pre-trained on news articles released before 2019. It is very challenging for ROBERTA to detect disinformation of such topics unless the detector is equipped with the capability to acquire dynamic knowledge from news articles. Particularly, ROBERTA achieves an accuracy of 69.0% on detecting fake articles published before 2019, but its accuracy drops to 51.9% when testing on articles published after 2019. Multi-document reasoning The rest of the incorrect detection is caused by the lack of multidocument reasoning ability. For instance, a news article12 wrongly associates Hillary Clinton with a flawed immigration policy of the former government, and strengthens such a statement by referring to a Senate report and relevant news articles. However, the cited report does not mention Clinton, and the other news articles contain disinformation. To correctly detect this piece of disinformation, detectors should reason across multiple documents. ## D Qualitative Examples Of Generated Articles In Table 8, we show a comparison of generated articles given the same input data across different generative methods. Our approach produces articles with a small fraction of inaccurate information, which matches a property of human-written fake news discussed in §1. ## E Appeal To Authority Details To recap, we first gather a list of authorities Z for each article from Wikidata and the corresponding context. The best *appeal to authority* sequence s ∗ is selected, i.e., the one with the lowest perplexity s ∗= argminsi PPL(si), where si denotes the generated sequence using zi as the authority. However, this process results in every sequence s ∗containing the substring "confirms that", which makes it trivial for detectors to classify these generated documents as fake by simply detecting such substrings. Therefore, we devise an algorithm to diversify the templates so that these generated articles are not easily detectable. First, we define a set of verbs V that can be swapped with "confirms": V = {said, *concluded*, confirmed, emphasized, stated, *argued*}. Then, we diversify the generated structure of the generated sentence s ∗by reordering the subject, the verb, and the object. Next, we swap the verb with another verb from V . Finally, in order to diversify the context, we append a preposition from the preposition set P P = {on, at, in} to the output of the previous step, and then we feed the sequence to BART to generate the context. An example of this process is given in Table 6. ## F Intermediate Pre-Training Details For domain adaptation, we perform intermediate pre-training (IPT) on the CNN/DM dataset, a large summarization corpus containing more than 280K news articles from CNN and Daily Mail. The IPT objectives for disinformation generation and propaganda generation are mostly the same as described in the previous sections, but with some minor changes due to different goals in the IPT phase. When performing IPT for disinformation generation, we removed Lsfrom the final loss function (Equation (4)) as the goal for IPT is only to learn to generate coherent sentences, and thus IPT is not needed. | Detector | Dev Acc. (%) | Test Acc. (%) | |------------|----------------|-----------------| | HDSF | 52.4 (±0.6) | 50.6 (±2.4) | | BERT | 57.7 (±1.0) | 58.0 (±1.2) | | GROVER | 60.3 (±5.8) | 63.3 (±5.0) | | ROBERTA | 70.5 (±0.3) | 69.8 (±1.1) | HDSF 52.4 (±0.6) 50.6 (±2.4) BERT 57.7 (±1.0) 58.0 (±1.2) GROVER 60.3 (±5.8) 63.3 (±5.0) ROBERTA **70.5** (±0.3) **69.8** (±1.1) Table 5: Evaluation of various detectors on the PROPANEWS development and test set. We report the mean and the standard deviation over four runs. Moreover, in order to create training samples for *loaded language* IPT, we gather all the appearances of adjectives pointing to a noun or adverbs pointing to a verb via dependency parsing graphs without considering whether the samples actually contain *loaded* terms since the goal here is to enable BART to identify where properly to insert which adjectives or adverbs. ## G Benchmarking Detectors The performance of various detectors on the PROPANEWS dataset is shown in Table 5. We find that ROBERTA and GROVER demonstrate advantages over BERT. This could be explained by the fact that ROBERTA and GROVER are pre-trained on news domain corpora, whereas BERT has no access to such domains during pre-training. In addition, we find that HDSF performs much worse than the other three models. This reflects that largescale pre-training of language models brings more benefit to detection performance than explicit modeling of discourse-level features. ## H Human Validation Details Next, we describe the details of human validation, where AMT workers were tasked to validate whether the generated sentences contained inaccurate information. We recruited AMT workers from USA and Canada. To ensure the annotation quality, only workers who had an acceptance rate greater than 95% and more than 100 accepted HITs in the past were allowed to work on our annotation task. This greatly reduced the chances of collecting annotations from scammers. Each HIT was designed such that the annotators were rewarded $12-$15 per hour, which complies with the ethical research standards outlined by AMT (Salehi et al., 2015). In each HIT, the annotators were presented an article with the generated part marked in boldface. The questions and the guidelines are given below. (Note that we only use the annotators' response for Q1 to validate our generated data. The annotations for the other questions will be used for future research.) Step Generated Sequence Table 6: An illustration of how appeal to authority is performed. In step 1, we generate a statement using BART with the prefix [*Panmure Gordon analyst Peter Hitchens confirmed that "*]. In step 2, we move the subject and the verb to the back of the sentence to diversify the sentence structure. In step 3, we swap the verb with another verb from the verb set V . In step 4, we append a preposition in to the sequence in step 3 and we use the resulting sequence as a prefix to BART's decoder to generate the rest of the context. For steps 1 and 4, we mark the prefix sequence to the decoder in yellow, and the generated sequence in blue. To increase the diversity of the generated sequences, step 2 to 4 are each performed 50% of the time. | 1 | Panmure Gordon analyst Peter Hitchens confirmed that " the US government is likely to agree to reduce its estimate of the size of the spill, which would cut BP fines ". | |-----|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 2 | " The US government is likely to agree to reduce its estimate of the size of the spill, which would cut BP fines, " Panmure Gordon analyst Peter Hitchens confirmed. | | 3 | " The US government is likely to agree to reduce its estimate of the size of the spill, which would cut BP fines, " Panmure Gordon analyst Peter Hitchens said. | | 4 | " The US government is likely to agree to reduce its estimate of the size of the spill, which would cut BP fines, " Panmure Gordon analyst Peter Hitchens said in a conference. | Q1: Is the generated text in boldface **Accurate** or **Inaccurate**? (If you cannot find any supporting evidence, please select **Inaccurate**.) Note that a statement (in quotation marks) made by a person is only accurate if this person actually made the exact same statement. If the statement in quotation marks is just a paraphrase of what the person actually said, then the statement is inaccurate. - **Inaccurate**: Any false information presented in the generated text makes it inaccurate. - **Accurate**: All the information in the generated text must be accurate. Q2: Enter the URL of the news article you found that supports your decision in the previous response in the below box. Put down "from context" if the evidence can be found in the context. Q3: Does the generated text in boldface deliver the same sentiment as the rest of the article? - **False**: The sentiment of the generated text is NOT the same as the rest of the article. - **True**: The sentiment of the generated text is the same as the rest of the article. Q4: Is the discourse of the generated text in boldface consistent with the rest of the article? - **False**: The discourse of the generated text is NOT consistent with the rest of the article. - **True**: The discourse of the generated text is consistent with the rest of the article. Q5: If there is any grammatical error or inconsistent discourse, please rewrite and correct generated text and put it in the below box. Just put down the corrected generated text in bold is enough. For example, "Harry is a boy. He likes go to school." Please put in "He likes to go to school." in the box below. ## I Statistics About The Evaluation Datasets In Table 7, we give some statistics about the two evaluation datasets used in our experiments. The reported numbers are not the same as those in the original papers (Nguyen et al., 2020; Shu et al., 2018) since some of the articles were no longer accessible via the provided URLs. Table 7: Statistics about the two evaluation datasets, SNOPES and POLITIFACT. | Dataset | # Real | # Fake | |------------|----------|----------| | SNOPES | 430 | 280 | | POLITIFACT | 517 | 369 | ![14_image_0.png](14_image_0.png) ## J Detector Implementation Details For our experiments with BERT and ROBERTA, we used AdamW (Loshchilov and Hutter, 2019) with a batch size of 2 and gradient accumulation steps of 8. We set the learning rate and the weight decay to 5e-5 and 1e-5 for the parameters that have been pre-trained, and 1e-3 and 1e-3 for the other parameters. For the GROVER detector, we follow the original detection setting. GROVER is trained using Adam (Kingma and Ba, 2015) with a learning rate of 2e-5 and a batch size of 64. Similarly, we follow the original recipe to train HDSF, which is optimized with Adam with a learning rate of 1e-2. All detectors are fine-tuned for at most 20 epochs where the best model is determined by the accuracy on the development set. All experiments are conducted on an Ubuntu 18.04 machine with NVIDIA Tesla V100. We use PyTorch 1.10.0 and Transformers 4.3.0 for constructing all models and loading pre-trained weights, except for GROVER, which operates on Tensorflow 1.13.1. The training time for BERT and ROBERTA, each of which has 340M parameters, is around 2-3 hours, while for GROVER, which contains 355M parameters, it is about 1 hour. ## K Human Evaluation Details In this section, we describe the survey we did with AMT workers for evaluating the quality of the generated articles. The annotators were presented with a generated article and were asked to answer a few questions regarding its quality. Q2 is only applicable for evaluating generated articles from PROPANEWS, in which we show the sentence that contains propaganda. The low, the medium, and the high ratings in the response correspond to the 1, 2, and 3 scores described in §5.2. The questions and the guidelines we gave were as follows: Q1: How plausible do you think the article above is? - Low: It likely contains inaccurate information. - **Medium**: Not sure. - **High**: It is unlikely to contain inaccurate information. Q2: How much does this sentence in the article affect your decision for the previous answer? - Low: This sentence does not affect my answer for the previous question. - **Medium**: This sentence somehow affects my answer for the previous question. - **High**: This sentence largely affects my answer for the previous question. The score distribution for Q1 is shown in Figure 5. We can see that our approach yields higherquality fake news compared to GROVER-GEN. ![15_image_0.png](15_image_0.png) ## L Scientific Artifacts We re-purposed the *Timeline17* dataset (Tran et al., 2013) to produce our PROPANEWS dataset. Among the software we use, SpaCy ("en_core_web_md") is licensed under the MIT License, Transformers and Stanza are licensed under the Apache License 2.0, and PyTorch is released under the Modified BSD License. In terms of models, both BART and the NLI model we used are licensed under the MIT License. In summary, all artifacts are produced with software and models released under open-source licenses. We will release the PROPANEWS dataset as well as the detectors with licenses compatible with the software discussed above such that they are consistent with their intended use. | Method | Generated Article | |-----------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | GROVER-GEN | Libya to restart three oilfields on Sept 1 : minister BENGHAZI , Libya , Aug 28 ( Reuters ) - Libya will restart two eastern oil fields in mid-September and resume crude shipment from Tobruk by the end of the month , SNCM deputy oil minister Habib Hummed said on Sunday . He said the west of Sirte was being restored and a third field in the eastern province of Brega was about to restart operations . " Production at the Brega field has been increased by two to four percent and will resume its production on the end of the month , " he said at a political conference . " The second phase of production at the eastern field of Sirte came to full production by the end of the month and restarting the production from Brega field will begin in September , " he said . He said production at the west of Sirte and east of Tobruk had already reached 95,000 barrels per day ( bpd ) on their main export terminals . Libya , awash with millions of dollars , most of it believed to have been siphoned off through crude theft , is pumping around 290,000 bpd in total . The first fuel terminals in Libya have reopened and are loaded with diesel and petrol and tankers from Europe are ploughing in cargo to get through a key smuggling route from Derna to the Mediterranean . Libya was sharply divided last year when armed tribesmen struck crude export terminals along its border , denying it the lucrative gas exports it was using to service southern EU countries . Libya 's southern oil terminal was hit in September by militant attacks . ( Writing by Ulf Laessing ; Editing by David Holmes ) | | FACTGEN | Shares Augmented and stored pipelines replaced by light pipes .Disputed concerns over biggest damage to Libya since 2003 . Despite concerns oil could be leaking . 'Russia says production is safe and it is ready to resume production . 'Greece could use more sulfur oil by 2020 . 'Energy ministers say it could be used to power two-thirds of the world economy . 'Saudi had been the biggest oil producer in the world before meltdown . "French to continue to contain crisis . "Russia on Libyan crisis : ' we are facing a serious and continuing public health crisis that is needed ' "But not in a vacuum of oil ' "World affairs . "Greece : ' we need to be doing more to combat damage to the energy system . ' " leftasing all oil from Libya crisis . " | | Ours (PN-SILVER) | Tue Aug 23 , 2011 4:05 pm EDT * Limited Libyan oil production could start soon * Condition of oil export terminals still unclear * Clearing pipes clogged by waxy crude could delay output By David Sheppard and Sarah Young NEW YORK/LONDON , Aug 23 -LRB- Reuters -RRB- - Limited Libyan oil production ought to resume quite quickly as most fields appear to have survived the civil war unscathed , international oil services companies say , but many are still waiting for more stability in the country before sending employees back in . In interviews with some of the biggest oil services players in pre-civil war Libya , most echoed the view that major oil fields had largely avoided serious damage during the six months of fighting , though some voiced concerns about damage to export terminals and pipelines . OPS International Chairman Gavin De Salis told Reuters Insider television that Libyan crude oil , prized for its high yield of valuable light products such as gasoline and for its low sulfur content , was quite waxy , which could clog up pipelines if they had been left unused for some time . " There might be a little bit of effort unplugging pipelines , which is two to three months ' worth of effort before they can resume full production , " De Salis said . " But that will not affect all of the pipelines or all of the fields , so they can certainly start limited production quite quickly . " Nilsson said contacts at Libya 's rebel oil firm Arabian Gulf Oil Company -LRB- AGOCO -RRB- informed him there had been little damage to the oilfields in the east of the country during the six-month power struggle . " We have n't been able to work at the oilfields during the civil war as it has not been safe , but I think within a couple of weeks we could be back to almost normal , " Nilsson said by telephone from his office in Stockholm . " The oil income is essential to Libya and the new government so they will want to bring it back online as soon as possible . " Nilsson said they had several Swedish , Indian and Sudanese employees who had stayed in the country during the civil war , but total staff numbers in the country were down from around 250-300 . Nilsson said there was still a lot of work to be done in the country . De Salis said that " a lot of damage " had been done to Libya 's oil infrastructure , including the destruction of some of the country 's main oil export terminals , but he said it was too early to estimate the full extent of the damage . DAMAGE Oil firm 's who supported the rebel government during the civil war are expected to win the lion 's share of contracts to help relaunch the Libyan oil industry , which before the war produced some 1.6 million barrels per day of crude ... | | Table 8: A qualitative comparison between the generated articles from different approaches. The texts marked in | | Table 8: A qualitative comparison between the generated articles from different approaches. The texts marked in orange indicate disinformation, and the texts in blue denote propaganda. We see that other approaches generate a large amount of inaccurate information, which contrasts with the property of human-written fake news mentioned in §1. We also note that the article generated using FACTGEN appears to be low-quality. This is likely caused by the fact that the checkpoints reported in the paper were not released and we trained FACTGEN from scratch by closely following the recipe described in Shu et al. (2021). It is possible that some details about the training process of FACTGEN were missing from their paper, which in turn affected our training, and resulted in low generation quality. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8. ✓ A2. Did you discuss any potential risks of your work? Section 9. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract & Section 1. ✓ A4. Have you used AI writing assistants when working on this paper? Grammarly is used to fix grammar errors throughout all sections of the paper. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix L. ✓ B1. Did you cite the creators of artifacts you used? Appendix L. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix L. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix L. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No personal/sensitive information is collected. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix L and K. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5.1. ## C ✓ **Did You Run Computational Experiments?** Appendix J. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix J. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix J. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 and Table 3. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix J and L. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 3 and Appendix H. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix H. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 3 and Appendix H. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? We do not curate annotators' personal data. ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? IRB 22841 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3 and Appendix H.
sun-etal-2023-length
A Length-Extrapolatable Transformer
https://aclanthology.org/2023.acl-long.816
Position modeling plays a critical role in Transformers. In this paper, we focus on length extrapolation, i.e., training on short texts while evaluating longer sequences. We define \textit{attention resolution} as an indicator of extrapolation. Then we propose two designs to improve the above metric of Transformers. Specifically, we introduce a relative position embedding to explicitly maximize attention resolution. Moreover, we use blockwise causal attention during inference for better resolution. We evaluate different Transformer variants with language modeling. Experimental results show that our model achieves strong performance in both interpolation and extrapolation settings. The code will be available at \url{https://aka.ms/LeX-Transformer}.
# A Length-Extrapolatable Transformer Yutao Sun1∗ , Li Dong2, Barun Patra2, Shuming Ma2**, Shaohan Huang**2 Alon Benhaim2, Vishrav Chaudhary2, Xia Song2**, Furu Wei**2 Tsinghua University1 Microsoft2 Https://Github.Com/Microsoft/Torchscale ## Abstract ![0_Image_0.Png](0_Image_0.Png) Position modeling plays a critical role in Transformers. In this paper, we focus on length extrapolation, i.e., training on short texts while evaluating longer sequences. We define *attention resolution* as an indicator of extrapolation. Then we propose two designs to improve the above metric of Transformers. Specifically, we introduce a relative position embedding to explicitly maximize attention resolution. Moreover, we use blockwise causal attention during inference for better efficiency. The proposed architecture is named Length-Extrapolatable (LEX) Transformer. We evaluate different Transformer variants on language modeling. Experimental results show that our model achieves better performance in both interpolation and extrapolation settings. The code will be available at https://aka.ms/ LeX-Transformer. ## 1 Introduction Transformer (Vaswani et al., 2017) has shown strong performance in NLP and become a de-facto backbone (Dosovitskiy et al., 2020; Radford et al., 2021; Wang et al., 2022). However, most of them have a crucial shortcoming: they can only deal with the in-distribution size of inputs. Figure 1 shows that the perplexity of previous Transformers increases rapidly when the input sequence is getting longer. It is usually infeasible to train a model with all possible input lengths. Therefore, a length-extrapolatable Transformer is essential for wider usage. In sequence modeling, position information plays a crucial role in building the correct representation and understanding of the latent meaning. For Recurrent Neural Networks such as LSTM (Hochreiter and Schmidhuber, 1997), the calculation is done along the sequence order in O(N) time. However, the parallel attention module ∗Work done during internship at Microsoft Research. makes it hard to encode position effectively. First, Vaswani et al. (2017) proposes absolute sinusoidal position embedding, and Devlin et al. (2019) adjusts it to a learnable one. The absolute design is computation-efficient, but not comparable with subsequent relative ones (Shaw et al., 2018; Su et al., 2021; Press et al., 2021). Among many relative position embeddings, ROPE (Su et al., 2021) shows better performance and is used to many PLMs such as PaLM (Chowdhery et al., 2022). However, it can't deal with sequences with exceeding length. Alibi (Press et al., 2021) mitigates the extrapolation problem but sacrifices the general performance. Since different strategies concentrate on some part of the position feature, it is essential to build a comprehensive view and guide the Transformer's design systematically. First, a Transformer should be sensitive to order. Otherwise, it will degenerate into a bag-of-word model which confuses the whole meaning. Then, position translation can't hurt the representation, especially, when a prefix is added to the target sentence, the representation should stay the same with an attention mask on the prefix. After that, a good sequence model needs to 14590 | Models | Translation Invariance | Length Extrapolation | |-----------------------------------------------------|--------------------------|------------------------| | Absolute Position Modeling Transformer (Sinusoidal) | ✘ | ✘✘ | | GPT-2 (Learnable) | ✘ | ✘✘ | | Relative Position Modeling PaLM / Roformer (ROPE) | ✔ | ✘ | | T5 | ✔ | ✘ | | BLOOM / Alibi | ✔ | ✔ | | LEX Transformer (Ours) | ✔ | ✔✔ | deal with any input length. As illustrated before, the length problem is not universal but special for Transformer. Especially, when a Transformer is pre-trained under a maximal length, it is not affordable to re-train for applying to tasks with longer sequences. Finally, when a Transformer satisfies the principles above, we evaluate its performance, which requires thorough experiments and empirical analysis. Considering all the properties above, we propose Extrapolatable Position Embedding (XPOS), which is a universal-good design for Transformers. Based on ROPE's design, we propose *attention resolution* as a metric to measure position monotonicity. Then, we generalize its mathematical form, where an exponential decay is added to the rotation matrix. XPOS preserves the advantage of ROPE, and behaves stably at long-term dependency. Besides, inspired by sparse attention methods (Child et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020; Xiong et al., 2021), we choose blockwise causal attention to increase attention resolution, which improves the performance of length extrapolation for language modeling. We train different Transformers from scratch. We evaluate models on PG22 and QMSum (Zhong et al., 2021) with various input lengths. On the interpolation experiments, LEX Transformer reaches minimal perplexity. In the extrapolation experiments, our methods can continue decreasing the perplexity while other methods either can't extrapolate (i.e., perplexity increases) when the input length is very long. Figure 1 shows clearly that LEX Transformer has an opposite tendency compared with others. We summarize our contributions as follows: - We summarize the design principles of Transformers for position modeling. - We define attention resolution to indicate a Transformer's capability on encoding position. - We propose an extrapolatable position embedding and use blockwise causal attention to improve length extrapolation. - We conduct experiments on language modeling and show that the proposed LEX Transformer achieves strong performance on both short and long texts. ## 2 Design Principles Of Transformers For Position Modeling 2.1 Order Variance A transformer without position information is actually a bag-of-word model. Although bag-of-words models can achieve comparable performance for some tasks (Wang et al., 2020a), position information is essential for sequence modeling. Most of the existing position modeling satisfies this goal (Vaswani et al., 2017; Devlin et al., 2019; Shaw et al., 2018; Wang et al., 2020a; Raffel et al., 2020; Su et al., 2021). With effective position information, Transformer models should be variant with permuting the order (Dufter et al., 2022). Give a permutation function Pπ(X) : [x1, x2*, ..., x*n] → [xπ1 , xπ2 , ..., xπn], where [π1, π2*, ..., π*n] is a random order, a Transformer model f(*input*) should satisfy: f(Pπ(X)) ̸= Pπ(f(X)) (1) $\mathbf{v}$ ## 2.2 Translation Invariance The representation of a sequence should be robust with the translation of its positions. For instance, a sentence's meaning is invariant with padding before or after the whole sentence. Similar to (Wang et al., 2020a), we give a general form for translation invariance: given a Transformer model f(input, mask), any input sequence X = [x0, x1*, ..., x*n] with mask M = [m0, m1*, ..., m*n], the output should be same with the padding ones: $$\begin{array}{c}{{X_{\mathrm{pad}}=[0]_{i}\oplus X\oplus[0]_{j}}}\\ {{M_{\mathrm{pad}}=[0]_{i}\oplus M\oplus[0]_{j}}}\\ {{f(X,M)=f(X_{\mathrm{pad}},M_{\mathrm{pad}})[i:i+n]}}\end{array}\tag{2}$$ Relative positions (Shaw et al., 2018; Raffel et al., 2020; Wang et al., 2020a; Su et al., 2021) satisfy this condition, while most of the absolute positions do not (Vaswani et al., 2017; Devlin et al., 2019). Although sinusoidal embedding has a similar property (Vaswani et al., 2017): P Epos+k can be represented as a linear function of P Epos, the addition operation in the initial word embedding messes the attention weight, where the spread form of QKT has 4 components whose geometric connection with position is unclear. ## 2.3 Length Extrapolation As the cost of pre-training is getting bigger due to the larger model size and corpus, it is infeasible to retrain a model for a longer context. A Transformer model with a suitable design should be capable of dealing with any input length. First, learnable absolute position embedding (Devlin et al., 2019) is not able to extrapolate because it does not have any pre-defined position knowledge. With the evaluation of perplexity on different lengths (Press et al., 2021), almost every position embedding's performance drops significantly (Vaswani et al., 2017; Raffel et al., 2020; Su et al., 2021). Alibi (Press et al., 2021) solves this problem by adding an exponential decay on the attention matrix, which lower the influence of outof-distribution position like a soft sliding window. However, the absence of long-term dependency contributes to a performance drop compared with other relative strategies. Table 2 shows that Alibi's perplexity is larger than ROPE by about 0.3. However, the extrapolation ability needs a systematic design where position embedding is a crucial but not only component. With the proper attention mask, the relative position can deal with long text. The ideal situation is to use the long context in the right way, in that case, the perplexity should decrease as the input length increases. ## 3 A Length-Extrapolatable Transformer We define attention resolution as the indicator of the Transformer's capability on encoding position in Section 3.1. Then we propose two ways to maximize the resolution metric, i.e., improve the length interpolation and extrapolation of Transformers. First, we introduce a relative position encoding method (Section 3.2) to explicitly maximize attention resolution. Second, we propose to use blockwise causal masking (Section 3.3) during extrapolation inference for improved resolution. In the following section, we denote d as the hidden dimension and l as the input length. For each attention layer, query, key, and value are calculated by input x: q = WQx, k = WK*x, v* = WV x. ## 3.1 Attention Resolution The monotonicity of attention scores is essential to represent distance in language models. In an attention layer of the vanilla Transformer, we measure the attention score expectation as s[n] when the distance of two tokens is n: $$s[n]=\operatorname*{\mathbb{E}}_{0\leq i\leq N}({\frac{q_{i+n}k_{i}^{T}}{\sqrt{d}}})\qquad\qquad(3)$$ We define *attention resolution* R(s) as a metric to evaluate attention's ability to recognize position: $$R(s)=\sum_{i=0}^{N}{\frac{e^{s[i]}(e^{s[i]}-e^{s[i+1]})}{(\sum_{i=0}^{N}e^{s[i]})^{2}}}\qquad\qquad(4)$$ First, s[i] > s[i+ 1] is preferred to ensure monotonicity. Besides, we implement softmax operation on s[n] to simulate the attention probability. To mitigate the influence of long-tail distribution, the factor e s[i]is multiplied. We can estimate s[n] and R(s) quantitatively when we design Transformers. ## 3.2 Improve Resolution By Position Encoding Su et al. (2021) propose that by adding absolute position embedding on query and key, the attention matrix is actually encoded with relative position information. ROPE shows a strong performance in interpolation tasks, but its s[n] oscillates dramatically in Figure 2, which harms the *resolution*. We use a similar but generalized strategy to improve *resolution*. First, a pseudo inner product is defined as ⟨*x, y*⟩ =PRe(xi· y∗ i ), which is consistent with the exact inner product's definition when we map C d/2 → R d. Before calculating attention, the query and key are encoded with position information. Generally, the attention function is as follows: $$\begin{array}{l}{{a_{i j}=\frac{\langle f_{q}(q_{i},i),f_{k}(k_{j},j)\rangle}{\sqrt{d}}}}\\ {{o_{i}=\sum_{j=0}^{i}\frac{e^{a_{i j}}}{\sum_{j=0}^{i}e^{a_{i j}}}v_{j}}}\end{array}\qquad\qquad(5)$$ Formally, the encoding must satisfy: $$\langle f_{q}(q,n+r),f_{k}(k,n)\rangle=\langle f_{q}(q,r),f_{k}(k,0)\rangle\,\,\,(6)$$ A simple solution is as follows: $$\begin{array}{l}{{f_{q}(q,n)=A_{q}q e^{\lambda n}}}\\ {{f_{k}(k,n)=A_{k}k e^{-\lambda^{*}n}}}\end{array}$$ $$\mathbf{\Pi}(T)$$ The scaling factor Aq, Ak is unnecessary because *q, k* is obtained by a linear transformation. Since λ ∈ C d/2, it can be represented as λ = ξ+iθ where *ξ, θ* ∈ R d/2: $$\begin{array}{l}{{f_{q}(q,n)=q e^{\xi n+i\theta n}}}\\ {{f_{k}(k,n)=k e^{-\xi n+i\theta n}}}\end{array}$$ If ξ = 0, the form is the same as ROPE (Su et al., 2021). Geometrically, the transformation provides a rotation on vectors. If the relative angle between q and k is larger, the inner product is smaller. However, the cosine function is not monotonic if the rotating angle is large than π, which causes an unstable phenomenon in that the expectation of the inner product oscillates dramatically with the growth of relative distance. Following the parameters (Vaswani et al., 2017; Su et al., 2021) θ = {θi = 10000−2i/d, i ∈ [0, 1*, ..., d/*2]}, we will calculate the expectation as follows. For generative models, we assume E(∠q) ≤ E(∠k) to ensure the monotonicity: nsure the monotonicity. $$\begin{split}&\mathbb{E}[\langle qe^{m\xi+im\theta},ke^{-n\xi+in\theta}\rangle]\\ &=\sum_{x=0}^{d/2}\mathbb{E}[Re(\mathbf{q}_{x}\mathbf{k}_{x}e^{(m-n)\xi_{x}+i(m-n)\theta_{x}})]\\ &\leq\sum_{x=0}^{d/2}Re(\mathbb{E}[|\mathbf{q}_{x}\mathbf{k}_{x}|]e^{(m-n)\xi_{x}+i(m-n)\theta_{x}})\\ &\propto\sum_{x=0}^{d/2}cos(m-n)\theta_{x}e^{(m-n)\xi_{x}}\end{split}\tag{9}$$ The inference here is different from (Su et al., 2021) because of two reasons: 1) there is an additional assumption brought by generative language models where E(∠q) ≤ E(∠k); 2) the inequality scaling of (Su et al., 2021) is too strong to lose generality. We calculate expectation instead of the upper bound. Now we define a function to represent the property of relative position: $$g_{\zeta}[n]=\sum_{i=0}^{d/2}\cos n\theta_{i}\zeta_{i}^{n}\qquad\qquad(10)$$ g[n] simplifies Equation 9 by defining ζi = e ξi. Stabilizing the g[n] curve is intuitive. Although attention bias can achieve this goal, we try to avoid additional position calculations. Instead, we can accomplish this goal using a good ζ to maximize R(gζ ). Obviously, the oscillation mainly comes from large θi. Manually setting ζ can achieve this goal: $$\widetilde{\zeta}_{i}=\frac{i/(d/2)+\gamma}{1+\gamma}\in[0,1]\qquad\qquad(11)$$ $$(8)$$ where ζei becomes smaller when θiis larger. In this way, we punish the oscillation of unstable dimensions and keep the distribution of stable ones. Numerical optimization methods are tried to find optimal values for ζ. However, the results rely on the initial value and lack control when the hidden dimension changes. Besides, the numerical precision should be considered because of fp16's range. Finally, we find a sub-optimal solution by manually setting γ to both satisfy the resolution is recognizable (R(gζ ) is partially optimized) and ζ n i can be represented by fp16 when n is big (8192 in our setting). Besides, in implementation, the position is re-scaled with base B in the exponential calculation to avoid overflow and underflow (e ξn → e ξn/B in Equation (8)). We use γ = 0.4 and B = 512 as the final implementation in LEX Transformer. The curves of ζ = 1, ˆζ are shown in Figure 2. The default rotary embedding contributes to a dramatic oscillation, especially in the large relative distance, which causes bad extrapolation performance and restricts the model's convergence speed. After adding a decay, the curve is almost stable, especially on long-term dependency. What's more, it does not hurt pure rotation's fitting ability because ζ n i ≈ 1 when i is large or n is small. In that way, short-term and long-term dependencies are divided continuously. Finally, we have Extrapolatable Position Embed- ![4_image_2.png](4_image_2.png) ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) Algorithm 1: Attention with XPOS def rot(x): return [−x1, x0, −x3, x2*, ...*] Initialization: θi = 1/100002⌊i/2⌋/d, θ ∈ R d ˆζi = (2⌊i/2⌋/d + γ)/(1 + γ), ˆζ ∈ R d Input: Q, K, V ∈ R h×l×d, M ∈ R d×d Cmn = cos mθn, C ∈ R l×d Smn = sin mθn, S ∈ R l×d Tmn = ˆζ m n, T ∈ R l×d Q = Q ⊙ (C ⊙ T) + rot(Q) ⊙ (S ⊙ T) K = K ⊙(C ⊙T−1)+ rot(K)⊙(S ⊙T−1) output = softmax( QKT √d· M)V return *output* ding (XPOS): fq*(q, n*) = fk*(k, n*) = $\mathbf{v}$ q1 cos nθ1 ˆζ n/B 1 − q2 sin nθ1 ˆζ n/B 1 q2 cos nθ1 ˆζ n/B 1 + q1 sin nθ1 ˆζ n/B 1 ... qn−1 cos nθd/2 ˆζ n/B d/2 − qn sin nθd/2 ˆζ n/B d/2 qn cos nθd/2 ˆζ n/B d/2 + qn−1 sin nθd/2 ˆζ n/B d/2 k1cosnθ1 ˆζ −n/B 1 − k2 sin nθ1 ˆζ −n/B 1 k2 cos nθ1 ˆζ −n/B 1 + k1 sin nθ1 ˆζ −n/B 1 ... kn−1 cos nθd/2 ˆζ −n/B d/2 − kn sin nθd/2 ˆζ d/2 kn cos nθd/2 ˆζ −n/B d/2 + kn−1 sin nθd/2 ˆζ d/2 −n/B −n/B (12) In the implementation, the transformation for key and value can be easily calculated by parallel addition and multiplication as shown in Algorithm 1. Since position embedding's size *C, S, T* ∈ R l×dis much smaller than batched multi-head at- Training Phase Inference Phase tention matrix and doesn't require gradients, the cost is almost the same with ROPE and bigger than Absolute Position with 6% additional time. ## 3.3 Blockwise Causal Attention To deal with length extrapolation, a simple way to improve attention resolution (Section 3.1) is using windowed attention. During inference, we use blockwise masking (Dai et al., 2019; Zaheer et al., 2020; Xiong et al., 2021) for self-attention. Notice that other window strategies, such as sliding window (Child et al., 2019), also work. We use blockwise causal attention because it is cache-friendly and easy to implement. As shown in Figure 3, if the pre-training length is l, we divide the query as blocks with l/2 length, and each query interacts with its own block and the last block. In this way, the context information can be delivered by the reuse of key and value. The window constraint helps models encode longer input with improved resolution. Different from training a long-sequence model with a stop gradient, we use vanilla attention in the training phase, because the pre-training corpus is not very long on average. However, during the inference phase, when dealing with long sequences, we directly implement BCA to help the model to be more position-recognizable. ## 4 Experiments 4.1 Pre-Training To fairly evaluate different Transformer variants, we pre-train the Transformer from scratch. We use 1024 hidden dimensions, 16 heads, and 24 layers, i.e., comparable to medium-size GPT-3 (Brown et al., 2020). The training corpus includes a subset of the Pile (Gao et al., 2020): Books3, OpenWebText2, Stack Exchange, PubMed Abstracts, Wikipedia, Gutenberg (PG-19), BookCorpus2, NIH ExPorter, and Pile-CC datasets. The training procedure is performed on 16×V100 GPUs. We use the tokenizer from GPT2 (Radford et al., 2019). The maximal length is 1024 for saving memory and extrapolation evaluation. The learning rate is 3×10−4and polynomial decay is used to adjust the learning rate. The global batch size is 512 to follow GPT-3 (Brown et al., 2020), i.e., 0.5M token size. We use Adam (Kingma and Ba, 2015) optimizer with β1 = 0.9, β2 = 0.98, ϵ = 10−6. The code is based on TorchScale (Ma et al., 2022a). ## 4.2 Language Modeling We measure perplexity on long document datasets, which can show the model's ability for longdependency modeling. We use books from Project Gutenberg whose years are later than 2019 to ensure no overlap with PG19, and we name it as PG22. Besides, we pick QMSum (Zhong et al., 2021) from SCROLLS (Shaham et al., 2022) with above 9k length on average. We care about the performance on different input lengths to evaluate the model's interpolation and extrapolation capability. For experiment results in Table 2, we divide the same input into the target length to fairly compare the perplexity of different lengths. For interpolation capability, we analyze the results where the length is no more than 1024. Since the validation distribution is very similar to training data, all Transformers' generalization capabilities are also close. XPOS have a stable advantage on others with a 0.09 perplexity drop on PG22, and 0.27 on QMSum, which proves that XPOS increases the interpolation ability. For extrapolation lengths, we do not use BCA in other Transformers, and the following ablation study will discuss the performance with that. Press et al. (2021)'s experiment shows that most of the position strategies can't deal with input length longer than pre-training directly. XPOS shows a stable decrease when the sequence length increases, which satisfies the assumption that a longer context makes the prediction better. While others' perplexity increases when the input length is 4096. To illustrate the tendency of perplexities, Figure 1 visualizes the relation between input length and perplexity. When the length is larger than 4096, Alibi's perplexity increases gradually. However, LEX's perplexity decreases continuously when the length extends to 8192. The experiment shows that XPOS gets better performance on language modeling. With the stable advantage of any length, users can input any sentence freely without the concern of position. Besides, results also indicate that is not essential to build an explicit decay on the attention matrix, Instead, a proper design for an attention mask is actually better to deal with long-context tasks. ## 4.3 Measuring Resolution We empirically evaluate the resolution of different Transformer variants. In the previous section, we define attention resolution as a quality indicator of position modeling in Transformers. The expectation of s[n] is computed as: $$\mathbb{E}[s[n]]={\frac{1}{N-n}}\mathbb{E}[\sum_{i=n}^{N-1}a_{i(i-n)}]\qquad(13)$$ where aij has the same meaning in Equation 5. Then the attention resolution can be calculated by combining Equation (4) and Equation (13). The final expectation is averaged over input texts and different layers. Table 3 reports the average resolution of various Transformer variants. The results show that XPOS makes the position more recognizable in both 1024 (i.e., training length) and 2048 (i.e., length extrapolation). For Alibi (Press et al., 2021), the stable resolution comes from explicit decay, but it prevents the model from learning position dependency itself. In addition, we ablate BCA in 1024 and 2048. The results support that BCA helps the model distinguish positions better, achieving higher attention resolution. ## 4.4 Ablation Studies 4.4.1 Rotation Computation As shown in Table 4, we discuss the necessity of the combination of vector rotation and exponential decay. XPOS without rotation means Equation (12) degenerates to θi = 0: $\hat{f}_q(q,n)=\begin{pmatrix}q_1\hat{\zeta}_1^n\\ q_2\hat{\zeta}_1^n\\ \vdots\\ q_{n-1}\hat{\zeta}_{d/2}^n\\ q_n\hat{\zeta}_{d/2}^n\end{pmatrix}\hat{f}_k(k,n)=\begin{pmatrix}k_1\hat{\zeta}_1^{-n}\\ k_2\hat{\zeta}_1^{-n}\\ \vdots\\ k_{n-1}\hat{\zeta}_{d/2}^{-n}\\ k_n\hat{\zeta}_{d/2}^{-n}\end{pmatrix}.$ 95. Length 256 512 1024 **2048 4096 8192** Interpolation Extrapolation | PG22 QMSum | |--------------| Transformer 38.1 33.5 30.54 132.46 1446.95 12747.41 Alibi 34.25 30.01 27.34 26.01 28.46 32.8 Roformer 33.27 29.2 26.68 68.86 235.71 458.83 LEX Transformer (Ours) 33.18 29.11 26.59 **25.53 25.07 24.89** Transformer 24.25 18.81 16.05 86.56 1196.92 10781.38 Alibi 22.85 17.74 15.17 13.97 15.36 18.37 Roformer 22.66 17.65 15.12 36.54 146.61 331.56 LEX Transformer (Ours) 22.01 17.24 14.85 **13.92 13.56 13.48** Table 2: Results of perplexity with different lengths. The language models are trained with a length of 1024 and then evaluated on various lengths. LEX obtains better performance not only on shorter texts (i.e., interpolation) but also on longer texts (i.e., extrapolation). The red color indicates that the perplexity begins increasing compared with the shorter length. LEX is the only method that has lower perplexity along with increased evaluation length. Moreover, the setting of ζ = 0 is RoPE (Su et al., 2021), which can be viewed as a special case of our method. Besides, we discuss the situation when ζ is a scalar instead of a vector, where we choose ζ = γ/(1 + γ) as the value. After pre-training on 1024, we evaluate the perplexity of PG22 with 1024 and 8192 lengths. Table 4 shows that simple scaling operation cannot match the performance of LEX. The vector ζ also performs better than ζ = 0 and γ/(1 + γ). Therefore, the combination of rotation and decay means the combination of in-distribution and outof-distribution capability in terms of length. | Length | 1024 | 2048 | |---------------|---------------|--------| | Interpolation | Extrapolation | | | Transformer | 0.87 | 0.28 | | Alibi | 0.81 | 0.88 | | Roformer | 0.91 | 0.08 | | LEX (Ours) | 0.98 | 1.08 | | − BCA | 0.98 | 0.54 | | Methods | 1024 | 8192 | |---------------|---------------|--------| | Interpolation | Extrapolation | | | LEX | 26.59 | 24.89 | | w/o Rotation | 37.11 | 34.5 | | ζ = 0 | 26.68 | 26.16 | | Scalar ζ | 26.85 | 25.1 | ## 4.4.2 Blockwise Causal Attention As shown in Table 5, we run the evaluation using different position embeddings (i.e., Absolute, Alibi, ROPE, and XPOS) with or without blockwise Table 4: Ablation results on the PG22 set show that rotation of XPOS is necessary for strong performance. ## Causal Attention. First, Blockwise Causal Attention works for ROPE whose perplexity will explode without that. Alibi performs well without windowed attention because its "soft window" is broader than a hard block window. However, when the sequence length increases to 8192, windowed attention outperforms vanilla attention again (also shown in Figure 1). XPOS's perplexity without BCA increases by about 1.5 in 2048, and 40 in 8192. However, with its high resolution, XPOS can recognize position with BCA's constraint. Besides, we compare BCA with Sliding Attention (Child et al., 2019). In this experiment, we set the window size as 1024 to align with pre-training. Sliding Attention performs better as shown in the last row of Table 5 because its interaction range is broader than Block Causal Attention. The reason to use block windows instead of sliding windows is efficiency. According to (Xiong et al., 2021), the training speed of Blockwise Attention is 1.5x Methods **2048 8192** Extrapolation Absolute 132.46 12747.41 Absolute + BCA 322.73 28787.01 ROPE 68.86 458.83 ROPE + BCA 26.37 26.16 Alibi 26.01 32.8 Alibi + BCA 27.53 31.82 XPOS 27.29 63.99 XPOS + BCA 25.53 24.89 XPOS + Sliding Window 25.33 24.61 Table 5: Results of perplexity on PG22 dataset. "BCA" is short for blockwise causal attention. faster than using sliding windows. Therefore, LEX makes a trade-off and uses BCA in our implementation. Without losing generality, our method is also compatible with Sliding Attention and other local attention variants. ## 5 Related Work 5.1 Long-Sequence Transformers Long-sequence Transformers aim to solve two key problems. First, the computation or memory consumption is not efficient enough for long sequences. Second, there is a trade-off between performance and efficiency. One popular solution (Wang et al., 2020b; Katharopoulos et al., 2020; Choromanski et al., 2020) is linear attention, i.e., using a kernel-based or low-rank approximation to replace vanilla attention. The methods typically target efficiency while underperforming vanilla Transformers for regular length. Another strand is sparse attention (Child et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020; Xiong et al., 2021), which usually leverages structured sparsity to reduce computation. For causal sequence modeling, the recurrent-style designs (Dai et al., 2019; Hutchins et al., 2022; Ma et al., 2022b) are also competitive. In comparison, we focus on length extrapolation (Press et al., 2021) for language modeling, i.e., training on short texts while evaluating long texts. The training process is kept the same as vanilla Transformers. The capability of long-sequence modeling is given for free during inference. So training efficiency (which is typically expensive for large-scale language models) is not affected compared with previous work. Moreover, the performance on regular length is perfectly retained, without trade-offs for long-sequence modeling. ## 5.2 Position Modeling 5.2.1 Absolute Position Embedding Absolute sinusoidal position embedding is proposed by Vaswani et al. (2017), which is the initial design of the Transformer. For each dimension, different frequencies are encoded from 2π to 10000 × 2π: $$\begin{array}{l}{{\mathrm{PE}_{(p o s,2i)}=\cos(p o s/10000^{2i}/d_{\mathrm{model}})}}\\ {{\mathrm{PE}_{(p o s,2i+1)}=\sin(p o s/10000^{2i}/d_{\mathrm{model}})}}\end{array}\tag{14}$$ where PEpos+k is represented as a linear function of PEpos to restore a relative-position property. 5.2.2 Relative Position Embedding Shaw et al. (2018) propose relative position embedding as an alternative approach. Denote aij as attention weight, αij = softmax(aij ), oi as output, we have: $$a_{ij}=\frac{q_{i}\cdot k_{j}}{\sqrt{d}}\Longrightarrow\frac{q_{i}\cdot(k_{j}+p_{ij}^{K})}{\sqrt{d}}\tag{15}$$ $$o_{i}=\sum_{j}\alpha_{ij}\mathbf{v}_{j}\Longrightarrow\sum_{j}\alpha_{ij}(\mathbf{v}_{j}+\mathbf{p}_{ij}^{V})$$ where $\mathbf{p}_{ij}^{K}=\omega_{\min(i-j,k)}^{K},\mathbf{p}_{ij}^{V}=\omega_{\min(i-j,k)}^{V}$, and ω K and ω Vare learnable parameters. The clipping strategy helps length generalization but cannot distinguish the positions that are larger than k. Yang et al. (2019) and He et al. (2020) further reparameterize the relative position vectors for better performance. T5 (Raffel et al., 2020) uses a simpler strategy to encode relative position: $$a_{i j}=\frac{q_{i}\cdot k_{j}}{\sqrt{d}}+p_{\mathrm{bucket}(i-j)}\qquad(16)$$ where log-bucket scalars are added to attention scores. Recently, pre-defined position embedding is brought back by ROPE (Su et al., 2021). Alibi (Press et al., 2021) proposes to explicitly build an exponential decay on the attention matrix, which contributes to length extrapolation: $$a_{i j}=\frac{\mathbf{q}_{i}\cdot\mathbf{k}_{j}}{\sqrt{d}}-\mathrm{m}(i-j),\quad\mathrm{m}(\cdot)>0\quad\mathrm{~(17)}$$ where the values of m(·) are manually defined. However, Alibi (Press et al., 2021)'s performance tends to be inferior to ROPE for the context whose length is shorter than the pre-training length. In this work, we propose a theoretically derived relative position embedding XPOS that optimizes the attention resolution between tokens. The XPOS method not only has the nice property of length extrapolation but also achieves strong performance. ## 6 Conclusion We propose LEX Transformer to accurately capture position information for Transformers. We define attention resolution as the metric of length extrapolation and design a solution to improve the modeling. Extensive experiments on language modeling show that our method achieves lower perplexity on longer sequences while training on short texts. The simplicity also makes the method a go-to augmentation for Transformer-based language models. In addition, attention resolution provides a more principled view for position modeling, which sheds light on future architecture design. ## Limitations In this work, we focus on causal language modeling. It needs additional efforts to integrate the proposed methods into bidirectional attention, such as masked language modeling (Devlin et al., 2019). Moreover, XPOS introduces about 6% inference cost compared with absolute position embeddings, although it accelerates training convergence. ## References Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Ta-Chung Chi, Ting-Han Fan, Peter J Ramadge, and Alexander I Rudnicky. 2022a. Kerple: Kernelized relative positional embedding for length extrapolation. *arXiv preprint arXiv:2205.09921*. Ta-Chung Chi, Ting-Han Fan, and Alexander I Rudnicky. 2022b. Receptive field alignment enables transformer length extrapolation. *arXiv preprint* arXiv:2212.10356. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. URL https://openai.com/blog/sparse-transformers. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. 2020. Rethinking attention with performers. *arXiv preprint arXiv:2009.14794*. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek B Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier García, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Oliveira Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Díaz, Orhan Firat, Michele Catasta, Jason Wei, Kathleen S. Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling language modeling with pathways. *ArXiv*, abs/2204.02311. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Philipp Dufter, Martin Schmitt, and Hinrich Schütze. 2022. Position information in transformers: An overview. *Computational Linguistics*, 48(3):733– 763. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020. The pile: An 800gb dataset of diverse text for language modeling. *arXiv preprint arXiv:2101.00027*. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Computation*, 9:1735– 1780. DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Dyer, and Behnam Neyshabur. 2022. Block-recurrent Transformers. In *Advances in Neural Information* Processing Systems. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. 2020. Transformers are rnns: Fast autoregressive transformers with linear attention. In *International Conference on Machine* Learning, pages 5156–5165. PMLR. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations*, San Diego, CA. Tomáš Kocisk ˇ y, Jonathan Schwarz, Phil Blunsom, Chris ` Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. *Transactions of the Association for Computational Linguistics*, 6:317–328. Shuming Ma, Hongyu Wang, Shaohan Huang, Wenhui Wang, Zewen Chi, Li Dong, Alon Benhaim, Barun Patra, Vishrav Chaudhary, Xia Song, and Furu Wei. 2022a. TorchScale: Transformers at scale. *CoRR*, abs/2211.13184. Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. 2022b. Mega: Moving average equipped gated attention. *arXiv preprint* arXiv:2209.10655. Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763. PMLR. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, et al. 2022. Scrolls: Standardized comparison over long language sequences. arXiv preprint arXiv:2201.03533. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155. Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010. Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, and Jakob Grue Simonsen. 2020a. On position embeddings in bert. In International Conference on Learning Representations. Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. 2020b. Linformer: Self-attention with linear complexity. *arXiv preprint arXiv:2006.04768*. Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. 2022. Image as a foreign language: BEiT pretraining for all vision and vision-language tasks. arXiv preprint arXiv:2208.10442. Wenhan Xiong, Barlas Oguz, Anchit Gupta, Xilun ˘ Chen, Diana Liskovich, Omer Levy, Wen-tau Yih, and Yashar Mehdad. 2021. Simple local attentions remain competitive for long-context tasks. *arXiv* preprint arXiv:2112.07210. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. *Advances in Neural Information* Processing Systems, 33:17283–17297. Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. Qmsum: A new benchmark for query-based multidomain meeting summarization. arXiv preprint arXiv:2104.05938. ## A Additional Experiments Besides the experiments in Section 4, we run language modeling evaluation on Arxiv and NarrativeQA (Kocisk ˇ y et al. ` , 2018). The results are shown in Table 6. These datasets have their shortcomings. The article length of Arxiv is usually less than 8192, and part of NarrativeQA's corpus is sampled from PG19, which is in the training dataset. Therefore, we show them in the appendix instead of the main content. ## B Hyperparameters For Pre-Training As shown in Table 7, we present the hyperparameters for pre-training. The setting keeps the same among all Transformer variants. We follow medium-size GPT3 (Brown et al., 2020), 24 layers, 1024 hidden size, 4096 FFN inner hidden size, and 16 attention heads. The number of batch tokens is 0.5M, for pre-training 1024, and the number of batch sentences is 512. We use Adam (Kingma and Ba, 2015) optimizer with β1 = 0.9, β2 = 0.98, ϵ = 10−6. The warmup steps are 20k, and we use 50k checkpoints for evaluation. | Length | 256 | 512 | 1024 | 2048 | 4096 | |------------------------|---------------|-------|--------|--------|---------| | Interpolation | Extrapolation | | | | | | arXiv | | | | | | | Transformer | 29.74 | 23.6 | 19.59 | 102.09 | 1240.77 | | Alibi | 26.53 | 21.07 | 17.53 | 15.38 | 16.88 | | Roformer | 25.89 | 20.6 | 17.24 | 49.29 | 199.25 | | LEX Transformer (Ours) | 25.73 | 20.48 | 17.14 | 15.81 | 15.19 | | NarrativeQA | | | | | | | Transformer | 16.74 | 14.42 | 13.02 | 58.95 | 574.91 | | Alibi | 15.58 | 13.45 | 12.15 | 11.4 | 12.09 | | Roformer | 15.21 | 13.16 | 11.93 | 20.72 | 35.14 | | LEX Transformer (Ours) | 14.82 | 12.86 | 11.67 | 11.14 | 10.93 | | Hyperparameters | Value | |------------------------|-------------| | Layers | 24 | | Hidden size | 1024 | | FFN inner hidden size | 4096 | | Attention heads | 16 | | Training steps | 50K | | Batch tokens per task | 0.5M | | Adam ϵ | 1e-6 | | Adam β | (0.9, 0.98) | | Learning rate | 3e-4 | | Learning rate schedule | Polynomial | | Warmup steps | 20,000 | | Gradient clipping | 2.0 | | Weight decay | 0.01 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In the final part after conclusion A2. Did you discuss any potential risks of your work? Not applicable. It is fundamental research and not tied to particular applications. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. These tools and models are publicly available and free of use for research purposes. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The use is consistent with their intended use. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 4 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section4.1 and Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section4.1 and Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
lu-etal-2023-survey
A Survey of Deep Learning for Mathematical Reasoning
https://aclanthology.org/2023.acl-long.817
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems in language has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain.
# A Survey Of Deep Learning For Mathematical Reasoning Pan Lu1, Liang Qiu1, Wenhao Yu2, Sean Welleck3∗**, Kai-Wei Chang**1∗ 1UCLA, 2University of Notre Dame, 3University of Washington https://github.com/lupantech/dl4math ## Abstract Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems in language has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain. ## 1 Introduction "The study of mathematics, like the Nile, begins in minuteness but ends in magnificence." - Charles Caleb Colton, English writer Mathematical reasoning is a key aspect of human intelligence that enables us to comprehend and make decisions based on numerical data and language. It is applicable in various fields, including science, engineering, finance, and everyday life, and encompasses a range of abilities, from basic skills such as pattern recognition and numerical operations to more advanced skills like problemsolving, logical reasoning, and abstract thinking. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems in language has been a long-standing focus of research in the fields of machine learning and ∗denotes co-senior authors. natural language processing (NLP), dating back to the 1960s (Feigenbaum et al., 1963; Bobrow, 1964). In recent years, there has been a surge of interest in this area: for instance, the number of papers has grown from approximately 10 in 2018 to 66 in 2022 (see Figure 3 in the Appendix). As deep learning continues to revolutionize NLP tasks such as question answering and machine translation (Sutskever et al., 2014; Devlin et al., 2019), it has also made significant strides in the field of mathematical reasoning (Wang et al., 2017; Yang and Deng, 2019; Geva et al., 2020; Wei et al., 2022). However, despite the impressive capabilities of these models, there is still a lack of a clear taxonomy of the different types of mathematical reasoning tasks and the specific capabilities required of deep learning models to solve them. Previous literature has been limited to the discussion of specific aspects, such as solving math word problems (Bhattacharya, 2017; Zhang et al., 2019; Ughade and Kumbhar, 2019), representing numbers representation (Thawani et al., 2021), or solving informal problems (Meadows and Freitas, 2022). Additionally, with the recent advancements in large language models like GPT-3 (Brown et al., 2020), there is a growing need to understand the capabilities and limitations of these models in the context of mathematical reasoning. This is where a comprehensive survey of this rapidly advancing domain becomes crucial, as it can provide an overview of the current state and limitations of the field, and indicate further research areas. In this paper, we survey over 180 papers from the NLP and AI communities in the field of deep learning for mathematical reasoning. We study various types of mathematical reasoning problems, such as math word problems, theorem proving, geometry problem solving, math question answering, and other quantitative problems (§2, §A). Additionally, we explore different deep learning architectures for mathematical reasoning, including neural networks 14605 ![1_image_0.png](1_image_0.png) | Textual | E.g., MathQA (Amini et al., 2019), SVAMP (Patel et al., 2021) | | |----------------------------------|-------------------------------------------------------------------------------|--------------------------------------------------------------------| | Multimodal | E.g., IconQA (Lu et al., 2021b), TabMWP (Lu et al., 2022b) | | | Formal | E.g., CoqGym (Yang and Deng, 2019) | | | Informal | E.g., NaturalProofs (Welleck et al., 2021) | | | Formal + Informal | E.g., miniF2F+informal (Jiang et al., 2022a) | | | Without Annotations | E.g., GEOS (Seo et al., 2015), GEOS++ (Sachan et al., 2017) | | | With Annotations | E.g., Geometry3K (Lu et al., 2021a), UniGeo (Chen et al., 2022a) | | | Single Benchmark | E.g., DROP (Dua et al., 2019), Mathematics (Saxton et al., 2020) | | | Unified Benchmark | E.g., Lila (Mishra et al., 2022a), TheoremQA (Chen et al., 2023) | | | Diagram | E.g., FigureQA (Kahou et al., 2018), DVQA (Kafle et al., 2018) | | | Finance | E.g., ConvFinQA (Chen et al., 2022c) | | | Science | E.g., ScienceQA (Lu et al., 2022a) | | | Programming | E.g., P3 (Schuster et al., 2021) | | | Seq2Seq-based (§3.1) | E.g., DNS (Wang et al., 2017), AnsRat (Ling et al., 2017) | | | Graph-based (§3.2) | E.g., GTS (Xie and Sun, 2019), Graph2Tree (Li et al., 2020b) | | | Attention-based (§3.3) | E.g., Math-EN (Wang et al., 2018a), GROUP-ATT (Li et al., 2019) | | | Other (§3.4) | E.g., CNNTP (Loos et al., 2017), MathDQN (Wang et al., 2018b) | | | Self-Supervised Learning (§4.1) | E.g., GenBERT (Geva et al., 2020), Minerva (Lewkowycz et al., 2022) | | | Task-specific Fine-tuning (§4.2) | E.g., Scratchpad (Nye et al., 2021), Bhaskara (Mishra et al., 2022a) | | | In-context Learning (§5) | Example Selection (§5.1) | E.g., Few-shot-CoT (Wei et al., 2022), PromptPG (Lu et al., 2022b) | | High-quality Chains (§5.2) | E.g., Self-Consistency (Wang et al., 2023), Least-to-most (Zhou et al., 2023) | | (§3), pre-trained language models (§4), and recent in-context learning for large language models (§5). We also analyze existing benchmarks and find that there is less focus on multi-modal and lowresource settings (§6.1). Our evidence-based studies suggest that current numeracy representations are insufficient and deep learning methods are inconsistent for mathematical reasoning (§6.2). Following this, we suggest future research directions related to generalization and robustness, trustworthy reasoning, learning from feedback, and multimodal mathematical reasoning (§7). ## 2 Mathematical Reasoning Tasks In this section, we briefly introduce different tasks for mathematical reasoning. A detailed summary and discussion of commonly used datasets can be found in Table 7 and Appendix A. Math Word Problem Solving. Developing algorithms to automatically solve math word problems (MWPs) has been of interest to NLP researchers for decades (Feigenbaum et al., 1963; Bobrow, 1964). An example of a MWP is shown in Table 1. A question involves four basic arithmetic operations with single or multiple operation steps. The challenge posed by MWPs lies in the need for language com- Question: Bod has 2 apples and David has 5 apples. How many apples do they have in total? Rationale: x = 2 + 5 Solution: 7 Table 1: A typical math word problem. prehension, semantic parsing, and the application of multiple mathematical reasoning skills. Theorem Proving. Automating theorem proving is a long-standing challenge in AI (Newell et al., 1957; Feigenbaum et al., 1963). The problem is to demonstrate the truth of a mathematical claim (a *theorem*) through a sequence of logical arguments (a *proof*). Theorem proving tests various skills, such as choosing effective multi-step strategies, using background knowledge, and performing symbolic manipulations. Geometry Problem Solving. Automated geometry problem solving (GPS) is also a long-standing mathematical reasoning task (Gelernter et al., 1960; Wen-Tsun, 1986). As shown in Figure 2, a geometry problem consists of a textual description and a diagram. The multimodal inputs describe the entities, attributes, and relationships of geometric elements, and the goal is to find the numeric solution to an unknown variable. ![2_image_0.png](2_image_0.png) Figure 2: An example of geometry problems. Math Question Answering. There is a wide range of question answering (QA) benchmarks that center around mathematical reasoning, which we refer to as math question answering (MathQA). For example, DROP (Dua et al., 2019) is a MathQA dataset that requires discrete reasoning to answer questions such as "Which kicker kicked the most field goals?" over the content of paragraphs. ## 3 Neural Networks For Mathematical Reasoning Neural networks have become a popular tool in the field of mathematical reasoning, mirroring their success in NLP. In recent years, a number of different neural network architectures have been proposed for mathematical reasoning tasks, including Seq2Seq-based networks, graph-based networks, and attention-based networks. These methods are outlined in more detail in Table 8 in the Appendix. ## 3.1 Seq2Seq-Based Networks For Math Sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) neural networks have been successfully applied to mathematical reasoning tasks, such as math word problem solving (Wang et al., 2017), theorem proving (Yang and Deng, 2019), geometry problem solving (Robaidek et al., 2018), and math question answering (Tafjord et al., 2019). A Seq2Seq model uses an encoder-decoder architecture and usually formalizes mathematical reasoning as a sequence generation task. The basic idea behind this approach is to map an input sequence (e.g. a mathematical problem) to an output sequence (e.g. an equation, program, and proof). Common encoders and decoders include Long Short Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997), Gated Recurrent Unit (GRU) (Cho et al., 2014), and their bidirectional variants: BiLSTM and BiGRU. A large amount of work has shown the performance advantage of Seq2Seq models over previous statistical learning approaches (Ling et al., 2017; Wang et al., 2018a; Huang et al., 2018; Wang et al., 2019; Li et al., 2019). ## 3.2 Graph-Based Networks For Math Seq2Seq approaches show their advantages of generating mathematical expressions without relying on hand-crafted features. It is noteworthy that mathematical expressions can be represented as tree-based structures, such as abstract syntax trees (ASTs) and graph-based structures, which capture the structural information in the expressions. However, Seq2Seq methods do not explicitly this important information. To address this limitation, graph-based neural networks have been developed to explicitly model the structure within expressions. Sequence-to-tree (Seq2Tree) models explicitly model the tree structure when encoding the output sequences (Xie and Sun, 2019; Wu et al., 2020; Zaporojets et al., 2021; Qin et al., 2021). For example, Liu et al. (2019a) devise a Seq2Tree model to better use information from an equation's AST. Seq2DAG (Cao et al., 2021), instead, applies a sequence-to-graph (Seq2Graph) framework when generating the equations since the graph decoder is able to extract complex relationships among multiple variables. The graph-based information can also be embedded when encoding the input mathematical sequences (Zhang et al., 2020b; Shen and Jin, 2020; Li et al., 2020b; Wu et al., 2021a). ## 3.3 Attention-Based Networks For Math The attention mechanism has been successfully applied to NLP (Bahdanau et al., 2015) and vision problems (Xu et al., 2015; Woo et al., 2018), taking into account the hidden vectors of the inputs during the decoding processing. Recently, researchers have been exploring its usefulness in mathematical reasoning tasks, as it can be used to identify the most important relationships between mathematical concepts. For instance, MATH-EN (Wang et al., 2018a) is a math word problem solver which benefits from long-distance dependency information learned by self-attention. Attention-based methods have also been applied to other mathematical reasoning tasks such as geometry problems solving (Robaidek et al., 2018; Chen et al., 2021a) and theorem proving (Yang and Deng, 2019). Various attention mechanisms have been studied to extract better representations, such as Group-ATT (Li et al., 2019) which uses different multi-head attention to extract various types of MWP features, and graph attention which is applied to extract knowledgeaware information in (Wu et al., 2020). ## 3.4 Other Neural Networks For Math Deep learning approaches to mathematical reasoning tasks can also make use of other neural networks, such as convolutional neural networks (CNN) and multimodal networks. Some work encodes the input text using a convolutional neural network architecture, giving the model the ability to capture long-term relationships between symbols in the input (Gehring et al., 2017; Wang et al., 2018a,a; Robaidek et al., 2018; Alemi et al., 2016; Loos et al., 2017). For example, the first application of deep neural networks for theorem proving is proposed in (Alemi et al., 2016), which relies on convolutional networks for premise selection. Multimodal mathematical reasoning tasks, such as geometry problem solving and diagram-based mathematical reasoning, are formalized as visual question answer (VQA) problems (Kafle et al., 2018; Chen et al., 2021a; Lu et al., 2021b). In this domain, visual inputs are encoded using ResNet (He et al., 2016) or Faster-RCNN (Ren et al., 2015), while textual representations are obtained via GRU or LTSM. Subsequently, the joint representation is learned using multimodal fusion models, such as BAN (Kim et al., 2018), FiLM (Perez et al., 2018), and DAFA (Gao et al., 2019). Other deep neural network structures can also be used in mathematical reasoning. A Graph Neural Network (GNN) is employed for geometry problem parsing in Zhang et al. (2022), taking advantage of its success in spatial reasoning. WaveNet has been applied to theorem proving (Loos et al., 2017; Bansal et al., 2019), due to its ability to address longitudinal time-series data. Furthermore, Transformers are found to outperform GRU in generating mathematical equations in DDT (Meng and Rumshisky, 2019). Finally, MathDQN (Wang et al., 2018b) is the first work to explore reinforcement learning for math word problem solving, taking advantage of its strong search capabilities. ## 4 Pre-Trained Language Models For Mathematical Reasoning Pre-trained language models (Devlin et al., 2019; Radford et al., 2020; Brown et al., 2020) have demonstrated remarkable performance gains on a wide range of NLP tasks. By pre-training on a large corpus of text, the models learn valuable world knowledge (Guu et al., 2020), which could be applied to downstream tasks. Similar ideas can be applied to math-related problems, and previous work has shown the promising performance of pretrained language models in answering math word problems (Kim et al., 2020), assisting with theorem proving (Wu et al., 2022b), as well as solving other mathematical tasks (Charton, 2022). However, though large language models excel in modeling natural language, there are several challenges to using them for mathematical reasoning. First, pre-trained language models are not specifically trained on mathematical data. This likely contributes to them being less proficient in math-related tasks compared to natural language tasks. There is also less mathematical or scientific data available for large-scale pre-training compared to text data. Second, the size of pre-trained models continues to grow, making it expensive to train the entire model from scratch for specific downstream tasks. Additionally, downstream tasks may deal with different input formats or modalities, such as structured tables (Zhao et al., 2022) or diagrams (Lu et al., 2021b). To address these challenges, researchers have to adjust pre-trained models by finetuning them on downstream tasks or adapting the neural architectures. ## 4.1 Self-Supervised Learning For Math Self-supervised learning is a machine learning approach in which an algorithm learns to perform a task without being explicitly provided with labeled training data. Table 2 provides a list of language models pre-trained with self-supervised tasks for mathematical reasoning. Model scale. There is a clear trend that pre-trained language models have become increasingly larger in the past few years (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020; Radford et al., 2020; Brown et al., 2020). A recent study (Liang et al., 2022a) shows that model scale within a model family reliably predicts model accuracy. The study also mentions an interesting thresholding effect: "all models that win head-to-head model comparisons for accuracy at a rate well above chance are at least 50B parameters". A similar size-growing trend can be observed in the field of mathematical reasoning with pre-trained language models. For example, MWP-BERT (Liang et al., 2022b) uses a backbone of BERT (110M) (Devlin et al., 2019) and RoBERTa (123M) (Liu et al., 2019b) for Math Word Problems. Most recently, Minerva (Lewkowycz et al., 2022), which is based on the PaLM (Chowdhery et al., 2022) pre-trained | Paper | Backbone | Size | Corpus | Pre-training task | |---------------------------------------------------------------------------------|-------------------------|---------------|----------------|---------------------------------| | GPT-f (Polu and Sutskever, 2020) | Transformer (2017) | 774M | Math | Causal language modeling | | LISA (Jiang et al., 2021) | Transformer (2017) | 163M | Math | Causal language modeling | | MATH-PLM (Hendrycks et al., 2021b) | GPT-2 (2020) | 1.5B | Math | Causal language modeling | | MWP-BERT (Liang et al., 2022b) | RoBERTa (2019b) | 123M | Math | 8 numeracy augmented tasks | | TaPEx (Liu et al., 2022b) | BART (2020) | 406M | SQL | Query result generation | | HTPS (Lample et al., 2022) | Transformer (2017) | 600M | Math | Masked Seq2Seq modeling | | Thor (Jiang et al., 2022b) | Transformer (2017) | 700M | Github, arXiv | Causal language modeling | | PACT (Han et al., 2022) | Transformer (2017) | 837M | Math | Masked/Causal language modeling | | Minerva (Lewkowycz et al., 2022) | PaLM (2022) | 540B | Science & Math | Causal language modeling | | GenBERT (Geva et al., 2020) | BERT (2019) | 110M | Number, Text | Masked/Causal language modeling | | NF-NSM (Feng et al., 2021) | RoBERTa (2019b) | 110M | Number | Number prediction | | LIME (Wu et al., 2021d) | Transformer (2017) | 11B | Math | Causal language modeling | | Set (Wu et al., 2022c) | T5 (2020) | 60M | Math | Unique token generation | | Table 2: Comparison of pre-training language models for mathematical reasoning. | | | | | | language model, has a size up to 540B parameters. | Paper | Backbone | Task | | | Pre-training corpus. | There are generally two | | | | | types of pre-training corpus for mathematical language models. (i) Curated datasets from openly accessible sources. For example, Hendrycks et al. (2021b) present the first large-scale mathematics pre-training dataset with step-by-step solutions in natural language and LATEX, called the Auxiliary Mathematics Problems and Solutions (AMPS). AMPS consists of Khan Academy and Mathematica data. Minerva (Lewkowycz et al., 2022) collects a high-quality dataset containing scientific and mathematical data, which contains 38.5B tokens from webpages filtered for mathematical content and from papers submitted to the arXiv preprint server. Thor (Jiang et al., 2022b) pre-trains a language model on the GitHub + arXiv subsets of The Pile (Gao et al., 2020). (ii) Synthetic datasets | EPT (2020) | ALBERT (2019) | MWP | | | Generate & Rank (2021) | BART (2020) | MWP | | | | RPKHS (2021b) | RoBERTa (2019b) | MWP | | | | PatchTRM (2021b) | ResNet+BERT (2019) | MWP | | | | GSM8K-PLM (2021) | GPT-3 (2020) | MWP | | | | BERT-TD+CL (2022b) | BERT (2019) | MWP | | | | DeductReasoner (2022) | RoBERTa (2019b) | MWP | | | | Self-Sampling (2023) | GPT-Neo (2020) | MWP | | | | Bhaskara (2022a) | GPT-Neo (2020) | MWP | | | | miniF2F-PLM (2022) | GPT-f (2020) | TP | | | | NaturalProver (2022a) | GPT-3 (2020) | TP | | | | Inter-GPS (2021a) | BART (2020) | GPS | | | | UniGeo (2022a) | VL-T5 (2021) | GPS | | | | DPE-NGS (2022) | RoBERTa (2019b) | GPS | | | | Aristo (2020) | RoBERTa (2019b) | MathQA | | | | FinQANet (2021c) | RoBERTa (2019b) | MathQA | | | | TAGOP (2021) | RoBERTa (2019b) | MathQA | | | | MT2Net (2022) | RoBERTa (2019b) | MathQA | | | | Scratchpad (2021) | Transformer (2017) | Mixed | | | | LAMT (2022) | Transformer (2017) | Mixed | | | Pre-training corpus. There are generally two types of pre-training corpus for mathematical language models. (i) Curated datasets from openly accessible sources. For example, Hendrycks et al. (2021b) present the first large-scale mathematics pre-training dataset with step-by-step solutions in natural language and LATEX, called the Auxiliary Mathematics Problems and Solutions (AMPS). AMPS consists of Khan Academy and Mathematica data. Minerva (Lewkowycz et al., 2022) collects a high-quality dataset containing scientific and mathematical data, which contains 38.5B tokens from webpages filtered for mathematical content and from papers submitted to the arXiv preprint server. Thor (Jiang et al., 2022b) pre-trains a language model on the GitHub + arXiv subsets of The Pile (Gao et al., 2020). (ii) Synthetic datasets based on templates or interaction with engines. Recent work (Wu et al., 2021d; Krishna et al., 2021; Ri and Tsuruoka, 2022; Anderson and Farrell, 2022; Wu et al., 2022c) shows that pre-training on data that is fully synthetically generated—synthetic pre-training can actually provide substantial gains. Representative work includes TaPEX (Liu et al., 2022b), which obtains a pre-training corpus by automatically synthesizing executable SQL queries and their execution outputs. LISA (Jiang et al., 2021) extracts lemmas and theorems by interacting with the Isabelle standard library and the Archive of Formal Proofs. GenBERT (Geva et al., 2020) generates numerical and textual pre-training datasets based on manually crafted and extracted templates. Pre-training tasks. General pre-training language models have two typical self-supervised learning tasks: (i) Masked Language Modeling (MLM), where it randomly masks a portion of words in each sequence to predict the outcome; (ii) Causal Language Modeling (CLM), where the model is trained to predict the next token in a sequence of tokens. Following the same paradigm, researchers pre-train language models with MLM and CLM tasks on mathematical or scientific corpora for downstream tasks (Polu and Sutskever, 2020; Hendrycks et al., 2021b; Han et al., 2022; Jiang et al., 2022b). There is also recent work that designs customized tasks to inject mathematical reasoning capabilities into language models. For instance, Liang et al. (2022b) pre-train language models with a suite of 8 numeracy-augmented tasks with consideration of reasoning logic and numerical properties. LIME (Wu et al., 2021d) proposes synthetic pretraining tasks to learn three reasoning primitives: deduction, induction, and abduction before learning more complex reasoning skills, which also be regarded as a form of curriculum learning. ## 4.2 Task-Specific Fine-Tuning For Math Task-specific fine-tuning is a technique to improve the performance of a pre-trained language model on a specific task. This is also a common practice when there is not enough data for training the large models from scratch. As shown in Table 3, existing work fine-tunes pre-trained language models on a variety of downstream tasks, such as math word problems (Kim et al., 2020; Shen et al., 2021), MathQA (Zhao et al., 2022), geometry problem solving (Lu et al., 2021a), linear algebra (Charton, 2022), and theorem proving (Welleck et al., 2022a). Apart from fine-tuning the model parameters, some work also uses pre-trained language models as encoders and ensembles them with other modules for downstream tasks (Lu et al., 2021b). ## 5 In-Context Learning For Mathematical Reasoning Large language models (LLMs), such as GPT3 (Brown et al., 2020), have recently revolutionized the field of natural language processing (NLP), especially on account of their powerful few-shot incontext learning capabilities (Brown et al., 2020). In-context Learning (ICL) enables LLMs to perform target tasks by providing some task examples as conditions at inference time, without updating model parameters (Radford et al., 2020; Brown et al., 2020). ICL allows users to quickly build models for new use cases without worrying about fine-tuning and storing a large amount of new parameters for each task, so it is widely used in fewshot settings nowadays (Min et al., 2022). An in-context example typically contains an input-output pair with some prompt words, e.g., Please select the largest number from the list. Input: [2, 4, 1, 5, 8]. Output: 8, and few-shot works by giving multiple examples, and then a final input example, where the model is expected to predict the output. However, such standard few-shot promptings, in which the LLM is given in-context examples of input–output pairs in front of test-time examples, have not yet proved sufficient to achieve high performance on challenging tasks such as mathematical reasoning (Rae et al., 2021). Chain-of-thought prompting (CoT) (Wei et al., 2022) leverages intermediate natural language rationales as prompts to enable LLMs to first generate reasoning chains and then predict an answer for an input question. For example, a CoT prompt for solving the math word problem could be Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. Then, how many tennis balls does Roger have now? Answer: Roger started with 5 balls. 2 cans of 3 tennis balls each are 6 tennis balls. 5 + 6 = 11. The answer is 11. Apart from Kojima et al. (2022) showing that LLMs are decent zero-shot reasoners when given the "Let's think step by step!" prompt, most of the recent work has focused on how to improve chainof-thought reasoning under the few-shot setting. This work is mainly divided into two parts, (i) selecting better in-context examples and (ii) creating better reasoning chains. ## 5.1 In-Context Example Selection Early chain-of-thought work randomly or heuristically selects in-context examples. However, recent studies have shown that this type of few-shot learning can be highly unstable across different selections of in-context examples (Rubin et al., 2022; Liu et al., 2022a). Therefore, which incontext reasoning examples make the most effective prompts is still an unknown problem in the literature. To address the limitation, recent work has investigated various methods to optimize the in-context examples selection process (Rubin et al., 2022; Zhang et al., 2023; Lu et al., 2022b; Yu et al., 2023; Fu et al., 2023). For example, Rubin et al. (2022) attempt to address this issue by retrieving semantically similar examples. In addition, Fu et al. (2023) propose complexity-based prompting, which chooses examples with complex reasoning chains, i.e., chains with more reasoning steps, as the prompt. PromptPG (Lu et al., 2022b) learns to select optimal in-context examples via reinforcement learning (RL) from a candidate pool. ## 5.2 High-Quality Reasoning Chains Early chain of thought work (e.g., Wei et al. (2022)) mainly relies on a single human-annotated reasoning chain as a prompt. However, manually creating reasoning chains has two disadvantages. First, as tasks become more complex, current models may not be sufficient to learn to perform all necessary reasoning steps and cannot easily generalize to different tasks. Second, a single decoding process is vulnerable to incorrect inference steps, leading to an incorrect prediction as the final answer. To address this limitation, recent studies mainly fo- | Models | Engine | ICL | Rationale | Rationale | Post method | |------------------------------------------|--------------|------------|-------------|----------------|------------------| | (best performed) | source | type | source | | | | Few-shot-CoT (Wei et al., 2022) | PaLM (540B) | Random | Language | Hand-crafted | - | | Self-Consistency-CoT (Wang et al., 2023) | Codex (175B) | Random | Language | Hand-crafted | Self-consistency | | Least-to-most CoT (Zhou et al., 2023) | Codex (175B) | Random | Language | Hand-crafted | - | | PromptPG-CoT (Lu et al., 2022b) | GPT-3 (175B) | RL | Language | Hand-crafted | - | | Retrieval-CoT (Zhang et al., 2023) | GPT-3 (175B) | Retrival | Language | Auto-generated | - | | Auto-CoT (Zhang et al., 2023) | Codex (175B) | Clustering | Language | Auto-generated | - | | Complexity-CoT (Fu et al., 2023) | GPT-3 (175B) | Complexity | Language | Hand-crafted | Self-consistency | | Few-shot-PoT (Chen et al., 2022b) | GPT-3 (175B) | Random | Code | Hand-crafted | - | cus on two aspects, (i) hand-crafting more complex demonstrations, which we refer to as *process-based* approaches (Zhou et al., 2023; Chen et al., 2022b), (ii) leveraging ensemble-like methods, which we refer to as *outcome-based approaches* (Wang et al., 2023; Li et al., 2022a). Process-based approaches aim to improve the chain-of-thought reasoning quality, especially for complex reasoning tasks. In least-to-most prompting (Zhou et al., 2023), the problem-solving process is implemented through two-stage prompting: (i) reducing a complex problem into a list of subproblems; (ii) solving these sub-problems sequentially, so that solving a given sub-problem is facilitated by the answers to previously solved subproblems. Similarly, Khot et al. (2022) leverage diverse decomposition structures and use different prompts to answer each sub-question. Apart from these multi-step reasoning methods, Chen et al. (2022b); Gao et al. (2022) propose programof-thoughts (PoT), an alternative solution that uses large language models to express the reasoning process as a program. The computation is then relegated to an external computer, which executes the generated programs to derive the answer. A more recent work, Chameleon (Lu et al., 2023), integrates different tools to enhance the abilities of LLMs for compositional reasoning. Outcome-based approaches acknowledge the potential incorrectness of an individual reasoning path, and instead use multiple reasoning paths (Wang et al., 2023; Li et al., 2022a). Selfconsistency (Wang et al., 2023) generates a set of reasoning paths by sampling from the language model, and marginalizes out the reasoning paths by choosing the most common answer. In addition to using sampling with a single prompt to produce multiple reasoning paths, Li et al. (2022a) propose to introduce diverse prompts through "selfteaching", as a complementary solution to produce a higher degree of diversity. ## 6 Discussion And Findings 6.1 Analysis Of Benchmarks The Multi-Modal Setting Is Underexplored But is gaining increasing attention. Most existing benchmarks for mathematical reasoning have targeted the textual-only modality. However, visual elements can provide a rich source of quantitative information, making multi-modal datasets beneficial for reasoning over quantitative relations in natural images (Lu et al., 2022a), abstract diagrams (Lu et al., 2021b), figures (Kahou et al., 2018), and charts (Kafle et al., 2018). Tables, which are commonly found in daily documents and contain hierarchically structured information, have also been the focus of tasks that require quantitative reasoning over textual and tabular context (Chen et al., 2021c; Zhu et al., 2021; Zhao et al., 2022; Lu et al., 2022b). In addition, recent datasets have been developed for mathematical reasoning grounded on conversations (Sun et al., 2019; Zhang et al., 2021; Chen et al., 2022c), as well as reports (Chen et al., 2022c). ## Pioneering Work Is Emerging In The Exploration of low-resource settings. Despite the creation of various datasets, mathematical reasoning in lowresource settings remains largely under-explored. Pioneering research has developed mathematical reasoning benchmarks for financial (Chen et al., 2021c; Zhu et al., 2021; Zhao et al., 2022) and scientific domains (Lu et al., 2022a). Additionally, there have been attempts to build non-English datasets for Chinese (Wang et al., 2017; Qin et al., 2020; Yu et al., 2021a) and Arabic (Alghamdi et al., 2022) for mathematical reasoning. Diverse rationale annotations have been widely explored. Complex reasoning usually involves multiple steps to arrive at the final answer. To bridge this gap, datasets annotated with intermediate rationales such as logic forms (Tafjord et al., | T5 | UnifiedQA | GPT-3 | GPT-3 | | |--------------------------|-------------|-----------------------------|-------------|-------------| | (Large) | (Large) | (davinci-002) (davinci-003) | | | | 3 balls + 5 balls = | ✗ | 5 balls | 8 balls | 8 balls | | 23 balls + 145 balls = | ✗ | ✗ | 58 balls | 168 balls | | 23 balls + 1,855 balls = | ✗ | ✗ | 2,878 balls | 2,988 balls | Table 5: Language models struggle with large numbers. 2019; Lu et al., 2021a), programs (Amini et al., 2019; Chen et al., 2021c,a; Cao and Xiao, 2022; Chen et al., 2022a), and reasoning graphs (Zhang et al., 2021) have been proposed to train models for complex reasoning tasks. Python programs are used as reasoning annotations in (Austin et al., 2021; Mishra et al., 2022a) due to their enhanced accessibility and readability. To imitate the reasoning process of a human, a more recent trend is to annotate solutions in natural language (Ling et al., 2017; Cobbe et al., 2021; Lu et al., 2022b; Hendrycks et al., 2021b; Lu et al., 2022a). ## 6.2 Analysis Of Deep Learning Methods Is the current representation of numeracy sufficient? The standard practice for deep learning techniques is to treat numbers in the same way as words. Early neural network methods create a vocabulary that maps input words and numbers to token IDs, resulting in less frequent numbers being collapsed into an "UNK" token. Recent language models use subword tokenization techniques (Wu et al., 2016; Sennrich et al., 2016) to split numbers into atomic tokens. Recent studies have shown that these tokenization approaches are suboptimal (Wallace et al., 2019; Lin et al., 2020; Zhang et al., 2020d; Thawani et al., 2022). Two numbers on the same or close number line could have surface forms with no shared common tokens. For example, a number like 1598 is tokenized as "15" and "98" in GPT-3, while another format like 1, 598 is split as three different tokens: "1", ",", and "598". This lack of consistent representation can make it difficult for deep learning models to effectively process numbers, especially when compared to pure text. The insufficient representations of numbers can lead to out-of-distribution (OOD) problems. Table 5 provides examples of where language models tend to struggle with large numbers. Although increasing model scales could help, even the state-of-the-art large language model GPT-3 performs poorly when reasoning over large numbers. Some recent work suggests that using scientific notation (Zhang et al., 2020d) and digitlevel decomposition (Geva et al., 2020) may be helpful in improving numeracy representation, but | Problems | GPT-3 (text-davinci-002) | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------| | John had 8 balls and he gave 3 to Mary. John has 5 balls. How many balls does John have now? John had 3 apples. John had 8 balls and Mary has 5 balls. he gave 3 to Mary. How many balls does Mary have now? John had 8 balls and he gave 3 to Mary. John has more balls. Who has more balls now? John had 8 balls and he gave 3 to Mary. No, John has 5 balls now. Does John have more balls now? John had 8 balls and he gave 4 to Mary. No, John has 4 balls now. Does John have more balls now? John had 8 balls and he gave 4 to Mary. John has more balls. Who has more balls now? | | Table 6: Examples where large language models are not consistent for mathematical reasoning. this remains an open problem. Are deep learning methods consistent for mathematical reasoning? Recent developments in deep learning have led to impressive results on various mathematical reasoning tasks. The zero-shotCoT Minerva 540B achieves a score of 75.0% on the MMLU-STEM benchmark (Hendrycks et al., 2021a), which assesses multitask reasoning ability in the fields of science, technology, engineering, and mathematics (STEM) at both high school and college levels. Similarly, few-shot-CoT GPT-3 175B achieves a high accuracy of 93.0% on the MultiArith task. However, the question remains as to whether these methods are sufficiently advanced to tackle more complex problems. There is strong evidence that deep learning methods for mathematical reasoning are not robust and susceptible to adversarial attacks (Lin et al., 2020; Patel et al., 2021; Mishra et al., 2022b,a; Welleck et al., 2022b). The SVAMP (Patel et al., 2021) dataset is a collection of one-unknown arithmetic word problems up to grade 4, with slight word variations from previous datasets. It is surprising that current state-of-the-art (SOTA) methods perform poorly on this dataset, with Graph2Tree achieving only a 43.8% accuracy and zero-shot-CoT GPT-3 (175B) only reaching 63.7%, which is just above an "F" grade. Table 6 also shows the inconsistent performance of the zero-shot GPT-3 model in scenarios with slightly different descriptions, while human performance remains unchanged. This indicates a lack of consistency in the mathematical reasoning ability of SOTA large language models. ## 7 Future Work 7.1 Generalization And Robustness Despite impressive progress, neural models commonly display generalization and robustness failures on reasoning tasks. For example, above we discussed difficulties in generalizing to larger numbers (Table 5) or remaining robust to nearby problems (Table 6), while others identify failures in generalizing to longer problems than those observed in training (e.g., Anil et al. (2022)). One direction is to explore new inference-time (Jung et al., 2022; Mitchell et al., 2022) or fine-tuning (Anil et al., 2022) strategies. Another aspect of generalization relates to the role of *memorization*. For example, is the ability to produce a complex solution dependent on seeing many similar solutions during training, or even on memorizing the solution? Term frequency in the pretraining corpus is known to impact accuracy in simple arithmetic tasks (Razeghi et al., 2022) or factual question answering (Kandpal et al., 2022). On the other hand, Lewkowycz et al. (2022) did not find evidence of memorization in complex outputs, yet their training set and model are not available for inspection. Gaining a full understanding of these factors for complex problems and outputs (e.g., multi-step solutions or proofs) requires more analysis, as well as accessible datasets and models. ## 7.2 Trustworthy Reasoning Recent advances in language models have demonstrated their powerful capabilities for mathematical reasoning. However, due to the potential for generating ungrounded answers (Nakano et al., 2021), users can't always trust the predicted outcomes or have to verify then with extra efforts. Even with recent prompting strategies that provide rationales before making predictions (Wei et al., 2022), language models can still hallucinate statements, produce flawed reasoning, and output wrong answers. Consequently, novel approaches that enable more reliable reasoning are needed urgently. Some potential directions for this include: (i) using language models to provide evidence, such as theorems, to support the reasoning process; (ii) incorporating a mechanism that makes a judgment when the model is unsure of the answer; and (iii) using a model itself or another module to detect and locate mistakes in a model's reasoning. ## 7.3 Learning From Feedback Another important direction to further improve language models for mathematical reasoning is to let the model learn from feedback. Such a process makes the continual improvement of models' output quality and safety possible. An example is using reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022) to align language models with instructions. The idea is to let humans rank the generated outputs of language models and use the learned reward function to finetune the language model with policy gradient (Ouyang et al., 2022; Glaese et al., 2022; Qiu et al., 2022a). In the context of mathematical reasoning, feedback does not necessarily come from humans directly. The outcome of a theorem-proof engine (Jiang et al., 2021; Wu et al., 2021d, 2022c) or the execution result of model-generated scripts can also be used as the reward source (Polu and Sutskever, 2020). ## 7.4 Multi-Modal Mathematical Reasoning In recent years, there has been growing interest in multi-modal mathematical reasoning, which involves using multiple sources of information, such as text, tables, natural images, and diagrams (Kahou et al., 2018; Kafle et al., 2018; Lu et al., 2021b, 2022b). However, currently available datasets in this domain tend to be small (Zhao et al., 2022), generated from templates (Kahou et al., 2018), or focus on specific topics (Lu et al., 2021a; Chen et al., 2022a). One line of current research involves applying VQA-based frameworks to analyze figures and plots, but this approach can result in significant semantic gaps due to the fact that most VQA models are trained on natural images. One potential direction for future work is to enhance the ability of multi-modal mathematical reasoning systems to tackle more complex and realistic problems. This may involve creating unified models for interpreting and integrating different modalities, as well as developing better evaluation benchmarks to assess the performance of these systems. ## 8 Conclusion In this paper, we present a comprehensive survey of deep learning for mathematical reasoning. We review the various tasks, datasets, and deep learning approaches. We also identify several gaps in the existing datasets and methods. Finally, we outline directions for future research and highlight the potential for further exploration in this field. Our goal with this paper is to provide a comprehensive and useful resource for readers interested in the development of deep learning for mathematical reasoning. To aid in this effort, we have created a reading list that will be continually updated in a GitHub repository at https://github.com/lupantech/dl4math. ## Limitations One limitation of our survey work is that it is focused on the intersection of mathematical reasoning and deep learning over the past decade, which may not encompass the entire field and its history. Additionally, our evaluation of existing benchmarks and methods is based on a curated set of papers and may not fully represent the state of the art in the field. Furthermore, due to the fast-paced nature of the field, our survey may not reflect the latest developments and advancements which may have come out close to or after the survey was conducted. Despite these limitations, our survey still provides a valuable overview of the current state and key trends in the field of mathematical reasoning and deep learning, and can serve as a valuable resource for researchers and practitioners working in this field. ## Broader Impact Our survey paper on the intersection of mathematical reasoning and deep learning has the potential to significantly impact the field of artificial intelligence. By providing a comprehensive overview of the key tasks, datasets, and methods that have been developed in the past decade, we give researchers and practitioners a clear understanding of the current state-of-the-art and help them make informed decisions about their own research. Additionally, by evaluating existing benchmarks and methods and discussing future research directions, we aim to identify gaps in the current state of the art and guide future research and development efforts towards more advanced and effective mathematical reasoning systems. Overall, our survey has the potential to contribute to the advancement of mathematical reasoning and deep learning, and have a profound impact on machine learning and natural language processing. ## References Alexander A. Alemi, François Chollet, Niklas Een, Geoffrey Irving, Christian Szegedy, and Josef Urban. 2016. Deepmath - deep sequence models for premise selection. Advances in neural information processing systems (NeurIPS), 29. Reem Alghamdi, Zhenwen Liang, and Xiangliang Zhang. 2022. Armath: a dataset for solving arabic math word problems. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference (LREC)*, pages 351–362. Chris Alvin, Sumit Gulwani, Rupak Majumdar, and Supratik Mukhopadhyay. 2017. Synthesis of solutions for shaded area geometry problems. In The Thirtieth International Flairs Conference. Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2357–2367. Connor Anderson and Ryan Farrell. 2022. Improving fractal pre-training. In *Proceedings of the IEEE/CVF* Winter Conference on Applications of Computer Vision, pages 1300–1309. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In *Proceedings of the IEEE conference on computer vision* and pattern recognition (CVPR), pages 6077–6086. Cem Anil, Yuhuai Wu, Anders Johan Andreassen, Aitor Lewkowycz, Vedant Misra, Vinay Venkatesh Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan Dyer, and Behnam Neyshabur. 2022. Exploring length generalization in large language models. In Advances in Neural Information Processing Systems (NeurIPS). Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In *International Conference on Learning Representations (ICLR)*. Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. 2019. Holist: An environment for machine learning of higher order logic theorem proving. In International Conference on Machine Learning (ICML), pages 454–463. PMLR. Bruno Barras, Samuel Boutin, Cristina Cornes, Judicaël Courant, Yann Coscoy, David Delahaye, Daniel de Rauglaudre, Jean-Christophe Filliâtre, Eduardo Giménez, Hugo Herbelin, et al. 1999. The coq proof assistant reference manual. *INRIA, version*, 6(11). Taylor Berg-Kirkpatrick and Daniel Spokoyny. 2020. An empirical investigation of contextualized number prediction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4754–4764. Arindam Bhattacharya. 2017. A survey of question answering for math and science problem. *arXiv* preprint arXiv:1705.04530. Daniel G Bobrow. 1964. Natural language input for a computer problem solving system. AI Technical Reports. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems (NeurIPS), 33:1877–1901. Jie Cao and Jing Xiao. 2022. An augmented benchmark dataset for geometric question answering through dual parallel text encoding. In Proceedings of the 29th International Conference on Computational Linguistics (COLING), pages 1511–1520. Yixuan Cao, Feng Hong, Hongwei Li, and Ping Luo. 2021. A bottom-up dag structure extraction model for math word problems. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 39–46. François Charton. 2022. Linear algebra with transformers. *Transactions on Machine Learning Research*. Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin, Chongyu Chen, and Xiaodan Liang. 2022a. Unigeo: Unifying geometry logical reasoning via reformulating mathematical expression. In *The 2022 Conference on Empirical Methods in Natural Language* Processing (EMNLP). Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric Xing, and Liang Lin. 2021a. Geoqa: A geometric question answering benchmark towards multimodal numerical reasoning. In *Findings of the* Association for Computational Linguistics (ACL), pages 513–523. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021b. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. 2022b. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. *arXiv preprint* arXiv:2211.12588. Wenhu Chen, Ming Yin, Max Ku, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi Wang, and Pan Lu. 2023. Theoremqa: A theoremdriven question answering dataset. *arXiv preprint* arXiv:2305.12524. Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan R Routledge, et al. 2021c. Finqa: A dataset of numerical reasoning over financial data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3697–3711. Zhiyu Chen, Shiyang Li, Charese Smiley, Zhiqiang Ma, Sameena Shah, and William Yang Wang. 2022c. Convfinqa: Exploring the chain of numerical reasoning in conversational finance question answering. arXiv preprint arXiv:2210.03849. Ting-Rui Chiang and Yun-Nung Chen. 2019. Semantically-aligned equation generation for solving and reasoning math word problems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2656–2668. Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation. In *Proceedings of the 38th International Conference on Machine Learning (ICML)*, pages 1931– 1942. Kyunghyun Cho, Bart van Merrienboer Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In *Proceedings of the* 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734. Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong Zhang. 1996. Automated generation of readable proofs with geometric invariants. *Journal of Automated Reasoning*, 17(3):325–347. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Peter Clark, Oren Etzioni, Tushar Khot, Daniel Khashabi, Bhavana Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, Niket Tandon, et al. 2020. From 'f'to 'a'on the ny regents science exams: An overview of the aristo project. AI Magazine, 41(4):39–53. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. *arXiv preprint* arXiv:2110.14168. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 4171–4186. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT), pages 2368–2378. Edward A Feigenbaum et al. 1963. *Computers and* thought. McGraw-Hill. Yu Feng, Jing Zhang, Xiaokang Zhang, Lemao Liu, Cuiping Li, and Hong Chen. 2021. Injecting numerical reasoning skills into knowledge base question answering models. *arXiv preprint arXiv:2112.06109*. Deborah Ferreira and André Freitas. 2020a. Natural language premise selection: Finding supporting statements for mathematical text. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 2175–2182. Deborah Ferreira and André Freitas. 2020b. Premise selection in natural language mathematical texts. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 7365–7374. Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. 2023. Complexity-based prompting for multi-step reasoning. In *International Conference on* Learning Representations (ICLR). Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020. The pile: An 800gb dataset of diverse text for language modeling. *arXiv preprint arXiv:2101.00027*. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. Pal: Program-aided language models. *arXiv preprint arXiv:2211.10435*. Peng Gao, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven CH Hoi, Xiaogang Wang, and Hongsheng Li. 2019. Dynamic fusion with intra-and inter-modality attention flow for visual question answering. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6639–6648. Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. 2021. TacticToe: Learning to Prove with Tactics. *Journal of Automated Reasoning*. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In *International conference on machine learning (ICML)*, pages 1243–1252. PMLR. Herbert Gelernter, James R Hansen, and Donald W Loveland. 1960. Empirical explorations of the geometry theorem machine. In Papers presented at the May 3-5, 1960, western joint IRE-AIEE-ACM computer conference, pages 143–149. Mor Geva, Ankit Gupta, and Jonathan Berant. 2020. Injecting numerical reasoning skills into language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 946–958. Kevin Gimpel, Dipanjan Das, and Noah A Smith. 2010. Distributed asynchronous online learning for natural language processing. In *Proceedings of the Fourteenth Conference on Computational Natural Language Learning*, pages 213–222. Amelia Glaese, Nat McAleese, Maja Tr˛ebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. 2022. Improving alignment of dialogue agents via targeted human judgements. *arXiv preprint* arXiv:2209.14375. Adam Grabowski, Artur Korniłowicz, and Adam Naumowicz. 2015. Four decades of mizar. *Journal of* Automated Reasoning, 55(3):191–198. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In *International Conference on Machine Learning (ICML)*, pages 3929– 3938. PMLR. Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. 2022. Proof artifact cotraining for theorem proving with language models. In *International Conference on Learning Representations (ICLR)*. Yihan Hao, Mingliang Zhang, Fei Yin, and Linlin Huang. 2022. Pgdp5k: A diagram parsing dataset for plane geometry problems. In *26th International* Conference on Pattern Recognition (ICPR). Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)*, pages 770–778. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021a. Measuring massive multitask language understanding. In *International Conference on Learning* Representations (ICLR). Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the math dataset. In *35th Conference on Neural Information Processing Systems* (NeurIPS) Track on Datasets and Benchmarks. Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2744–2751. Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Mueller, Francesco Piccinno, and Julian Eisenschlos. 2020. Tapas: Weakly supervised table parsing via pre-training. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics (ACL), pages 4320–4333. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735– 1780. Yining Hong, Qing Li, Daniel Ciao, Siyuan Huang, and Song-Chun Zhu. 2021a. Learning by fixing: Solving math word problems with weak supervision. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 4959–4967. Yining Hong, Qing Li, Ran Gong, Daniel Ciao, Siyuan Huang, and Song-Chun Zhu. 2021b. Smart: A situation model for algebra story problems via attributed grammar. In *AAAI*, pages 13009–13017. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. 2019. Gamepad: A learning environment for theorem proving. In *International Conference on* Learning Representations (ICLR). Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin. 2018. Neural math word problem solver with reinforcement learning. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 213–223. Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian Yin. 2017. Learning fine-grained expressions to solve math word problems. In *Proceedings of Empirical* Methods in Natural Language Processing (EMNLP), pages 805–814. Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset construction and evaluation. In *Proceedings of the 54th* Annual Meeting of the Association for Computational Linguistics (ACL), pages 887–896. Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, and Guillaume Lample. 2022a. Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. In *Submitted to The Eleventh* International Conference on Learning Representations. Albert Qiaochu Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu. 2021. Lisa: Language models of isabelle proofs. In *6th Conference on Artificial Intelligence and Theorem Proving (AITP)*. Albert Qiaochu Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygó´zd´z, Piotr Miłos, Yuhuai Wu, and Mateja Jamnik. 2022b. Thor: ´ Wielding hammers to integrate language models and automated theorem provers. Advances in Neural Information Processing Systems (NeurIPS), 35:8360– 8373. Zhanming Jie, Jierui Li, and Wei Lu. 2022. Learning to reason deductively: Math word problem solving as complex relation extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 5944–5955. Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, and Yejin Choi. 2022. Maieutic prompting: Logically consistent reasoning with recursive explanations. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1266–1279. Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. 2018. Dvqa: Understanding data visualizations via question answering. In *Proceedings of the* IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5648–5656. Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Ákos Kádár, Adam Trischler, and Yoshua Bengio. 2018. Figureqa: An annotated figure dataset for visual reasoning. In International Conference on Learning Representations (ICLR). Cezary Kaliszyk, François Chollet, and Christian Szegedy. 2017. Holstep: A machine learning dataset for higher-order logic theorem proving. In *International Conference on Learning Representations* (ICLR). Ashwin Kalyan, Abhinav Kumar, Arjun Chandrasekaran, Ashish Sabharwal, and Peter Clark. 2021. How much coffee was consumed during emnlp 2019? fermi problems: A new reasoning challenge for ai. In *Proceedings of the 2021 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 7318–7328. Nikhil Kandpal, H. Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2022. Large language models struggle to learn long-tail knowledge. *ArXiv*, abs/2211.08411. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format boundaries with a single qa system. In Findings of the Association for Computational Linguistics (EMNLP), pages 1896–1907. Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2022. Decomposed prompting: A modular approach for solving complex tasks. *arXiv preprint* arXiv:2210.02406. Bugeun Kim, Kyung Seo Ki, Donggeon Lee, and Gahgene Gweon. 2020. Point to the expression: Solving algebraic word problems using the expression-pointer transformer model. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 3768–3779. Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In Advances in Neural Information Processing Systems (NeurIPS), pages 1571–1581. Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In Proceedings of the 38th International Conference on Machine Learning (ICML), pages 5583–5594. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In *36th Conference on Neural Information Processing Systems* (NeurIPS). Rik Koncel-K., Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps: A math word problem repository. In *Proceedings of the* 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), pages 1152– 1157. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. *Transactions of the Association for Computational Linguistics (TACL)*, 3:585–597. Kundan Krishna, Jeffrey Bigham, and Zachary C Lipton. 2021. Does pretraining for summarization require knowledge transfer? In *Findings of the Association* for Computational Linguistics: EMNLP 2021, pages 3178–3189. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 271–281. Guillaume Lample and François Charton. 2020. Deep learning for symbolic mathematics. In International Conference on Learning Representations (ICLR). Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat, Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. 2022. Hypertree proof search for neural theorem proving. *Advances in Neural Information Processing* Systems (NeurIPS), 35:26337–26349. Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan, Bing Tian Dai, Yan Wang, Dongxiang Zhang, and EePeng Lim. 2022. Mwptoolkit: an open-source framework for deep learning-based math word problem solvers. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 13188–13190. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. *arXiv preprint* arXiv:1909.11942. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics* (ACL), pages 7871–7880. Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. In Advances in Neural Information Processing Systems (NeurIPS). Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian Dai, and Dongxiang Zhang. 2019. Modeling intrarelation in math word problems with different functional multi-head attentions. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 6162–6167. Jiwei Li, Alexander H Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. 2017. Dialogue learning with human-in-the-loop. In *International Conference on Learning Representations* (ICLR). Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2020a. What does bert with vision look at? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 5265–5275. Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu, Fengyuan Xu, and Sheng Zhong. 2020b. Graphto-tree neural networks for learning structured inputoutput translation with applications to semantic parsing and math word problem. In *Findings of the Association for Computational Linguistics (EMNLP)*, pages 2841–2852. Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C Paulson. 2021. Isarstep: a benchmark for high-level mathematical reasoning. In International Conference on Learning Representations (ICLR). Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2022a. On the advance of making language models better reasoners. arXiv preprint arXiv:2206.02336. Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou, Chao Li, Hongzhi Liu, and Yunbo Cao. 2022b. Seeking patterns, not just memorizing procedures: Contrastive learning for solving math word problems. In Findings of the Association for Computational Linguistics (ACL), pages 2486–2496. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022a. Holistic evaluation of language models. *arXiv preprint arXiv:2211.09110*. Percy Liang and Dan Klein. 2009. Online em for unsupervised models. In *Proceedings of human language* technologies: The 2009 annual conference of the North American chapter of the association for computational linguistics (NAACL), pages 611–619. Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin, Yunshi Lan, Jie Shao, and Xiangliang Zhang. 2022b. Mwp-bert: Numeracy-augmented pre-training for math word problem solving. In *Findings of the Association for Computational Linguistics (NAACL)*, pages 997–1009. Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. 2020. Birds have four legs?! numersense: Probing numerical commonsense knowledge of pretrained language models. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6862–6868. Xin Lin, Zhenya Huang, Hongke Zhao, Enhong Chen, Qi Liu, Hao Wang, and Shijin Wang. 2021. Hms: A hierarchical solver with dependency-enhanced understanding for math word problem. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 4232–4240. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 158–167. Jiachang Liu, Dinghan Shen, Yizhe Zhang, William B Dolan, Lawrence Carin, and Weizhu Chen. 2022a. What makes good in-context examples for gpt-3? In *Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures*, pages 100–114. Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. 2022b. TAPEX: Table pre-training via learning a neural SQL executor. In *International Conference on Learning* Representations. Qianying Liu, Wenyu Guan, Sujian Li, Fei Cheng, Daisuke Kawahara, and Sadao Kurohashi. 2020. Reverse operation based data augmentation for solving math word problems. *IEEE Transactions on Audio,* Speech and Language Processing. Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke Kawahara. 2019a. Tree-structured decoding for solving math word problems. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pages 2370–2379. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. 2017. Deep network guided proof search. *arXiv preprint arXiv:1701.06972*. Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. 2021a. Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning. In The 59th Annual Meeting of the Association for Computational Linguistics (ACL). Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, KaiWei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022a. Learn to explain: Multimodal reasoning via thought chains for science question answering. In *The 36th Conference on Neural Information Processing Systems (NeurIPS)*. Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, KaiWei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842. Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. 2022b. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In International Conference on Learning Representations (ICLR). Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. 2021b. Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning. In *The 35th Conference on Neural Information* Processing Systems (NeurIPS) Track on Datasets and Benchmarks. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022c. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 8086–8098. The mathlib Community. 2020. The lean mathematical library. In *CPP 2020 - Proceedings of the 9th ACM* SIGPLAN International Conference on Certified Programs and Proofs, co-located with POPL 2020. Jordan Meadows and Andre Freitas. 2022. A survey in mathematical language processing. arXiv preprint arXiv:2205.15231. Norman D. Megill and David A. Wheeler. 2019. Metamath: A Computer Language for Mathematical Proofs. Lulu Press, Morrisville, North Carolina. http://us.metamath.org/downloads/metamath.pdf. Yuanliang Meng and Anna Rumshisky. 2019. Solving math word problems with double-decoder transformer. *arXiv preprint arXiv:1908.10924*. Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing english math word problem solvers. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 975–984. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? *Proceedings* of Empirical Methods in Natural Language Processing (EMNLP). Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao. 2021. Deep learning based text classification: a comprehensive review. ACM Computing Surveys (CSUR), 54(3):1–40. Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. 2022a. Lila: A unified benchmark for mathematical reasoning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP). Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. 2022b. Numglue: A suite of fundamental yet challenging mathematical reasoning tasks. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)*, pages 3505–3523. Eric Mitchell, Joseph J. Noh, Siyan Li, William S. Armstrong, Ananth Agarwal, Patrick Liu, Chelsea Finn, and Christopher D. Manning. 2022. Enhancing selfconsistency and performance of pretrained language models with nli. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob von Raumer. 2015. The lean theorem prover (system description). In *International Conference on Automated Deduction*, pages 378–388. Springer. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted questionanswering with human feedback. arXiv preprint arXiv:2112.09332. Allen Newell, John Clifford Shaw, and Herbert A Simon. 1957. Empirical explorations of the logic theory machine: A case study in heuristic. In Proceedings of the Western Joint Computer Conference, IRE-AIEEACM 1957. Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Oleksandr Polozov, Christopher Meek, Dragomir Radev, and Jianfeng Gao. 2023. Learning from self-sampled correct and partially-correct programs. In International Conference on Learning Representations (ICLR). Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language models. *arXiv preprint arXiv:2112.00114*. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems (NeurIPS). Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are nlp models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HIT), pages 2080– 2094. Lawrence C. Paulson. 1994. *Isabelle - A Generic Theorem Prover (with a contribution by T. Nipkow)*, volume 828 of *Lecture Notes in Computer Science*. Springer. Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. 2018. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya Sutskever. 2023. Formal mathematics statement curriculum learning. In *International Conference on Learning* Representations (ICLR), volume abs/2202.01344. Stanislas Polu and Ilya Sutskever. 2020. Generative language modeling for automated theorem proving. arXiv preprint arXiv:2009.03393. Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng Tang, and Liang Lin. 2021. Neural-symbolic solver for math word problems with auxiliary tasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL), pages 5870–5881. Jinghui Qin, Lihui Lin, Xiaodan Liang, Rumin Zhang, and Liang Lin. 2020. Semantically-aligned universal tree-structured solver for math word problems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3780–3789. Liang Qiu, Yizhou Zhao, Jinchao Li, Pan Lu, Baolin Peng, Jianfeng Gao, and Song-Chun Zhu. 2022a. Valuenet: A new dataset for human value driven dialogue system. In *Proceedings of the AAAI Conference on* Artificial Intelligence (AAAI), pages 2468–2484. Liang Qiu, Yizhou Zhao, Yuan Liang, Pan Lu, Weiyan Shi, Zhou Yu, and Song-chun Zhu. 2022b. Towards socially intelligent agents with mental state transition and human value. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 146–158. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872– 1897. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2020. Language models are unsupervised multitask learners. OpenAI Blog. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research* (JMLR), 21:1–67. Abhilasha Ravichander, Aakanksha Naik, Carolyn Rose, and Eduard Hovy. 2019. Equate: A benchmark evaluation framework for quantitative reasoning in natural language inference. In *Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)*, pages 349–361. Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. 2022. Impact of pretraining term frequencies on few-shot numerical reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 840–854. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems (NeurIPS), 28. Ryokan Ri and Yoshimasa Tsuruoka. 2022. Pretraining with artificial language: Studying transferable knowledge in language models. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (ACL), pages 7302–7315. Benjamin Robaidek, Rik Koncel-Kedziorski, and Hannaneh Hajishirzi. 2018. Data-driven methods for solving algebra word problems. *arXiv preprint* arXiv:1804.10718. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In *Proceedings of the 2015* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1743–1752. Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application to arithmetic word problem solving. In *Proceedings of the AAAI Conference on* Artificial Intelligence (AAAI). Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. *Transactions of the Association for Computational Linguistics (TACL)*, 6:159–172. Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. *Transactions of the Association for Computational Linguistics (TACL)*, 3:1–13. Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning to retrieve prompts for in-context learning. North American Chapter of the Association for Computational Linguistics (NAACL). Mrinmaya Sachan, Kumar Dubey, and Eric Xing. 2017. From textbooks to knowledge: A case study in harvesting axiomatic knowledge from textbooks to solve geometry problems. In *Proceedings of Empirical* Methods in Natural Language Processing (EMNLP), pages 773–784. Mrinmaya Sachan and Eric Xing. 2017. Learning to solve geometry problems from natural language demonstrations in textbooks. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics, pages 251–261. David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2020. Analysing mathematical reasoning abilities of neural models. In International Conference on Learning Representations (ICLR). Tal Schuster, Ashwin Kalyan, Alex Polozov, and Adam Tauman Kalai. 2021. Programming puzzles. In *Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks* Track. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (ACL), pages 1715–1725. Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In *Proceedings of Empirical Methods in Natural* Language Processing (EMNLP), pages 1466–1476. Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. 2021. Generate & rank: A multi-task framework for math word problems. In *Findings of the Association for Computational Linguistics (EMNLP)*, pages 2269–2279. Yibin Shen and Cheqing Jin. 2020. Solving math word problems with multi-encoders and multi-decoders. In Proceedings of the 28th International Conference on Computational Linguistics (COLING), pages 2924– 2934. Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In Proceedings of the 2015 conference on empirical methods in natural language processing (EMNLP), pages 1132–1142. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. In *36th International Conference on Machine Learning (ICML)*. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. Dream: A challenge data set and models for dialogue-based reading comprehension. *Transactions of the Association for Computational Linguistics (TACL)*, 7:217–231. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems (NeurIPS), 27. Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal. 2019. Quarel: A dataset and models for answering questions about qualitative relationships. In *Proceedings of the AAAI Conference* on Artificial Intelligence (AAAI), pages 7063–7071. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL), pages 1556–1566. Avijit Thawani, Jay Pujara, and Ashwin Kalyan. 2022. Estimating numbers without regression. In *36th Conference on Neural Information Processing Systems* (NeurIPS 2022) Workshop on MATH-AI. Avijit Thawani, Jay Pujara, Pedro A Szekely, and Filip Ilievski. 2021. Representing numbers in nlp: a survey and a vision. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HIT), pages 644–656. Shounaak Ughade and Satish Kumbhar. 2019. Survey on mathematical word problem solving using natural language processing. In 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT), pages 1–5. IEEE. Shyam Upadhyay and Ming-Wei Chang. 2015. Draw: A challenging and diverse algebra word problem set. Technical report, Citeseer. Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and dataset for algebra word problems. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (ACL), pages 494–504. Josef Urban. 2006. Mptp 0.2: Design, implementation, and initial experiments. *Journal of Automated* Reasoning, 37(1):21–43. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems (NeurIPS)*, pages 5998–6008. Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do nlp models know numbers? probing numeracy in embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307–5315. Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang, and Xiaojiang Liu. 2018a. Translating a math word problem to a expression tree. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1064–1069. Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu, Lianli Gao, Bing Tian Dai, and Heng Tao Shen. 2019. Template-based math word problem solvers with recursive neural networks. In *Proceedings of the AAAI* Conference on Artificial Intelligence (AAAI), pages 7144–7151. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In International Conference on Learning Representations (ICLR). Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 845–854. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems (NeurIPS). Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and Kyunghyun Cho. 2021. Naturalproofs: Mathematical theorem proving in natural language. In *Thirty-fifth Conference on Neural* Information Processing Systems (NeurIPS) Datasets and Benchmarks Track. Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. 2022a. Naturalprover: Grounded mathematical proof generation with language models. In Advances in Neural Information Processing Systems (NeurIPS). Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, and Yejin Choi. 2023. Generating sequences by learning to selfcorrect. In *International Conference on Learning* Representations (ICLR). Sean Welleck, Peter West, Jize Cao, and Yejin Choi. 2022b. Symbolic brittleness in sequence models: on systematic generalization in symbolic mathematics. In *AAAI*. Wu Wen-Tsun. 1986. Basic principles of mechanical theorem proving in elementary geometries. Journal of automated Reasoning, 2(3):221–252. Daniel Whalen. 2016. Holophrasm: a neural automated theorem prover for higher-order logic. arXiv preprint arXiv:1608.02644. Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. 2018. Cbam: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV), pages 3–19. Qinzhuo Wu, Qi Zhang, Jinlan Fu, and Xuan-Jing Huang. 2020. A knowledge-aware sequence-to-tree network for math word problem solving. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7137–7146. Qinzhuo Wu, Qi Zhang, and Zhongyu Wei. 2021a. An edge-enhanced hierarchical graph-to-tree network for math word problem solving. In *Findings of the Association for Computational Linguistics (EMNLP)*, pages 1473–1482. Qinzhuo Wu, Qi Zhang, Zhongyu Wei, and Xuan-Jing Huang. 2021b. Math word problem solving with explicit numerical values. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL), pages 5859–5869. Xingjiao Wu, Luwei Xiao, Yixuan Sun, Junhang Zhang, Tianlong Ma, and Liang He. 2022a. A survey of human-in-the-loop for machine learning. *Future* Generation Computer Systems. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. *arXiv preprint arXiv:1609.08144*. Yuhuai Wu, Albert Jiang, Jimmy Ba, and Roger Baker Grosse. 2021c. Int: An inequality benchmark for evaluating generalization in theorem proving. In International Conference on Learning Representations (ICLR). Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Norman Rabe, Charles E Staats, Mateja Jamnik, and Christian Szegedy. 2022b. Autoformalization with large language models. In Advances in Neural Information Processing Systems (NeurIPS). Yuhuai Wu, Felix Li, and Percy Liang. 2022c. Insights into pre-training via simpler synthetic tasks. *arXiv* preprint arXiv:2206.10139. Yuhuai Wu, Markus N Rabe, Wenda Li, Jimmy Ba, Roger B Grosse, and Christian Szegedy. 2021d. Lime: Learning inductive bias for primitives of mathematical reasoning. In *International Conference* on Machine Learning (ICML), pages 11251–11262. PMLR. Zhipeng Xie and Shichao Sun. 2019. A goal-driven tree-structured neural model for math word problems. In *International Joint Conference on Artificial Intelligence (IJCAI)*, pages 5299–5305. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In *International conference on machine learning (ICML)*, pages 2048–2057. PMLR. Kaiyu Yang and Jia Deng. 2019. Learning to prove theorems via interacting with proof assistants. In *International Conference on Machine Learning (ICML)*, pages 6984–6994. PMLR. Zheng Ye, Shang-Ching Chou, and Xiao-Shan Gao. 2008. An introduction to java geometry expert. In International workshop on automated deduction in geometry, pages 189–195. Springer. Wei Yu, Mengzhu Wang, Xiaodong Wang, Xun Zhou, Yongfu Zha, Yongjian Zhang, Shuyu Miao, and Jingdong Liu. 2021a. Geore: A relation extraction dataset for chinese geometry problems. In 35th Conference on Neural Information Processing Systems (NeurIPS) Workshop on Math AI for Education (MATHAI4ED). Weijiang Yu, Yingpeng Wen, Fudan Zheng, and Nong Xiao. 2021b. Improving math word problems with pre-trained knowledge and hierarchical reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3384–3394. Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate rather than retrieve: Large language models are strong context generators. In *International Conference on Learning Representations (ICLR)*. Klim Zaporojets, Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2021. Solving arithmetic word problems by scoring equations with recursive neural networks. *Expert Systems with* Applications, 174:114704. Dongxiang Zhang, Lei Wang, Luming Zhang, Bing Tian Dai, and Heng Tao Shen. 2019. The gap of semantic parsing: A survey on automatic math word problem solvers. *IEEE transactions on pattern analysis and* machine intelligence, 42(9):2287–2305. Jipeng Zhang, Roy Ka-Wei Lee, Ee-Peng Lim, Wei Qin, Lei Wang, Jie Shao, and Qianru Sun. 2020a. Teacher-student networks with multiple decoders for solving math word problem. In International Joint Conference on Artificial Intelligence (IJCAI). Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan Wang, Jie Shao, and Ee-Peng Lim. 2020b. Graphto-tree learning for solving math word problems. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 3928–3937. Ming-Liang Zhang, Fei Yin, Yi-Han Hao, and ChengLin Liu. 2022. Learning to understand plane geometry diagram. In *36th Conference on Neural Information Processing Systems (NeurIPS) Workshop on* MATH-AI. Qiyuan Zhang, Lei Wang, Sicheng Yu, Shuohang Wang, Yang Wang, Jing Jiang, and Ee-Peng Lim. 2021. Noahqa: Numerical reasoning with interpretable graph question answering dataset. In *Findings of the* Association for Computational Linguistics (EMNLP), pages 4147–4161. Wenhe Zhang, Chi Zhang, Yixin Zhu, and Song-Chun Zhu. 2020c. Machine number sense: A dataset of visual arithmetic problems for abstract and relational reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 1332–1340. Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, and Dan Roth. 2020d. Do language embeddings capture scales? In *Proceedings of the* Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 292–299. Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, and Dan Roth. 2020e. Do language embeddings capture scales? In *Proceedings of the* Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 292–299. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020f. Dialogpt: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2023. Automatic chain of thought prompting in large language models. In *International Conference on Learning Representations (ICLR)*. Wei Zhao, Mingyue Shang, Yang Liu, Liang Wang, and Jingming Liu. 2020. Ape210k: A large-scale and template-rich dataset of math word problems. *arXiv* preprint arXiv:2009.11506. Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang. 2022. Multihiertt: Numerical reasoning over multi hierarchical tabular and textual data. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 6588–6600. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning (ICML), pages 12697–12706. PMLR. Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. 2022. Minif2f: a cross-system benchmark for formal olympiad-level mathematics. In *International* Conference on Learning Representations (ICLR). Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. "Going on a vacation" takes longer than "Going for a walk": A Study of Temporal Commonsense Understanding. In *Proc. of the Conference on* Empirical Methods in Natural Language Processing (EMNLP). Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2023. Leastto-most prompting enables complex reasoning in large language models. In *International Conference* on Learning Representations (ICLR). Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat-Seng Chua. 2021. Tat-qa: A question answering benchmark on a hybrid of tabular and textual content in finance. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-JCNLP), pages 3277–3287. ![20_image_0.png](20_image_0.png) ## A Mathematical Reasoning Datasets In this section, we will examine the various datasets currently available for the study of mathematical reasoning using deep learning methods. A summary of the commonly used datasets in this field can be found in Table 7. ## A.1 Math Word Problem Solving Developing algorithms to solve math word problems (MWPs) automatically has been an interest of NLP researchers for decades (Feigenbaum et al., 1963; Bobrow, 1964). A math word problem (also termed an algebraic or arithmetic word problem) describes a brief narrative that involves characters, entities, and quantities. The mathematical relationship of an MWP can be modeled with a set of equations whose solution reveals the final answer to the question. A typical example is shown in Table 1. A question involves the four basic arithmetic operations of addition, subtraction, multiplication, and division with single or multiple operation steps. The challenge of MWPs for NLP systems lies in the need for language comprehension, semantic parsing, and multiple mathematical reasoning skills. Existing MWP datasets cover grade school problems, which are crawled from online learning websites (Koncel-Kedziorski et al., 2015), collected from textbooks, or manually annotated by human workers (Patel et al., 2021). Early math word problem datasets are relatively small or limited to a small number of operation steps (Hosseini et al., 2014; Kushman et al., 2014; Roy et al., 2015). Some recently curated datasets aim to increase problem diversity and difficulty levels. For example, Ape210K (Zhao et al., 2020) consists of 210k elementary math word problems, which is the largest publicly available. The problems in GSM8K (Cobbe et al., 2021) can involve up to 8 steps to solve. SVAMP (Patel et al., 2021) is a benchmark that tests the robustness of deep learning models to math word problems with simple variations. More recently built datasets involve modalities beyond text. For example, IconQA (Lu et al., 2021b) provides an abstract diagram as a visual context, while TabMWP (Lu et al., 2022b) provides a tabular context for each problem. Most MWP datasets provide annotated equations as a rationale for the solution (e.g., Table 1). To improve the performance and interpretability of the learned solvers, MathQA (Tafjord et al., 2019) is annotated with precise operation programs, and MathQA-Python (Austin et al., 2021) is provided with specific Python programs instead. Another line of datasets annotates the problems with multistep natural language solutions that are regarded as more human-readable (Ling et al., 2017; Cobbe et al., 2021; Lu et al., 2022b). Lila (Mishra et al., 2022a) annotates many of the previously mentioned MWP datasets with Python program rationales. ## A.2 Theorem Proving Recently, there has been increased interest in using language models for theorem proving in formal interactive theorem provers (ITP) (e.g., Polu and Sutskever (2020); Han et al. (2022); Polu et al. (2023); Jiang et al. (2022b,a); Lample et al. (2022)). Example ITPs include Lean (Moura et al., 2015), Isabelle (Paulson, 1994), Coq (Barras et al., 1999), and Metamath (Megill and Wheeler, 2019). To prove a theorem in an ITP, the theorem is stated in the ITP's programming language, then simplified by generating "proof steps" until it is reduced to known facts. The result is a sequence of steps that constitutes a verified proof. Data sources for neural theorem proving in ITPs include interactive learning environments that interface with ITPs, and datasets derived from proofs in ITP libraries. For example, CoqGym (Yang and Deng, 2019) provides an interactive environment and 71K human-written proofs for the Coq ITP. For Isabelle, PISA (Jiang et al., 2021) enables interaction and provides a dataset of 183k proofs mined from the Isabelle standard library and Archive of Formal Proofs. For Lean, LeanStep (Han et al., 2022) provides a dataset of proof-steps from Lean's mathematical library along with auxiliary tasks, while Lean-Gym (Polu et al., 2023) provides an interactive REPL. The miniF2F (Zheng et al., 2022) benchmark aims to provide a shared benchmark across ITPs, consisting of 488 problem statements sourced from mathematical competitions. Other resources provide proxy environments or tasks. For example, INT (Wu et al., 2021c) provide a synthetic proving environment to measure six different types of generalization. Li et al. construct IsarStep using the Isabelle Archive of Formal Proofs, and propose a task of filling in a missing intermediate proposition. Early applications of deep learning for formal theorem proving focus on selecting relevant premises (Alemi et al., 2016). Informal theorem proving presents an alternative medium for theorem proving, in which statements and proofs are written in the mixture of natural language and symbols used in "standard" mathematics (e.g., in LATEX), and are checked for correctness by humans. Early work focuses on selecting relevant premises (Ferreira and Freitas, 2020b,a). Welleck et al. (2021) develop NaturalProofs, a large-scale dataset of 32k informal mathematical theorems, definitions, and proofs, and provide a benchmark for premise selection via retrieval and generation tasks. Welleck et al. (2022a) adapt NaturalProofs for full proof generation, and provide a human evaluation protocol and proxy automatic metrics. An emerging area of research aims to combine elements of informal and formal theorem proving. For example, Wu et al. (2022b) explore translating informal statements into formal statements, while Jiang et al. (2022a) release a new version of the miniF2F benchmark augmented with informal statements and proofs, which we refer to as miniF2F+informal. Jiang et al. (2022a) explore translating provided (or generated) informal proofs into formal proofs. ## A.3 Geometry Problem Solving Automated geometry problem solving (GPS) is also a long-standing AI task in mathematical reasoning research (Gelernter et al., 1960; Wen-Tsun, 1986; Chou et al., 1996; Ye et al., 2008) and has attracted much attention in recent years. Different from a math word problem, a geometry problem consists of a textual description in natural language and a geometric diagram. As shown in Figure 2, the multimodal inputs describe the entities, attributes, and relationships of geometric elements, and the goal is to find the numeric solution to an unknown variable. GPS is a challenging task for deep learning methods due to the complex skills it requires. It involves the ability to parse multimodal information, perform symbolic abstraction, utilize theorem knowledge, and conduct quantitative reasoning. Some early datasets are proposed to facilitate research in this domain (Seo et al., 2015; Alvin et al., 2017; Sachan et al., 2017; Sachan and Xing, 2017). However, these datasets are relatively small or not publicly available, which limits the development of deep learning methods. In response to this limitation, Lu et al. create the Geometry3K dataset, which consists of 3,002 multi-choice geometry problems with unified logic form annotations for the multimodal inputs. More recently, largerscale datasets such as GeoQA (Chen et al., 2021a), GeoQA+ (Cao and Xiao, 2022), and UniGeo (Chen et al., 2022a) have been introduced and are annotated with programs that can be learned by neural solvers and executed to obtain the final answers. ## A.4 Math Question Answering Numerical reasoning is a core ability within human intelligence and plays an important role in many NLP tasks. Aside from theorem proving and gradelevel math word problem solving, there is a wide range of question answering (QA) benchmarks that center around mathematical reasoning. In this work, we refer to these tasks as math question answering (MathQA). A large number of datasets have been presented recently. For example, QuaRel (Tafjord et al., 2019) is a dataset of diverse story questions that involve 19 different types of quantities. McTaco (Zhou et al., 2019) studies temporal commonsense problems, while Fermi (Kalyan et al., 2021) studies Fermi problems whose answers can only be approximately estimated. Recent studies have shown that state-of-the-art mathematical reasoning systems might suffer from brittleness in reasoning, in that the models rely on spurious signals and plug-and-chug calculations in the specific dataset to achieve "satisfactory" performance (Hendrycks et al., 2021b; Mishra et al., 2022b). To address this issue, new benchmarks are proposed from various aspects. The Mathematics dataset (Saxton et al., 2020) consists of many different types of mathematics problems, covering arithmetic, algebra, probability, and calculus. The dataset allows for measuring the algebraic generalization ability of a model. Similarly, MATH (Hendrycks et al., 2021b) consists of challenging competition mathematics to measure the problemsolving ability of models in complex scenarios. Some work incorporates tabular contexts in the question inputs. For example, FinQA (Chen et al., 2021c), TAT-QA (Zhu et al., 2021), and MultiHiertt (Zhao et al., 2022) collect questions that require both table understanding and numeric reasoning to answer. Others, instead, present large-scale unified benchmarks for mathematical reasoning (Mishra et al., 2022b,a; Chen et al., 2023). NumGLUE (Mishra et al., 2022b) is a multi-task benchmark with the goal of evaluating the performance of models on eight different tasks. Mishra et al. 2022a push this direction further and presents Lila, which consists of 23 mathematical reasoning tasks, spanning a wide range of mathematics topics, linguistic complexity, question formats, and background knowledge requirements. ## A.5 Other Quantitative Problems Numbers are an integral part of our daily lives, and we humans reason with numbers in a variety of tasks, such as understanding news, reports, elections, and markets. This has led many in the community to question whether AI systems can effectively perform quantitative reasoning in everyday scenarios. To this end, various benchmarks have been developed to evaluate the capabilities of AI systems in this area. Diagrams, such as figures, charts, and plots, are essential media that convey large amounts of information in a concise way. FigureQA (Kahou et al., 2018), DVQA (Kafle et al., 2018), MNS (Zhang et al., 2020c), PGDP5K (Hao et al., 2022), and GeoRE (Yu et al., 2021a), are released to investigate models' abilities to reason about quantitative relationships among entities grounded in diagrams. NumerSense (Lin et al., 2020), instead, examines whether and to what extent existing pre-trained language models can induce numerical commonsense knowledge. EQUATE (Ravichander et al., 2019) formalizes aspects of quantitative reasoning in a natural language inference framework. Quantitative reasoning can appear frequently in specific domains like finance, science, and programming. For instance, the ConvFinQA (Chen et al., 2022c) targets numerical reasoning over financial reports in a conversational question answering format. ScienceQA (Lu et al., 2022a) involves numerical reasoning in scientific domains, while P3 (Schuster et al., 2021) studies the function inference ability of deep learning models to find a valid input which makes the given program return True. | Dataset | Task | Size | Input | Output | Rationale | Domain | |--------------------------------------------------------------|---------|------------|--------------------|------------------|------------------|----------| | Verb395 (2014) | MWP | 395 | Question | Number | Equation | Math | | Alg514 (2014) | MWP | 514 | Question | Number | Equation | Math | | IL (2015) | MWP | - | Question | Number | Equation | Math | | SingleEQ (2015) | MWP | 508 | Question | Number | Equation | Math | | DRAW (2015) | MWP | 1,000 | Question | Number | Equation | Math | | Dolphin1878 (2015) | MWP | 1,878 | Question | Number | Equation | Math | | Dolphin18K (2016) | MWP | 18,460 | Question | Number | Equation | Math | | MAWPS (2016) | MWP | 3,320 | Question | Number | Equation | Math | | AllArith (2017) | MWP | 831 | Question | Number | Equation | Math | | DRAW-1K (2017) | MWP | 1,000 | Question | Number | Equation | Math | | Math23K (2017) | MWP | 23,162 | Question | Number | Equation | Math | | AQuA (2017) | MWP | 100,000 | Question | Option | Natural language | Math | | Aggregate (2018) | MWP | 1,492 | Question | Number | Equation | Math | | MathQA (2019) | MWP | 37,297 | Question | Number | Program | Math | | ASDiv (2020) | MWP | 2,305 | Question | Number | Equation | Math | | HMWP (2020) | MWP | 5,470 | Question | Number | Equation | Math | | Ape210K (2020) | MWP | 210,488 | Question | Number | Equation | Math | | SVAMP (2021) | MWP | 1,000 | Question | Number | Equation | Math | | GSM8K (2021) | MWP | 8,792 | Question | Number | Natural language | Math | | IconQA (2021b) | MWP | 107,439 | Figure+Question | Option+Text span | ✗ | Math | | MathQA-Python (2021) | MWP | 23,914 | Question | Number | Python program | Math | | ArMATH (2022) | MWP | 6,000 | Question | Number | Equation | Math | | TabMWP (2022b) | MWP | 38,431 | Table+Question | Option+Number | Natural language | Math | | MML (2015) | TP | 57,882 | Statement | Proof steps | ✗ | Math | | HolStep (2017) | TP | 2,209,076 | Statement | Proof steps | ✗ | Math | | Gamepad (2019) | TP | - | Statement | Proof steps | ✗ | Math | | CoqGym (2019) | TP | 71,000 | Statement | Proof steps | ✗ | Math | | HOList (2019) | TP | 29,462 | Statement | Proof steps | ✗ | Math | | IsarStep (2021) | TP | 860,000 | Statement | Proof steps | ✗ | Math | | PISA (2021) | TP | 183,000 | Statement | Proof steps | ✗ | Math | | INT (2021c) | TP | - | Statement | Proof steps | ✗ | Math | | NaturalProofs (2021) | TP | 32,000 | Statement | Proof steps | ✗ | Math | | NaturalProofs-Gen (2022a) | TP | 14,500 | Statement | Proof steps | ✗ | Math | | miniF2F (2022) | TP | 488 | Statement | Proof steps | ✗ | Math | | miniF2F+informal (2022a) | TP | 488 | Statement | Proof steps | ✗ | Math | | LeanStep (2022) | TP | 21,606,000 | Statement | Proof steps | ✗ | Math | | GEOS (2015) | GPS | 186 | Figure+Question | Option | ✗ | Geometry | | GeoShader (2017) | GPS | 102 | Figure+Question | Number | ✗ | Geometry | | GEOS++ (2017) | GPS | 1,406 | Figure+Question | Number | ✗ | Geometry | | GEOS-OS (2017) | GPS | 2,235 | Figure+Question | Option | Demonstration | Geometry | | Geometry3K (2021a) | GPS | 3,002 | Figure+Question | Option | Logical form | Geometry | | GeoQA (2021a) | GPS | 4,998 | Figure+Question | Option | Program | Geometry | | GeoQA+ (2022) | GPS | 12,054 | Figure+Question | Option | Program | Geometry | | UniGeo (2022a) | GPS/TP | 14,541 | Figure+Question | Option | Program | Geometry | | Quarel (2019) | MathQA | 2,771 | Question | Option | Logical form | Math | | McTaco (2019) | MathQA | 13,225 | Text+Question | Option | ✗ | Time | | DROP (2019) | MathQA | 96,567 | Passage+Question | Number+Text span | ✗ | Math | | Mathematics (2020) | MathQA | 2,010,000 | Question | Free-form | Number | Math | | FinQA (2021c) | MathQA | 8,281 | Text+Table+Q | Number | Program | Finance | | Fermi (2021) | MathQA | 11,000 | Question | Number | Program+Fact | Math | | MATH (2021b) | MathQA | 12,500 | Question | Number | Natural language | Math | | TAT-QA (2021) | MathQA | 16,552 | Text+Table+Q | Number+Text span | ✗ | Finance | | AMPS (2021b) | MathQA | 5,000,000 | Question | - | LATEX | Math | | MultiHiertt (2022) | MathQA | 10,440 | Text+Table+Q | Number+Text span | Expression | Finance | | NumGLUE (2022b) | MathQA | 101,835 | Text+Question | Number+Text span | ✗ | Math | | Lila (2022a) | MathQA | 134,000 | Text+Question | Free-form | Python program | Math | | FigureQA (2018) | VQA | 1,000,000+ | Figure+Question | Binary | ✗ | Math | | DVQA (2018) | VQA | 3,487,194 | Figure+Question | Text span | Number+Text span | Math | | DREAM (2019) | ConvQA | 10,197 | Dialog+Question | Option | ✗ | Math | | EQUATE (2019) | NLI | - | Premise+Hypothesis | Binary | ✗ | Math | | NumerSense (2020) | Filling | 13,600 | Masked question | Word | ✗ | Math | | MNS (2020c) | IQ Test | - | Figure | Number | ✗ | Math | | P3 (2021) | Puzzle | 397 | Text | Program | ✗ | Math | | NOAHQA (2021) | ConvQA | 21,347 | Dialog+Question | Text span | Reasoning graph | Math | | ConvFinQA (2022c) | ConvQA | 3,892 | Report+Dialog+Q | Number | Expression | Math | | PGDP5K (2022) | Parsing | 5,000 | Figure+Question | Number | ✗ | Geometry | | GeoRE (2022a) | Parsing | 12,901 | Figure+Question | Number | ✗ | Geometry | | ScienceQA (2022a) | VQA | 21,208 | Context+Question | Option | Natural language | Science | | Table 7: A summarization of mathematical reasoning datasets. | | | | | | | | Paper | Task | Problem | Network | Encod | Decod | ATT Description | | |-----------------------------------|---------|----------------|-------------|----------|---------|-------------------|-------------------------------------------| | DNS (Wang et al., 2017) | MWP | Generation | Seq2Seq | GRU | LSTM | ✗ | The first deep MWP solver | | AnsRat (Ling et al., 2017) | MWP | Generation | Seq2Seq | LSTM | LSTM | ✗ | Trained with staged back-propagation | | Math-EN (Wang et al., 2018a) | MWP | Generation | Seq2Seq | BiLSTM | LSTM | ✔ | A standard Seq2Seq model with attention | | CASS (Huang et al., 2018) | MWP | Generation | Seq2Seq | BiGRU | BiGRU | ✔ | Copy and alignment with RL | | S-Aligned (Chiang and Chen, 2019) | MWP | Generation | Seq2Seq | BiLSTM | LSTM | ✔ | Operating symbols | | T-RNN (Wang et al., 2019) | MWP | Generation | Seq2Seq | BiLSTM | BiLSTM | ✔ | Predicting a tree-structure math template | | GROUP-ATT (Li et al., 2019) | MWP | Generation | Seq2Seq | BiLSTM | LSTM | ✔ | Group attention | | SMART (Hong et al., 2021b) | MWP | Generation | Seq2Seq | - | - | ✗ | Explicitly incorporating values | | SelfAtt (Robaidek et al., 2018) | GPS | Classification | Seq2Seq | BiLSTM | - | ✔ | Multi-hop self-attention | | QuaSP+ (Tafjord et al., 2019) | MathQA | Generation | Seq2Seq | BiLSTM | LSTM | ✗ | Adopting attributed grammar | | AST-Dec (Liu et al., 2019a) | MWP | Generation | Seq2Tree | BiLSTM | Tree | ✔ | Using prefix order decoding | | GTS (Xie and Sun, 2019) | MWP | Generation | Seq2Tree | BiGRU | Tree | ✔ | A goal-driven tree-structured approach | | KA-S2T (Wu et al., 2020) | MWP | Generation | Seq2Tree | BiLSTM | Tree | ✔ | A knowledge-aware method | | TSN-MD (Zhang et al., 2020a) | MWP | Generation | Seq2Tree | BiGRU | Tree | ✔ | A teacher-student network | | T-LSTM (Zaporojets et al., 2021) | MWP | Generation | Seq2Tree | BiLSTM | Tree | ✗ | A child-sum tree-LSTM model | | NT-LSTM (Zaporojets et al., 2021) | MWP | Generation | Seq2Tree | BiLSTM | Tree | ✗ | An N-ary tree-LSTM model | | NS-Solver (Qin et al., 2021) | MWP | Generation | Seq2Tree | BiGRU | Tree | ✔ | A neural-symbolic solver with programs | | NumS2T (Wu et al., 2021b) | MWP | Generation | Seq2Tree | BiLSTM | Tree | ✔ | Explicitly incorporating values | | HMS (Lin et al., 2021) | MWP | Generation | Seq2Tree | GRU | Tree | ✔ | A word-clause-problem encoder | | LBF (Hong et al., 2021a) | MWP | Generation | Seq2Tree | BiGRU | Tree | ✔ | A learning-by-fixing (LBF) framework | | Seq2DAG (Cao et al., 2021) | MWP | Generation | Seq2Graph | GRU | Graph | ✗ | A direct acyclic graph (DAG) structure | | Graph2Tree (Zhang et al., 2020b) | MWP | Generation | Graph2Tree | Graph | Tree | ✗ | Generating better solution expressions | | Multi-E/D (Shen and Jin, 2020) | MWP | Generation | Graph2Tree | Graph | Tree | ✔ | A graph encoder and a tree-bad decoder | | Graph2Tree (Li et al., 2020b) | MWP | Generation | Graph2Tree | Graph | Tree | ✔ | A graph-to-tree neural network | | EEH-G2T (Wu et al., 2021a) | MWP | Generation | Graph2Tree | Graph | Tree | ✗ | A hierarchical graph-to-tree model | | ASTactic (Yang and Deng, 2019) | TP | Generation | Tree2Seq | TreeLSTM | GRU | ✔ | Generating tactics as programs | | MathDQN (Wang et al., 2018b) | MWP | Search | DQN | - | - | ✗ | RL with a deep Q-network (DQN) | | DDT (Meng and Rumshisky, 2019) | MWP | Generation | Transformer | Trm | Trm | ✔ | A Transformer-based model | | DeepMath (Alemi et al., 2016) | TP | Classification | CNN | CNN | - | ✗ | The first deep large scale theorem prover | | Holophrasm (Whalen, 2016) | TP | Classification | BiGRU | BiGRU | - | ✗ | A neural prover for higher-order logic | | CNNTP (Loos et al., 2017) | TP | Classification | CNN | CNN | - | ✗ | A CNN-based theorem prover | | WaveNetTP (Loos et al., 2017) | TP | Classification | WaveNet | WaveNet | - | ✗ | A WaveNet-based theorem prover | | DeepHOL (Bansal et al., 2019) | TP | Generation | WaveNet | WaveNet | - | ✗ | A neural theorem prover with RL | | NGS (Chen et al., 2021a) | GPS | Generation | VQA | LSTM* | LSTM | ✔ | The first deep geometry solver | | PGDPNet (Zhang et al., 2022) | Parsing | Generation | GNN | - | - | ✗ | A neural diagram parser with GNN | Table 8: A summarization of deep neural network models for mathematical reasoning. **Encod**: encoder, **Decod**: decoder, ATT: Attention. LSTM*: ResNet + LSTM, Trm: Transformer ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations Section on page 10. ✓ A2. Did you discuss any potential risks of your work? Limitations Section on page 10. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
calderon-etal-2023-systematic
A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training
https://aclanthology.org/2023.acl-long.818
Modern Natural Language Generation (NLG) models come with massive computational and storage requirements. In this work, we study the potential of compressing them, which is crucial for real-world applications serving millions of users. We focus on Knowledge Distillation (KD) techniques, in which a small student model learns to imitate a large teacher model, allowing to transfer knowledge from the teacher to the student. In contrast to much of the previous work, our goal is to optimize the model for a specific NLG task and a specific dataset. Typically in real-world applications, in addition to labeled data there is abundant unlabeled task-specific data, which is crucial for attaining high compression rates via KD. In this work, we conduct a systematic study of task-specific KD techniques for various NLG tasks under realistic assumptions. We discuss the special characteristics of NLG distillation and particularly the exposure bias problem. Following, we derive a family of Pseudo-Target (PT) augmentation methods, substantially extending prior work on sequence-level KD. We propose the Joint-Teaching method, which applies word-level KD to multiple PTs generated by both the teacher and the student. Finally, we validate our findings in an extreme setup with no labeled examples using GPT-4 as the teacher. Our study provides practical model design observations and demonstrates the effectiveness of PT training for task-specific KD in NLG.
# A Systematic Study Of Knowledge Distillation For Natural Language Generation With Pseudo-Target Training Nitay Calderon∗ Technion - IIT Subhabrata Mukherjee Microsoft Research Roi Reichart Technion - IIT Amir Kantor Microsoft ## Abstract Modern Natural Language Generation (NLG) models come with massive computational and storage requirements. In this work, we study the potential of compressing them, which is crucial for real-world applications serving millions of users. We focus on Knowledge Distillation (KD) techniques, in which a small student model learns to imitate a large teacher model, allowing to transfer knowledge from the teacher to the student. In contrast to much of the previous work, our goal is to optimize the model for a specific NLG task and a specific dataset. Typically in real-world applications, in addition to labeled data there is abundant unlabeled task-specific data, which is crucial for attaining high compression rates via KD. In this work, we conduct a systematic study of task-specific KD techniques for various NLG tasks under realistic assumptions. We discuss the special characteristics of NLG distillation and particularly the exposure bias problem. Following, we derive a family of Pseudo-Target (PT) augmentation methods, substantially extending prior work on sequence-level KD. We propose the Joint-Teaching method, which applies wordlevel KD to multiple PTs generated by both the teacher and the student. Finally, we validate our findings in an extreme setup with no labeled examples using GPT-4 as the teacher. Our study provides practical model design observations and demonstrates the effectiveness of PT training for task-specific KD in NLG. ## 1 Introduction Modern *Natural Language Generation (NLG)* systems are based on pre-trained Language Models (LMs), which are gradually achieving remarkable milestones (Raffel et al., 2020; Brown et al., 2020; OpenAI, 2023). Alongside the impressive advances in applications such as Neural Machine Transla- ∗ Work was mainly done during an internship at Microsoft MSAI. Contact: nitay@campus.technion.ac.il. Code: https://github.com/nitaytech/KD4Gen. tion (NMT), Summarization, chatbots, such models have also become increasingly larger, deeper, slower, and more complex. The massive storage requirements and high computational complexity of NLG models discourage their deployment in real-life. As such, there is a growing demand in the industry for compressing such models while preserving their performance. Model compression methods typically either prune less informative parameters (LeCun et al., 1989) or use *knowledge distillation (KD)* (Hinton et al., 2015; Kim and Rush, 2016) to transfer knowledge from a larger model (the teacher) to a smaller model (the student). In generation tasks, KD can be applied at the word-level, by training the student to mimic the teacher's next token distribution, or at the sequence-level, by training the student on Pseudo-Targets (PTs) generated by the teacher. Although KD research is extensive (Gou et al., 2021; Gupta and Agrawal, 2022; Treviso et al., 2022; Xu and McAuley, 2022), most works focus on Natural Language Understanding (NLU) tasks, task-agnostic language modeling, or specific generation tasks (e.g., NMT). Additionally, KD works for NLG typically consider large datasets with hundreds of thousands of labeled examples, and ignore unlabeled data (Shleifer and Rush, 2020; Wang et al., 2021a; Li et al., 2022; Zhang et al., 2022a). In more realistic scenarios, however, the number of labeled examples is limited, alongside an abundance of unlabeled data (Oliver et al., 2018; Calderon et al., 2022) that may contribute to KD. To bridge these gaps, in this paper we conduct a systematic study of KD for NLG, considering a variety of tasks: Summarization, Question Generation, Abductive Reasoning, Style Transfer and Simplification, in a more realistic setup. Our realistic setup follows 5 criteria that are particularly attractive for a broad range of NLP practitioners: (1) Only several thousand labeled examples are available for training (Medium-resource), 14632 as annotation is costly or labor-intensive, especially for NLG. This is in contrast to research setups where labeled datasets can be very large. (2) Large amounts of unlabeled data are available, as is often the case in industrial setups where unlabeled data is collected during the life-cycle of the product; (3) Off-the-shelf models are used, which is more practical than training models from scratch; (4) Inference-time efficiency is our goal, meaning high compression rate; (5) One-time computational training resources are negligible, compared to inference-time, allowing extensive use of PTs. Recently, huge LMs with excellent generative capacity, such as GPT-4 (OpenAI, 2023) have been presented. While it is tempting to focus our research on them, we focus on small to medium size LMs in a fine-tuning setup. This choice is because utilizing a huge LM as a teacher is often infeasible, e.g., due to their high financial costs or when the data cannot be sent to external servers because of privacy constraints. Furthermore, research suggests that using mediator-teachers aids the distillation process (Mirzadeh et al., 2020), as might be the case in distillation from a huge LM to a medium fine-tuned teacher and finally to a small student. For an extended discussion, see §7. Our work hence focuses on a medium size finetuned teacher and we assume there are several thousand labeled examples for its fine-tuning. Despite the above limitations, applying huge LMs in some valuable setups is still possible. Therefore, we also consider the distillation of one such model (GPT-4), although this is not our main focus. We start our study by comparing architectural (Encoder-decoder vs. Decoder-only), pruning and KD design decisions, discussing the tradeoff between computational resources and task performance. We focus on practical measures like *latency* and *throughput*, which is important for batchoffline applications and is typically overlooked. We next provide the first exposure bias perspective for KD which motivates PT augmentation. This bias derives from teacher forcing when the LM conditions on ground-truth tokens during training, while at inference time it conditions on previously generated tokens (Ranzato et al., 2016). As the distillation progresses, the student's predictions gradually become similar to its teacher's, and therefore training with PTs can alleviate exposure bias. We propose extensions of the common practice of generating a single mode approximation PT via beam search, instead, we suggest sampling multiple PTs to facilitate higher exposure to conditional distribution factors. Additionally, we generate PTs for unlabeled data and demonstrate their effectiveness. Moreover, we propose a novel KD technique termed *Joint-Teaching*, which applies word-level KD to PTs generated by both the teacher and the student. This technique aims to implicitly and explicitly address the student exposure bias, ground the learning and teach it to correct its mistakes. Finally, we extend the scope of our study by working solely with limited unlabeled examples. Due to the absence of labeled examples, fine-tuning the teacher is infeasible, leading us to depend on huge LMs with zero-shot capabilities. Consequently, we investigate whether our KD findings from our realistic setup (which involves a finetuned teacher) remain applicable to the new extreme setup. To this end, we show how to successfully distill GPT-4, a huge Decoder-only model, into a small Encoder-decoder model (T5-small), which also has a different tokenizer. Our main empirical findings (§5) are: (1) Encoder-decoder architectures outperform their Decoder-only counterparts in task-specific finetuning for NLG; (2) Decoder pruning substantially outperforms encoder pruning when considering both latency and task performance; and (3) PTs can be used much more effectively compared to what was suggested in previous work and this yields substantially improved task performance on a much reduced computational cost. ## 2 Background And Related Work 2.1 Natural Language Generation Modern LMs based on Transformers leverage two primary architectures for text generation: *Encoderdecoder (ED)* (Vaswani et al., 2017) and *Decoderonly (DO)* (Radford et al., 2019). While ED models are more popular for classification, summarization, and NMT, DO models excel on open-text generation and zero/few-shot setups (Wang et al., 2022). Nonetheless, the increasing popularity of massive DO models like GPT-3 and PaLM (Brown et al., 2020; Chowdhery et al., 2022) with impressive generation capabilities, has led to the question of *whether ED models are still relevant for NLG?* In §5 and in the appendix (§B) we discuss and demonstrate the differences between these two architectures. We show in line with recent work of Tay et al. (2022) that ED models outperform DO models in task-specific fine-tuning for conditional NLG. In light of this observation, we focus on KD only for ED models in the rest of the paper. Text generation is a structured prediction problem where the goal is to sample a text yˆ from the distribution learned by the LM, conditioned on the input text x. The training objective of the LM is to minimize the *negative log-likelihood (NLL)* of the training dataset, by factorizing − log P(y|x) into −P|y| i=1 log P(yi|*x, y*<i). At inference time, the LM generates one token at a time according to the conditional distribution: P(yi|x, yˆ<i). The selection of the next token is handled by the decoding method. Beam search, which aims to find the most likely target, is the the de-facto standard (Zarrieß et al., 2021). Alternatively, it is possible to frame decoding as sampling, as we do in this work. ## 2.2 Exposure Bias LMs learn the distribution P(y|*x, y*<i) at the training phase by conditioning on the ground truth y<i. This is known as *teacher forcing* which makes the training efficient and stable but also creates a mismatch at inference time, since the LM conditions on its previous predictions yˆ<i. This discrepancy between training and inference is called exposure bias. Potential side-effect is that a single error during generation may have a cascading effect by causing a deviation from the ground truth distribution and resulting in an accumulation of errors (Arora et al., 2022). Many works link exposure bias to generalization, hallucinations, and degeneration (Schmidt, 2019; Chiang and Chen, 2021). Recent works attempted to address exposure bias, most of which focused on open-text generation and NMT (Schmidt, 2019; Wang and Sennrich, 2020; Hormann and Sokolov, 2021). Other works addressed this problem by applying reinforcement learning techniques (Ranzato et al., 2016) or by scheduled sampling which replace ground truth tokens with generated tokens (Bengio et al., 2015; Liu et al., 2021b). However, it leads to training with inaccurate and noisy signals (Xu et al., 2019). In contrast to other works which study this problem in a general setting, in KD setting the teacher can be used to mitigate the student exposure bias by utilizing PTs and reliable signals from it. This is the first work to discuss exposure bias in KD. ## 2.3 Compression And Knowledge Distillation There has been extensive research on model compression on techniques such as parameter sharing, pruning, quantization and factorization (Gou et al., 2021; Gupta and Agrawal, 2022; Treviso et al., 2022; Xu and McAuley, 2022). *Pruning* (LeCun et al., 1989) aims to discard unimportant weights of a pre-trained or fine-tuned LM, making it more efficient while preserving performance. Usually, the pruning is structured, and complete blocks, rows, or layers are removed according to their magnitude, changes during training (Sanh et al., 2020), or causal effect (Rotman et al., 2021). Typically there is a performance gap between the original and the compressed model, which can be closed by applying *Knowledge distillation (KD)* (Hinton et al., 2015) - a technique for transfering knowledge from a large trained model (teacher T) to a smaller one (student S), by training the student to mimic the teacher's predictions or features. KD can be divided into two categories: *task-agnostic*, where the goal is to mimic a pre-trained LM's behavior, and *task-specific*, where the distillation is performed on a fine-tuned LM. Generally, there are three levels of KD: word-level (or class-level), inner-level, and sequence-level (only in NLG): Word-level KD, also known as *Logits KD* (Hinton et al., 2015; Kim and Rush, 2016). In this method, the student learns to match the teacher's distribution over the next token at each position, by minimizing the KL divergence between the distribution of the student PS(yi|*x, y*<i) and the distribution of its teacher PT (yi|*x, y*<i). There are variations like *Noisy KD* (Liu et al., 2021a) where noise is injected during KD by applying dropout to the teacher, Wang et al. (2021a) which applies KD only for carefully selected examples, etc. Inner-level KD aims to mimic additional inner features of the teacher, for example, Jiao et al. (2020) leverages hidden states of the teacher to train the student. Wang et al. (2020) and Wang et al. (2021b) proposed *Attention-relations KD* which trains the student to mimic the relation matrix (scaled dot-product) of the self-attention states. Sequence-level KD is commonly used for NMT (Kim and Rush, 2016; Kim et al., 2019; Kasai et al., 2020). In this approach, the teacher generates PTs for inputs in the original dataset, and student is trained to predict them. Usually, the teacher generates a single PT using beam search, which is known as "mode approximation" of PT (y|x). Gou et al. (2021) and Gupta and Agrawal (2022) present a detailed overview of KD techniques. Notably, most works in NLP explore task-agnostic KD ![3_image_0.png](3_image_0.png) Extreme setup: No labeled data. GPT-4 to T5-S S8. **JointTeaching** Fine-tune + Single PT Logits KD + Single PT JointTeaching Only Teacher Fine-tune + Multiple PTs Logits KD + Multiple PTs for encoder-only models (Sanh et al., 2019; Jiao et al., 2020; Wang et al., 2021b) or focus on NMT (Kim and Rush, 2016; Kasai et al., 2020; Wang et al., 2021a). Shleifer and Rush (2020) focused on high-resource summarization, and compared three KD strategies: pruning and fine-tuning, logits KD, and mode approximation PTs. Unlike these works, we perform a more systematic study of taskspecific KD for a variety of NLG tasks in realistic setups. Moreover, we focus on PTs and propose extensions to demonstrate their effectiveness. ## 3 Methods 3.1 Research Design Our research design illustrated in Figure 1 has eight stages. At each stage, we examine different modeling decisions and continue to the next stage after selecting the best technique according to the performance on the development set (to avoid performing selection on the test set). We linearly examine one aspect at a time since the alternative (combinatorial choices) is too expensive. Our study starts with architectural designs (**stages 1-2**), continues with comparing different KD strategies (**stages 3-4)** and proceeds to explore the usage of PTs as augmentation strategies for KD (**stages 5-8)**. ## 3.2 Architectures And Pruning In the spirit of our realistic setup, we consider offthe-shelf LMs and experiment with two model families for each architecture type (see §4.2). In appendix §B we discuss the differences between ED (Encoder-Decoder) and DO (Decoder-only) architectures (**stage 1**) and show that ED models outperform DO models on task-specific tuning for NLG. Following that, we present results only for ED in §5. In **stage 2**, we examine the effect of pruning, by discarding complete model layers. In the case of ED, layers can be dropped either from the encoder or decoder components, resulting in different impacts on the task or computational performances. ## 3.3 Objectives1 As discussed in §2.3, various works proposed different training strategies for KD. In **stage 3** we perform a comparison between three popular KD objectives (baselines), which do not involve PTs: (1) *Logits KD* - which is the most common and the simplest technique; (2) *Noisy KD* - which showed promising results for summarization in selfdistillation setup; and (3) *Attention-Relations KD* (combined with Logits KD) - which is the SOTA technique for Encoder-only models. As suggested by Mukherjee and Awadallah (2020), following the end of the KD stage, we also perform an end-to-end fine-tuning stage on the ground truth labels. This stage is extremely cheap since a teacher is not required. ## 3.4 **Pseudo-Targets (A.K.A Sequence-Level Kd)**1 Pseudo-Targets (PTs) are predictions generated by the teacher that can be utilized for training the student. Word-level or Inner-level KD can be combined with sequence-level KD (e.g., by applying Logits KD to PTs). In **stage 4** we investigate the impact of augmenting the labeled data with PTs when fine-tuning the student (sequence-level KD) or when using the objective from stage 3. Although various works demonstrated the effectiveness of PTs (Kim and Rush, 2016; Shleifer and Rush, 2020), their use of PTs was limited to a single PT per labeled training example, generated with mode approximation beam search. In this paper we demonstrate that the use of PTs can be much more 1More formal descriptions and implementation details of the methods discussed in §3.3 and §3.4 are provided in §A ![4_image_0.png](4_image_0.png) extensive: We generate multiple PTs per training example, increase their diversity with samplingbased rather than mode approximation generation, and generate PTs for both labeled and unlabeled examples, which are much more abundant by nature. Our experiments demonstrate that each of these extensions yields substantial improvements in the quality of the resulting student model. We next touch on each of these extensions. Unlabeled data In our setup unlabeled data is available in abundance. Since in autoregressive NLG the LM learns to condition on the targets (y<i), PTs are essential for utilizing unlabeled data (inputs without corresponding targets). From a generalization perspective, exposing the model to more inputs, and consequently to more P(yi|x, yˆ T <i) factors, should help the student generalize beyond the labeled data distribution. Indeed, many works in various NLP fields have shown that unlabeled data is effective for generalization (Xie et al., 2020; Mukherjee and Awadallah, 2020; Calderon et al., 2022). In **stage 5** we examine its importance. Multiple PTs We further explore alternatives to the common practice of generating a single PT with beam search (mode approximation). Unlike classification, NLG is a structured prediction problem and multiple candidates can form a correct solution. Therefore, we can generate multiple PTs resulting in stronger exposure to the teacher's knowledge. We explore the impact of multiple PTs in **stage 6**. Sampling PTs Beam search is not the only way to generate PTs. In fact, it has been demonstrated to produce generic texts that lack diversity (Finkel et al., 2006; Gimpel et al., 2013). A simple alternative that can produce more diverse and surprising texts is sampling (Roberts et al., 2020; Holtzman et al., 2020). Moreover, controlling the temperature of the logits can increase the diversity of the PTs even further (Tevet and Berant, 2021). We compare these decoding techniques in **stage 7**. Motivation for these Extensions Compared to a single mode approximation PT, sampling multiple PTs for both the labeled and unlabeled examples should add more variability to the student training and cover a larger portion of the learnable distribution, which are known to improve generalization. Furthermore, these extensions expose the student to more of the teacher's knowledge. Additionally, we also provide an exposure bias motivation. During the distillation the student's predictions gradually become similar to its teacher's predictions: yˆ S ∼ yˆ T. Therefore, we can expect that training the student with diverse PTs may mitigate its exposure bias, which occurs at inference when it conditions on yˆ S <i, and not on the groundtruth distribution. In addition, PTs of unlabeled examples can help mitigate this bias as the student is getting exposed to the teacher's knowledge rather than the gold standard. Moreover, multiple and diverse PTs results in extensive exposure to additional P(yi|x, yˆ T <i) factors. Therefore, we hypothesize that sampling multiple PTs will improve the student compared to a mode approximation PT. ## 3.5 Joint-Teaching As mentioned above, training with PTs generated by the teacher may implicitly mitigate the student exposure bias. On the other hand, we can try to mitigate this bias explicitly by training the student while conditioning on its predictions (i.e. generate PTs with the student and use them for training). Generally, this can be unstable since the student may learn from its own mistakes. Fortunately, in KD we have a strong oracle: the teacher. By applying word-level KD on yˆ S <i, the teacher can teach the student how to continue its generated sequence correctly and prevent a cascading effect. Nevertheless, this relies on the reasonable assumption that the teacher models P(yi|x, yˆ S <i) better than the student. In Figure 2 we present a single setup analysis that supports this assumption: At almost any stage of the student training, continuing the generation with the teacher results in better predictions. Moreover, as the student becomes more similar to the teacher, we can expect the teacher to model P(yi|x, yˆ S <i) even better, which makes the 14636 word-level signals more reliable. This is also supported by Figure 2: As the distillation progresses, the teacher continuations keeps getting better. Following that, we propose a novel KD method which addresses the exposure bias implicitly and explicitly namely *Joint-Teaching*: Apply wordlevel KD on PTs generated by both the teacher and the student. In our experiment we randomly use the student's PTs for 50% of the training steps. In **stage 7** we compare training only with the students' PTs or the teachers' PTs to Joint-Teaching, demonstrating the superiority of the latter. ## 4 Experimental Setup In this section we describe our four NLG tasks and datasets, the participating models and the evaluation procedures. URLs of the code and datasets, as well as implementation details and hyperparameter configurations are described in §D. Additionally, a comparison between ED and DO architectures (stage 1) is provided in §B.1; theoretical and empirical complexity analyses are provided in §B.2. ## 4.1 Tasks And Datasets We selected four English-to-English core NLG tasks, which are part of several NLG benchmarks and surveys (Fu et al., 2018; Gehrmann et al., 2021, 2022; Khashabi et al., 2021; Erdem et al., 2022; Jin et al., 2022). We built a new realistic experimental setup, in which the ratio of labeled to unlabeled data is 1:4, and the amount of labeled data is reasonable. For each task (excluding Shake7) we keep the original assignment of each example to its train-test splits. The exact numbers are provided in Table 1. Summarization (XSUM40) We use the XSUM dataset (Narayan et al., 2018) for the abstractive summarization task. The task of the NLG model is to generate an introductory sentence (summary) for a given news article. Question Generation (SQuAD17) We use the SQuAD dataset (Rajpurkar et al., 2016, 2018) for the question generation task. Given a Wikipedia document and an answer to the question, the task of the NLG model is to generate the question. Abductive Reasoning (ART10) We use the αNLG (also known as ART) dataset (Bhagavatula et al., 2020) for abductive reasoning generation task. The task of the NLG model is to generate a plausible explanation for two given observations. Style Transfer and Simplification (Shake7) We construct a new dataset for the well-explored style ![5_image_0.png](5_image_0.png) transfer task (which is also a simplification task) of translating Shakespeare's texts to modern English. We combined pairs of Shakespearean and modern English texts from Shakespeare's plots (taken from Xu et al. (2012); Jhamtani et al. (2017)), with other texts written by Shakespeare (Karpathy, 2015) and created a parallel style transfer dataset, see §D.1. ## 4.2 Models And Pruning Decoder-only We use the GPT2-family models (Radford et al., 2019): GPT2, GPT2-M, and GPT2-L; and the recent OPT-family models (Zhang et al., 2022b): OPT-125Mand OPT-350M. Encoder-decoder We use the T5-family models (Raffel et al., 2020): T5-S and T5-L; and the BARTfamily models (Lewis et al., 2020): BART-6:6 (base version) and BART-L. Pruning We apply pruning only for the pre-trained BART-6:6 model (thus our study also includes a non-pruned student, T5-S), and consider two types of pruning: Encoder pruning and decoder pruning. Following Shleifer and Rush (2020), in both pruning types we keep only the first and last layers, resulting in two models: BART-2:6 (pruned encoder) and BART-6:2 (pruned decoder). In the KD stages (3-8) we use two studentteacher pairs: T5-S and T5-L, and a pair with a pruned student: BART-2:6 and BART-L. ## 4.3 Evaluation Task Performance We report on various metrics that focus on different aspects, resulting in a more holistic evaluation of the models. To this end, we focus on the lexical similarity metrics, BLEU and ROUGE, the semantic equivalence metric BERTScore (BS, Zhang et al. (2020)) and the statistical modeling metric Perplexity (PPL), which is measured by the average NLL of the ground truth targets. To make the result tables more readable, we report the average ROUGE (of the F1 scores for R-1/2/L), and the F1 score for BS. Notice that in §D we specify for each task the appropriate metric we use for the development set. In §E we report the scores of all the metrics. | Arch | Model | E-D | Params | Mem | FLOPs | Latency | Throughput | BLEU | ROUGE | BS | PPL | Dev | |--------|----------|-------|----------|-------|---------|-----------|--------------|--------|---------|------|-------|-------| | DO | GPT2-L | 0-36 | 774 | 3210 | 42.0 | 675 | 2.2K | 11.9 | 27.1 | 70.1 | 1.9 | 13.0 | | DO | GPT2-M | 0-24 | 354 | 1444 | 19.4 | 459 | 4.8K | 9.7 | 23.2 | 66.8 | 3.7 | 10.8 | | DO | GPT2 | 0-12 | 124 | 511 | 6.8 | 235 | 13.5K | 7.8 | 20.1 | 61.4 | 2.8 | 8.5 | | DO | OPT-350M | 0-24 | 331 | 1324 | 18.1 | 371 | 5.1K | 9.8 | 24.9 | 62.7 | 3.1 | 10.7 | | DO | OPT-125M | 0-12 | 125 | 502 | 6.8 | 185 | 15.4K | 10.7 | 26.3 | 69.2 | 2.5 | 11.7 | | ED | T5-L | 24-24 | 737 | 2951 | 19.5 | 597 | 5.3K | 16.4 | 34.6 | 75.1 | 1.6 | 17.7 | | ED | T5-S | 6-6 | 60 | 242 | 1.4 | 160 | 55.2K | 13.4 | 30.8 | 72.7 | 2.4 | 14.6 | | ED | BART-L | 12-12 | 406 | 1625 | 10.0 | 281 | 7.8K | 16.4 | 34.8 | 75.4 | 1.7 | 17.9 | | ED | BART-6:6 | 6-6 | 139 | 558 | 3.0 | 147 | 13.5K | 14.5 | 32.7 | 74.2 | 1.9 | 15.9 | | ED | BART-2:6 | 2-6 | 111 | 445 | 1.7 | 146 | 16.0K | 11.4 | 28.0 | 71.6 | 2.2 | 12.8 | | ED | BART-6:2 | 6-2 | 101 | 407 | 2.6 | 75 | 15.3K | 13.3 | 31.5 | 73.3 | 2.6 | 15.0 | Computational Performance For measuring the computational performance of the models, we report the number of parameters, the memory of the models and the number of *floating-point operations* (FLOPs). These measures are device-agnostic and may not be well correlated with the actual performance in practice, which depends on the device, implementation, and hardware utilization of the accelerators (Ma et al., 2018; Hadidi et al., 2019). Therefore, we also report practical measurements such as the *latency* of generating a single output, which is important for real-time applications, and the *throughput*, which is the maximum number of examples that can be processed in a minute, and is important for offline batched applications. ## 5 Results The complete results are provided in §E. Table 2 reports the results of fine-tuned models (stages 12). Table 3 reports the results of the KD stages (3-8) as follows: For each student-teacher pair and dataset, we calculate the fraction of their performance gap that is compensated for by using distillation as opposed to only fine-tuning the student model: KD−S T −S %, where KD, T and S are the task scores of the distilled student, its teacher and the student baseline (fine-tuned), respectively. Then, we report for each dataset the average fraction of the closed gap over four metrics and two studentteacher pairs. We also report the number of wins within 32 setups (4 datasets, 4 metrics, 2 pairs). ## S1: Encoder-Decoder Models Outperform Decoder-Only Models In Task-Specific Tuning For NLG. We present our results in Table 2. For a detailed analysis of Encoder-decoder (ED) and Decoder-only (DO) models, we refer readers to Appendix §B, which reports several interesting theoretical and empirical insights. Nevertheless, it is worth noting here that ED models, such as T5-L, can have twice the number of layers and parameters of DO models, such as GPT2-M or OPT-350M. However, despite the higher number of parameters, ED models have roughly the same FLOPs and comparable latency and throughput. Regarding task performance, our experiments demonstrate that ED models consistently outperform DO models across all datasets and models, regardless of their size. Presumably, a better inductive bias is injected by applying self-attention (and not autoregressive-attention) to the conditioned input sequence. This finding is particularly relevant for NLP practitioners who aim to develop a specialized in-house model for a specific NLG task. We hence continue to the KD stages only with ED models (T5 and BART). S2: It is better to prune layers from the decoder. In stage 2, we examine whether it is better to prune encoder or decoder layers. To this end, we prune BART-6:6 and report the results at the bottom of Table 2. First, notice that pruning decoder layers greatly impacts the latency given the autoregressive nature of NLG tasks, making BART-6:2 two times faster than BART-6:6. For comparison, pruning encoder layers does not affect the latency (see the discussion in §B.2). On the other hand, BART-2:6 has a higher throughput than BART-6:2, mainly because of the long input in some tasks which is processed by the encoder. Notice, however, that the improvement of BART-6:2 in latency is more substantial than its throughput degradation. Second, BART-6:2 outperforms BART-2:6 in every task metric (and dataset), being competitive to BART-6:6. Moreover, for tasks with long inputs (e.g., summarization or question generation, A. Objective XS SQ AR SH Wins Dev (%) (%) (%) (%) Fine-tune 0.0 0.0 0.0 0.0 0 14.8 Logits 30.2 **39.7** 25.7 41.9 13 **16.0** Noisy 30.3 37.3 **35.2** 41.8 14 15.9 Att-Rel **31.3** 28.4 19.7 21.4 5 15.9 B. PTs XS SQ AR SH Wins Dev Logits 30.2 **39.7** 25.7 41.9 10 16.0 Seq-lvl 13.8 -9.1 4.2 4.2 0 15.7 Logits+Seq **33.2** 30.8 27.9 49.0 22 **16.3** C. Unlabeled XS SQ AR SH Wins Dev Labeled 33.2 30.8 27.9 49.0 0 16.3 + Unlabeled 55.8 47.1 41.5 70.0 32 **16.9** D. Decoding XS SQ AR SH Wins Dev Single PT 55.8 47.1 41.5 70.0 1 16.9 K-Beams 63.6 56.3 45.7 74.7 4 17.0 Sampling **73.0** 58.4 **48.2** 81.7 15 **17.2** H-Sampling 70.0 **63.9** 44.8 **81.8** 12 17.1 E. Joint-T XS SQ AR SH Wins Dev Only Teacher 73.0 58.4 **48.2** 81.7 4 17.2 Only Student 68.7 63.9 43.9 79.4 3 17.1 Joint-Teaching 80.8 66.7 48.2 87.7 25 **17.4** see §E), the depth of the encoder is critical and the pruned-encoder BART-2:6 underpeforms. As a rule of thumb, our results suggest that it is better to prune layers of the decoder. Besides reducing the model latency, it has a smaller impact on task performance. In the following stages we use two student-teacher pairs: T5-S and T5-L, and a pair with a pruned student, BART-6:2 and BART-L. S3: Use Logits KD as the main training objective. In stage 3 we compare different KD objectives. As seen in Table 3.A, Logits, Noisy and Attention-Relations KD techniques are competitive, and the quality of the method depends on the task. Even though Noisy KD has more wins than Logits KD, the PPL metric accounts for 8 of the 14 wins. Since Logits KD is the best-performing method according to the average performance on the development set, we continue to the next PT stages with it. Our results demonstrate the importance of KD: applying Logits KD closes more than 34.4% of the student-teacher gap, on average. S4: Combine Logits KD and PTs. In stage 4 we examine three methods: using Logits KD only on the labeled examples, fine-tuning the student with PTs (Sequence-level KD) or combining them. The corresponding rows in Table 3 show that sequencelevel KD underperforms Logits KD. However, their combination results in a better student in 22 setups and achieves a higher development score, and therefore, we use this strategy in the subsequent stages. S5: Unlabeled data should be utilized. Generating PTs for the unlabeled inputs may help extract more of the knowledge embodied in the teacher, allowing the student to generalize better. In stage 5 we explore this hypothesis. According to Table 3.C, utilizing unlabeled data greatly boosts the performance and closes an additional 19% of the gap. To the best of our knowledge, this is the first study that shows this in KD for NLG. In the next stages, we generate PTs for the labeled and unlabeled inputs. S6: Exposing the student to multiple PTs helps. By comparing the rows of Single PT and K-Beams in Table 3.D, it can be seen that exposing the student to multiple targets and covering a larger portion of learnable distribution closes an additional 6.4% of the gap on average. S7: Sampling is better than Beam-Search for generating PTs. Table 3.D also shows that generating PTs with sampling is typically better than beam search, and closes another 5.2% of the gap on average. We observe that high sampling temperature is competitive, although its effect depends on the task and model. High sampling works better for T5-S, while sampling without temperature works better for BART-6:2 (and on average). Further research could investigate a larger range of temperatures and other diversity-oriented decoding methods. Nevertheless, this is the first study that challenges the traditional mode-approximation practice, and show that generating multiple PTs via sampling significantly improves NLG distillation. S8: Joint-Teaching improves the student. The results in Table 3.E support two of our hypotheses, which we discuss in §3.5. The first is that PTs generated only by the student are less valuable for its training than PTs generated by teacher. The second is that the combination of the two types of PTs (by Joint-Teaching) can be more effective for KD than using only PTs generated by the student or teacher. Our Joint-teaching approach wins 25 out of 32 times and closes another 5.7% of the gap. Final Compression Results. The final compression results (after stage 8) are provided in Table 4. We attempt to achieve high compression rates: T5-KD and BART-KD reduce 92% and 75% of their | Dataset | Model | FLOPs | Latency | Throughput | BLEU | ROUGE | BScore | PPL | |-------------|------------|------------|--------------|----------------|-------------|------------|------------|-----------| | T5-L | 38.7 | 539 | 1.3K | 11.5 | 29.3 | 72.7 | 1.7 | | | XSUM | T5-KD | 2.7 (-93%) | 144 (x3.7) | 13.4K (x10.3) | 10.7 (80%) | 28.2 (81%) | 71.8 (80%) | 1.9 (87%) | | 40K | BART-L | 19.6 | 254 | 3.3K | 13.0 | 31.1 | 73.9 | 1.7 | | BART-KD | 5.1 (-73%) | 68 (x3.7) | 10.0K (x3.0) | 12.3 (79%) | 30.2 (79%) | 73.5 (84%) | 1.9 (73%) | | | T5-L | 26.1 | 530 | 2.0K | 22.2 | 42.3 | 77.9 | 1.3 | | | SQuAD | T5-KD | 1.8 (-93%) | 143 (x3.7) | 22.3K (x11.1) | 20.9 (57%) | 40.6 (57%) | 77.0 (50%) | 1.5 (57%) | | 17.5K | BART-L | 13.3 | 250 | 4.8K | 21.5 | 41.9 | 77.8 | 1.4 | | BART-KD | 3.4 (-74%) | 67 (x3.7) | 13.0K (x2.7) | 20.9 (84%) | 40.9 (75%) | 77.3 (77%) | 1.7 (71%) | | | T5-L | 5.9 | 533 | 10.7K | 6.0 | 21.7 | 71.5 | 1.9 | | | ART | T5-KD | 0.5 (-92%) | 142 (x3.7) | 109.8K (x10.3) | 4.8 (49%) | 19.9 (50%) | 70.4 (47%) | 2.4 (25%) | | 10K | BART-L | 3.2 | 250 | 13.7K | 6.0 | 21.4 | 71.5 | 2.1 | | BART-KD | 0.8 (-75%) | 67 (x3.7) | 23.4K (x1.7) | 5.1 (59%) | 20.3 (57%) | 71.0 (61%) | 2.4 (34%) | | | T5-L | 7.2 | 789 | 7.4K | 25.7 | 45.4 | 78.4 | 1.5 | | | Shakespeare | T5-KD | 0.6 (-91%) | 212 (x3.7) | 75.3K (x10.1) | 25.7 (100%) | 45.3 (98%) | 78.1 (79%) | 1.7 (56%) | | 7K | BART-L | 3.9 | 367 | 9.2K | 25.1 | 44.8 | 78.3 | 1.8 | | BART-KD | 1.0 (-75%) | 96 (x3.8) | 14.8K (x1.6) | 24.8 (88%) | 45.2 (123%) | 78.1 (86%) | 2.0 (68%) | | teachers' parameters, respectively. This results in great computational performance improvements. Our distilled models reduce the latency of their teachers by a factor of 3.7. In addition, T5-KD has a 10 times higher throughput, and BART-KD has double the throughput of its teacher. Our study shows that KD allows model compression and drastically improves the task performance compared to the fine-tuned baseline. In most setups, our recipe for KD closes more than 75% of the student-teacher gap. Surprisingly, in some of the tasks like Shake7 the distilled model outperforms its teacher. Finally, we also conduct a human evaluation to examine the relatively lower performance of our KD method on the ART10 dataset (see appendix §F). Our human evaluation results show that the distilled model (T5-KD) closes 72% of the gap, and this is in-line with the performance on other datasets. ## 5.1 Extreme Setup: Kd With Gpt-4 In the final phase, we explore the transferability of our KD conclusions to an *extreme setup* which involves only limited unlabeled examples. As labeled examples are unavailable, fine-tuning the teacher becomes impractical, leading to the reliance on a huge LM with zero-shot capabilities as the teacher, and this poses new challenges: (1) The teacher is a huge Decoder-only model (since this is the standard for zero-shot learning) while the student is an Encoder-decoder model; (2) The teacher and the student have different tokenizers and (3) Querying the teacher is financially costly, limiting its usage. We utilize GPT-4 (OpenAI, 2023) as our teacher and T5-S as the student. The prompt of GPT-4 consists of three labeled demonstrations. Due to its high cost, we conduct experiments only for the SQuAD17 (3000 examples) and the Shake7 (1500 examples) datasets, and with the following baselines and methods: (a) The GPT-4 teacher; (b) T5-S training with ground-truth (GT) labels; (c) Student fine-tuning with a single PT; (d) Fine-tuning with multiple (five) PTs; (e) Student training with Logits KD and a single PT (f) Logits KD with multiple PTs; More details are provided in §C. Our results in Table 7 (appendix §C.2) are mixed: Generating multiple PTs outperforms a single PT, but Logits KD only helps in the SQuAD17 dataset. Future research is called for as we attribute this result to challenges in aligning the tokenizers. ## 6 Conclusion In this paper, we present a general KD recipe for NLG. To this end, we conduct a systematic study on various tasks and evaluate the impact of different modeling decisions on computational and task performance of distilled models. Our results suggest that using ED models as students, pruning decoder layers, combining Logits KD and PTs via sampling and Joint-Teaching achieve high compression rates while maintaining competitive performance. Nevertheless, our recipe is based on average performance and may depend on the task, model, or setup. The teacher-student performance gap that still exists demonstrate the need for further research. For example, high-temperature PTs seem to be less effective for BART, and further exploration of different hyperparameters or methods for increasing PT diversity may be necessary. Integrating a smart selection of training examples or PTs (Wang et al., 2021a), refining Joint-Teaching with curriculum learning or scheduling (Liu et al., 2021b) are some future research directions. ## 7 Limitations Using a medium size fine-tuned teacher. With recent advances in huge LM such as GPT4 and their extraordinary generation capabilities, one may wonder about the relevance of this work which mainly focuses on a medium size fine-tuned teacher. Although we show the distillation of a huge LM (GPT-4), it is often infeasible. First, when the data cannot be sent to external servers because of privacy constraints or when the domain is unique or specific (e.g., in national security settings or human conversations), huge LMs that cannot be fine-tuned may be less effective. Second, we have distinguished between two types of costs: computational and financial. While training a student model with a medium-size finetuned teacher may take a few days, the entire process is feasible since training time is typically not a limited resource. In contrast, generating PTs with a huge LM like GPT-4 can easily cost (many) dozens of thousands of dollars. This financial cost is often prohibitive, particularly when training a general high-quality student or several domain-specific ones. While it is possible to utilize a huge LM to obtain a limited number of labeled examples, relying on it for generating PTs for abundant unlabeled data is not feasible. Therefore, a medium size teacher is needed. Furthermore, research suggests that using mediator/assistant teachers aids the distillation process (Mirzadeh et al., 2020; Wang et al., 2020), as might be the case in distillation from a huge LM to a medium size fine-tuned teacher, and finally to a small student. Considering the aforementioned reasons, our study holds significant relevance as it emphasizes the importance of the distillation process with a medium size teacher, regardless of whether the data is generated manually or by a huge LM. The scope of our realistic setup. While our results demonstrate the effectiveness of KD for various English-to-English NLG tasks, for the tasks that were part of the study, the output length is relatively short compared to the input (e.g., Summarization and Question Generation) or has a similar length (Abductive Reasoning, Style Transfer and Simplification). The results may differ for tasks with much longer output lengths or for non-English-to-English tasks such as NMT, data-to-text (e.g., table-to-text), multilingual, or multi-modality tasks. In addition, the results are applicable to our realistic task-specific setups, and some findings may vary in high-resource scenarios or when unlabeled data is unavailable. Although these scenarios may be less relevant to NLP application developers, they are commonly studied in academic research. Computational training costs. Another limitation of our research is that we did not consider the computational costs of the KD stages. The training time comparison between the methods was therefore overlooked. This is because we assumed that one-time resource usage for training could be neglected compared to the accumulated inference cost of a deployed model. However, it is worth noting that generating PTs with the teacher for all the training and unlabeled examples is computationally expensive (it could take one to a few days, depending on the number of unlabeled examples). Furthermore, Joint-Teaching can also be computationally heavier than other KD methods, as the student generates PTs during the training process (although the student is fast). In addition, different training objectives also have different costs, with some methods being more computationally intensive than others (e.g., Attention-Relation is more costly than Logits KD). Finally, the distillation process can be long, and multiple epochs are required until the student converges - in some setups, we trained the student for more than a few days. Utilizing huge LMs. Certain limitations arise in our extreme setup, which involves the costly utilization of huge LMs (GPT-4) provided by external companies like OpenAI. First, the comparison with the Joint-Teaching method is not conducted due to the need for repeated costly querying of the teacher model to extract its logits every time a PT is generated with the student. Nevertheless, extracting the logits of the teacher PTs (for Logits KD) and generating multiple PTs is approximately equivalent to generating a single PT. This is because the prompt, consisting of many tokens, is processed only once, and the marginal cost of generating multiple (relatively short) PTs is low. Another limitation arises from relying on external companies to enable logit extraction (for Logits KD) and there is no assurance that this feature will be supported. For instance, in the chat versions: ChatGPT and GPT-4, logits are not accessible. In this work, we rely on an internal version of GPT-4, which allows us to extract its logits. Fortunately, we demonstrate that even without Logits KD, achieving a strong student model is possible. ## Acknowledgements We would like to thank the area chair, the reviewers, the members of the *Microsoft MSAI* team, and the *NLP@Technion* team for their valuable feedback and advice. Roi Reichart has been partially supported by the *VATAT* grant on data science. ## References Kushal Arora, Layla El Asri, Hareesh Bahuleyan, and Jackie Chi Kit Cheung. 2022. Why exposure bias matters: An imitation learning perspective of error accumulation in language generation. In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 700– 710. Association for Computational Linguistics. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In *Advances in Neural Information Processing Systems 28:* Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1171–1179. Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:* Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Nitay Calderon, Eyal Ben-David, Amir Feder, and Roi Reichart. 2022. Docogen: Domain counterfactual generation for low resource domain adaptation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7727–7746. Association for Computational Linguistics. Ting-Rui Chiang and Yun-Nung Chen. 2021. Relating neural text degeneration to exposure bias. *CoRR*, abs/2109.08705. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *CoRR*, abs/2204.02311:30. Erkut Erdem, Menekse Kuyu, Semih Yagcioglu, Anette Frank, Letitia Parcalabescu, Barbara Plank, Andrii Babii, Oleksii Turuta, Aykut Erdem, Iacer Calixto, Elena Lloret, Elena Simona Apostol, Ciprian-Octavian Truica, Branislava Sandrih, Sanda Martincic-Ipsic, Gábor Berend, Albert Gatt, and Grazina Korvel. 2022. Neural natural language generation: A survey on multilinguality, multimodality, controllability and learning. *J. Artif. Intell. Res.*, 73:1131–1207. Jenny Rose Finkel, Christopher D. Manning, and Andrew Y. Ng. 2006. Solving the problem of cascading errors: Approximate bayesian inference for linguistic annotation pipelines. In EMNLP 2006, Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, 22-23 July 2006, Sydney, Australia, pages 618–626. ACL. Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. 2023. Specializing smaller language models towards multi-step reasoning. *CoRR*, abs/2301.12726. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In *Proceedings of the Thirty-Second* AAAI Conference on Artificial Intelligence, (AAAI18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 663–670. AAAI Press. Sebastian Gehrmann, Tosin P. Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh D. Dhole, Wanyu Du, Esin Durmus, Ondrej Dusek, Chris Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Rubungo Andre Niyongabo, Salomey Osei, Ankur P. Parikh, Laura PerezBeltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM benchmark: Natural language generation, its evaluation and metrics. *CoRR*, abs/2102.01672. Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir R. Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh D. Dhole, Khyathi Raghavi Chandu, Laura Perez-Beltrachini, Leonardo F. R. Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Stajner, Sébastien Montella, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin P. AMahidewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, and Yufang Hou. 2022. Gemv2: Multilingual NLG benchmarking in a single line of code. *CoRR*, abs/2206.11249. Amnon Geifman. 2020. The correct way to measure inference time of deep neural networks. Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1100–1111. ACL. Jianping Gou, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. 2021. Knowledge distillation: A survey. *Int. J. Comput. Vis.*, 129(6):1789–1819. Manish Gupta and Puneet Agrawal. 2022. Compression of deep learning models for text: A survey. ACM Trans. Knowl. Discov. Data, 16(4):61:1–61:55. Ramyad Hadidi, Jiashen Cao, Yilun Xie, Bahar Asgari, Tushar Krishna, and Hyesoon Kim. 2019. Characterizing the deployment of deep neural networks on commercial edge devices. In IEEE International Symposium on Workload Characterization, IISWC 2019, Orlando, FL, USA, November 3-5, 2019, pages 35–48. IEEE. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: decoding-enhanced bert with disentangled attention. In *9th International* Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In *8th International Conference on* Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Luca Hormann and Artem Sokolov. 2021. Fixing exposure bias with imitation learning needs powerful oracles. *CoRR*, abs/2109.04114. Harsh Jhamtani, Varun Gangal, Eduard H. Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence-to-sequence models. *CoRR*, abs/1707.01161. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. Tinybert: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 4163–4174. Association for Computational Linguistics. Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2022. Deep learning for text style transfer: A survey. *Comput. Linguistics*, 48(1):155– 205. Andrej Karpathy. 2015. The unreasonable effectiveness of recurrent neural networks. Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A. Smith. 2020. Deep encoder, shallow decoder: Reevaluating the speed-quality tradeoff in machine translation. *CoRR*, abs/2006.10369. Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, and Daniel S. Weld. 2021. GENIE: A leaderboard for human-in-the-loop evaluation of text generation. *CoRR*, abs/2101.06561. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1317–1327. The Association for Computational Linguistics. Young Jin Kim, Marcin Junczys-Dowmunt, Hany Hassan, Alham Fikri Aji, Kenneth Heafield, Roman Grundkiewicz, and Nikolay Bogoychev. 2019. From research to production and back: Ludicrously fast neural machine translation. In *Proceedings* of the 3rd Workshop on Neural Generation and Translation@EMNLP-IJCNLP 2019, Hong Kong, November 4, 2019, pages 280–288. Association for Computational Linguistics. Yann LeCun, John S. Denker, and Sara A. Solla. 1989. Optimal brain damage. In *Advances in Neural Information Processing Systems 2, [NIPS Conference,* Denver, Colorado, USA, November 27-30, 1989], pages 598–605. Morgan Kaufmann. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew O. Arnold, Bing Xiang, and Dan Roth. 2022. DQ-BART: efficient sequenceto-sequence model via joint distillation and quantization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 203–211. Association for Computational Linguistics. Yang Liu, Sheng Shen, and Mirella Lapata. 2021a. Noisy self-knowledge distillation for text summarization. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 692–703. Association for Computational Linguistics. Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, and Jie Zhou. 2021b. Scheduled sampling based on decoding steps for neural machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 711 November, 2021, pages 3285–3296. Association for Computational Linguistics. Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. 2018. Shufflenet V2: practical guidelines for efficient CNN architecture design. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XIV, volume 11218 of Lecture Notes in Computer Science, pages 122–138. Springer. Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. 2020. Improved knowledge distillation via teacher assistant. In *The Thirty-Fourth AAAI* Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 5191–5198. AAAI Press. Subhabrata Mukherjee and Ahmed Hassan Awadallah. 2020. Xtremedistil: Multi-stage distillation for massive multilingual models. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2221–2234. Association for Computational Linguistics. Rafael Müller, Simon Kornblith, and Geoffrey E. Hinton. 2019. When does label smoothing help? In *Advances in Neural Information Processing Systems 32:* Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 4696–4705. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1797–1807. Association for Computational Linguistics. Saul B Needleman and Christian D Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of molecular biology, 48(3):443–453. Avital Oliver, Augustus Odena, Colin Raffel, Ekin Dogus Cubuk, and Ian J. Goodfellow. 2018. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 3239– 3250. OpenAI. 2023. GPT-4 technical report. *CoRR*, abs/2303.08774. Markus N. Rabe and Charles Staats. 2021. Selfattention does not need o(n2) memory. *CoRR*, abs/2112.05682. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 784–789. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392. The Association for Computational Linguistics. Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Nicholas Roberts, Davis Liang, Graham Neubig, and Zachary C. Lipton. 2020. Decoding and diversity in machine translation. *CoRR*, abs/2011.13477. Guy Rotman, Amir Feder, and Roi Reichart. 2021. Model compression for domain adaptation through causal effect estimation. *Trans. Assoc. Comput. Linguistics*, 9:1355–1373. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. *CoRR*, abs/1910.01108. Victor Sanh, Thomas Wolf, and Alexander M. Rush. 2020. Movement pruning: Adaptive sparsity by finetuning. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020,* December 6-12, 2020, virtual. Florian Schmidt. 2019. Generalization in generation: A closer look at exposure bias. In Proceedings of the 3rd Workshop on Neural Generation and Translation@EMNLP-IJCNLP 2019, Hong Kong, November 4, 2019, pages 157–167. Association for Computational Linguistics. Sam Shleifer and Alexander M. Rush. 2020. Pre-trained summarization distillation. *CoRR*, abs/2010.13002. Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. 2022. Unifying language learning paradigms. *CoRR*, abs/2205.05131. Guy Tevet and Jonathan Berant. 2021. Evaluating the evaluation of diversity in natural language generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 326–346. Association for Computational Linguistics. Marcos V. Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro Henrique Martins, André F. T. Martins, Peter A. Milder, Colin Raffel, Edwin Simpson, Noam Slonim, Niranjan Balasubramanian, Leon Derczynski, and Roy Schwartz. 2022. Efficient methods for natural language processing: A survey. *CoRR*, abs/2209.00099. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. *CoRR*, abs/2005.03642. Fusheng Wang, Jianhao Yan, Fandong Meng, and Jie Zhou. 2021a. Selective knowledge distillation for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6456–6466. Association for Computational Linguistics. Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, and Colin Raffel. 2022. What language model architecture and pretraining objective works best for zero-shot generalization? In *International* Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 22964–22984. PMLR. Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2021b. Minilmv2: Multi-head selfattention relation distillation for compressing pretrained transformers. In *Findings of the Association for Computational Linguistics: ACL/IJCNLP* 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of *Findings of ACL*, pages 2140– 2151. Association for Computational Linguistics. Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November* 16-20, 2020, pages 38–45. Association for Computational Linguistics. Qizhe Xie, Zihang Dai, Eduard H. Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In *Advances in Neural* Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Canwen Xu and Julian J. McAuley. 2022. A survey on model compression for natural language processing. CoRR, abs/2202.07105. Dongkuan Xu, Subhabrata Mukherjee, Xiaodong Liu, Debadeepta Dey, Wenhui Wang, Xiang Zhang, Ahmed Hassan Awadallah, and Jianfeng Gao. 2022. Autodistil: Few-shot task-agnostic neural architecture search for distilling large language models. CoRR, abs/2201.12507. Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In *COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference:* Technical Papers, 8-15 December 2012, Mumbai, India, pages 2899–2914. Indian Institute of Technology Bombay. Weijia Xu, Xing Niu, and Marine Carpuat. 2019. Differentiable sampling with flexible reference word order for neural machine translation. In *Proceedings of* the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2047–2053. Association for Computational Linguistics. Sina Zarrieß, Henrik Voigt, and Simeon Schüz. 2021. Decoding methods in neural language generation: A survey. *Inf.*, 12(9):355. Shengqiang Zhang, Xingxing Zhang, Hangbo Bao, and Furu Wei. 2022a. Attention temperature matters in abstractive summarization distillation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 127–141. Association for Computational Linguistics. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona T. Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022b. OPT: open pre-trained transformer language models. *CoRR*, abs/2205.01068. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. A Study Methods In this section, we formally describe the objectives and methods we consider in our study and discuss in §3. A description of the notations is provided in Table 5. In addition, for each method we mention the stage in which we examine it and its corresponding name in the results Table 3. More implementation details including hyperparameters are provided in §D. ## Conditional Language Modeling (Fine-Tuning) Stages 1 And 2. "Fine-Tune" In Table 3.A. The objective of the autoregressive LM is to minimize the Negative Log Likelihood (NLL) of the training dataset: $$\begin{array}{c}{{{\mathcal{L}}_{\mathrm{NLL}}(x,y)=-\log P(y|x)}}\\ {{=-\sum_{i=1}^{|y|}\log P(y_{i}|x,y_{<i})}}\end{array}$$ Notice that in our experiments we also conduct a fine-tuning stage for 10 epochs on the labeled data after the distillation stage of the following KD methods. ## Logits Kd (A.K.A Word-Level Kd) Stage 3. "Logits" in Table 3.A and 3.B. The objective of the student is to minimize the KL divergence (or the Cross-Entropy) of the next token distribution of the student and the teacher: $$\begin{array}{l}{{{\mathcal{L}}_{\mathrm{Log}}(x,y)=}}\\ {{-\sum_{i=1}^{|y|}K L(P_{S}(y_{i}|x,y_{<i})||P_{T}(y_{i}|x,y_{<i}))}}\end{array}$$ Noisy KD Stage 3. "Noisy" in Table 3.A. For more details see Liu et al. (2021a) and §D. ## Attention Relation Kd Stage 3. "Att-Rel" in Table 3.A. For more details see Wang et al. (2020), Wang et al. (2021b) and §D. Fine-tune + PTs (a.k.a. sequence-Level KD) Stage 4. "Seq-lvl" in Table 3.B. | x | Input text: | A sequence of m tokens, | |--------------------|------------------------------------------------------------------------------|---------------------------| | x = (x1, ..., xm). | | | | y | Target text: A sequence of n tokens, y = (y1, ..., yn). | | | P(yi|x, y<i) | The next token distribution that the autoregressive LM learns via teacher forcing. | | | yˆ | Generated text: The inference output of the LM. | | | P(yi|x, yˆ<i) | The next token distribution that is used during inference for generating yˆ. | | | T | The teacher LM. | | | S | The student LM (|S| ≪ |T|). | | | T | A pseudo target (PT) generated by the | | | yˆ | teacher model. | | | S | An output generated by the student | | | yˆ | model (student PT). | | | PT (yi|x, y<i) | The teacher's next token distribution. | | | PS(yi|x, y<i) | The student's next token distribution. Table 5: Notations. | | For each labeled input x, we use the teacher to generate a single mode approximation PT via beam search: yˆ T. Then we fine-tune the student by minimizing LNLL(x, yˆ T). Notice that in our experiments we actually minimize LNLL(x, yˆ T)+LNLL(*x, y*), i.e., an interpolation between the ground truth target and the PT. We find this interpolation to work better than using only the PT. Kim and Rush (2016) proposed another interpolation, by selecting the most similar PT to the ground truth from a set of K PTs generated by beam search. ## Logits Kd + Pts Stage 4 and 5. "Logits+Seq" in Table 3.B and "Labeled" in Table 3.C. Same as "Fine-tune + PTs", but we train the student to minimize: LLog(x, yˆ T). Following the note above, we actually minimize the interpolation: LLog(x, yˆ T) + LLog(*x, y*) (this is also the case for the following methods). ## Logits Kd + Pts For Unlabeled Inputs Stages 5 and 6. "+Unlabeled" in Table 3.C and "Single PT" in Table 3.D. Same as "Logits KD + PTs", but we also generate a single mode approximation PT for each unlabeled input. ## Logits Kd + Multiple Pts Stage 6. "K-Beams" In Table 3.D. We use the teacher to generate K PTs for every labeled or unlabeled input, using beam search with a beam size of K. We kept all the final K beams (sequences), YK, and used them to distill the student by minimizing: Pyˆ T ∈YKLLog(x, yˆ T). This technique can be viewed as generating the top-K mode approximations. In our experiments we use a different single PT for each input at every epoch (i.e., if we generate K PTs for each input, it takes K epochs until the student sees all of them). We mainly do it for a fair comparison between the different methods (see §D for additional details). ## Logits Kd + Sampling Multiple Pts Stage 7 and 8. "Sampling" in Table 3.D and "Only Teacher" in Table 3.E. Same as "Logits KD + Multiple PTs", but rather than generating PTs via beam search, we sample them. Notice that in every distillation epoch a different single PT is sampled. ## Logits Kd + High Temperature Sampling Of Multiple Pts Stage 7. "H-Sampling" in Table 3.D. Same as "Logits KD + Sampling Multiple PTs", but we apply softmax temperature adjustment to the next token distribution when we sample PTs. High temperature values cause the next token distribution to be more flat (and increase its entropy). Therefore, high-temperature sampling generates more diverse and surprising PTs (Tevet and Berant, 2021). We use τ = 1.5 in our experiments. ## Logits Kd + Student Pts Stage 8. "Only Student" In Table 3.E. Same as "Logits KD + Sampling Multiple PTs", but instead of generating PTs with the teacher, we use the student to generate PTs. We generate PTs on-the-fly since the student is continuously updated during training. In other words, for every training input, we use the student to sample a student PT yˆ S. Then, we calculate LLog(x, yˆ S) and update the student weights. The process is repeated for every input until the student finishes the training. ## Joint-Teaching Stage 8. "Joint-Teaching" in Table 3.E. This method combines "Logits KD + Sampling Multiple PTs" and "Logits KD + Student PTs". Accordingly, we generate a PT for every training input using either the teacher or the student. The student is trained to minimize: $$+\left(1-\frac{1}{2}\right)$$ ## Αllog(X, Yˆ T) + (1 − Α)Llog(X, Yˆ S) Where in our experiments α = 0.5, since we find it to work nicely. However, in future extensions of this method, α can also be a scheduled variable or a variable that depends on the student's learning. ## B Language Models Architectures As discussed in §3, the first stage (**stage 1**) of our study is to select the backbone architecture of the NLG model. In this section, we thoroughly discuss and demonstrate the differences between the two common transformer architectures for NLG: Encoder-decoder (ED) models and Decoder-only (DO) models. We start by providing a background on these architectures in §B.1. Following that, in §B.2 we present a theoretical and empirical complexity analysis. Finally, in Subsection §B.3 we compare various off-the-shelf LMs from different families by fine-tuning them on several NLG tasks in the realistic setups we consider in this work. An important note: We acknowledge that the generation capabilities of huge LMs such as GPT3, GPT-4, and PaLM are exceptional. We do not claim that Encoder-decoder models outperform huge Decoder-only models. We consider fine-tuned small or medium-sized LMs since our teachers and students are such. In this case, Encoder-decoders are preferable for task-specific fine-tuning of NLG. ## B.1 Transformer Background Modern LMs are based on the Multi-layer Transformer architecture (Vaswani et al., 2017). The core building block of the Transformer is Attention, which processes the sequence by replacing each token with a weighted average of the rest of the sequence (self-attention), the preceding tokens (autoregressive-attention), or another input sequence (cross-attention). For text generation, there are two dominant types of models: *Encoderdecoder (ED)* (Vaswani et al., 2017; Raffel et al., 2020; Lewis et al., 2020) and *Decoder-only (DO)* (Radford et al., 2019; Zhang et al., 2022b). ED models, which consist of two components (an encoder and a decoder), process inputs and targets (outputs) independently, with different parameter sets: The encoder processes the inputs with selfattention layers and passes its output to the decoder. Then, the decoder autoregressively generates the target token by token by applying autoregressiveattention and cross-attention (with the output of the encoder). On the other hand, DO models consist of autoregressive-attention layers that process inputs and targets together. Typically, the target sequence is concatenated to the input sequence (sometimes, with a separation token between them, such as "TL;DR" for summarization). Notice that in contrast to the DO model, the encoder component represents each token of the input sequence by sharing information from all the tokens in the input (via self-attention), while the DO model represents an input token by sharing information only from its preceding tokens (via autoregressive-attention). Another difference between the two architectures is that each layer of the decoder component of the ED model, applies cross-attention to the target tokens by conditioning on the last hidden states of the input tokens. This is in contrast to the decoder layers of the DO model which apply autoregressive-attention to the target inputs by conditioning on the same layer hidden states of the input tokens. ED and DO models differ not only in the architecture but also in the pre-training objectives. Whereas DO models are trained with an autoregressive language modeling objective (given previous tokens, predict the following one), ED models are trained with a masked language modeling objective (given a sequence with masked spans, predict the missing tokens). As a result of these differences (encoder component, attention mechanisms, and training objectives), the models exhibit different inductive biases, which affect their performance. While ED models are more popular for classification, summarization, and NMT tasks, DO models excel on open-text generation and zero-shot or few-shot learning (Raffel et al., 2020; Wang et al., 2022). Furthermore, the two architectures have different computational complexities (see the discussion in the next subsection, §B.2). Nonetheless, the increasing popularity of huge DO models like GPT-3/4 and PaLM (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023), which have impressive generation capabilities, has led to the question of "whether ED models are still relevant for NLG", a question that we aim to answer in the first stage of our study. To build an NLG system, it is necessary to select an architecture that meets its needs. In the spirit of our realistic setup, we compare various off-theshelf ED and DO LMs from different families, and show that ED models outperform DO models in conditional generation tasks. These findings are in line with the recent work of Tay et al. (2022), which in contrast to us, trained from scratch LMs. For the DO architecture, we use the GPT2-family models (Radford et al., 2019): GPT2, GPT2-M, and GPT2-L; and the recent OPT-family models (Zhang et al., 2022b): OPT-125Mand OPT-350M. For ED models ![17_image_0.png](17_image_0.png) we use the same models which are described in the main paper: T5-family models (Raffel et al., 2020): T5-S and T5-L; and the BART-family models (Lewis et al., 2020): BART-6:6 (base version) and BART-L. ## B.2 Complexity Analysis For the theoretical complexity analysis, we assume all the transformer models have the same hidden size dimension and ignore it in our analysis. We consider two types of models: ED with E encoder layers and D decoder layers, and a DO model with D decoder layers. The input and target lengths are m and n, respectively. For decoding, we assume that hidden states of previously generated tokens are cached and re-used (i.e., for the i-th token, the decoder layers perform operations only for it). We do not discuss space complexity, as it depends on the exact implementation (Rabe and Staats, 2021) and memory utilization of the device. Therefore, we do not connect the throughput measure to theoretical analysis. A single encoder layer has a quadratic time complexity in terms of input length O(m2). An ED decoder layer consists of a causal-attention and cross-attention and therefore has a time complexity of O(n(m + n). Thus, ED model has a complexity of O(m2E + n(m + n)D). Since we concatenate the input and the target for DO models, a single decoder layer of a DO has a time complexity of O((m + n) 2). Thus, a DO model has a complexity of O(m2D + n(m + n)D). This suggests that an ED model can have the same time complexity as a DO model (when E = D) while having double parameters because the encoder and the decoder layers do not share parameters (excluding cross-attention weights, which account for a small portion of the total weights (Raffel et al., 2020)). Note that the number of floating-point operations (FLOPs, see subsection §4.3), is compatible with the theoretical complexity. As a result, it is possible to verify the observation above - an ED model with double the number of layers and parameters as a DO model should have roughly the same number of FLOPs. Consider Table 3.2 and take for example GPT2-M, which has 24 decoder layers, and compare it to T5-L which has 48 layers (both of them have the same number of heads and the same hidden dimension, see Table 6). Indeed, they have the same number of FLOPs. On the other hand, there are differences in the practical measures. While the latency of GPT2-M is smaller than T5-L, its memory footprint is larger, which results in smaller throughputs. This highlights the complex nature of the connection between the theoretical and the practical measures (e.g., FLOPs and latency), which depend on the device, implementation, and hardware utilization that enable parallelism. Now, compare models from the same family but with different sizes. As can be seen, the ratio between the latencies of the models does not reflect the large compression rate between their sizes. For example, T5-L is 12 times larger than T5-S, however, it is only 3.7 times faster. Likewise, GPT2-L is 6 times larger than GPT2, but is only 2.9 times faster. On the other hand, the throughput reflects much better the size differences. This demonstrates the complex relationship between architectural decisions and computational measurements and suggests that architectural decisions should be taken according to the (specific) task and system needs. We next present a big O notion when assuming that operations can be parallelized (as in GPUs). This notation reflect better the latency: a practical measure of the time for generating a single target example. With full parallelism, the complexity of processing the input in a single encoder layer (for ED) is reduced from O(m2) to O(m) (see Kasai et al. (2020)). The same is true for the DO decoder layer when it processes the input since it is capable of processing all of it at the same time. However, since the target is generated by one token at a time (autoregressive), the processing complexity in each layer that processes the target remains O(n(m + n)). As a result, the time complexity of ED is | Arch. | Model | Enc. | Dec. | Heads | Hidden | Params | |---------|----------|--------|--------|---------|----------|----------| | DO | GPT2-L | 0 | 36 | 20 | 1280 | 774 | | DO | GPT2-M | 0 | 24 | 16 | 1024 | 354 | | DO | GPT2 | 0 | 12 | 12 | 768 | 124 | | DO | OPT-350M | 0 | 24 | 16 | 1024 | 331 | | DO | OPT-125M | 0 | 12 | 12 | 768 | 125 | | ED | T5-L | 24 | 24 | 16 | 1024 | 737 | | ED | T5-S | 6 | 6 | 8 | 512 | 60 | | ED | BART-L | 12 | 12 | 16 | 1024 | 406 | | ED | BART-6:6 | 6 | 6 | 16 | 768 | 139 | | ED | BART-6:2 | 6 | 2 | 16 | 768 | 111 | | ED | BART-2:6 | 2 | 6 | 16 | 768 | 101 | O(mE + n(m + n)D) and of DO is O(mD + n(m + n)D), which is equal to the ED complexity when E = D. Nevertheless, there are differences in practical measurements. The theoretical analysis when allowing parallelism sheds light on two observations that come up from the practical analysis. The first one is that the length of the target has a higher impact on the latency, than the length of the input. This is expected in the autoregressive generation process, where the relationship between the complexity and the input length is linear, while quadratic for the target length. This is supported by Table 8: for all models, altering only the input size minimally affects the latency. The second observation is pruning decoder layers has a higher impact on the latency than pruning encoder layers. This is also expected since each decoder layer contributes O(n(m + n)) to the total latency complexity, whereas a single encoder layer contributes O(m). This is verified in Table 8 and in Figure 3: the encoder pruned model, BART-2:6, has roughly the same latency as its full version, BART-6:6. Conversely, the decoder pruned model BART-6:2 has a smaller latency from both. The behavior of throughput is more complex than latency. While the pruned decoder model consistently has a smaller latency regardless of input length (as shown in Figure 3), the pruned encoder (BART-2:6) has a higher throughput than the pruned decoder (BART-6:2) for longer inputs, as indicated by the crossover at around 0.8 on the X-axis in Figure 3. ## B.3 Task Performance Analysis In this subsection, we discuss the differences in task performance between off-the-shelf ED and DO models, which are finetuned on our four datasets. The average results (over the four tasks) are provided in Table 2. For all datasets and models, ED models outperform DO models. Presumably, a better inductive bias is injected to the ED models: (1) By applying self-attention (and not autoregressiveattention) to the conditioned input sequence; (2) By the fact that in contrast to the DO model, the decoder component of the ED model attends to the last hidden states of the conditioned input sequence from its first layer. This is unlike the DO model, where each layer applies attention to hidden states of the same layer. Our results for conditional generation tasks in a finetuning setup are in line with other works (Raffel et al., 2020; Tay et al., 2022) which trained LMs from scratch. This finding is particularly relevant for NLP practitioners who aim to develop a specialized in-house model for a specific NLG task. Our findings also raise the question of why huge language models, such as GPT-3 and PaLM (Brown et al., 2020; Chowdhery et al., 2022) are DO, and Wang et al. (2022) answer it by showing that DO models excel in zero and few-shot setups. Indeed, in the final part of our study, which involves an extreme setup where labeled data is unavailable, we use GPT-4, a Decoder-only model with zero-shot capabilities, to generate PTs. Our equivocal results lead us to continue only with ED models (T5 and BART) for our compression study (stages 1-8). ## C Kd Without Labeled Data In the final phase of our study, we intend to explore the possibility of scaling up our experimental setup. This is accomplished by working with only a limited number of unlabeled examples and without any labeled examples. We refer to this setup as extreme setup. It is important to note that unlike the realistic setup, which incorporates a mediumsized labeled dataset, the extreme setup poses a challenge for fine-tuning a teacher model due to the lack of labeled examples. In that case, we need to utilize as our teacher a huge LM, such as GPT-4, which has zero-shot and few-shot capabilities and can generate plausible PTs. The main goal of this phase is to investigate the transferability of the KD conclusions from our realistic setup to the extreme setup, which possess the following differences since it involves a huge zero-shot LM as the teacher: (1) The teacher is a Decoder-only model (since this is the standard architecture for zero-shot and few-shot LMs) and the student is an Encoder-decoder model (following our findings that they outperform Decoderonly models, see §5); (2) The teacher and the student have different tokenizers, which means that a sequence-alignment algorithm is needed to perform Logits KD; (3) Unlike the realistic setup where the computational training cost could be neglected, in the extreme setup we assume that querying the huge teacher is financially costly and therefore we limit its usage. The third difference above impacts the design choice of the extreme setup, and we limit the number of unlabeled data to a few thousand. In addition, we do not consider the Joint-Teaching method due to its high cost compared to other methods. This is because it requires querying the teacher every time we generate a PT with the student (to extract the teacher logits). However, notice that extracting the logits of GPT-4 and generating multiple PTs is approximately equivalent to generating a single PT. This is because the prompt and the input, consisting of many tokens, are processed only once, and the marginal cost of generating multiple (relatively short) PTs is low. ## C.1 Experimental Setup Models and datasets We utilize GPT-4 as our teacher model and T5-S as the student model. For generating PTs with GPT-4, we use a prompt that contains a task instruction and three demonstrations of labeled examples (few-shot learning). We consider two NLG tasks: (1) Question Generation - we use the SQuAD17 dataset and sample 3000, 250 and 500 examples as the train, development and test, respectively; (2) Simplification and style transfer - we use the Shake7 dataset and sample 1500, 250 and 350 examples as the train, development, and test, respectively. Notice that both the training and development sets do not contain any labeled data. Only the test set includes labeled data, which is used for evaluation purposes. Methods and baselines We present the test results for the following baselines and methods: (a) The GPT-4 teacher; (2) A T5-S model which is trained using ground-truth (GT) targets to compare with the GPT-4 teacher; (c) Student fine-tuning with a single PT; (d) Student fine-tuning with multiple PTs; (e) Student training with a single PT and Logits KD; (f) Student training with multiple PTs and Logits KD. ![19_image_0.png](19_image_0.png) <|endoftext|>: 0.0, 200: 0.0, ?: 0.02, 202: 0.03, 201**: 0.95** How | much | was | Brian | L | . | Roberts | ' | total | compensation | in | 201 | 0 | ? (e) (f) s: 1.0 2010**: 1.0** We train each model (except GPT-4) using four learning rates: [0.003, 0.001, 0.0005, 0.0003]. In addition, as we explain §C.2, we also include results when we train the models of (c)-(f) using golden targets (ground-truth labels) for the development set. Tokenizers Alignment In the extreme setup, the teacher is a Decoder-only model (GPT-4), and the student is an Encoder-decoder model (T5-S) that does not share the same tokenizer. Therefore, to perform Logits KD, where the probabilities of the next token's logits are used for distillation, two types of token alignment are required: (1) Matching each token in the teacher's tokenized PT sequence with its corresponding token in the student's tokenized PT sequence; (2) Matching the tokens from the teacher's logits to tokens from the student's vocabulary. For example, consider the black and blue arrows in Figure 4. These arrows demonstrate the first type of match, where we align the tokens of the tokenized PT sequences. Additionally, some tokens might be inserted (c) or deleted (f). Similar to Fu et al. (2023), we use the well-known dynamic programming Needleman–Wunsch algorithm (Needleman and Wunsch, 1970) for sequence alignment to find this mapping. The output of the algorithm is a sequence of edit operations: match, replacement, insertion, and deletion. We consider two tokens as a match if the algorithm determines them as such or if they are replaced and one is a prefix of the other. For instance, the blue arrows in Figure 4 represent a match via replacement. The OpenAI API allows us to extract only the probability distribution over the top five tokens at each decoding step. However, their probability is usually close to 1. We align the top five tokens to the student's vocabulary by performing an exact | dev contains GTs | dev contains PTs | | | | | | | | | | | | | | | | |------------------------|--------------------|---------|--------|------|------|------|------|------|------|------|------|------|------|------|------|------| | SQuAD17 | Shake7 | SQuAD17 | Shake7 | | | | | | | | | | | | | | | Method | BL | RG | BS | PP | BL | RG | BS | PP | BL | RG | BS | PP | BL | RG | BS | PP | | a. GPT-4 (Teacher) | 13.6 | 37.6 | 75.0 | 21.4 | 42.3 | 79.4 | | | | | | | | | | | | b. T5-S + GT labels | 17.8 | 36.4 | 75.2 | 2.1 | 21.2 | 42.1 | 76.4 | 2.1 | | | | | | | | | | c. T5-S + PT | 11.3 | 31.9 | 72.0 | 2.49 | 19.1 | 41.1 | 76.0 | 2.37 | 11.7 | 32.6 | 72.4 | 2.65 | 18.9 | 41.1 | 76.0 | 2.47 | | d. T5-S + PTs | 11.4 | 32.2 | 72.0 | 2.4 | 19.0 | 41.1 | 76.1 | 2.36 | 11.6 | 32.7 | 72.4 | 2.84 | 19.2 | 41.1 | 76.1 | 2.36 | | e. T5-S + Logits + PT | 11.3 | 32.0 | 72.1 | 2.51 | 18.4 | 40.7 | 75.8 | 2.47 | 11.6 | 32.4 | 72.4 | 2.72 | 18.5 | 41.0 | 75.9 | 2.66 | | f. T5-S + Logits + PTs | 12.0 | 32.7 | 72.5 | 2.4 | 18.9 | 41.0 | 75.9 | 2.42 | 11.9 | 32.8 | 72.5 | 2.48 | 19.1 | 40.7 | 75.0 | 2.52 | match. Then, we apply softmax to the logits to make their probabilities sum to one. For example, (a) and (b) in Figure 4 present such an alignment. Notice that some of the tokens of the teachers are omitted ("Robsert" and "|<endoftext|>"). In case the student token does not have a match in the teacher's top five tokens, we determine its probability as one. For example, (c) and (e) in Figure 4: (c) "s" is a token that is inserted and therefore its probability is one; and (e) the token "2010" does not appear in the top five tokens (d), and therefore its probability is one. The second type of alignment we need to perform is matching the teacher's logits of the next token prediction to tokens from the student's vocabulary. The OpenAI API allows us to extract only the probability distribution over the top five tokens at each decoding step. However, the sum of their probabilities is usually close to 1. We align the top five tokens with the student's vocabulary by performing an exact match. Then, we apply softmax to the logits to ensure their probabilities sum up to one. For example, (a) and (b) in Figure 4 demonstrate such an alignment. Note that some tokens from the teacher are omitted (e.g., "Roberts" and "|<endoftext|>"). If the student's token does not have a match in the teacher's top five tokens, we assign its probability as one. For instance, in Figure 4, (c) "s" is an inserted token, so its probability is one; and (e) the token "2010" does not appear in the top five tokens (d), hence its probability is one. ## C.2 Results In Table 7, we present the results of the extreme setup. We do not include computational performance metrics as OpenAI does not detail the exact architecture of GPT-4. We find the results vary greatly between different initializations and learning rates. The observed difference in performance can be primarily attributed to the unique extreme setup. The limited number of training instances and the distinct distribution of PTs, generated by GPT4 rather than a fine-tuned model contribute to this ![20_image_0.png](20_image_0.png) variation. Additionally, the discrepancy between the development set, consisting of PTs, and the test set, containing ground-truth targets, negatively affect model selection (Fu et al., 2023). To address the issue of variability, we present the average scores over different learning rates. Additionally, we include the results from experiments conducted with development sets that contain ground-truth (GT) targets (as shown in the four left columns of Table 7). Indeed, the correlation between the development score and the test score is considerably low (0.06 and 0.12 for the SQuAD17 and Shake7 datasets, respectively) when the development sets are PTs, in contrast to the higher correlations observed when the development sets are GTs (0.57 and 0.66). As depicted in Table 7, the overall trends in the left four columns (development set with GTs) align with those in the right four (development set with PTs). The results are mixed: for the SQuAD17 dataset, Logits KD with multiple PTs outperforms the other methods, which is in line with the conclusions from the realistic setup. Surprisingly, incorporating Logits KD has a positive effect only when there are multiple PTs. In the Shake7 dataset, Logits KD does not improve the student. We believe this is due to the difficulties with aligning the tokenizers and call for further research. Nevertheless, another conclusion from the realistic setup, which also holds in the extreme setup, is that generating multiple PTs is preferable over a single PT. ## D Additional Implementation Details Our experiments are conducted in the PyTorch framework. Models are trained on a machine equipped with 4 Nvidia Tesla v100 GPUs (for XSUM40 and SQuAD17; in that case, we use DDP training) or with Nvidia GeForce RTX 4080 (for ART10 and Shake7). Training We optimize our model with the AdamW optimizer, with a weight decay of 1e − 5, ε = 1e−8, 100 warmup steps, and a linear learning rate scheduler. We use the largest batch size that fits the GPU for every dataset and model. However, for a fair comparison, we accumulate the gradients and update the model every 96 training examples for any experiment (same number of gradient updates). For BART models we apply half-precision training and inference. The validation metric for XSUM40 is ROUGE-2 (F1) and for SQuAD17, ART10 and Shake7 is BLEU. In addition, for XSUM40 we use "summarize:" as a prefix for T5 models and "TL;DR" as a suffix for DO models. For SQuAD17 we use "ask:" as a prefix and suffix for T5 and DO models respectively. For ART10 we use "explain:" as a prefix and suffix for T5 and DO models respectively, and for Shake7, we use "modern:" as a prefix and suffix for T5 and DO models respectively. Fine-tuning We examine multiple learning rates for each model and dataset as follows: for our student models, T5-S and BART-6:6, and smaller Decoder-only models, GPT2, OPT-125M–we search within 8 different learnings rates in the range of [5e − 2, 1e − 5] and train the models for 35 epochs. For our teacher models, T5-L and BART-L and the remaining decoder-only models–6 learning rates in [5e − 3, 1e − 6] and 20 epochs. For fine-tuning the decoder-only models, we concatenate the input and the target, separated by a task suffix, and calculate the loss only on the target tokens. Evaluation We evaluate every model two times at each epoch (at the middle and the end) and select the best checkpoint according to the development set performance (see §4.1 for more details about the measure used in each dataset). For computational reasons, when we evaluate the model on the development set, we generate predictions for no more than 1K. We use DeBERTa-base model (He et al., 2021), fine-tuned on the MNLI dataset as the backbone model for calculating BSs. Knowledge Distillation For computational reasons, the learning rate which is used for training the student is selected according to the development performances in the fine-tuning stage (and reported on Tables 9, 10, 11, 12). Since we have observed that the convergence time of the students in KD setups is slow (unlike fine-tuning), we train the models for 192 epochs but stop the training if there is no improvement in the performance for 16 epochs (32 evaluation steps). We note that all of the experiments were stopped before the last epoch. Following Mukherjee and Awadallah (2020); Xu et al. (2022), we perform a fine-tuning stage for 10 epochs after selecting the best checkpoint during the KD stage. The final checkpoint is selected either from the KD or fine-tuning stages. For Logits KD we minimize the KL divergence between the student and the teacher logits. We also tried using Label-Smoothing (Müller et al., 2019) with different scaling temperatures, however, it did not help the distillation. For Sequence-Level KD we fine-tune the student on pseudo-targets generated with beam search in addition to the original ground truth targets. For Noisy KD we only apply noise to the teacher's logits (as it is shown to be more important than applying noise to the input, and since we don't focus on input manipulations in this study). For Attention-Relations KD we distill relations from the last encoder and last decoder layers and scale the weights of the loss components to 1 at the start of the training. Pseudo Targets For generating pseudo targets we use nucleus sampling with P = 0.95, for high temperature sampling we use τ = 1.5. When generating pseudo targets with the teacher using beam search, we use a beam size of 16. For sampling, we generate 48 pseudo targets (these are the largest sizes that fit on v100 GPU for XSUM40). In experiments with a single pseudo target, we select the highest-ranked prediction among the generated targets using beam search with a beam of size 16. We augment the training data with PTs by adding pairs of input and **a single** PT for each labeled or unlabeled example (depending on the experiment). In experiments with multiple PTs, we use a different single PT at every epoch (alternatively, the student could learn from multiple pseudo targets of the same input on every epoch). We do it for two main reasons: first, we want a fair comparison between experiments with single or multiple pseudo targets. Second, we have observed that the ground truth of the labeled data is important–this way, the student sees more of it as the proportion of ground truth targets is larger than the alternative. We use nucleus sampling and for high temperature sampling we use τ = 1.5. For computational reasons, we generate all the teacher PTs once and reuse them. Conversely, the student PTs are generated on-the-fly, since the student is continuously updated during training. In Joint-Teaching, we generate PTs with the student in 50% of the training steps (in the remaining 50% we use the teacher). Computational Profiling All computational profiling experiments are conducted on Nvidia GeForce RTX 4080. Following Geifman (2020), we do a GPU warmup for 10 steps and then average 100 computational measurements. For every dataset, we use the maximum input and target length as reported in Table 1. FLOPs are measured for a full forward step. Latency and memory footprint are measured for generating a single example. For measuring Throughput, which is the maximum number of examples the model can process in a minute, we find the maximum batch size that does not exceed 16GB during the generation, and then measure the throughput. ## D.1 The Shake7 **Dataset** We construct a new dataset for the well-explored style transfer task (which is also a simplification task) of translating Shakespeare's texts to modern English. We combined three existing datasets: two parallel datasets of Shakespeare's original texts and their modern versions (Xu et al., 2012; Jhamtani et al., 2017) and a third dataset containing only unlabeled texts from Shakespeare's plots that are not part of the other two datasets (Karpathy, 2015). A particular advantage of this dataset is that it consists of publically available datasets, while many other datasets for thesimplification task are not public. Moreover, in this dataset we have access to both labeled (original alongside modern texts) and unlabeled (original texts) data. Additionally, the labels (modern English texts) are of very high quality as experts produce them. Finally, the task of this dataset is harder than other style transfer and simplification cases since the difference between the original text and the simplified version is not limited to a small number of words. We hope this dataset will contribute to the NLP community. ## D.2 Urls Of Code And Data - **Code Repository** - code and datasets: github.com/nitaytech/KD4Gen. | XSUM 40K |x| = 480 |y| = 32 SQuAD 17.5K |x| = 320 |y| = 32 ART 10K |x| = 48 |y| = 32 Shakespeare 7K |x| = 48 |y| = 48 | |-------------------------------------------------------------------------------------------------------------------------| - **HuggingFace** (Wolf et al., 2020) - code and pretrained weights for language models, tokenizers, and datasets: huggingface.co/. huggingface.co/docs/accelerate/index. - **Torchprofile** - for measuring FLOPs: github.com/zhijian-liu/torchprofile. ## E Additional Results In this section, we report additional details and the complete results of our study. Table 6 presents a full description of the architecture of the models used in our study. Table 8 provide full results of our computational measurements. In Tables 9, 10, 11 and 12 we report on the performances of every experiment we conduct, for XSUM40, SQuAD17, ART10 and Shake7 datasets, respectively. | Dataset | Model | FLOPs | Lat. | G. Mem | TP | Batch | |-----------|---------|---------|--------|----------|------|---------| | GPT2-L | 84.0 | 630 | 539 | 555 | 23 | | | GPT2-M | 38.8 | 424 | 345 | 1177 | 42 | | | GPT2 | 13.6 | 211 | 198 | 3335 | 78 | | | OPT-350M | 36.3 | 344 | 294 | 1415 | 49 | | | OPT-125M | 13.6 | 170 | 180 | 3728 | 86 | | | T5-L | 38.7 | 539 | 112 | 1297 | 116 | | | T5-S | 2.7 | 144 | 30 | 13392 | 525 | | | BART-L | 19.6 | 254 | 59 | 3349 | 243 | | | BART-6:6 | 5.8 | 132 | 36 | 8361 | 428 | | | BART-2:6 | 2.8 | 131 | 36 | 12320 | 432 | | | BART-6:2 | 5.1 | 68 | 36 | 10025 | 433 | | | GPT2-L | 56.7 | 604 | 361 | 849 | 35 | | | GPT2-M | 26.1 | 410 | 223 | 1804 | 65 | | | GPT2 | 9.2 | 215 | 124 | 5128 | 124 | | | OPT-350M | 24.4 | 333 | 193 | 2143 | 76 | | | OPT-125M | 9.2 | 167 | 112 | 5748 | 138 | | | T5-L | 26.1 | 530 | 77 | 2011 | 166 | | | T5-S | 1.8 | 143 | 16 | 22256 | 984 | | | BART-L | 13.3 | 250 | 41 | 4759 | 350 | | | BART-6:6 | 3.9 | 133 | 16 | 11066 | 956 | | | BART-2:6 | 2.0 | 131 | 16 | 15000 | 963 | | | BART-6:2 | 3.4 | 67 | 16 | 13009 | 965 | | | GPT2-L | 12.5 | 582 | 58 | 4417 | 219 | | | GPT2-M | 5.7 | 401 | 34 | 9421 | 428 | | | GPT2 | 2.0 | 208 | 18 | 26177 | 824 | | | OPT-350M | 5.3 | 325 | 32 | 10328 | 458 | | | OPT-125M | 2.0 | 161 | 17 | 30164 | 911 | | | T5-L | 5.9 | 533 | 22 | 10661 | 581 | | | T5-S | 0.5 | 142 | 3 | 109777 | 5101 | | | BART-L | 3.2 | 250 | 12 | 13704 | 1197 | | | BART-6:6 | 1.0 | 132 | 5 | 21259 | 3088 | | | BART-2:6 | 0.8 | 131 | 5 | 22818 | 3111 | | | BART-6:2 | 0.8 | 67 | 2 | 23408 | 7359 | | | GPT2-L | 15.0 | 883 | 70 | 3129 | 182 | | | GPT2-M | 6.9 | 600 | 38 | 6849 | 383 | | | GPT2 | 2.4 | 306 | 18 | 19237 | 824 | | | OPT-350M | 6.4 | 484 | 38 | 6439 | 386 | | | OPT-125M | 2.4 | 241 | 17 | 21902 | 911 | | | T5-L | 7.2 | 789 | 28 | 7422 | 453 | | | T5-S | 0.6 | 212 | 3 | 75309 | 4063 | | | BART-L | 3.9 | 367 | 15 | 9200 | 958 | | | BART-6:6 | 1.3 | 192 | 6 | 13251 | 2562 | | | BART-2:6 | 1.1 | 191 | 6 | 13859 | 2581 | | | BART-6:2 | 1.0 | 96 | 3 | 14836 | 5197 | | Model Obj. PT In. Decoding PT St. LR FT Dev BLEU ROUGE PPL R1 R2 RL BS-F1 BS-P BS-R MET T5-L FT - - - 5e-5 F 17.7 11.5 29.3 1.7 39.0 16.9 31.9 72.7 73.8 71.8 33.5 T5-S FT - - - 3e-3 T 12.2 7.6 23.2 3.1 32.4 11.5 25.9 68.3 69.0 67.9 26.8 T5-S FT L 1-BS T 3e-3 F 13.5 8.5 24.6 2.9 33.7 12.8 27.3 69.2 70.4 68.3 27.9 T5-S Noisy - - - 3e-3 F 13.7 8.3 24.9 2.2 34.2 12.9 27.6 69.7 71.3 68.4 28.0 T5-S Att-Rel - - - 3e-3 F 14.0 8.8 25.4 2.3 34.8 13.4 28.1 70.2 71.5 69.1 28.8 T5-S Logits - - - 3e-3 F 14.1 8.5 25.1 2.3 34.4 13.1 27.8 69.9 71.4 68.7 28.3 T5-S Logits L 1-BS T 3e-3 F 14.2 8.4 25.0 2.3 34.3 13.1 27.8 69.8 71.2 68.6 28.1 T5-S Logits L+U 1-BS T 3e-3 F 15.8 9.9 27.1 2.1 36.4 15.0 29.8 71.0 69.9 72.4 30.5 T5-S Logits L+U K-BS T 3e-3 F 15.9 10.3 27.3 2.1 36.8 15.3 29.9 71.2 70.3 72.2 31.1 T5-S Logits L+U Samp. T 3e-3 F 16.3 10.5 27.9 1.9 37.3 15.7 30.6 71.8 70.5 73.3 31.5 T5-S Logits L+U H-Samp. T 3e-3 T 16.3 10.5 27.9 1.9 37.4 15.6 30.5 71.7 70.7 72.8 31.6 T5-S Logits L+U Samp. S 3e-3 T 16.5 10.2 27.8 1.9 37.4 15.6 30.5 71.8 73.2 70.6 31.4 T5-S Logits L+U Samp. T+S 3e-3 F 16.6 10.7 28.2 1.9 37.8 16.0 30.9 71.8 70.9 73.0 32.0 BART-L FT - - - 1e-5 F 19.0 13.0 31.1 1.7 41.0 18.8 33.6 73.9 73.0 75.0 35.8 BART-6:6 FT - - - 5e-5 F 15.7 10.0 27.6 2.0 37.0 15.5 30.3 72.1 73.8 70.6 31.1 BART-2:6 FT - - - 5e-5 F 12.3 8.0 23.8 2.4 32.8 12.2 26.3 69.4 70.2 68.7 27.5 BART-6:2 FT - - - 5e-5 F 15.4 9.3 26.5 2.7 35.8 14.6 29.2 71.0 72.6 69.6 29.9 BART-6:2 FT L 1-BS T 5e-5 F 15.3 9.8 27.0 2.6 36.2 15.1 29.7 71.1 72.7 69.8 30.3 BART-6:2 Noisy - - - 5e-5 F 16.1 9.8 27.6 2.2 37.0 15.6 30.3 71.7 73.6 70.0 30.8 BART-6:2 Att-Rel - - - 5e-5 F 16.4 9.9 27.6 2.4 37.0 15.6 30.3 71.7 73.4 70.2 31.0 BART-6:2 Logits - - - 5e-5 F 16.2 10.0 27.6 2.4 37.0 15.6 30.2 71.8 73.4 70.4 31.1 BART-6:2 Logits L 1-BS T 5e-5 F 16.6 10.4 27.9 2.3 37.2 16.0 30.5 72.0 73.7 70.4 31.4 BART-6:2 Logits L+U 1-BS T 5e-5 F 17.5 11.1 28.8 2.2 38.2 16.8 31.4 72.5 71.1 74.2 32.4 BART-6:2 Logits L+U K-BS T 5e-5 F 17.7 11.5 29.4 2.2 38.8 17.3 32.0 72.8 71.4 74.5 33.2 BART-6:2 Logits L+U Samp. T 5e-5 F 18.3 11.6 29.6 2.0 39.1 17.6 32.3 73.1 71.7 74.8 33.4 BART-6:2 Logits L+U H-Samp. T 5e-5 F 17.9 11.3 29.4 2.0 38.8 17.3 32.0 73.0 71.4 74.8 32.9 BART-6:2 Logits L+U Samp. S 5e-5 F 18.0 11.3 29.4 2.0 39.0 17.2 32.0 73.0 74.5 71.7 33.2 BART-6:2 Logits L+U Samp. T+S 5e-5 T 18.3 12.3 30.2 1.9 39.9 18.0 32.8 73.5 74.6 72.5 34.7 GPT2-L FT - - - 5e-6 F 11.9 7.8 22.5 1.9 30.7 11.9 24.9 65.6 67.2 64.2 25.6 GPT2-M FT - - - 1e-4 F 11.2 6.8 20.6 2.1 28.7 10.4 22.9 63.9 65.1 62.9 23.7 GPT2 FT - - - 1e-3 F 8.0 4.6 17.6 2.6 25.0 7.8 19.9 62.0 63.8 60.5 19.9 OPT-350M FT - - - 1e-4 F 10.6 6.7 20.8 2.8 29.1 10.5 22.8 64.5 65.8 63.8 24.3 OPT-125M FT - - - 7e-5 F 11.5 7.0 22.5 2.2 31.1 11.4 24.9 67.9 69.5 66.5 25.2 Table 9: Results for summarization task, XSUM40 dataset. | Model | Obj. | PT In. | Decoding | PT St. | LR | FT | Dev | BLEU | ROUGE | PPL | R1 | R2 | RL | BS-F1 | BS-P | BS-R | MET | |-----------------------------------------------------------------|---------|----------|------------|----------|------|------|-------|--------|---------|-------|------|------|------|---------|--------|--------|-------| | T5-L | FT | - | - | - | 5e-5 | F | 22.0 | 22.2 | 42.3 | 1.3 | 50.6 | 29.3 | 46.8 | 77.9 | 77.8 | 77.7 | 48.9 | | T5-S | FT | - | - | - | 5e-4 | F | 19.4 | 19.1 | 38.3 | 1.9 | 46.6 | 25.3 | 43.2 | 76.1 | 75.8 | 75.8 | 44.6 | | T5-S | FT | L | 1-BS | T | 5e-4 | F | 19.7 | 18.8 | 38.7 | 2.8 | 46.8 | 26.0 | 43.3 | 75.7 | 75.5 | 76.1 | 45.5 | | T5-S | Noisy | - | - | - | 5e-4 | F | 20.3 | 20.2 | 39.6 | 1.7 | 47.8 | 26.6 | 44.4 | 76.4 | 76.7 | 76.4 | 45.9 | | T5-S | Att-Rel | - | - | - | 5e-4 | F | 20.4 | 20.1 | 39.5 | 1.7 | 47.7 | 26.5 | 44.3 | 76.4 | 76.6 | 76.4 | 45.8 | | T5-S | Logits | - | - | - | 5e-4 | F | 20.2 | 20.4 | 39.8 | 1.7 | 47.9 | 26.9 | 44.6 | 76.8 | 76.5 | 76.5 | 46.0 | | T5-S | Logits | L | 1-BS | T | 5e-4 | F | 19.9 | 19.6 | 39.4 | 1.8 | 47.7 | 26.5 | 44.0 | 76.2 | 76.3 | 76.5 | 46.1 | | T5-S | Logits | L+U | 1-BS | T | 5e-4 | F | 20.6 | 20.2 | 40.1 | 1.7 | 48.2 | 27.3 | 44.8 | 76.5 | 76.7 | 76.5 | 46.7 | | T5-S | Logits | L+U | K-BS | T | 5e-4 | F | 20.8 | 21.0 | 40.8 | 1.6 | 49.0 | 27.8 | 45.5 | 77.1 | 76.9 | 76.9 | 47.1 | | T5-S | Logits | L+U | Samp. | T | 5e-4 | F | 21.1 | 20.9 | 40.5 | 1.6 | 48.6 | 27.7 | 45.3 | 76.9 | 76.9 | 76.8 | 47.0 | | T5-S | Logits | L+U | H-Samp. | T | 5e-4 | F | 21.6 | 21.3 | 40.9 | 1.5 | 49.0 | 28.0 | 45.6 | 77.2 | 77.0 | 77.0 | 47.2 | | T5-S | Logits | L+U | Samp. | S | 5e-4 | F | 20.8 | 20.9 | 40.7 | 1.6 | 48.9 | 27.8 | 45.4 | 76.9 | 77.0 | 77.0 | 47.3 | | T5-S | Logits | L+U | Samp. | T+S | 5e-4 | F | 21.5 | 20.9 | 40.6 | 1.5 | 48.9 | 27.7 | 45.2 | 77.0 | 76.9 | 76.8 | 47.1 | | BART-L | FT | - | - | - | 1e-5 | F | 21.1 | 21.5 | 41.9 | 1.4 | 50.2 | 28.9 | 46.7 | 77.8 | 78.3 | 77.5 | 48.0 | | BART-6:6 FT | - | - | - | 1e-4 | F | 18.4 | 19.3 | 39.2 | 1.7 | 47.7 | 26.1 | 43.8 | 76.3 | 76.3 | 76.5 | 45.9 | | | BART-2:6 FT | - | - | - | 1e-4 | F | 12.3 | 11.8 | 28.7 | 1.9 | 36.3 | 16.3 | 33.4 | 71.3 | 71.5 | 71.3 | 33.8 | | | BART-6:2 FT | - | - | - | 1e-4 | F | 17.6 | 17.7 | 37.6 | 2.4 | 45.9 | 24.5 | 42.6 | 75.5 | 76.3 | 74.9 | 43.1 | | | BART-6:2 FT | - | - | - | 1e-4 | F | 17.8 | 18.1 | 38.2 | 2.0 | 46.3 | 25.0 | 43.2 | 75.7 | 76.6 | 75.0 | 43.3 | | | BART-6:2 FT | L | 1-BS | T | 1e-4 | F | 19.6 | 19.5 | 39.2 | 2.4 | 47.4 | 26.3 | 44.0 | 76.1 | 76.8 | 75.7 | 44.9 | | | BART-6:2 Noisy | - | - | - | 1e-4 | F | 19.2 | 18.8 | 39.2 | 1.8 | 47.4 | 26.0 | 44.2 | 76.4 | 77.5 | 75.7 | 44.2 | | | BART-6:2 Att-Rel | - | - | - | 1e-4 | F | 18.8 | 18.6 | 39.1 | 2.0 | 47.3 | 25.9 | 43.9 | 76.1 | 77.0 | 75.4 | 44.2 | | | BART-6:2 Logits | - | - | - | 1e-4 | F | 19.5 | 19.4 | 39.4 | 1.9 | 47.7 | 26.3 | 44.3 | 76.3 | 77.0 | 75.9 | 45.0 | | | BART-6:2 Logits | L | 1-BS | T | 1e-4 | F | 20.0 | 19.7 | 39.5 | 1.9 | 47.6 | 26.6 | 44.3 | 76.3 | 76.9 | 75.9 | 45.0 | | | BART-6:2 Logits | L+U | 1-BS | T | 1e-4 | F | 20.3 | 20.4 | 40.2 | 1.9 | 48.3 | 27.4 | 45.0 | 76.6 | 77.2 | 76.2 | 45.9 | | | BART-6:2 Logits | L+U | K-BS | T | 1e-4 | F | 20.4 | 20.2 | 39.9 | 1.8 | 47.9 | 27.2 | 44.6 | 76.4 | 77.1 | 76.1 | 45.6 | | | BART-6:2 Logits | L+U | Samp. | T | 1e-4 | F | 20.4 | 20.4 | 40.5 | 1.8 | 48.7 | 27.6 | 45.2 | 76.7 | 77.3 | 76.3 | 46.2 | | | BART-6:2 Logits | L+U | H-Samp. | T | 1e-4 | F | 20.4 | 20.2 | 40.1 | 1.7 | 48.3 | 27.3 | 44.9 | 76.7 | 77.4 | 76.3 | 45.8 | | | BART-6:2 Logits | L+U | Samp. | S | 1e-4 | F | 20.9 | 20.7 | 40.8 | 1.8 | 49.0 | 27.7 | 45.6 | 77.2 | 77.6 | 77.0 | 46.9 | | | BART-6:2 Logits | L+U | Samp. | T+S | 1e-4 | F | 21.0 | 20.9 | 40.9 | 1.7 | 49.1 | 27.8 | 45.8 | 77.3 | 77.8 | 77.0 | 47.0 | | | GPT2-L | FT | - | - | - | 7e-6 | F | 15.0 | 15.8 | 33.3 | 1.7 | 40.3 | 21.7 | 37.8 | 73.2 | 74.7 | 72.0 | 37.9 | | GPT2-M | FT | - | - | - | 5e-4 | F | 11.6 | 12.2 | 27.3 | 3.7 | 33.9 | 16.3 | 31.6 | 69.4 | 70.2 | 68.9 | 32.6 | | GPT2 | FT | - | - | - | 1e-3 | F | 6.1 | 6.6 | 17.9 | 3.6 | 22.9 | 9.6 | 21.2 | 53.0 | 53.6 | 52.6 | 21.6 | | OPT-350M FT | - | - | - | 1e-4 | F | 10.6 | 10.7 | 25.4 | 2.5 | 30.9 | 16.4 | 28.7 | 52.4 | 53.0 | 52.1 | 29.2 | | | OPT-125M FT | - | - | - | 1e-4 | F | 12.6 | 13.4 | 31.2 | 2.1 | 39.2 | 19.2 | 35.3 | 70.1 | 71.2 | 69.6 | 35.1 | | | Table 10: Results for question generation task, SQuAD17 dataset | | | | | | | | | | | | | | | | | | Model Obj. PT In. Decoding PT St. LR FT Dev BLEU ROUGE PPL R1 R2 RL BS-F1 BS-P BS-R MET T5-L FT - - - 5e-5 F 6.0 6.0 21.7 1.9 28.8 8.8 27.4 71.5 72.7 70.6 26.1 T5-S FT - - - 5e-4 F 3.7 3.6 18.1 2.5 25.2 5.4 23.7 69.4 70.3 68.7 22.4 T5-S FT L 1-BS T 5e-4 F 4.0 4.2 18.5 2.8 25.4 6.1 24.0 69.3 69.8 69.0 23.1 T5-S Noisy - - - 5e-4 F 4.6 4.3 19.2 2.4 26.2 6.6 24.8 70.1 71.3 69.2 23.4 T5-S Att-Rel - - - 5e-4 F 4.5 4.3 19.0 2.4 26.0 6.4 24.7 70.1 71.4 69.1 23.1 T5-S Logits - - - 5e-4 F 4.5 4.3 19.0 2.4 25.9 6.5 24.6 70.0 71.3 69.0 23.1 T5-S Logits L 1-BS T 5e-4 F 4.4 4.4 19.1 2.5 26.2 6.4 24.7 69.8 70.7 69.3 23.5 T5-S Logits L+U 1-BS T 5e-4 F 4.9 4.8 19.8 2.4 26.8 7.1 25.5 70.4 71.5 69.6 24.1 T5-S Logits L+U K-BS T 5e-4 F 5.0 4.8 19.8 2.4 26.9 7.1 25.5 70.4 71.6 69.6 24.0 T5-S Logits L+U Samp. T 5e-4 F 5.0 4.7 19.8 2.4 26.8 7.0 25.5 70.6 72.0 69.4 23.8 T5-S Logits L+U H-Samp. T 5e-4 F 5.0 4.7 19.9 2.3 27.0 7.2 25.5 70.5 71.5 69.7 24.2 T5-S Logits L+U Samp. S 5e-4 F 4.9 4.7 19.7 2.3 26.7 7.0 25.4 70.5 71.9 69.5 23.8 T5-S Logits L+U Samp. T+S 5e-4 F 5.3 4.8 19.9 2.4 27.0 7.3 25.5 70.4 71.3 69.7 24.2 BART-L FT - - - 5e-5 F 6.4 6.0 21.4 2.1 28.5 8.6 27.1 71.5 72.7 70.6 25.7 BART-6:6 FT - - - 5e-5 F 4.6 4.9 20.3 2.1 27.3 7.5 26.0 71.1 72.7 69.7 24.1 BART-2:6 FT - - - 5e-5 F 3.7 3.7 17.9 2.3 24.8 5.4 23.4 69.5 70.7 68.5 21.7 BART-6:2 FT - - - 5e-5 F 3.7 3.9 18.8 2.7 25.6 6.2 24.5 70.1 72.5 68.1 22.1 BART-6:2 FT L 1-BS T 5e-5 F 4.6 4.6 19.4 2.8 26.2 6.9 25.0 70.2 71.4 69.2 23.4 BART-6:2 Noisy - - - 5e-5 F 4.7 4.7 19.8 2.4 26.7 7.2 25.5 70.7 72.4 69.3 23.6 BART-6:2 Att-Rel - - - 5e-5 F 5.1 4.4 19.2 2.7 26.1 6.8 24.8 70.4 72.0 69.1 23.0 BART-6:2 Logits - - - 5e-5 F 5.0 4.7 19.5 2.6 26.4 7.0 25.2 70.6 72.1 69.3 23.5 BART-6:2 Logits L 1-BS T 5e-5 F 5.3 5.0 19.8 2.6 26.7 7.4 25.4 70.6 71.9 69.5 23.9 BART-6:2 Logits L+U 1-BS T 5e-5 F 5.6 5.1 20.1 2.6 27.0 7.6 25.6 70.7 72.1 69.6 24.1 BART-6:2 Logits L+U K-BS T 5e-5 F 5.4 5.2 20.1 2.5 27.0 7.6 25.7 70.9 72.3 69.8 24.1 BART-6:2 Logits L+U Samp. T 5e-5 F 5.6 5.2 20.2 2.5 27.2 7.7 25.8 70.9 72.2 69.8 24.3 BART-6:2 Logits L+U H-Samp. T 5e-5 F 5.3 5.0 19.9 2.5 26.9 7.4 25.6 70.8 72.3 69.6 23.8 BART-6:2 Logits L+U Samp. S 5e-5 F 5.0 4.9 20.0 2.5 26.9 7.5 25.6 70.9 72.3 69.7 23.9 BART-6:2 Logits L+U Samp. T+S 5e-5 F 5.2 5.1 20.3 2.4 27.2 7.7 25.9 71.0 72.3 69.9 24.3 GPT2-L FT - - - 5e-6 F 3.6 3.6 13.8 2.3 18.5 5.1 17.6 67.2 69.4 65.4 18.9 GPT2-M FT - - - 5e-4 F 1.9 2.0 9.8 4.8 13.7 2.8 12.9 63.5 64.7 62.6 15.5 GPT2 FT - - - 1e-3 F 2.1 2.2 10.9 2.8 15.0 3.2 14.3 65.2 67.9 62.9 16.1 OPT-350M FT - - - 1e-4 F 2.5 3.0 15.4 3.4 21.2 4.8 20.1 61.7 62.7 60.9 19.1 OPT-125M FT - - - 1e-4 F 1.9 2.1 10.9 3.6 15.5 3.0 14.3 64.1 65.0 63.7 16.0 Table 11: Results for abductive commonsense reasoning, task, ART10 dataset Model Obj. PT In. Decoding PT St. LR FT Dev BLEU ROUGE PPL R1 R2 RL BS-F1 BS-P BS-R MET T5-L FT - - - 5e-5 F 25.4 25.7 45.4 1.5 54.0 31.5 50.6 78.4 78.4 78.6 53.8 T5-S FT - - - 5e-4 F 23.3 23.3 43.4 2.1 52.3 29.0 48.9 76.9 76.9 77.0 51.6 T5-S FT L 1-BS T 5e-4 F 24.2 23.7 44.1 2.7 53.1 29.7 49.5 77.4 77.2 77.7 52.5 T5-S Noisy - - - 5e-4 F 24.5 24.2 44.3 1.9 53.1 30.1 49.7 77.5 77.6 77.5 52.5 T5-S Att-Rel - - - 5e-4 F 24.5 24.1 44.4 1.9 53.3 30.0 49.8 77.4 77.4 77.5 52.5 T5-S Logits - - - 5e-4 F 24.1 24.3 44.5 1.9 53.4 30.2 50.0 77.4 77.4 77.5 52.7 T5-S Logits L 1-BS T 5e-4 F 24.8 24.8 44.7 1.9 53.4 30.5 50.2 77.6 77.7 77.6 52.8 T5-S Logits L+U 1-BS T 5e-4 F 25.5 25.1 45.4 1.9 54.2 31.3 50.7 78.0 78.0 78.1 53.7 T5-S Logits L+U K-BS T 5e-4 F 25.5 25.4 45.5 1.8 54.2 31.3 51.0 78.1 78.1 78.2 53.7 T5-S Logits L+U Samp. T 5e-4 F 25.4 25.4 45.5 1.8 54.2 31.2 50.9 78.1 78.2 78.1 53.5 T5-S Logits L+U H-Samp. T 5e-4 F 25.5 25.5 45.1 1.7 53.9 31.0 50.6 78.1 78.1 78.1 53.4 T5-S Logits L+U Samp. S 5e-4 F 25.2 25.5 45.4 1.8 54.1 31.1 50.9 78.1 78.1 78.2 53.7 T5-S Logits L+U Samp. T+S 5e-4 F 25.3 25.7 45.3 1.7 54.0 31.2 50.8 78.1 78.2 78.2 53.6 BART-L FT - - - 5e-5 F 25.3 25.1 44.8 1.8 53.4 30.7 50.2 78.3 78.5 78.2 52.9 BART-6:6 FT - - - 5e-5 F 25.1 23.8 43.7 1.8 52.3 29.5 49.1 77.4 77.6 77.3 51.6 BART-2:6 FT - - - 5e-5 F 22.7 22.0 41.8 2.0 50.7 27.4 47.3 76.1 76.4 75.9 49.5 BART-6:2 FT - - - 5e-5 F 23.1 22.4 43.0 2.6 51.7 28.6 48.6 76.7 77.3 76.3 50.0 BART-6:2 FT L 1-BS T 5e-5 F 24.8 23.3 43.6 3.0 52.6 29.1 49.2 77.2 77.4 77.2 51.5 BART-6:2 Noisy - - - 5e-5 F 24.1 23.1 44.0 2.1 52.7 29.7 49.5 77.3 77.9 76.8 51.2 BART-6:2 Att-Rel - - - 5e-5 F 23.3 22.2 43.2 2.3 52.0 28.8 48.7 76.8 77.4 76.3 50.3 BART-6:2 Logits - - - 5e-5 F 24.2 23.8 44.1 2.4 52.9 29.8 49.5 77.5 77.8 77.2 51.8 BART-6:2 Logits L 1-BS T 5e-5 F 25.0 23.8 44.1 2.4 53.0 29.7 49.6 77.5 77.8 77.4 51.9 BART-6:2 Logits L+U 1-BS T 5e-5 F 25.2 24.4 44.8 2.3 53.6 30.5 50.3 77.8 78.1 77.6 52.7 BART-6:2 Logits L+U K-BS T 5e-5 F 25.6 24.5 44.7 2.2 53.3 30.5 50.1 77.7 78.0 77.5 52.1 BART-6:2 Logits L+U Samp. T 5e-5 F 25.5 24.7 45.0 2.1 53.7 30.8 50.4 77.9 78.3 77.7 52.6 BART-6:2 Logits L+U H-Samp. T 5e-5 F 25.2 24.7 45.0 2.0 53.7 30.8 50.5 77.8 78.2 77.6 52.6 BART-6:2 Logits L+U Samp. S 5e-5 F 25.5 24.5 44.8 2.1 53.6 30.5 50.3 77.9 78.2 77.7 52.5 BART-6:2 Logits L+U Samp. T+S 5e-5 F 26.1 24.8 45.2 2.0 53.9 30.9 50.7 78.1 78.3 78.0 53.2 GPT2-L FT - - - 1e-5 F 21.6 20.3 39.0 1.8 47.4 25.5 44.2 74.5 75.7 73.5 45.7 GPT2-M FT - - - 7e-4 F 18.6 17.7 35.0 4.3 42.9 22.4 39.6 70.6 70.7 70.7 42.3 GPT2 FT - - - 1e-3 F 17.6 17.6 34.1 2.2 41.6 22.2 38.6 65.3 65.9 65.0 41.0 OPT-350M FT - - - 3e-4 F 19.0 18.9 38.1 3.6 46.6 24.4 43.2 72.1 72.8 71.5 44.7 OPT-125M FT - - - 5e-5 F 20.8 20.4 40.5 1.9 49.1 26.7 45.7 74.6 75.4 74.0 46.3 Table 12: Results for style transfer and simplification task, Shake7 dataset | References | T5-S | T5-L | T5-KD | Gap | |--------------|--------|--------|---------|-------| | 4.45 | 2.75 | 4.0 | 3.65 | 72% | Table 13: Human evaluation for the ART10 dataset. The numbers are the average ratings for the golden references, the student baseline (T5-S), the teacher (T5-L), and the final distilled model (T5-KD). We also present the fraction of the student-teacher gap closed by T5-KD. ## F Human Evaluation For Art10 In this section, we aim is to examine the relatively lower performance of our KD method on the ART10 dataset when compared to other datasets. Accordingly, the abductive reasoning task and the ART10 dataset in particular, are a unique case where automatic evaluation is hard to perform due to the large number of diverse potential solutions (See the examples in §F.2). Therefore, we assume that the fraction of the student-teacher gap closed by the distilled model may have been underestimated. To validate our assumption, we conducted a human evaluation involving two annotators. We randomly selected 50 input examples and generated outputs using the student baseline (T5-S), the teacher (T5-L), and the final distilled model (T5-KD). The annotators were asked to rate the generated texts on a five-level scale (see §F.1 below). The inter-annotator agreement achieved by our annotators was Kendall's τ=0.52. According to Table 13, which presents the average rating for each model, we find that the distilled model closes 72% of the student-teacher gap. This result is much greater than the 50% estimated by the automatic evaluation, and is in-line with the performance of the distilled model on other datasets. ## F.1 Human Evaluation - Instructions You will be presented with two texts referred to as "The Observations". These observations occur in a specific order, with the first happening before the second. Your task is to assess four "explanations" of the second observation, which aim to explain why it occurred, given that the first observation had already occurred. A good "explanation" should be clear and provide a plausible account of what happened between the two observations. You should rate each explanation using this five-level scale: 1. The explanation is nonsensical or contains many grammatical errors. 2. The explanation is not related to the observations or repeating the observations. 3. The explanation is related to the observations but does not explain them. 4. The explanation is related to the observations but only partially explains them. 5. The explanation fully explains the observations. F.2 Generated Examples Observation 1: I went to a rap show for the first time in my life. Observation 2: Now I'm avid rap and hip hop listener. Reference: I really enjoyed the show. T5-S: I went to a hip hop show. T5-L: I fell in love with rap and hip hop. T5-KD: The rap show was very good. Observation 1: Allison wanted to renew her vows with Tom. Observation 2: Yeah even had a new baby. Reference: Allison and Tom did it and felt more love. T5-S: Tom had a baby. T5-L: Allison proposed to Tom. T5-KD: Allison asked Tom to marry her. Observation 1: Today I decided to learn how to make bread. Observation 2: I noticed I made an error in my measurements and started over. Reference: I accidentally put in twice as much salt as needed. T5-S: I went to the grocery store to learn how to make bread. T5-L: I didn't follow the recipe exactly. T5-KD: I did not follow the instructions carefully. Observation 1: Tommy called on the girl who sat next to him in class. Observation 2: Tommy decided to ask the girl for a date. Reference: The girl was very beautiful and kind. T5-S: Tommy asked the girl for a date. T5-L: Tommy liked the girl a lot. T5-KD: The girl said she liked Tommy. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 A2. Did you discuss any potential risks of your work? Not applicable. Not relevant to this work, as it is a general model compression/knowledge distillation for NLG. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix ✓ B1. Did you cite the creators of artifacts you used? 4, Appendix ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4, Appendix B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 ## C ✓ **Did You Run Computational Experiments?** 4, Appendix ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4, Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4, Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5, Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
jiang-etal-2023-vision
Vision Language Pre-training by Contrastive Learning with Cross-Modal Similarity Regulation
https://aclanthology.org/2023.acl-long.819
In this paper, we reconsider the problem of (partial) false negative samples from the Mutual Information (MI) Maximization perspective, the traditional contrastive loss (like InfoNCE loss) will equally push away the anchor of all positive samples and negative samples regardless of their possible semantic similarities. We theoretically show that InfoNCE loss will not only maximize the MI between the anchor and positive samples but minimize the MI between the anchor and false negative samples even though they share similar semantic which could provide a possible theoretical explanation for the observation of the existence of false negative samples in the cross-modal contrastive learning will decrease the downstream task performance of VLP models. Above analysis motivate us to propose the VLP model with a novel Semantic Awared Contrastive Learning framework named SACL where different negative samples are assigned with different contrastive weights according to the semantic similarity between them and the anchor.
# Vision Language Pre-Training By Contrastive Learning With Cross-Modal Similarity Regulation Chaoya Jiang1, Wei Ye1∗, Haiyang Xu2, Songfang Huang2, Fei Huang2**, Shikun Zhang**1 1 National Engineering Research Center for Software Engineering, Peking University 2 DAMO Academy, Alibaba Group {jiangchaoya, wye, zhangsk}@pku.edu.cn, {shuofeng.xhy, fei.huang}@alibaba-inc.com ## Abstract Cross-modal contrastive learning in vision language pretraining (VLP) faces the challenge of (partial) false negatives. In this paper, we study this problem from the perspective of Mutual Information (MI) optimization. It is common sense that InfoNCE loss used in contrastive learning will maximize the lower bound of MI between anchors and their positives, while we theoretically prove that MI involving negatives also matters when noises commonly exist. Guided by a more general lower bound form for optimization, we propose a contrastive learning strategy regulated by progressively refined cross-modal similarity, to more accurately optimize MI between an image/text anchor and its negative texts/images instead of improperly minimizing it. Our method performs competitively on four downstream cross-modal tasks and systematically balances the beneficial and harmful effects of (partial) false negative samples under theoretical guidance. ## 1 Introduction Large-scale pre-trained vision-language models have recently achieved tremendous success on a wide range of cross-modal tasks (Tan and Bansal, 2019; Chen et al., 2020c; Huang et al., 2020; Li et al., 2020; Yu et al., 2021; Li et al., 2021; Wang et al., 2021b; Li et al., 2022a; Xu et al., 2021; Kim et al., 2021). Self-supervised learning (SSL) (Jaiswal et al., 2020; Liu et al., 2020) have impressively contributed to vision-language pretraining (VLP) due to its capability of leveraging large-scale image-text pairs without annotations. More recently, Self-supervised Multi-modal Contrastive Learning (SMCL) triggered great progress (Li et al., 2022b; Radford et al., 2021; Yao et al., 2021; Li et al., 2021, 2022a) by conducting crossmodal alignment. SMCL consists of image-totext and text-to-image contrastive learning, e.g., ∗corresponding author. ![0_image_0.png](0_image_0.png) A big bird in the tree. Text Anchor W/ Similarity Regularization ( b ) ( c ) with the InfoNCE (Oord et al., 2018) loss. Taking the text-to-image one as an example, given a text-image pair (T, I), I will be treated as the positive sample for the anchor T, and other images in a mini-batch of text-image pairs will be regarded as negatives. The training objective is to attract the positive to the anchor while repelling all the negative samples. However, this contrasting strategy can be problematic given the many-to-many correspondences between images and texts. As shown in Figure 1 (a), a text can be semantically paired with multiple images. In this scenario, though images I4 and I5 are treated as negatives, they are actually semantically consistent (or partially consistent) with the text anchor "A bird in the tree." The (partial) false negatives like I4 and I5 will inevitably hinder 14660 the contrasting effect, yielding sub-optimal crossmodal representations. Some pioneering efforts have addressed the noisy image-text pairing problem in VLP pretraining datasets (Li et al., 2021; Andonian et al., 2022), by feeding the contrastive loss with soft labels in a self-distillation manner. Though these methods can address the problem of false negatives to some extent, the specific harmful effect of false negatives remains far from being systematically studied. For example, based on these methods (e.g., ALBEF (Li et al., 2021) ), we can easily improve the performances of downstream tasks by simply filtering false negatives, as shown in Table 1. In this paper, we investigate the problem of false negatives from the perspective of Mutual Information (MI) optimization. The InfoNCE loss used in contrastive learning has been proved to maximize the lower bound of MI between anchors and their positives (Oord et al., 2018). We revisit the theoretical proof in the presence of non-negligible false negatives. Defining the MI between anchors and positives as *MI-P*, and the counterpart between anchors and negatives as *MI-N*, we derive a more general conclusion (see the appendix A.2) that optimizing InfoNCE is equivalent to maximizing the lower bound of (MI-P − *MI-N*). The finding suggests that *MI-N* will be minimized (e.g., as close to zero as possible), even though some negatives may semantically match the anchor. The theoretical analyses explain the deficiency of the vanilla contrasting strategy on the one hand, and inspire us with another derivation (appendix A.3) that guarantees proper MI optimization for negative samples on the other hand. Guided by these theoretical analyses, we propose a novel contrasting strategy regulated by crossmodal similarity. We hypothesize that the MI between an image and text positively correlates with their semantic similarity. Therefore, we introduce a contrastive weight, which is derived based on cross-modal similarity and progressively refined with training, for each negative sample as a contrasting regulator. This regulator will guide the model to optimize *MI-N* properly, keeping it from being unexpectedly minimized and thus yielding a more semantically structural representation space. We equip our proposed contrasting strategy on ALBEF (Li et al., 2021) framework and evaluate it on various representative vision-language downstream tasks, including Visual Question Answering(VQA), ![1_image_0.png](1_image_0.png) Table 1: A pilot experiment on removing false negatives when contrasting. When training ALBEF (Li et al., 2021), we directly remove false negatives samples in a heuristic way from a mini-batch (more details in Section 4.3), achieving a new pre-trained model **ALBEF++**. We report the performance of Zero-shot Cross-modal Retrieval (Flicker 30K) and Visual Question Answering (VQA). Even by simply removing false negatives, ALBEF++ outperforms ALBEF by an evident margin, indicating that existing efforts have not sufficiently addressed the harmful effects of false negatives. Cross-modal Retrieval, Zero-shot Cross-modal Retrieval, and Natural Language for Visual Reasoning (NLVR). The experimental results show that our adjusted contrastive learning significantly improves their performances. In summary, our contributions are: - We investigate the issue of false negatives in cross-modal contrastive learning from the perspective of Mutual Information (MI) optimization. We deduce a more general form of MI's lower bound for InfoNCE loss in the presence of non-negligible false negatives, revealing that the MI between (partial) false negatives and anchors is improperly minimized. - Based on a theoretical derivation that guarantees appropriate MI optimization for negative samples, we propose a novel contrasting strategy by attaching each negative sample with a progressively refined contrastive weight based on cross-modal similarity. - Applying the contrasting strategy to VLP methods yields impressive performance improvement on various downstream tasks, and demonstrates our contrasting strategy systematically balances the positive and negative impacts of false negatives. ## 2 Theoretical Analysis From Mutual Information Perspective Mutual Information (MI) is designed to measure the relationship between random variables or determine the amount of shared information (Becker, 1996, 1993). Oord et al. (2018) has proven that the InfoNCE loss function widely used in contrastive learning can be seen as a lower bound of MI between anchors and positives. Note that Li et al. (2021) provides a conceptual yet more intuitive discussion of the correspondence between InfoNCE and MI in the VLP scenario. In this paper, we go one step further to revisit the proof of Oord et al. (2018) under a cross-modal contrastive learning context. ## 2.1 Preliminaries The standard InfoNCE loss in VLP consists of two parts: L*Inf oNCE* = L v Inf oNCE +L t Inf oNC, where the former corresponds to image-to-text alignment and the latter corresponds to text-to-image alignment. For the following discussion, we will take $\mathcal{L}_{InfoNCE}^{v}$ as an example. Suppose we randomly sample N semantically paired image-text tuples $\{(I_{i},T_{i})\},i\in\{1,2,\ldots,N\}$ from a cross-modal dataset. $\mathcal{L}_{InfoNCE}^{v}$ is defined as: $${\mathcal{L}}_{I n f o N C E}^{v}=-\,E\,l o g\left[{\frac{f\left(v_{i},t_{i}\right)}{f\left(v_{i},t_{i}\right)+\sum\limits_{t_{j}\neq t_{i}}f\left(v_{i},t_{j}\right)}}\right]\tag{1}$$ where f (vi, ti) measures the distance between vi and tiin a semantic space. According to Oord et al. (2018), the function f (vi, ti) can be utilized to model the density ratio, which preserves the mutual information between vi and ti and we can rewrite the f (vi, ti) to P(ti|vi) P(ti) . Then we can derive the well-known lower bound of MI between ti and vi: Inf oNCE (2) $$I(t_{i},v_{i})\geq l o g\left(N\right)-{\mathcal{L}}_{I n f o N C E}^{v}$$ ## Where The I(Ti, Vi) Is The Mutual Information Between Ti And Vi. The Details Of This Copy-To-Vlp Derivation Can Be Found In Appendix A.1. 2.2 Mi Derivation With False Negatives The derivation process in Appendix A.1 implicitly assumes that tj (the negative sample) and vi are independent, which is reasonable given a large enough number of negatives with little noise. So the expectation of density ratio P(tj |vi) P(tj )is equal to 1 and eliminated (e.g., from Equation 12 to Equation 13). In the presence of non-negligible false negatives, tj and vi may not be independent. Therefore, we revisit this derivation and deduce a more general conclusion (see detailed derivation in appendix A.2): I(ti, vi) − E tj I(tj , vi) ≥ log (N) − Lv Inf oNCE (3) Equation 3 provides a more general lower bound form that the InfoNCE loss optimizes. The first term on the left side of this equation is MI between an anchor and the positive, and the second term is MI expectation between an anchor and negatives. Equation 3 reveals that optimizing InfoNCE is equivalent to maximizing the lower bound of the difference between the former and the latter. ## 2.3 Theoretical Guidance For Addressing False Negatives Combining Equations 2 and 3, we can find that in addition to maximizing MI between an anchor and the positive (say *MI-P*), InfoNCE loss will also minimize the MI expectation between an anchor and negatives (say *MI-N*), e.g., to be as close to zero as possible, despite the existence of the (partial) false negative samples. Since they may semantically match the anchor, over-minimizing *MI-N* could produce less structural cross-modal representation space. To optimize *MI-N* to a proper value, we first need to provide a prior estimation of *MI-N* as a target. Here we exploit cross-modal similarity to approximate MI between an image and text. The second problem is integrating this prior estimation into the optimization process. Based on the derivation of Equation 3, we further theoretically prove that assigning a positive weight wi,j to each f (vi, ti) can push MI expectation between an anchor and negatives to a controllable positive value, given the following two conditions: - **Condition 1**. The covariance between wi,j and P(ti|vi) P(ti)is negative. - **Condition 2**. The expectation of wi,j among all negatives is equal to 1. With this theoretical guidance (see complete proof in Appendix A.3), we propose to improve InfoNCE loss by applying each negative with a contrastive weight, which is inversely proportional to its cross-modal similarity with the anchor. ## 3 Method In this section, we will first introduce our model architecture, and then introduce our Similarity- ![3_image_0.png](3_image_0.png) Anchor The Positive True Negatives False Negatives Partial False Negatives 0 1 2 3 4 5 6 7 8 9 Regulated Contrastive Learning (SRCL), followed by the details of other pre-training objectives. ## 3.1 Model Architecture Figure 2 shows an overview of our model, our model consists of two unimodal encoders for image and text independently and a multi-modal encoder. To better model the inherent modality bias information, we first use two unimodal encoders to encode the image and text separately. Following (Dou et al., 2021; Shen et al., 2021), we use a visual transformer (Dosovitskiy et al., 2020) directly on the image patches as the visual encoder, which is more computation-friendly than using pretrained object detectors for visual feature extraction (Anderson et al., 2018; Zhang et al., 2021). The visual encoder divides an input image into patches and encodes them as a sequence of embeddings {vcls, v1, v2*, ..., v*m} with an additional [CLS] token. The input text is fed to the text encoder and represented as a sequence of embeddings {tcls, t1, t2*, ..., t*n}, where tcls is the embedding of the [CLS] token and used to summarize the input text. Then, the visual and linguistic representations are fed into the multi-modal encoder, which consists of multiple transformer layers. ## 3.2 Cross-Modal Similarity Regulation In section 2, we reveal that vanilla InfoNCE loss will treat negative samples equally without considering their semantic similarity with anchors. Thus the MI between the (partial) false negative samples and the anchor is over-reduced, limiting the performance of pre-training models. We propose a novel contrasting strategy regulated by cross-modal similarity. We hypothesize that the MI between an image and text positively correlates with their semantic similarity. Therefore, we introduce a contrastive weight, which is derived based on cross-modal similarity and progressively refined with training, for each negative sample as a contrasting regulator. This regulator drives the model to optimize *MI-N* properly rather than simply minimizing it. Formally, with a batch of N semantically paired image-text tuples {(Vi, Ti)}i=1:N and the CLS embeddings v i=1:N cls and t i=1:M cls of each image and text in the batch, the image-to-text contrastive loss is: L v SRCL (4) $$\mathbb{R}C D$$ ![3_image_1.png](3_image_1.png) $$\frac{f\left(v_{cls}^i,t_{cls}^i\right)}{f\left(v_{cls}^i,t_{cls}^i\right)+\sum\limits_{j\neq i}{w_{i,j}^i*f\left(v_{cls}^i,t_{cls}^j\right)}}$$ where f v i cls, tjcls= exp (sim (vi, ti) /τ ) and w v i,j indicate the contrastive weight of j-th negative text sample in the contrastive framework. Similarly, the contrastive loss from text to image can be written as follow: $$\mathcal{L}_{SRCL}^{t}\tag{5}$$ $$=-\sum_{i=1:M}\frac{1}{M}\log\left[\frac{f\left(t_{cls}^{i},v_{cls}^{i}\right)}{f\left(t_{cls}^{i},v_{cls}^{i}\right)+\sum_{j\neq i}w_{i,j}^{t}*f\left(t_{cls}^{i},v_{cls}^{j}\right)}\right],$$ where $f\left(t_{cls}^{i},v_{cls}^{j}\right)=\exp\left(\text{{sim}}\left(t_{i},v_{j}\right)/\tau\right)$ and $w_{i,j}^{t}$ indicate the contrastive weight of j-th negative image sample in the contrastive framework. ## 3.2.1 Implementation Of Regulation Weights In this subsection, we introduce how to calculate the regulation weight of the negative samples in contrastive learning. As the regulation weights are inversely proportional to the semantic similarity between anchors and negatives, we need first to calculate the semantic similarity to estimate the regulation weight. Due to the capacity of the VLP model to align images and texts, the VLP model could be utilized to measure cross-modal semantic similarity. However, we notice that the VLP model in the earlier training stages is unreliable since the semantic structure of the embedding space is still under optimization. Therefore, in the beginning, we use the highquality human-annotated dataset (Chen et al., 2015) to train another model denoted as Hβ which shares the same structure with our VLP model Sγ. This model Hβ is optimized by InfoNCE loss and is used to estimate the semantic similarity of the image text pairs at early pre-training stages. During the pre-training our VLP model Sγ, the parameters of model Hβ are frozen. The final semantic similarity between anchors and negatives is derived by taking a weighted average of similarity computed from Sγ and Hβ. At the beginning of the pre-training stages, the weight of the VLP model Sγ for calculating the final similarity is set to 0, and the weight of Hβ is set to 1. As the number of training epochs rises, we progressively increase the weights of Fγ and decrease the weights of Hβ. Formally, given a mini-batch {(T1, I1), . . . ,(TN , IN )} which contains N image-text pairs, for an text anchor Ti and a negative image sample Ij , the similarity sˆi,j calculated from the Hβ is: $$\hat{s}_{i,j}^{t}=e x p(s i m(\hat{t}_{c l s}^{j},\hat{v}_{c l s}^{i}))\qquad\qquad(6)$$ where tˆicls is the [CLS] representation of the text Ti extracted from the text encoder of Hβ and vˆ j cls is $$\gamma_{\mathrm{f}}$$ the [CLS] representation of the Image Ti extracted from the image encoder of Hβ. Similariy, the similarity s˙i,j calculated from the Vγ is: $\qquad\dot{s}^t_{i,j}=exp(sim(t^j_{cls},v^i_{cls}))$ the finallyconnatic similarity between . cls, vicls)) (7) Then the finally semantic similarity between Ti and Vj : $$\left(7\right)$$ $$s_{i,j}^{t}=\alpha*\hat{s}_{i,j}^{t}+(1-\alpha)*\hat{s}_{i,j}^{t}$$ $$(8)$$ i,j (8) where α is a hyper-parameter and will continue to decrease with the increase of pretraining steps. The contrastive weight w t i,j can be driven as follow: $$w_{i,j}^{t}=N o r m(\delta*\frac{1}{s_{i,j}^{t}})\qquad\qquad(9)$$ Where δ is a scaling factor. Notably, w t i,j is inversely proportional to the similarity to meet **Condition 1** described in Section 2.3, and the *Norm* function makes the mean value of all negatives' weights to be 1 to meet **Condition 2**. Similarly, given an image anchor and its text negative samples, we can also calculate the imageto-text contrastive weight. ## 4 Experiments 4.1 Pre-Training Datasets We construct our pre-training data using two web datasets (Conceptual Captions (Sharma et al., 2018), SBU Captions (Ordonez et al., 2011)) and two in-domain datasets (MSCOCO (Chen et al., 2015) and Visual Genome (Krishna et al., 2017)). The total number of unique images is 4.0M, and the number of image-text pairs is 5.1M. ## 4.2 Main Result We implement SRCL based on ALBEF (Li et al., 2021) framework and evaluate it in four widely used downstream tasks: image-text retrieval, zeroshot image-text retrieval (ZSR), visual question answering (VQA), and natural language for visual reasoning (NLVR). ## 4.2.1 Image-Text Retrieval We conduct experiments for both image-to-text retrieval (TR) and text-to-image retrieval (IR) on MSCOCO (Chen et al., 2015) and Flickr30K (Plummer et al., 2015) datasets. During fine-tuning, we jointly optimize the SRCL loss and the ITM loss. When calculating the SRCL loss, we directly use the fine-tuned model to calculate the contrastive | Models | # Pretrain | MSCOCO (5K test set) | Flickr30K (1K test set) | | | | | | | | | | | |----------|--------------|------------------------|---------------------------|------|------|------|------|------|------|-------|------|------|------| | data | TR | IR | TR | IR | | | | | | | | | | | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | | | E2E-VLP | 4M | - | - | - | - | - | - | 86.2 | 97.5 | 98.92 | 73.6 | 92.4 | 96.0 | | UNITER | 4M | 65.7 | 88.6 | 93.8 | 52.9 | 79.9 | 88.0 | 87.3 | 98.0 | 99.2 | 75.6 | 94.1 | 96.8 | | OSCAR | 4M | 70.0 | 91.1 | 95.5 | 54.0 | 80.8 | 88.5 | - | - | - | - | - | - | | ALIGN | 1.8B | 77.0 | 93.5 | 96.9 | 59.9 | 83.3 | 89.8 | 95.3 | 99.8 | 100.0 | 84.9 | 97.4 | 98.6 | | VinVL | 4M | 74.6 | 92.6 | 96.3 | 58.1 | 83.2 | 90.1 | - | - | - | - | - | - | | ViLT | 4M | 61.5 | 86.3 | 92.7 | 42.7 | 72.9 | 83.1 | 83.5 | 96.7 | 98.6 | 64.4 | 88.7 | 93.8 | | ALBEF | 4M | 76.6 | 93.2 | 96.9 | 58.4 | 83.1 | 90.2 | 94.6 | 99.8 | 100.0 | 83.9 | 96.8 | 98.7 | | Ours | 4M | 77.3 | 94.1 | 97.2 | 60.4 | 83.9 | 90.8 | 96.3 | 99.8 | 100.0 | 85.8 | 97.8 | 99.0 | Table 2: Evaluation results of image-text retrieval on Flickr30K and COCO datasets. We initialize the visual encoder of ALBEF with CLIP (ViT-B/16). Our model takes the same architecture and experimental setting as ALBEF. The only difference is that ALBEF uses InfoNCE loss while we use the improved one of SRCL. | Models | VQA | NLVR | | | | | | |-----------|----------|--------|--------|-------|-------|----------------|-----------------| | Test-dev | Test-std | dev | Test-P | | | | | | ViLBERT | 70.55 | - | - | - | | | | | LXMER | 72.42 | - | 74.90 | 74.50 | | | | | UNITER | 72.70 | 72.91 | 77.18 | 77.85 | | | | | OSCAR | 73.16 | 73.44 | 78.07 | 78.36 | | | | | VinVL | 75.95 | 76.12 | 82.05 | 83.08 | | | | | E2E-VLP | 73.25 | 73.67 | 77.25 | 77.96 | | | | | ViLT | 71.26 | - | 75.70 | 76.13 | | | | | ALBEF | 76.09 | 76.32 | 82.21 | 83.11 | | | | | Ours | 76.66 | 76.93 | 83.43 | 83.95 | Model | Text Retrieval | Image Retrieval | | R@1 | R@5 | R@1 | R@5 | | | | | | Zero-Shot | | | | | | | | | CLIP | 88.0 | 98.7 | 68.7 | 90.6 | | | | | ALIGN | 88.6 | 98.7 | 75.7 | 93.8 | | | | | FILIP | 89.8 | 99.2 | 75.0 | 93.4 | | | | | UNITER | 83.6 | 95.7 | 68.7 | 89.2 | | | | | ALBEF | 91.02 | 98.23 | 77.44 | 93.03 | | | | | Ours | 92.42 | 99.41 | 79.43 | 94.46 | | | | Table 4: Evaluation results of zero-shot image-text retrieval on Flickr30K. weight of the negative samples. As shown in Table 2, incorporating SRCL into ALBEF brings evident improvement, achieving competitive performances compared with other VLP baselines. ## 4.2.2 Visual Question Answering Most methods (Tan and Bansal, 2019; Wang et al., 2021a; Li et al., 2020; Wang et al., 2021b) deal with visual question answering tasks as multi-label classification on pre-defined answer sets. This strategy achieves strong performance, but it is not suitable for real-world open scenarios. We treat VQA as an answer generation task and use constrained closevocab generation models like Li et al. (2021); Wang et al. (2022). As shown in Table 3, SRCL achieves 76.66 on Test-std split, outperforming state-of-theart models. Meanwhile, with the same pre-training data and experimental setting, SRCL always significantly outperforms ALBEF, again verifying the effectiveness of cross-modal similarity regulation. ## 4.2.3 Natural Language For Visual Reasoning The NLVR2 (Suhr et al., 2018) task requires the model to predict whether a sentence describes a pair of images which is a binary classification task. We follow (Li et al., 2021) and use two crossattention layers to process the two input images, and their outputs are merged and fed to the Feed Forward Network (FFN). An MLP classifier is then applied to the output embedding of the text [CLS] token. Similarly, in Table 3, our SRCL outperforms ALBEF and other existing VLP methods. ## 4.2.4 Zero-Shot Image-Text Retrieval To investigate the semantic structure of the learned representation space, we examine the SRCL on the zero-shot image-text retrieval task on Flickr30K(Plummer et al., 2015). The results are shown in Table 4 where SRCL outperforms ALBEF, indicating SRCL could yield a better semantic structural representation space. SRCL also achieves better performance than the previous stateof-the-art models (e.g., CLIP, ALIGN, and Florence) pre-trained with more image-text pairs. ![6_image_0.png](6_image_0.png) ## 4.3 False Negatives V.S. Hard Negatives An astute reader may notice that (partial) negatives will somewhat overlap with hard negatives. It is non-trivial to accurately define hard or false negatives in vision-language contrasting since the cross-modal semantic boundary is blurry. But we do face a paradox here: we want to alleviate the contrastive effect of false negatives that contain a certain number of hard ones, while many works about hard negative mining (HEM) (Hu et al., 2020; Xiong et al., 2020; Kalantidis et al., 2020) try to learn with more hard negative samples. To investigate this problem, we experiment with different proportions of false negatives (or hard negatives, approximately). Specifically, we use the contrastive weights, negatively correlated with cross-modal similarity, to roughly approximate whether a negative sample is false. If the weight is lower than a threshold, the corresponding sample is regarded as false, and true otherwise. We explicitly remove the identified false negatives when contrasting, and then check the performance of the pre-trained ALBEF on zero-shot cross-modal retrieval. As shown in Figure 3(a), there is a general trend that the performances of downstream tasks initially boost as the threshold increases and then begin to decline when a certain threshold (e.g., 0.2 and 0.3) is reached. We statistic the distribution of contrastive weight by averaging 10000 mini-batches and visualize it in Figure 3(b). We can estimate that with the threshold of 0.7, about 20% negative samples will be discarded. Combining Figure 3(a) and Figure 3(b) approximately explains the para- ![6_image_1.png](6_image_1.png) (a) R@1 of Text Retrieval ![6_image_2.png](6_image_2.png) dox: in vanilla contrastive learning, too many false negatives (or hard negatives) could bring harmful impacts, so removing some of them deliver performance improvements; but they are indispensable for a promising contrasting effect, so overly removing them also hinder performances, which is also the reason why hard negative mining methods will increase hard negatives in the absence of them. From another perspective, the above explanation validates our method's merits. With the crossmodal similarity regulation, we drive the model to optimize the MI between negatives and their anchor more appropriately rather than simply minimizing it, systematically balancing false negatives' beneficial and harmful effects. ## 4.4 The Impact Of Pretraining Data Size To better understand the correlation between pretraining data size and downstream performance, we experiment with pretraining data of 4M, 6M, 8M,10M, and 12M. Figure 4 plots the zero-shot cross-modal retrieval and VQA results for SRCL and ALBEF. We can observe that our SRCL continuously maintains higher performance, and the gap becomes more evident with the data size increase. This observation verifies that SRCL promisingly addresses the harmful effect of false negatives and thus enhances data efficiency. ## 4.5 Qualitative Analysis In this section, we conduct a qualitative analysis by visualizing the zero-shot text-image retrieval results of ALBEF and our method. We choose this zero-shot task to directly examine the model's representation capacity without fine-tuning impacts. In Figure 5, we find that ALBEF tends to focus more narrowly on one specific commonality while neglecting others. For example, in the second case, ALBEF intensely targets "a woman with blond ![7_image_0.png](7_image_0.png) hair" but misses the critical information "working on her laptop." On the other hand, our approach can successfully extract all the essential aspects of the query. These retrievals suggest that our learned features more comprehensively capture potential similarities between a text caption and the image. Meanwhile, our method's result ranking reflects a trend from full alignment to partial alignment between the retrieved images and the query. These observations clearly verify that our contrasting strategy produces better cross-modal representations for downstream tasks. Note that these two examples are not cherrypicked. The phenomenon in these two examples is commonly observed among other samples. We demonstrate more cases in Appendix 6. Meanwhile, other qualitative analyses can be found in Appendix F. ## 5 Related Work 5.1 Contrastive Learning Recently, self-supervised learning has made significant progress thanks to contrastive learning (Chen et al., 2020a; Oord et al., 2018; He et al., 2019; Chen et al., 2020b; Radford et al., 2021). InfoNCE (Oord et al., 2018) is commonly used in traditional contrasting learning, which optimizes the similarity of positive pairings and minimizes the similarity of negative pairs. In the contrastive learning framework, the negative pairs play a vital role as they prevent shortcuts and collapse solutions. However, Chen et al. (2021) shows the unfavorable effect of false negatives and proposes to incrementally detect and explicitly remove the false negative samples in the contrastive learning framework. Compared with Chen et al. (2021), we propose a more solid method by regulating the false negative samples rather than directly omitting them. ## 5.2 Vision-Language Pre-Training Recent years have seen significant success for largescale pre-trained vision-language models (Tan and Bansal, 2019; Chen et al., 2020c; Huang et al., 2020; Li et al., 2020; Yu et al., 2021; Li et al., 2021; Wang et al., 2021b; Li et al., 2022a; Xu et al., 2021; Kim et al., 2021) in a variety of cross-modal tasks. Self-supervised Multi-modal Contrastive Learning (SMCL) has lately sparked significant advancements. (Li et al., 2022b; Radford et al., 2021; Yao et al., 2021; Li et al., 2021, 2022a) by conducting cross-modal alignment. SMCL consists of image-to-text and text-to-image contrastive learning, e.g., with the InfoNCE (Oord et al., 2018) loss. However, traditional cross-modal contrasting strategy can be problematic given the many-to-many correspondences between images and texts but few works notice this issue. Recently, to solve the issue of noisy image-text pairing in VLP pre-training datasets, some pioneering work has fed the contrastive loss with soft labels in a self-distillation method (Li et al., 2022b, 2021; Andonian et al., 2022). Even while these techniques may help reduce the number of false negatives, their harmful effect has not been carefully explored. ## 6 Conclusion We have presented our cross-modal contrastive learning method that addresses the problem of (partial) false negatives with vision-language semantic similarity guidance. A series of mathematical proofs based on InfoNCE loss provides a more general lower bound for contrastive optimization and inspires us with a novel contrasting strategy that theoretically guarantees the mitigation of false negatives. Empirically, our method demonstrates performance superiority on four downstream crossmodal tasks. Meanwhile, by comparing false negatives and hard negatives, we reveal that balancing the beneficial and harmful effects of (partial) false negatives is crucial to learn robust cross-modal representations. ## Limitation We verify our method mainly based on the recent robust VLP model ALBEF (Li et al., 2021). Evaluating it more broadly by incorporating it into other VLP models can further highlight our contribution. Given the solid theoretical foundation of our method, the main conclusion regarding its effectiveness and performance will not be affected, but there can be more inspirational findings in a broader research context. Meanwhile, comparing false negatives and hard negatives is worth further exploration. We leave these problems for future work. ## Acknowledgements This research is supported by the National Key Research And Development Program of China (No. 2021YFC3340101). ## References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077–6086. Alex Andonian, Shixing Chen, and Raffay Hamid. 2022. Robust cross-modal representation learning with progressive self-distillation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16409–16420. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433. Helen Suzanna Becker. 1993. An information-theoretic unsupervised learning algorithm for neural networks. Suzanna Becker. 1996. Mutual information maximization: models of cortical self-organization. Network, 7 1:7–31. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020a. A simple framework for contrastive learning of visual representations. ArXiv, abs/2002.05709. Tsai-Shien Chen, Wei-Chih Hung, Hung-Yu Tseng, Shao-Yi Chien, and Ming-Hsuan Yang. 2021. Incremental false negative detection for contrastive learning. arXiv preprint arXiv:2106.03719. Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He. 2020b. Improved baselines with momentum contrastive learning. ArXiv, abs/2003.04297. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020c. Uniter: Universal image-text representation learning. In European conference on computer vision, pages 104–120. Springer. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations. Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Zicheng Liu, Michael Zeng, et al. 2021. An empirical study of training end-to-end vision-and-language transformers. arXiv preprint arXiv:2111.02387. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2019. Momentum contrast for unsupervised visual representation learning. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9726–9735. Qianjiang Hu, Xiao Wang, Wei Hu, and Guo-Jun Qi. 2020. Adco: Adversarial contrast for efficient learning of unsupervised representations from self-trained negative adversaries. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1074–1083. Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. 2020. Pixel-bert: Aligning image pixels with text by deep multi-modal transformers. arXiv preprint arXiv:2004.00849. Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, and Fillia Makedon. 2020. A survey on contrastive selfsupervised learning. ArXiv, abs/2011.00362. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V Le, Yunhsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. arXiv preprint arXiv:2102.05918. Yannis Kalantidis, Mert Bulent Sariyildiz, No'e Pion, Philippe Weinzaepfel, and Diane Larlus. 2020. Hard negative mixing for contrastive learning. ArXiv, abs/2010.01028. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128– 3137. Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. arXiv preprint arXiv:2102.03334. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32– 73. Chenliang Li, Haiyang Xu, Junfeng Tian, Wei Wang, Ming Yan, Bin Bi, Jiabo Ye, Hehong Chen, Guohai Xu, Zheng Cao, Ji Zhang, Songfang Huang, Fei Huang, Jingren Zhou, and Luo Si. 2022a. mplug: Effective and efficient vision-language learning by cross-modal skip-connections. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022b. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. arXiv preprint arXiv:2201.12086. Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems, 34. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Objectsemantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121–137. Springer. Xiao Liu, Fanjin Zhang, Zhenyu Hou, Zhaoyu Wang, Li Mian, Jing Zhang, and Jie Tang. 2020. Selfsupervised learning: Generative or contrastive. IEEE Transactions on Knowledge and Data Engineering, 35:857–876. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13–23. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. Vicente Ordonez, Girish Kulkarni, and Tamara L Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In Advances in neural information processing systems, pages 1143–1151. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556– 2565. Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How much can clip benefit vision-and-language tasks? arXiv preprint arXiv:2107.06383. Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2018. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. Unifying architectures, tasks, and modalities through a simple sequenceto-sequence learning framework. arXiv preprint arXiv:2202.03052. Wenhui Wang, Hangbo Bao, Li Dong, and Furu Wei. 2021a. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. arXiv preprint arXiv:2111.02358. Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021b. Simvlm: Simple visual language model pretraining with weak supervision. CoRR, abs/2108.10904. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. ArXiv, abs/2007.00808. Haiyang Xu, Ming Yan, Chenliang Li, Bin Bi, Songfang Huang, Wenming Xiao, and Fei Huang. 2021. E2e-vlp: End-to-end vision-language pretraining enhanced by visual learning. arXiv preprint arXiv:2106.01804. Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. 2021. Filip: Fine-grained interactive language-image pre-training. arXiv preprint arXiv:2111.07783. Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021. Ernie-vil: Knowledge enhanced vision-language representations through scene graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 3208–3216. Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Making visual representations matter in vision-language models. CoRR, abs/2101.00529. ## A Proof A.1 Proof A We rewrite the proof provided by Oord et al. (2018) in the context of image-to-text contrastive learning, where vi represents an image anchor and ti and tj are positive and negative samples, respectively. L v Inf oNCE = − E t log P(ti|vi) P(ti) P(ti|vi) P(ti) +P tj̸=ti P(tj |vi) P(tj ) (10) = E t log 1 + P (ti) P (ti|vi) X tj̸=ti P (tj |vi) P (tj ) (11) ≈ E t log 1 + P (ti) P (ti|vi) (N − 1) E tj P (tj |vi) P (tj ) (12) = E t log 1 + P (ti) P (ti|vi) (N − 1)(13) ≥ E t log P (ti) P (ti|vi) N (14) = − I(ti, vi) + log (N) (15) Therefore, we have I(ti, vi) ≥ log (N) − L v Inf oNCE, where N is the number of batch size. ## A.2 Proof B In the presence of non-negligible false negatives, we re-derive the above A.1 derivation as follows: L v Inf oNCE = − E t log P(ti|vi) P(ti) P(ti|vi) P(ti) +P tj̸=ti P(tj |vi) P(tj ) (16) = E t log 1 + P (ti) P (ti|vi) X tj̸=ti P (tj |vi) P (tj ) (17) ≈ E t log 1 + P (ti) P (ti|vi) (N − 1) E tj P (tj |vi) P (tj ) (18) ≥ E t log P (ti) P (ti|vi) N E tj P (tj |vi) P (tj ) (19) = − I(ti, vi) + log (N) + E t log E tj P (tj |vi) P (tj ) (20) L − 14671 Note the false negatives account for a relatively small proportion of the overall negatives, so the expectation E tj P(tj |vi) P(tj )is less than the density ratio P(ti) P(ti|vi) , thus we have: $${\frac{P\left(t_{i}|v_{i}\right)}{P\left(t_{i}\right)}}\geq{\frac{E}{t_{j}}}\,{\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}}$$ $$(21)$$ therefore we can safely derive the inequality from equation 18 to equation 19. Now we can get : $$\begin{array}{l}{{I(t_{i},v_{i})-E\,l o g\left(E\,\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}\right)}}\\ {{\geq\,l o g\left(N\right)-\mathcal{L}_{I n f o N C E}^{v}}}\end{array}$$ $$(22)$$ (23) $$\begin{array}{l}\left(24\right)\end{array}$$ = (25) . According to Jensen's inequality, we have: $$\begin{array}{l}{{I(t_{i},v_{i})-E\,E\,l o g\left(\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}\right)}}\\ {{=I(t_{i},v_{i})-E\,I(t_{j},v_{i})}}\\ {{\geq I(t_{i},v_{i})-E\,l o g\left(\frac{E}{t_{j}}\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}\right)}}\\ {{\geq l o g\left(N\right)-\mathcal{L}_{I n f o N C E}^{v}}}\end{array}$$ (24) Therefore, we have $I(t_i,v_i)-\underset{t_j}{E}I(t_j,v_i)$ $\geq\log\left(N\right)-\mathcal{L}_{InfoNCE}^{v}$ (26) - C. ## A.3 Proof C In this section, we prove that assigning a positive weight wi,j to each f (vi, ti) can push MI expectation between an anchor and negatives to a controllable positive value, under specific conditions. Using image-to-text contrasting as an example, the loss can be written as follow: $$\mathcal{L}_{S R C L}^{v}=\tag{27}$$ $$-\sum_{i=1:N}\frac{1}{N}\log\left[\frac{f\left(v^{i},t^{i}\right)}{f\left(v^{i},t^{i}\right)+\sum_{j\neq i}w_{i,j}^{v}*f\left(v^{i},t^{j}\right)}\right]$$ Following (Oord et al., 2018), the function f (vi, ti) can be seen as density ratio which preserves the mutual information between vi and ti and could be written as P(ti|vi) P(ti)and we can rewrite the equation 28 as: L v SRCL = − E t log P(ti|vi) P(ti) P(ti|vi) P(ti) +P tj̸=ti wi,j P(tj |vi) P(tj ) (28) = E t log 1 + P (ti) P (ti|vi) X tj̸=ti wi,j P (tj |vi) P (tj ) (29) ≈ E t log 1 + P (ti) P (ti|vi) (N − 1) E tj wi,j P (tj |vi) P (tj ) (30) Here we set the regulated weigth wi,j inversely proportional to P(ti) P(ti|vi) (**Condition 1** in Section 2.3), so the covariance between wi,j and P(ti) P(ti|vi) is less than 0. Thus, we have: $$Cov(w_{i,j},\frac{P\left(t_{i}\right)}{P\left(t_{i}|v_{i}\right)})\tag{31}$$ $$=E\,w_{i,j}\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}-\frac{E}{t_{j}}\,w_{i,j}\,\frac{E}{t_{j}}\,\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}\quad\leq0$$ Assuming $E\,w_{i,j}=1$ (Condition 2 in Section 2.3), we have : $$\quad P\left(t_{j}|v_{i}\right)\leq E\,{\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}}\leq E\,{\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}}$$ Combine inequality 21 and 32, we have: $$(32)$$ $${\frac{P\left(t_{i}|v_{i}\right)}{P\left(t_{i}\right)}}\geq{\frac{E\,w_{i,j}}{t_{j}}}{\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}}$$ $$({\mathfrak{I}}{\mathfrak{I}}{\mathfrak{I}})$$ Therefore, we can derive that $\mathcal{L}_{SRCL}^{v}\approx$ $\begin{array}{l}E\log\left[1+\dfrac{P\left({t}_{i}\right)}{P\left({t}_{i}|{v}_{i}\right)}\left(N-1\right)E\,{w}_{i,j}\dfrac{P\left({t}_{j}|{v}_{i}\right)}{P\left({t}_{j}\right)}\right]\\ \geq\underset{t}{E\log}\left[\dfrac{P\left({t}_{i}\right)}{P\left({t}_{i}|{v}_{i}\right)}N\underset{t}{E\,{w}_{i,j}}\dfrac{P\left({t}_{j}|{v}_{i}\right)}{P\left({t}_{j}\right)}\right]\qquad0\\ =-\,I({t}_{i},{v}_{i})+\log\left(N\right)+\underset{t}{E\log}\left(\underset{t}{E\,{w}_{i,j}}\dfrac{P\left(t\right)}{P\left(t\right)}\right)\end{array}$ $$\frac{{i\left|{v_{i}}\right\rangle}}{{t_{j}})}$$ $$\left({34}\right)$$ $$\frac{{P\left({t_{j}}\left|{v_{i}}\right\rangle}\right)}{{P\left({t_{j}}\right)}}$$ $$\left({35}\right)$$ Similar with the inequality 26, we get: $$I(t_{i},v_{i})-\frac{E}{t}\frac{E}{t_{j}}log\left(w_{i,j}\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}\right)$$ $$\geq log\left(N\right)-\mathcal{L}_{SRCL}^{v}\tag{36}$$ When optimizing the loss, the last term on the left side of the inequality will be minimized, which means $$\frac{E}{t}\frac{E}{t_{j}}l o g\left(w_{i,j}\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}\right)=0$$ = 0 (37) Then we can get $$\begin{array}{l}{{E\,E\log\left(w_{i,j}\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}\right)}}\\ {{=E\,E\log\left(w_{i,j}\right)+E\,E\log\left(\frac{P\left(t_{j}|v_{i}\right)}{P\left(t_{j}\right)}\right)}}\end{array}$$ = 0 $$(37)$$ (39) $$(40)$$ Thus we have: $$\mathop{E}_{t_{j}}I\left(t_{j},v_{i}\right)=\mathop{E}_{t_{j}}\mathop{E}_{t}l o g\left({\frac{1}{w_{i,j}}}\right)$$ wi,j (40) (31) $\leq0$ . As wi,j is inversely proportional to the semantic similarity between anchor vi and the negative sample tj , the MI expectation vi and tj will be optimized to a controllable positive value negative correlated with the average similarities between vi and tj . ## B Comparison Methods LXMERT (Tan and Bansal, 2019): is the first twostream region-based VLP model, which consists of an object relationship encoder, a language encoder and a cross-modality encoder. E2E-VLP (Xu et al., 2021): proposes the first endto-end VLP method for both V+L understanding and generation, with a unified Transformer encoderdecoder architecture. VILT (Kim et al., 2021): adopts linear projection and word embedding as the visual and textual encoders, and uses the visual transformer as the crossmodal encoder to align and fuse the features of both modalities in an end-to-end manner. ALIGN (Jia et al., 2021): leverages a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. OSCAR (Li et al., 2020): proposes to use object tags detected in images as anchor points to the learning of cross-modal alignments. VinVL (Zhang et al., 2021): pre-trains a largescale object-attribute detection model with much larger amounts of supervised data to extract better region-based visual features. ALBEF (Li et al., 2021): adopts a contrastive loss to align the image and text representations, then fuses them through cross-modal attention in an endto-end manner. UNITER (Chen et al., 2020c): proposes a new word-region alignment pre-training task via the use of optimal transport to help fine-grained alignment between words and image regions. ViLBERT (Lu et al., 2019): proposes one of the first work that extend the BERT architecture to a multi-modal two-stream region-based VLP model. ## C Pre-Training Objectives We pre-train our model with three standard objectives: Image-Text Contrastive learning (ITC), Image-Text Matching (ITM) and Masked Language Modeling (MLM). Since we have introduced ITC in the previous subsections, in the following, we will only introduce two other pre-training tasks. Image-Text Matching (ITM) The goal of imagetext matching is to predict whether the input image and text are matched. We follow the design of (Li et al., 2021) and select hard negative image-text pairs based on the contrastive text-image similarity. We take the text [CLS] embedding of the multimodal encoder's output as the joint representation, followed by a Multi-Layer Perceptron (MLP) layer for prediction. Masked Language Modeling (MLM) The task setup is basically the same as in BERT (Devlin et al., 2018), where we randomly mask 15% of tokens in text and the model is asked to predict these masked words with the cross-modal representations. ## D Implementation Details We implement our method based on the ALBEF (Li et al., 2021) framework and we pretrain the SRCL for 30 epochs with the total batch size of 512 on 8 NVIDIA V100 GPUs. We initialize the visual encoder by CLIP (ViT-B/16) (Radford et al., 2021) pretrained on 400M noisy image-text pairs and we use the AdamW (Loshchilov and Hutter, 2017) optimizer with a weight decay of 1e-2. The learning rate is warmed-up to 1e-5 (ViT-B/16) and 1e-4 (BERT*base*) in the first 1000 iterations. During pre-training, we take image with the resolution of 256 × 256 as input, and increase the image resolution during finetuning. We use a 6-layer Transformer for both the text encoder and the crossmodal fusion network. As Li et al. (2021), the text encoder is initialized using the first 6 layers of the BERT*base* (Devlin et al., 2018) model and the cross-modal network is initialized using the last 6 layers of the BERT*base*. ## E Downstream Task Details We evaluate SRCL on the three downstream visionlanguage tasks. The hyperparameters that we use for finetuning on the downstream tasks are listed in Table 5. Following (Li et al., 2021), all tasks adopt RandAugment, AdamW optimizer with a weight decay of 0.05 and a cosine learning rate schedule. Next we introduce the dataset settings in detail. | Task | LR (ViT-B/BERTbase) batch size epochs | | | |-----------|-----------------------------------------|------|----| | VQA | 2e-5/5e-6 | 1024 | 8 | | Retrieval | 1e-5/2e-6 | 256 | 5 | | NLVR2 | 5e-5/5e-6 | 256 | 15 | Table 5: Finetuning hyperparameters for downstream tasks. VQA. The VQA task (Antol et al., 2015) requires the model to answer natural language questions given an image. We conduct experiment on the VQA2.0 dataset (Antol et al., 2015), which contains 83k/41k/81k images for training/validation/test. Following (Li et al., 2021), we use both training and validation splits for training, and incorporate additional training data from Visual Genome (Krishna et al., 2017). Image-Text Retrieval. We conduct experiments for both image-to-text retrieval (TR) and textto-image retrieval (IR) on COCO (Chen et al., 2015) and Flickr30K (Plummer et al., 2015) datasets. We take the widely-used Karpathy split (Karpathy and Fei-Fei, 2015) for both COCO and Flickr30K. COCO contains 113k/5k/5k images for train/validation/test, and Flickr30K contains 29k/1k/1k images for train/validation/test. NLVR2. The NLVR2 (Suhr et al., 2018) task requires the model to predict whether a sentence. We conduct experiments following the original train/val/test split in (Suhr et al., 2018). ## F Visualization Of Contrastive Weight In Srcl In Figure 7, we plot the distribution of text-toimage contrastive weight in the mini-batch drawn from the Flickr30K testing set. As shown in the Figure 7, for false negative samples, our method can effectively assign them with low contrastive weights. For examples, in the sixth row and fifth column of the first case, for the text anchor "this is a cute cat.", the false negative sample is the sixth image which also contains a cat and the contrastive weight of it is 0.12. Beside, we can observe that most negatives have a high contrastive weight as semantic similarity between them and anchors are low. To further investigate the effectiveness of contrastive weight for regulating the (partial) false negative samples in contrastive learning, we visualize the false negative samples and their contrastive weights. As shown in Figure 8, for the false negative samples, they are all assigned with low contrastive weights (not more than 0.2). This also supply the results of the experiments in subsection 4.3 that masking the negatives whose contrastive weight is less than 0.2 can gets a remarkable improvement. ![15_image_0.png](15_image_0.png) ![16_image_0.png](16_image_0.png) a stained glass ![17_image_0.png](17_image_0.png) ![17_image_1.png](17_image_1.png) old market | w | | |-----------|----| | = | | | 0.18 | | | sunset | | | over | | | the | | | river | w | | = | | | 0.11 | | | A | | | flower | | | i | | | bought | | | to | | | my | | | grandmom | | | for | | | mothers | | | day | | | w | | | = | | | 0.12 | | | Anchor | | | Positive | | | Sample | | | False | | | Negative | | | Sample | | | Partial | | | False | | | Negative | | | Sample | | | fisherman | | | in | | | the | | | boat | | | carried | | | boat | | | wearing | | | a | | | dog | | | and | | | a | | | a | | | man | | | in | | | a | | | boat | | | green | | | hat | | | with | | | man | | | on | | | a | | | lake | | | dog | | | the | | | cat | | | on | | | a | | | box | | | christams | | | tree. | | | cat | | | under | | | xmas | | | There | | | is | | | a | | | by | | | the | | | tree | | | tree | | | a | | | solitary | | | boat | | | boat | | | in | | | lake | | | boat | | | in | | | lugu | | | docked | | | in | | | made | | | up | | | lake | | | pensacola | | | florida | | | the | | | water | | | was | | | quite | | | calm | | w = 032 w = 0.64 w = *0.28* w = *0.62* w = 0.33 w = 0.55 Partial *False* Negative *Sample* the beautiful lake is surrounded by many trees w = 0.08 w = 0.40 w = *0.54* There is a charlie brown christams tree. cat under xmas tree the cat on a box by the tree she just sitting in tree watchin things' w = 0.20 w = 0.48 w = *0.64* one of the few portions of the ship still above water w = 0.09 w = 0.28 w = *0.79* ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 ✗ A2. Did you discuss any potential risks of your work? Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 6 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
leng-etal-2023-tell2design
{T}ell2{D}esign: A Dataset for Language-Guided Floor Plan Generation
https://aclanthology.org/2023.acl-long.820
We consider the task of generating designs directly from natural language descriptions, and consider floor plan generation as the initial research area. Language conditional generative models have recently been very successful in generating high-quality artistic images. However, designs must satisfy different constraints that are not present in generating artistic images, particularly spatial and relational constraints. We make multiple contributions to initiate research on this task. First, we introduce a novel dataset, Tell2Design (T2D), which contains more than 80k floor plan designs associated with natural language instructions. Second, we propose a Sequence-to-Sequence model that can serve as a strong baseline for future research. Third, we benchmark this task with several text-conditional image generation models. We conclude by conducting human evaluations on the generated samples and providing an analysis of human performance. We hope our contributions will propel the research on language-guided design generation forward.
## Tell2Design: A Dataset For Language-Guided Floor Plan Generation Sicong Leng1,*, Yang Zhou2,*,†**, Mohammed Haroon Dupty**3, Wee Sun Lee3, Sam Conrad Joyce4**, Wei Lu**1 1StatNLP Research Group, Singapore University of Technology and Design 2Institute of High Performance Computing (IHPC), A*STAR Singapore 3School of Computing, National University of Singapore 4Meta Design Lab, Singapore University of Technology and Design {sicong_leng,sam_joyce,luwei}@sutd.edu.sg zhou_yang@ihpc.a-star.edu.sg {dmharoon,leews}@comp.nus.edu.sg ## Abstract We consider the task of generating designs directly from natural language descriptions, and consider floor plan generation as the initial research area. Language conditional generative models have recently been very successful in generating high-quality artistic images. However, designs must satisfy different constraints that are not present in generating artistic images, particularly spatial and relational constraints. We make multiple contributions to initiate research on this task. First, we introduce a novel dataset, *Tell2Design* (T2D), which contains more than 80k floor plan designs associated with natural language instructions. Second, we propose a Sequence-toSequence model that can serve as a strong baseline for future research. Third, we benchmark this task with several text-conditional image generation models. We conclude by conducting human evaluations on the generated samples and providing an analysis of human performance. We hope our contributions will propel the research on language-guided design generation forward1. ## 1 Introduction Recently, text-conditional generative AI models (Nichol et al., 2022; Saharia et al., 2022b; Ramesh et al., 2022; Dhariwal and Nichol, 2021; Ho et al., 2022) have demonstrated impressive results in generating high-fidelity images. Such models generally focus on understanding high-level visual concepts from sentence-level descriptions, and the generated images are valued for looking realistic and being creative, thereby being more suitable for generating artwork. However, besides less constrained generation like artworks, generating designs that meet various requirements specified ∗ Equal contribution † Most work done at NUS 1Code and dataset are available at https://github.com/ LengSicong/Tell2Design. ![0_image_0.png](0_image_0.png) in natural languages is also much needed in practice (Stiny, 1980; Seneviratne et al., 2022; Zhang et al., 2022; Wei et al., 2022). In particular, a design process always involves interaction between users/clients, who define objectives, constraints, and requirements that should be met, and designers, who need to develop various solutions with domain-specific experiences and knowledge. For example, users may dictate their house design requirements in text and expect expert architects to perform the floor plan generation. Previous research in layout generation aims to automate the process of layout design in different domains such as scientific documents, mobile UIs, indoor scenes, etc (Zhong et al., 2019; Deka et al., 2017; Song et al., 2015; Janoch et al., 2013; Xiao et al., 2013; Silberman et al., 2012; Cao et al., 2022; Wu et al., 2019). Most of them perform the generation either based on several hand-crafted constraints or by using unconstrained generation. In practice, it can be more convenient for users to indicate their preferences in natural language. Among various design tasks, floor plan2 design, as shown in Figure 1 is of moderate complexity. However, it still intrinsically involves multiple rounds of communications between clients and de2Architectural floor plans, i.e., interior building layouts, are documents that indicate room types, room connections, room sizes, etc. They play a crucial role while designing, understanding, or remodeling indoor spaces (Liu et al., 2017). signers for specifying requirements, and requires a high level of precision and alignment to detail. AI systems that can learn to generate practically useful floor plan designs directly from natural languages will go a long way in reducing the protracted design process and making Generative AI directly usable for design by the end users. To allow people without expertise to participate and further enhance the design process, we aim to enable users to design by "telling" instructions, with a specific focus on the floor plan domain as the initial area of research. This sets forth a new machine learning task where the model learns to generate floor plan designs directly from language instructions. However, this task brings up two technical challenges. First, a floor plan is a structured layout that needs three intrinsic components to be valid: (1) *Semantics*, which describes the functionality of rooms (e.g., for living or bathing); (2) *Geometry*, which indicates the shape and dimension of individual rooms; (3) *Topology*, which defines the connectivity among different rooms (Pizarro et al., 2022). Second, these instructions are expressed in natural languages, which, besides the diversity of expressions, inherently suffer from ambiguity, misleading information, and missing descriptions for intrinsic components. To address the above challenges, we make multiple contributions to initiate research on the task of language-guided floor plan generation. First, we contribute a novel dataset, *Tell2Design* (T2D), to the research community. The T2D dataset contains more than 80k real floor plans from residential buildings. Each floor plan is associated with a set of language instructions that describes the intrinsic components of every room in a plan. An example from the dataset is illustrated in Figure 1. Second, we propose a Sequence-to-Sequence (Seq2Seq) approach as a solution to this task which also serves as a strong baseline for future research. Our approach is strengthened by a new strategy to explicitly incorporate the floor plan boundary constraint by transforming the outline into a box sequence. Third, in order to benchmark this novel task and evaluate our proposed approach, we implement strong baselines in text-conditional image generation on our T2D dataset and ask humans to perform the same task. The generation alignment with language instructions is evaluated both quantitatively and qualitatively. Finally, we discuss several future directions that are worth exploring based on our experimental results. In summary, our main contributions are: - We introduce a novel *language-guided floor* plan generation task along with the T2D dataset consisting of both natural humanannotated and large-scale artificially generated language instructions (Section 3). - We propose a new approach that formulates the floor plan generation task as a Seq2Seq problem (Section 4). - We provide adequate quantitative evaluations on all baselines and qualitative analysis of human evaluations and performances (Section 5). ## 2 Related Work Text-Conditioned Image Generation Image generation is a well-studied problem, and the most popular techniques have been applied for both unconditional image generation and text-conditional settings. Early works apply auto-regressive models (Mansimov et al., 2015), or train GANs (Xu et al., 2018; Zhu et al., 2019; Tao et al., 2022; Zhang et al., 2021; Ye et al., 2021) with publicly available image captioning datasets to synthesize realistic images conditioned on sentence-level captions. Other works have adopted the VQ-VAE technique (Van Den Oord et al., 2017) to text-conditioned image generation by concatenating sequences of text tokens with image tokens and feeding them into autoregressive transformers (Ramesh et al., 2021; Ding et al., 2021; Aghajanyan et al., 2022). More recently, some works have applied diffusion models (Ho et al., 2020; Nichol and Dhariwal, 2021; Saharia et al., 2022c; Dhariwal and Nichol, 2021; Ho et al., 2022; Saharia et al., 2022a; Rombach et al., 2022; Nichol et al., 2022; Saharia et al., 2022b; Ramesh et al., 2022) and received wide success in image generation, outperforming other approaches in fidelity and diversity, without training instability and mode collapse issues (Brock et al., 2018; Dhariwal and Nichol, 2021; Ho et al., 2022). However, these models operate on extracting high-level visual concepts from the short text and produce artwork-like images that are expected to be realistic and creative, thereby not suitable for generating designs that must satisfy various user/client requirements. Layout Generation Layout generation is essentially a design process that requires meeting domain-specific constraints, where the desirable layouts could be documents, natural scenes, mobile phone UIs, and indoor scenes. For example, PubLayNet (Zhong et al., 2019) is proposed to generate machine-annotated scientific documents with five different element categories text, title, figure, list, and table. RICO (Deka et al., 2017) is introduced to develop user interface designs for mobile applications, which contains *button, toolbar, etc*. SUN RGB-D (Song et al., 2015) presents a combined scene-understanding task, including indoor scenes from three other datasets (Janoch et al., 2013; Xiao et al., 2013; Silberman et al., 2012). Moreover, ICVT (Cao et al., 2022) aims to produce advertisement poster layouts automatically, where the image background is given as input. The above methods are designed for different layout domains and cannot be directly applied to floor plan design. Moreover, none of them has considered generating the layout design directly from languages. Floor Plan Generation Several methods have been proposed to generate floor plan designs automatically (Wu et al., 2018; Liu et al., 2013; Merrell et al., 2010; Hua, 2016; Wu et al., 2019; Chen et al., 2020; Chaillou, 2020). Most of these methods generate floor plans conditioned on certain constraints, such as room types, adjacencies, and boundaries. For example, Merrell et al. (2010) generate buildings with interior floor plans for computer graphics applications using Bayesian networks without considering any human preferences. Liu et al. (2013) present an interactive tool to generate desired floor plan following a set of manually defined rules. Hua (2016) particularly focus on generating floor plans with irregular regions. Wu et al. (2018) cast the generation as a mixed integer quadratic programming problem where some floor plan components are formulated into a set of inequality constraints. More recently, Wu et al. (2019) propose a CNN-based method to determine the location of different rooms given boundary images as a constraint. Chen et al. (2020) provide a small amount of template-based artificial verbal commands and manually parses them to scene graphs for guiding the generation. In summary, existing methods represent the intrinsic components of floor plans in several specific formats as generation constraints. Some formats are straightforward, such as boundary images, but they only specify limited constraints and lead to less controllable generation. Other formats, such as scene graphs and inequalities, can incorporate more information but require specific domain knowledge and extra-human efforts in pre-processing. We instead provide a unified and natural way of conditioning the floor plan generation with a set of language instructions that is much more flexible and user-friendly to characterize floor plans with various constraints. ## 3 Tell2Design Dataset In this section, we introduce how we construct our T2D dataset, followed by the data analysis and a discussion of the main dataset challenges. ## 3.1 Task Definition Given a set of language instructions describing a floor plan's intrinsic components, our aim is to generate reasonable 2D floor plan designs that comply with the provided instructions. Input & Output For each data sample, the input is a set of natural language instructions that characterize the key components of the corresponding floor plan design, which include: (1) *Semantics* specifies the type and functionality of each room. For example, a room as *Kitchen* is for cooking. (2) *Geometry* specifies the shape and dimension of each room. For residential buildings, it involves the room's general orientation (e.g., the north, south, northeast, southwest), area in square feet, aspect ratio, etc. (3) *Topology* describes the relationships among different rooms. It can be divided into three categories: relative location, connectivity, and inclusion3. The desirable output is a structured interior layout that aligns with the input language instructions. ## 3.2 Floor Plan Collection We use floor plans from RPLAN4(Wu et al., 2019) to construct our Tell2Design dataset. We remove floor plans with rarely-appeared rooms and merge similar room types such as *Second Room* and Guest Room. As a result, 8 different room types (i.e., common room, bathroom, balcony, living room, master room, kitchen, storage, and dining room) and 80, 788 floor plans are selected for collecting language instructions. Each floor plan is converted into a 256×256 image where different pixel values 3As a result, language instructions specifying the above features for a floor plan lead to a document-level description in natural language. We compare the T2D dataset with several document-level NLP datasets in Appendix B. 4http://staff.ustc.edu.cn/~fuxm/projects/ DeepLayout/index.html | Human | Artificial | | |---------------------------|--------------|--------| | Avg. # words per instance | 200.30 | 260.47 | | Avg. # sent. per instance | 11.89 | 23.46 | | Avg. # words per room | 29.48 | 38.44 | | Avg. # sent. per room | 1.75 | 3.46 | Table 1: Language instruction statistics. indicate different room types, from which we extract room-type labels and bounding boxes of each room to construct our dataset. ## 3.3 Language Instruction Collection Human Instructions To collect real human language instructions, we hire crowdworkers from Amazon Mechanical Turk (MTurk)5and ask them to write a set of instructions for each room according to a given floor plan image. The requested instructions should reflect the *Semantic, Geometric,* and *Topological* information of the floor plan, such that designers could ideally reproduce the floor plan layout according to the instructions. In particular, turkers are encouraged to include (but are not limited to) attributes such as room types, locations, sides, and relationships in their instructions. The definitions of these attributes are given as follows: The room type (e.g., bathroom and kitchen) specifies the functionality of a room. The room location specifies the global location of a room in the floor plan and can be described by phrases such as "north side" and "southeastern corner". The room sides specify the length and width of a room (e.g., "8 feet wide and 10 feet long"). The room relationships specify the relative position of a room with other rooms such as "next to", "between", and "opposite" 6. Due to the noisy nature of crowdsourcing annotations, we discard some low-quality annotations to ensure the overall quality of our datasets. To this end, we manually review each annotation and discard human instructions with: (1) incoherence, grammatical errors; (2) insufficient attributes; or (3) irrelevance to the given floor plan. As a result, we collect human instructions for 8, 220 floor plans, and 5, 051 of them are finally accepted after | Human Instructions: the unit. (expression diversity) (ambiguity) Artificial Instructions: be next to the master room. | |-------------------------------------------------------------------------------------------------------------------------| ![3_image_0.png](3_image_0.png) (ambiguity) be next to the master room. manual assessment to construct our dataset7. Artificial Instructions In addition to the humanwritten instructions, we also generate language instructions artificially for the remaining 75, 737 floor plans from pre-defined templates. To ensure that the artificial instructions are as informative as human-written ones and include all the required components, we ask 5 educated volunteers with natural language processing (NLP) backgrounds to write language instructions for each room that appeared in the given floor plan. We then summarize their instructions into multiple templates and ask expert architectural designers for proofreading. Hence, each instruction template is ensured to be informative, grammatically correct, and coherent. In summary, our T2D dataset consists of 5, 051 human-annotated and 75, 737 artificially-generated language instructions8. ## 3.4 Data Analysis In this section, we analyze various aspects of Tell2Design to provide a more comprehensive understanding of the dataset. Language Instructions Table 1 shows the statistics of the language instructions in our dataset. For each floor plan, the human instructions are organized in nearly 11 sentences consisting of 200 words on average. This includes around 30 words used to describe each room in more than 2 sentences. The artificially generated instructions follow a similar pattern with slightly more words. To show the connections and differences between human and artificial instructions, we com-7Our human instruction collection involves 5, 109 different workers with 1, 723 working hours, and each worker receives full compensation in alignment with the established standards of MTurk. 8More details on the language instruction collection can be found in Appendix C. pare them for the same room type, *Balcony*, in Figure 2. Artificial instructions always exhibit complete information, including all three key components of a floor plan. They are also formatted in a structured expression, such as "on the ** side", "** sqft with an aspect ratio of **", and "next to". However, human instructions are more diverse in expression but suffer from ambiguity and missing components. Dataset Comparison In order to see our T2D dataset in perspective, we note its main differences with respect to other related datasets used for similar generation tasks9. T2D differs from other datasets in several perspectives: (1) T2D is the first large-scale dataset that aims to generate designs (i.e., floor plans) from direct user input natural language; (2) T2D has much longer text annotations (i.e., 256 words per instance) compared with other text-conditional generation datasets; (3) All text in T2D is written by humans or generated artificially, instead of being crawled from the internet. ## 3.5 Dataset Challenges In this section, we discuss three main challenges of our collected T2D dataset. We hope this dataset can facilitate the research on both design generation and language understanding. Design Generation under Constraints The first challenge is to perform the design generation under much stricter constraints compared with artworklike text-conditional image generation. Most works in text-conditional image generation operate on generating realistic and creative images that align with the main visual concepts represented by the short input text. However, creating a design from languages has much stricter requirements on precision and alignment to text details. In particular, the generated floor plan design should comply with constraints such as room type, location, size, and relationships, which are specified by users using natural languages. Our main results in Section 5 comparing different baselines demonstrate that existing text-conditional image generation techniques fail to follow detailed user requirements on this design task. Fuzzy & Entangled Information The second challenge is to understand the big picture of the entire floor plan from document-level unstructured text with fuzzy and entangled information. Be-9We provide detailed comparisons with tables in Appendix B. sides the general abilities required for language understanding, such as entity recognition, coreference resolution, relation extraction, etc, models also need to collaborate with fuzzy individual room attributes and reason over entangled relationships among different rooms to understand the entire floor plan. Specifically, one language instruction usually either specifies fuzzy descriptions for a room's *Semantic* and *Geometric* information such as "on the north side" and "at the southeast corner", or indicates the relationship of one specific room with others like "next to" and "between". The provided information in such instructions is coarse and relative, rather than complete and precise information like numerical coordinates. As a result, to determine the location of all rooms and design a reasonable floor plan, models must collaborate with fuzzy and entangled information residing in multiple instructions, and incorporate the boundary information. Human evaluations in Section 5.4 demonstrate that room relationships described in language instructions are the most challenging component to be understood and aligned with. Noisy Human Instructions The third challenge comes from the ambiguous, incomplete, or misleading information in human instructions. As introduced in Section 3.3, the artificial instructions are template-based so that they always contain precise and coherent information. However, for humanwritten language instructions, ambiguous or noisy information always exists. For example, during human instruction collection, workers are asked to write natural sentences estimating some numericrelated attributes like room size and aspect ratio referring to the floor plan image, which may sometimes be inaccurate. Moreover, as previously discussed in Figure 2, other than the expression diversity, human instructions also exhibit ambiguous phrasing and incomplete information. It is thus more challenging for models to retrieve accurate, complete, and consistent information from human instructions. ## 4 T2D Model In this section, we propose a simple yet effective method for *language-guided floor plan generation*. Unlike existing floor plan generation methods (Wu et al., 2019; Chen et al., 2020) that use a regression head to generate the bounding box of each room one at a time, we cast the floor plan generation task as a Seq2Seq problem under the encoderdecoder framework, where room bounding boxes are re-constructed into a *structured* target sequence. This way, our method can easily deal with various lengths of instructions for floor plans with different numbers of rooms. ## 4.1 Target Sequence Construction Recall that our aim is to generate a floor plan layout from language instructions, where each room can be represented by a room-type label (e.g., bathroom and kitchen) and a bounding box. One bounding box can be determined by four values (*x, y, h, w*), which indicate the x and y coordinate of the center point, height (h), and width (w), respectively. To solve language-guided floor plan generation as a Seq2Seq problem, we treat the instructions as the input sequence and consider bounding boxes of rooms as the target sequence. Specifically, each of the continuous values (*x, y, h, w*) is discretized into integers between [0, 255], and the room type is given by the plain text in natural language, so that they can be naturally represented as a sequence of tokens. The target sequence is then constructed by grouping the room type and the bounding box together with certain special tokens. For example, the target sequence for a *Balcony* with the bounding box (87, 66, 18, 23) is given as follows: [ **Balcony** | x coordinate = 87 | y coordinate = 66 | height = 18 | width = 23 ], where the special tokens "[" and "]" are used to indicate the start and end of the target sequence for one room and "|" is used to separate different target components. We have also added semantic prefixes such as "x coordinate =" and "height =" before the values of bounding boxes to assist the target sequence generation. Finally, we concatenate the target sequences of all the rooms in a floor plan and add an <eos> token at the end to indicate the end of the overall target sequence. ## 4.2 Boundary Information Incorporation The outline/boundary of a floor plan is one of the most important constraints in floor plan generation, which directly affects where each room should be placed and how different rooms should be aligned with the floor plan boundary. However, it is nontrivial to incorporate such boundary information into floor plan generation. Previous methods either fail to take the floor plan outline into account (Wu ![5_image_0.png](5_image_0.png) et al., 2018; Liu et al., 2013; Merrell et al., 2010; Hua, 2016; Chen et al., 2020) or only consider the boundary image, ignoring all other constraints (Wu et al., 2019; Chaillou, 2020), leading to less controllable floor plan design. In this work, we propose a novel approach to incorporate boundary information by representing the irregular outline as a set of boxes. The idea is to encode the boundary information by an enclosing box that is the minimum bounding region containing the entire floor plan and several exterior boxes that are inside the enclosing box but excluded from the floor plan. Figure 3 illustrate how the floor plan boundary can be characterized by the enclosing (in red) and exterior boxes (in yellow). This way, we have an enclosing box represented by (x en, yen, hen, wen) and M exterior boxes by (x ex i , yex i, hex i , wex i). Then we adopt a similar strategy in Section 4.1 to represent the enclosing and exterior boxes in a sequence as follows: $$\begin{array}{l l l l}{{+}}&{{x^{e n}\;\;y^{e n}\;\;h^{e n}\;\;w^{e n}\;\;-\;\;x_{1}^{e x}\;\;y_{1}^{e x}}}\\ {{}}&{{h_{1}^{e x}\;\;w_{1}^{e x}\;\;\ldots\;\;-\;\;x_{M}^{e x}\;\;y_{M}^{e x}\;\;h_{M}^{e x}\;\;w_{M}^{e x},}}\end{array}$$ where the coordinates of the enclosing and exterior box are following the tokens "+" and "-", respectively. Finally, the above sequence is added after the input language instructions for training. Our experimental results in Section 5 show that the proposed boundary information incorporation strategy is effective in enhancing our Seq2Seq method to generate valid rooms that align well with the floor plan boundary. ## 4.3 Architecture, Objective And Inference Treating the target sequences that we construct from floor plans as a text sequence, we turn to recent architectures and objective functions that have been effective in Seq2Seq language modeling. Architecture We use the popular Transformerbased (Vaswani et al., 2017) encoder-decoder structure to build our Seq2Seq model for floor plan generation. The model is initialized by a pre-trained language model T5 (Raffel et al., 2020) for better language understanding abilities10. Objective Similar to language modeling, our T2D model is trained to predict the next token, given an input sequence and preceding tokens, with a maximum likelihood objective function, i.e., $$\operatorname*{max}_{\theta}\sum_{j=1}^{L}\log P_{\theta}\left({\tilde{\mathbf{y}}}_{j}\mid\mathbf{x},\mathbf{y}_{1:j-1}\right),\qquad(1)$$ where x is a set of instructions in natural language concatenated with the previously defined boundary sequence, y is the target bounding box sequence, and L is the target sequence length. Inference At inference time, we sample11 tokens one by one from the model likelihood, i.e., P (y˜j | x, y1:j−1). The sequence generation ends once the <eos> token is sampled, and it is straightforward to parse the target sequence into predicted floor plans. ## 5 Experiments 5.1 Baselines Since our T2D dataset is the first to consider language-guided floor plan generation, existing layout generation methods are not applicable to this task. To further illustrate the challenge of the design generation task and the difference with the existing text-conditional image generation problem, we adapt several state-of-the-art text-conditional image generation methods as baselines for comparison. In particular, we compare our method with the following: - CogView (Ding et al., 2021) applies pre-trained VQ-VAE to transform the target image into a sequence of image tokens. Then the text and image tokens are concatenated together and fed to a Transformer decoder (i.e., GPT (Brown et al., 2020; Radford et al., 2019)) to generate text-conditional images. - Imagen (Saharia et al., 2022b) is one of the state-of-the-art text-to-image generation models that build upon both large language models (e.g., T5) for text understanding and diffusion models for high-fidelity image generation. ## 5.2 Experimental Settings Model Training For model training, we consider a Warm-up + Fine-tuning pipeline (Goyal et al., 2017), where the model is first warmed up on 75, 737 artificial instructions, and then fine-tuned on 2, 743 human instructions. To evaluate how floor plan generation methods generalize to unseen instructions, we use the remaining 2, 308 human instructions as the test set, such that there is no overlapping between annotators of the training set and the test set12. Evaluation Metrics For testing, we use macro and micro Intersection over Union (IoU) scores between the ground-truth (GT) and generated floor plans at pixel level as the evaluation metrics, whose definitions are given as follows: $${\mathrm{Micro~IoU}}={\frac{\sum_{r=1}^{R}I_{r}}{\sum_{r=1}^{R}U_{r}}},{\mathrm{Macro~IoU}}={\frac{1}{R}}\sum_{r=1}^{R}{\frac{I_{r}}{U_{r}}},$$ where the Ir and Ur, respectively, denote the intersection and union of the ground-truth and predicted rooms labeled as the r-th room type in a floor plan. R is the total number of room types. Macro IoU calculates the average IoU over different types of rooms, and Micro IoU calculates the global IoU by aggregating all rooms. Since Obj-GAN and our T2D model generate bounding boxes rather than images, we use a simple strategy to transform the outputs of Obj-GAN and the T2D model into images without any further refinement for a fair comparison. Specifically, we paint each room in descending order in terms of the total area of the room type13 and different colors 12We provide more implementation details in Appendix A. 13Total area of the room type is computed by adding up the specific room type area across all floor plans in our dataset. This gives us the following order: living room, common room, | Models | Micro IoU | Macro IoU | |-----------------------------------------------------------------------------------------------------------------------------------------------|-------------|-------------| | Training on artificial instructions only | | | | Obj-GAN | 15.74 | 11.12 | | CogView | 10.01 | 8.31 | | Imagen | 14.74 | 15.57 | | T2D (w/o bd) | 6.46 | 4.01 | | T2D | 9.13 | 6.06 | | Training on human instructions only | | | | Obj-GAN | 10.72 | 8.29 | | CogView | 13.48 | 11.26 | | Imagen | 9.29 | 6.64 | | T2D (w/o bd) | 32.22 | 26.24 | | T2D | 42.93 | 38.48 | | Warm up on artificial + fine-tune on human Obj-GAN 10.68 8.44 CogView 13.30 11.43 Imagen 12.17 14.96 T2D (w/o bd) 35.95 29.95 T2D 54.34 53.30 | | | refer to different room types. Previous colors will be replaced by the subsequent paintings if there is an overlapping between bounding boxes (rooms). For image-based approaches (e.g., CogView and Imagen), we compute the maximized IoU scores by shifting the floor plan central point in the generated image. ## 5.3 Main Results Table 2 shows the floor plan generation results on the T2D dataset, where T2D (w/o bd) indicates the T2D model without incorporating boundary information14. The T2D model achieves the highest IoU scores with a micro IoU of 54.34 and a macro IoU of 53.30, outperforming other baselines by a large margin. These can be attributed to our Seq2Seq model in controlling the target box sequence generation based on salient information extracted from language instructions. In contrast, text-conditional image generation methods fail to perform well. This is probably because those models are designed to generate artwork-like images with high-level visual concepts from the short text, instead of following multiple instructions with various constraints for a specific design. | Alignment | GT ratings | T2D ratings | |---------------|--------------|---------------| | Room type | 4.99 | 4.71 | | Room location | 4.86 | 3.67 | | Room size | 4.75 | 3.89 | | Relationships | 4.89 | 3.65 | | Meet all % | 85% | 38% | Table 3: Human evaluation results. When training only on artificial instructions while testing on human-written ones, our method cannot perform well. This indicates there is a language distribution gap between artificial and human instructions. Nevertheless, when artificial instructions are used for warming up before training on human instructions, the performance of our method is significantly improved with over 10 IoU scores increment. This suggests that despite the language gap, artificial and human instructions are mutually beneficial data portions during training. In addition, in all the training settings, representing the floor plan boundary as a sequence of boxes consistently improves the performance of our Seq2Seq approach. This demonstrates that this strategy could be one of the possible solutions to incorporate the floor plan boundary. ## 5.4 Result Analysis It is worth noting that the quantitative results *indirectly* evaluate how well the generated floor plans align with the language instructions since IoU scores essentially measure the overlap between generated and ground-truth floor plan layouts. Due to the complexity of our task, it is possible for the same language instruction to map to multiple floor plan designs. Therefore, a low IoU score does not necessarily mean a bad generation. Human Evaluations To *directly* evaluate the alignment between generated floor plans and language instructions, we conduct human evaluations on a subset of the T2D test set, which consists of 100 randomly sampled instructions written by different annotators. For this purpose, we invite 5 volunteers with NLP backgrounds to evaluate the degree of alignment between source language instructions and target floor plans. Specifically, we consider four partial alignment criteria in terms of room types, locations, sizes, and relationships. Each volunteer is asked to provide four ratings on a scale of 1 to 5, according to the | Models | Micro IoU | Macro IoU | |----------|-------------|-------------| | T2D | 55.10 | 55.16 | | Human | 64.67 | 62.32 | ![8_image_1.png](8_image_1.png) ![8_image_2.png](8_image_2.png) above-mentioned alignment criteria, respectively15. Besides, we also consider global alignment and ask our volunteers to justify whether the generated floor plan meets all the specifications in the instructions. We perform the above subjective evaluations for both T2D-generated and ground-truth floor plan designs. Table 3 shows the human evaluation results. As can be seen, ground-truth floor plans get high ratings for all the partial alignment criteria, and 85% of them meet all the requirements specified in the instructions. This indicates our dataset contains high-quality human instructions that align well with the ground-truth floor plan designs. On the other hand, our T2D model receives no rating less than 3.5, indicating that at least 50% rooms with respect to their locations, sizes, and relationships can be correctly predicted. However, our method still has a gap with ground truth designs, especially in room location and relationships, which indicates the potential for improvements. Human Performance To study human performance on the T2D task, we further ask our volunteers to design floor plans on 100 instances of the same subset used for human evaluations16. Table 4 reports the IoU scores for our T2D model and human performance. Humans generally achieve better IoU scores. However, even if human-generated floor plans intrinsically have much better alignment with the instructions, they only obtain around 63% IoU scores with the ground truths. This exposes the nature of design diversity, i.e., a set of language instructions can map to multiple plausible floor plan designs. Figure 4 provides a real example that both our method and humans follow the same instructions17 but generate different floor plans. Tell2Design Ground Truth **Human** ![8_image_0.png](8_image_0.png) ![8_image_3.png](8_image_3.png) ## 6 Future Research In the future, the following directions may be worth exploring to promote the performance or extend our task: (1) How to build robust language understanding models that can adapt to the presence of noise in human instructions or even locate and refine potentially inconsistent information? (2) How to explicitly incorporate the nature of design diversity and develop techniques for diverse floor plan design? (3) How to extend the language-guided floor plan generation task to more domains or more practical but challenging scenarios, where designs should be refined according to feedback from users/clients? ## 7 Conclusion In this paper, we initiate the research of a novel language-guided design generation task, with a specific focus on the floor plan domain as a start. We formulate it as *language-guided floor plan generation* and introduce *Tell2Design* (T2D), a large-scale dataset that features floor plans with natural language instructions in describing user preferences. We propose a Seq2Seq model as a strong baseline and compare it with several text-conditional image generation models. Experimental results demonstrate that the design generation task brings up several challenges and is not well-solved by existing text-conditional image generation techniques. Human evaluations assessing the degree of alignment between text and design, along with the human performance on the task, expose the challenge of understanding fuzzy and entangled information, and the nature of design diversity in our task. We hope this paper will serve as a foundation and propel future research on the task of the language-guided design generation. ## Limitations The proposed T2D dataset has several limitations, which could be addressed in future work. First, it only considers and collects language instructions for the floor plan domain. Future work could extend this language-guided design generation task to other design domains such as documents, mobile UIs, etc. Second, it is limited in the scope of languages where we only collect instructions written in English. Future work could assess the generalizability of the T2D dataset to other languages. Third, although generating floor plan designs from languages exhibit diversity, we do not consider improving generation diversity at this moment. Future works could consider building frameworks that specifically aim at design diversity. ## Ethics Statement In this section, we discuss the main ethical considerations of *Tell2Design* (T2D): (1) Intellectual property protection. The floor plans of the T2D dataset are from the RPLAN (Wu et al., 2019) dataset. Our dataset should be only used for research purposes. (2) Privacy. The floor plan data sources are publicly available datasets, where private data from users and floor plans have been removed. Language instructions are either generated artificially or collected from Amazon Mechanical Turk, a legitimate crowd-sourcing service, and do not contain any personal information. (3) Compensation. During the language instruction collection, the salary for annotating each floor plan is determined by the instruction quality and Mturk labor compensation standard. ## Acknowledgements We would like to thank the anonymous reviewers, our meta-reviewer, and senior area chairs for their constructive comments and support of our work. We also gratefully acknowledge the support of NVIDIA AI Technology Center (NVAITC) for our research. This research/project is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Program (AISG Award No: AISG2-RP-2020-016). ## References Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, and Luke Zettlemoyer. 2022. Cm3: A causal masked multimodal model of the internet. *ArXiv*. Andrew Brock, Jeff Donahue, and Karen Simonyan. 2018. Large scale gan training for high fidelity natural image synthesis. *ArXiv*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *In Proc. of NIPS*. Yunning Cao, Ye Ma, Min Zhou, Chuanbin Liu, Hongtao Xie, Tiezheng Ge, and Yuning Jiang. 2022. Geometry aligned variational transformer for imageconditioned layout generation. In *Proc. of ACM Multimedia*. Stanislas Chaillou. 2020. Archigan: Artificial intelligence x architecture. In *Architectural intelligence*. Qi Chen, Qi Wu, Rui Tang, Yuhan Wang, Shuai Wang, and Mingkui Tan. 2020. Intelligent home 3d: Automatic 3d-house design from linguistic descriptions only. In *Proc. of CVPR*. Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hibschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar. 2017. Rico: A mobile app dataset for building data-driven design applications. In *Proc. of UIST*. Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. In Proc. of NIPS. Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. 2021. Cogview: Mastering text-to-image generation via transformers. In Proc. of NIPS. Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Accurate, large minibatch sgd: Training imagenet in 1 hour. *ArXiv*. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Proc. of NIPS. Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. 2022. Cascaded diffusion models for high fidelity image generation. *In Proc. of JMLR*. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. *In Proc. of FAIR*. Ruizhen Hu, Zeyu Huang, Yuhan Tang, Oliver Van Kaick, Hao Zhang, and Hui Huang. 2020. Graph2plan: Learning floorplan generation from layout graphs. *ACM Transactions on Graphics*. Hao Hua. 2016. Irregular architectural layout synthesis with graphical inputs. *Automation in Construction*. Allison Janoch, Sergey Karayev, Yangqing Jia, Jonathan T Barron, Mario Fritz, Kate Saenko, and Trevor Darrell. 2013. A category-level 3d object dataset: Putting the kinect to work. In *Consumer* Depth Cameras for Computer Vision. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. *ArXiv*. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proc. of ACL*. Wenbo Li, Pengchuan Zhang, Lei Zhang, Qiuyuan Huang, Xiaodong He, Siwei Lyu, and Jianfeng Gao. 2019. Object-driven text-to-image synthesis via adversarial training. In *Proc. of CVPR*. Chen Liu, Jiajun Wu, Pushmeet Kohli, and Yasutaka Furukawa. 2017. Raster-to-vector: Revisiting floorplan transformation. In *Proc. of ICCV*. Han Liu, Yong-Liang Yang, Sawsan AlHalawani, and Niloy J Mitra. 2013. Constraint-aware interior layout exploration for pre-cast concrete-based buildings. The Visual Computer. Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. 2015. Generating images from captions with attention. *ArXiv*. Paul Merrell, Eric Schkufza, and Vladlen Koltun. 2010. Computer-generated residential building layouts. In Proc. of SIGGRAPH. Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. 2022. Glide: Towards photorealistic image generation and editing with textguided diffusion models. *In Proc. of ICML*. Alexander Quinn Nichol and Prafulla Dhariwal. 2021. Improved denoising diffusion probabilistic models. In *Proc. of ICML*. Pablo N Pizarro, Nancy Hitschfeld, Ivan Sipiran, and Jose M Saavedra. 2022. Automatic floor plan analysis and recognition. *Automation in Construction*. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *In Proc. of JMLR*. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with clip latents. *ArXiv*. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In *Proc. of ICML*. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In *Proc. of CVPR*. Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. 2022a. Palette: Image-toimage diffusion models. In *Proc. of SIGGRAPH*. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. 2022b. Photorealistic text-to-image diffusion models with deep language understanding. *ArXiv*. Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. 2022c. Image super-resolution via iterative refinement. *IEEE Transactions on Pattern Analysis and* Machine Intelligence. Sachith Seneviratne, Damith Senanayake, Sanka Rasnayaka, Rajith Vidanaarachchi, and Jason Thompson. 2022. Dalle-urban: Capturing the urban design expertise of large text to image transformers. *ArXiv*. Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. 2012. Indoor segmentation and support inference from rgbd images. In *Proc. of ECCV*. Shuran Song, Samuel P Lichtenberg, and Jianxiong Xiao. 2015. Sun rgb-d: A rgb-d scene understanding benchmark suite. In *Proc. of CVPR*. George Stiny. 1980. Introduction to shape and shape grammars. *Environment and planning B: planning* and design. Ming Tao, Hao Tang, Songsong Wu, Nicu Sebe, XiaoYuan Jing, Fei Wu, and Bingkun Bao. 2022. Df-gan: Deep fusion generative adversarial networks for textto-image synthesis. *In Proc. of CVPR*. Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. *In Proc. of NIPS*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *In Proc. of NIPS*. Tianyi Wei, Dongdong Chen, Wenbo Zhou, Jing Liao, Zhentao Tan, Lu Yuan, Weiming Zhang, and Nenghai Yu. 2022. Hairclip: Design your hair by text and reference image. In *Proc. of CVPR*. Wenming Wu, Lubin Fan, Ligang Liu, and Peter Wonka. 2018. Miqp-based layout design for building interiors. In *Proc. of CGF*. Wenming Wu, Xiao-Ming Fu, Rui Tang, Yuhan Wang, Yu-Hao Qi, and Ligang Liu. 2019. Data-driven interior plan generation for residential buildings. ACM Transactions on Graphics. Jianxiong Xiao, Andrew Owens, and Antonio Torralba. 2013. Sun3d: A database of big spaces reconstructed using sfm and object labels. In *Proc. of ICCV*. Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. 2018. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proc. of CVPR. Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. Docred: A large-scale document-level relation extraction dataset. In Proc. of ACL. Hui Ye, Xiulong Yang, Martin Takac, Rajshekhar Sunderraman, and Shihao Ji. 2021. Improving text-toimage synthesis using contrastive learning. In Proc. of BMVC. Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. 2021. Cross-modal contrastive learning for text-to-image generation. In Proc. of CVPR. Xujie Zhang, Yu Sha, Michael C Kampffmeyer, Zhenyu Xie, Zequn Jie, Chengwen Huang, Jianqing Peng, and Xiaodan Liang. 2022. Armani: Part-level garment-text alignment for unified cross-modal fashion design. In *Proc. of ACM Multimedia*. Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. In *Proc. of ICDAR*. Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. 2019. Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis. In Proc. of CVPR. ## A Implementation Details T2D Parameters In practice, we initialize all weights of our proposed baseline method from T5base18. In training, we use Adam (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.999, ϵ = 1e − 08 to update the model parameters. We fine-tune our model on 3 RTX 8000 GPUs with batch size 12 and learning rate 5e − 4 for 20 epochs. 18https://huggingface.co/t5-base Baseline Implementation For the mentioned baselines, only Obj-GAN and CogView are opensourced. Therefore, we adapt and implement the models from their official GitHub repositories19. However, as Imagen's source codes are not published, we implement it from the most starred GitHub repository20 (i.e., 5.9k stars until writing this paper) and adapt it to our T2D dataset. We use the process floor plan images in Graph2Plan (Hu et al., 2020) for training. Although all baselines have provided pre-trained checkpoints for fine-tuning, our preliminary experiments indicate that training those baselines from scratch on the T2D dataset will obtain better performances. One most probable reason is the huge discrepancy between the data distributions of the baseline pretraining corpus and our T2D dataset. Those baseline checkpoints are mostly trained with real-life images with various objects and backgrounds. But our T2D dataset only focuses on the floor plan domain. Specifically, for Obj-GAN, we adopt and freeze the pre-trained text encoder, and train the rest of the networks (e.g., LSTMs) from scratch. For CogView, we freeze the pre-trained VQ-VAE and initialized the main backbone, decoder-only transformer, from GPT (Radford et al., 2019; Brown et al., 2020). During training, only the parameters of the transformer backbone will be updated. For Imagen, we import the T5-large model's encoder from *Hugging Face* for text encoding and freeze all its parameters during training. The rest U-nets for diffusion will be updated according to the loss propagation. ## B Dataset Analysis Floor Plan Statistics As shown in Tabel 5 and Figure 5, we present the statistics on the occurrence of each room type and the number of rooms per floor plan. There are 8 types of rooms in total. More than 92% of floor plans include at least 6 distinct rooms, and the most frequent room types are *Common Room, Bathroom, Balcony, Living* Room, Master Room, and Kitchen. Dataset Comparison As shown in Table 8, compared with other-related layout generation datasets, our T2D dataset is the first to have language annotations and aims to generate layout designs directly 19https://github.com/jamesli1618/Obj-GAN; https: //github.com/THUDM/CogView 20https://github.com/lucidrains/imagen-pytorch ![12_image_0.png](12_image_0.png) from languages. Since generating floor plan designs from language instructions can be naturally formulated into a text-conditional image generation problem, we compare our dataset with two benchmarking text-conditional image generation datasets in Table 6. We observe that our dataset is with a similar number of images with MSCOCO and Flickr30K but contains far longer text annotation for each image (i.e.T2D has 256 words on average describing each floor plan image). Moreover, as a set of language instructions for a floor plan results in a document-level text description, we compare our dataset with other document-level NLP datasets in Table 7. We hope that our dataset can also propel the research on document-level language understanding. It is shown that our dataset has a comparable total number of samples and words with the largest DocRED(Yao et al., 2019). More importantly, our "documents" are either humanannotated or artificially generated, instead of being crawled from the internet. Table 6: Comparisons between our T2D dataset and text-conditional image generation datasets. Table 7: Comparisons between our T2D dataset and document-level NLP datasets. | Dataset | # Img. | Avg. # Words | |------------|----------|----------------| | MS COCO | 82,783 | 11.3 | | Flickr30K | 31,000 | 11.8 | | T2D (ours) | 80,788 | 256.7 | | Dataset | # Doc. | # Word | # Sent. | |--------------------|----------|----------|-----------| | SCIERC | 500 | 60755 | 2217 | | BC5CDR | 1,500 | 282k | 11,089 | | DocRED (Human) | 5,053 | 1,002k | 40,276 | | DocRED (Distantly) | 101,873 | 21,368k | 828,115 | | T2D (Human) | 5,051 | 1,011k | 60.057 | | T2D (Artificial) | 75,737 | 19,727k | 1,776k | ## C Dataset Collection Details Human instruction We employ Amazon Mechanical Turk (MTurk)21 to let annotators write language instructions for a given RGB 2D floor plan. Amazon considers this web service "artificial intelligence," and it is applied in various fields, including data annotation, survey participation, content moderation, and more. The global workforce (called "turkers" in the lingo) is invited for a small reward to work on "Human Intelligence Tasks" (HITs), created from an XML description of the task from business companies or individual sponsors (called "requesters"). HITs can display a wide variety of content (e.g., text and images) and provide many APIs, e.g., buttons, checkboxes, and input fields for free text. In our case, turkers are required to fill the blank input fields in HITs with language instructions for each room, following our guidelines. A screenshot of one of our HITs is displayed in Figure 6. We also show a full example of human instructions in Figure 7. Artificial Instruction The artificial instructions in our T2D dataset are generated from scripts with several pre-defined templates. We carefully select volunteers with natural language processing backgrounds for drafting templates. Before participating in the annotation process, each annotator was required to undergo a qualification round consisting of a series of test annotations. We illustrate how we generate an instruction to describe one room's as-21https://www.mturk.com/ | Room type | #Floor plan | |-------------|---------------| | CommonRoom | 100,847 | | Bathroom | 97,113 | | Balcony | 86,545 | | LivingRoom | 80,788 | | MasterRoom | 80,466 | | Kitchen | 77,768 | | Storage | 3,351 | | DiningRoom | 1,312 | ![13_image_0.png](13_image_0.png) | Dataset | Domain | Basic objects | Object annotations | Other annotations | |------------|----------------------|--------------------------|-----------------------------|------------------------| | PubLayNet | scientific documents | {text,title,figure,...} | bounding boxes | None | | RICO | mobile UIs | {button,tool bar,...} | bounding boxes,interactions | animations,hierarchies | | SUB RGB-D | 3D indoor scenes | {chair,table,pillow,...} | bounding boxes | 2D&3D polygons | | ICVT | poster designs | {text,logo,...} | bounding boxes,substrates | background images | | T2D (ours) | floor plans | {kitchen,balcony,...} | bounding boxes | language instructions | Table 8: Comparisons between our T2D dataset and other layout generation datasets. Restricted ## Human Instruction Example: | Balcony one is located at the northwestern point of the floorplan. It is approximately 50 square feet in size. It can be accessed through the kitchen, bathroom and common room 2. Balcony 2 is located in the most southern point of the floorplan, just east of the master room. It is roughly 12 feet in length and five feet in width. Access points include the livingroom and master room. The bathroom is located south of the kitchen. It can be accessed through the east wall of the livingroom, as well as the southern kitchen wall. Common room 2 has acess to the bathroom through its eastern wall. It is the smallest room in the floorplan. At approximately 25 square feet in size, it is just a bit smaller than balcony 1 and half the size of balcony 2. Common room one is located on the western portion of the floor plan. It can be accessed through common room 2 at the north, the master room at the south and the livingroom from the east. Common room one is about 100 square feet, 10 feet in length and 10 feet in width, making it the second largest room in the floor plan. Commmon room 2 is just a bit smaller than Common room 1, approximately 90 square feet. It is located just north of Common room 1, with access points including the bathroom at the northeast, balcony at the north, and livingroom at the east. The kitchen is located in the most northern point of the floorplan. It is roughly 50 square feet, 10 feet in length and 5 feet in width. The bathroom can be accessed at the southern point of the kitchen, the first balcony toward the western kitchen wall and the livingroom at the eastern wall. The kitchen is relatively closest in size to the balconies. The livingroom is located in the northeastern portion of the floorplan. It is entered through the front entry door, and is approximately 480 square feet. The bathroom, kitchen, common rooms one and two, the master bedroom and second balcony can all be accessed through the livingroom. The master room is located in the southern end of the floorplan. It is roughly 200 square feet, 10 feet in width and 20 feet in length. The second balcony, livingroom and common room one can be accessed through the master room. The master room is about the size of the two common rooms combined. | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| pect ratio in Figure 8. We also show a full example of artificial instructions in Figure 9. ## D Baseline Generation Samples To better understand and compare different baselines, we provide a case study of generated samples for the same language instructions from all baselines shown in Figure 10. Obj-GAN (Li et al., 2019) has difficulties in capturing salient information from the given language instructions, resulting in generating rooms with incorrect attributes and relationships. One possible reason could be that it does not utilize any pre-trained large language model and thus suffers from understanding the given document-level instructions. CogView (Ding et al., 2021) instead auto-regressively generates the image tokens conditioned on all input instructions with a pre-trained GPT as the backbone. However, the image tokens sampled near the end of the generation show confusing information, resulting in an incomplete design. This is probably because presenting the whole floor plan design as a sequence of image tokens hinders the potential connections among different elements in the floor plan. Imagen (Saharia et al., 2022b) exhibits its strong ability to generate realistic images in the target domain. However, it also fails to meet various design requirements specified in language instructions, indicating its limitation for design generation under multiple stricter constraints. ## Instruction Backbone **Approximate** Phrase Values "Make the aspect ratio of *{room.type}* " "The aspect ratio of *{room.type}* should be " "I would like to have the aspect ratio of *{room.type}* " "Can you make the aspect ratio of *{room.type}* " "Can we have the aspect ratio of *{room.type}* to be " "It would be great to have the aspect ratio of *{room.type}* " Restricted Restricted "about" "around" "approx" "*{room.aspect_ratio}*" + + Figure 8: Illustration on generating artificial instructions describing room's aspect ratio. ## Artificial Instruction Example: It would be good to have a common **room** . I would like to place common room at the north side of the apartment. The common room should be around 200 sqft with the aspect ratio of 3 over 4. The common room should have an en-suite bathroom. The common room should be next to the bathroom, kitchen, balcony. The **bathroom** should be considered. Place bathroom at the south side of the apartment. Make bathroom around 50 sqft with the aspect ratio of 7 over 8. The bathroom can be used by guest. The bathroom connects to the common room, master room, living room. Make a **kitchen** . The kitchen should be at the south side of the apartment. Make kitchen approx 50 sqft with the aspect ratio of 7 over 4. The kitchen attaches to the common room, balcony, master room, living room. Can you make a **balcony** ? I would like to place balcony at the south side of the apartment. Can you make balcony around 50 sqft with the aspect ratio of 5 over 2? The balcony is private. The balcony connects to the common room, kitchen, master room. The master **room** should be considered. The master room should be at the south side of the apartment. Make master room approx 150 sqft with the aspect ratio of 4 over 5. The master room should have an en-suite bathroom. The master room should be next to the bathroom, kitchen, balcony. It would be great to have a living **room** . Make living room around 650 sqft with the aspect ratio of 1 over 2. Restricted Figure 9: An example of artificially-generated language instructions from the T2D dataset. Restricted ![15_image_0.png](15_image_0.png) ## Human Instructions: The north side of this home is not complete without the *balcony*. Access to the approximately 16 sq ft area can be made through the living room or through the common room beside it. Bathroom 1 is in the eastern section of the home. It is located next to the living room and is approximately 15 sq ft. The larger of the two, *Bathroom* 2, is approximately 30 sq ft. It is between the master bedroom and common area 2, along the western side of the house. Common *room* 1 occupies the northeast corner of the property. At roughly 80 sq ft it is conveniently located next to the balcony. Common *room* 2 is nearly 100 sq ft. Occupying the northwest corner, it is easily accessible from the kitchen beside it, or the shared access from the living area. The *kitchen* is positioned on the north side of the house, between the living room and second common area. It measures about 50 sq ft. The living *room* is conveniently located in the southeast corner of the home. It spans approximately 250 sq ft while offering access to almost every room in the house. Located in the southwest corner of the home is the master *bedroom*. This space is approximately 120 sq ft and is positioned next to the living room. Restricted Figure 10: Generated samples from different baselines according to the same human-written language instructions. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? The individual section named Limitations after section 8. ✓ A2. Did you discuss any potential risks of your work? The individual section named Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section3, Section5 ✓ B1. Did you cite the creators of artifacts you used? Section3, Section5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section Ethics Statement, Section Appendix B ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section Ethics Statement ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section Ethics Statement ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3, Section 4 ## C ✓ **Did You Run Computational Experiments?** Section 5 C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5, Section Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5, Section Appendix B D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 3 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 3, Appendix D ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 3, Appendix D ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Ethics Statement ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Section 3 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3
yao-etal-2023-human
Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations
https://aclanthology.org/2023.acl-long.821
Human-annotated labels and explanations are critical for training explainable NLP models. However, unlike human-annotated labels whose quality is easier to calibrate (e.g., with a majority vote), human-crafted free-form explanations can be quite subjective. Before blindly using them as ground truth to train ML models, a vital question needs to be asked: How do we evaluate a human-annotated explanation{'}s quality? In this paper, we build on the view that the quality of a human-annotated explanation can be measured based on its helpfulness (or impairment) to the ML models{'} performance for the desired NLP tasks for which the annotations were collected. In comparison to the commonly used Simulatability score, we define a new metric that can take into consideration the helpfulness of an explanation for model performance at both fine-tuning and inference. With the help of a unified dataset format, we evaluated the proposed metric on five datasets (e.g., e-SNLI) against two model architectures (T5 and BART), and the results show that our proposed metric can objectively evaluate the quality of human-annotated explanations, while Simulatability falls short.
# Are Human Explanations Always Helpful? Towards Objective Evaluation Of Human Natural Language Explanations Bingsheng Yao Rensselaer Polytechnic Institute Prithviraj Sen ∗ Amazon Lucian Popa IBM Research James Hendler Rensselaer Polytechnic Institute Dakuo Wang † Northeastern University ## Abstract Human-annotated **labels** and **explanations** are critical for training explainable NLP models. However, unlike human-annotated **labels** whose quality is easier to calibrate (e.g., with a majority vote), human-crafted **free-form explanations** can be quite subjective. Before blindly using them as ground truth to train ML models, a vital question needs to be asked: How do we evaluate a human-annotated explanation's quality? In this paper, we build on the view that the quality of a human-annotated explanation can be measured based on its helpfulness (or impairment) to the ML models' performance for the desired NLP tasks for which the annotations were collected. In comparison to the commonly used Simulatability score, we define a new metric that can take into consideration of the helpfulness of an explanation for model performance at both fine-tuning and inference. With the help of a unified dataset format, we evaluated the proposed metric on five datasets (e.g., e-SNLI) against two model architectures (T5 and BART), and the results show that our proposed metric can objectively evaluate the quality of human-annotated explanations, while Simulatability falls short. ## 1 Introduction Despite the recent advances of large-scale language models (LLM) (Devlin et al., 2019; Qin et al., 2023; Lewis et al., 2019; Raffel et al., 2020), which exhibit close-to-human performance on many natural language processing (NLP) tasks (e.g., Question Answering (Rajpurkar et al., 2016; Kocisk ˇ y et al. ` , 2018; Mou et al., 2020, 2021; Xu et al., 2022), Natural Language Inference (Bowman et al., 2015; Williams et al., 2017; Wang et al., 2018), and Text ∗† Work done while Prithviraj was at IBM Research. †d.wang@northeastern.edu Corresponding Author. Generation (Duan et al., 2017; Yao et al., 2022; Zhao et al., 2022)), humans are eager to know how State-of-the-Art (SOTA) models arrive at a prediction. Researchers working on natural language explanations1turned to human annotators for help by recruiting crowd-workers or experts to annotate both the labels and corresponding natural language explanations (Camburu et al., 2018; Rajani et al., 2019; Aggarwal et al., 2021; Wang et al., 2019b); Researchers can thus leverage human-annotated explanations to boost models' prediction performance or train models to generate human-understandable natural language explanations. However, the quality issue of human-annotated explanations has yet to be explored. Researchers often leverage popular Natural Language Generation (NLG) metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) to evaluate the similarity between model-generated and human-annotated explanations, with a strong assumption that humanannotated ones are the gold standard. Nevertheless, unlike providing labels for classification or multiple-choice QA tasks (Chen et al., 2021), different people may come up with distinct natural language explanations for the same observation (Gebreegziabher et al., 2023). Two such explanations can be both correct even though the BLEU or ROUGE similarity may be low. Furthermore, human-given natural language explanations can often be subjective and task-dependent (Lee et al., 2022). As a result, human-annotated explanations should not be simply treated as the gold standard (Muller et al., 2021); instead, we take the view that the core value of explanations should be | Dataset | Task | Task Format | Data Instances | Average explanation Length (token) | | | |-------------|----------------------------|--------------------------|------------------|--------------------------------------|------|--------| | Train | Valid | Test | | | | | | CoS-E v1.0 | Commonsense QA | 3-choice Multiple-Choice | 7610 | 950 | - | 16.148 | | CoS-E v1.11 | Commonsense QA | 5-choice Multiple-Choice | 9741 | 1221 | - | 8.996 | | ECQA | Commonsense QA | 5-choice Multiple-Choice | 7598 | 1098 | 2194 | 63.572 | | e-SNLI | Natural Language Inference | 3-label Classification | 549367 | 9842 | 9824 | 15.977 | | ComVE | Commonsense Validation | 2-choice Multiple-Choice | 10000 | 1000 | 1000 | 10.288 | based on how much help they provide towards the model prediction instead of being based on notions of semantic similarity or word-matching. To summarize our contributions in this paper: 1. We provide an objective evaluation to quantify the human-annotated explanations' helpfulness towards model performance. Our evaluation metric is an extension of the Simulatability score (Doshi-Velez and Kim, 2017) and we propose a prompt-based unified data format that can convert classification or multiple choice tasks into a unified multiple choice generation task format to minimize the influence of structural variations across different tasks. 2. Through an evaluation with five datasets and two models, our metric can rank explanations quality consistently across all five datasets on two model architectures while the Simulatability score (baseline) falls short. 3. Our evaluation justifies the hypothesis that human explanations can still benefit model prediction, even if they were criticized as low-quality by prior literature's human evaluation. ## 2 Related Work 2.1 Natural Language Explanation Datasets Despite the development of new model architectures and potentially more significant parameters, these "black boxes" unavoidably lack the ability to explain their predictions; this led to increased efforts in the community to leverage human-annotated explanations to either train models with explanations or to teach them to selfrationalize. For example, Wiegreffe and Marasovic (2021) reviewed 65 datasets and provided a 3-class taxonomy of explanations: highlights, freetext, and structured. We focus on five large public datasets with free-text human-annotated explanations at the instance level (Table 1). We doublechecked these datasets' licenses, and no personally identifiable information (PII) exists. One prominent dataset is CoS-E and its two variants **CoS-E v1.0** and **CoS-E v1.11**(Rajani et al., 2019). It extended the Commonsense QuestionAnswering (CQA v1.0 and v1.11 versions) dataset (Talmor et al., 2018) by adding human-annotated explanations to the correct answer label. However, a few recent works suggest that the CoS-E's explanation quality is not good, as Narang et al. (2020) independently hand-labeled some new explanations for CoS-E and found a very low BLEU score between its original explanations and the new ones. To improve the explanation's quality, **ECQA** (Aggarwal et al., 2021) collected and summarized single-sentence explanation for each candidate answer into a natural language explanations for every data in the CQA v1.11 dataset. Sun et al. (2022) proved that CoS-E explanations are not as good as ECQA explanations based on human preferences. The fourth dataset is **e-SNLI**(Camburu et al., 2018), which consists of explanations for the Stanford Natural Language (SNLI) dataset (Bowman et al., 2015). Finally, the fifth dataset is **ComVE** (Wang et al., 2020), asking which one of two sentences is against commonsense. Later we evaluate the human-annotated explanations in the abovementioned five datasets with our metric and an established baseline, the Simulatability score. Worth mentioning that we do not include datasets such as **SBIC** (Sap et al., 2019) or E-δNLI (Brahman et al., 2021). SBIC does not provide explanations for all the data, and E-δ-NLI leverages various sources to augment the δ-NLI (Rudinger et al., 2020) dataset with explanations instead of providing human annotations. ## 2.2 Evaluation Metric For Explanations Many commonly used evaluation metrics for textbased content like BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) treat human-annotated answers as the absolute gold standard without questioning or attempting to evaluate their quality. One established evaluation metric called Figure 1: Unified structure of Baseline and Infusion settings. Black bold text are fixed prompts. We provide ![2_image_0.png](2_image_0.png) examples of Infusion format in classification task like e-SNLI and multiple choice task like CoS-E and ComVE. The color schema follows: blue denotes question content; green denotes choice content; orange denotes explanations. Simulatability score derives from Human Simulatability (Doshi-Velez and Kim, 2017) and can examine gold explanations. It simply measures the change in a baseline model prediction performance, depending on whether the explanation is provided as the input. Previous works (Chandrasekaran et al., 2018; Yeung et al., 2020; Hase et al., 2020; Wiegreffe et al., 2020; Poursabzi-Sangdeh et al., 2021; Rajagopal et al., 2021) have demonstrated the usefulness of Simulatability score for evaluating explanation quality. However, this metric has a couple of inherent disadvantages. First, it only considers the helpfulness of explanations on a baseline model, where we show that explanations provide different helpfulness during fine-tuning and inference through our experiment in Section 4. In addition, model performance could also differ when we transform the original task into other tasks, such as turning a classification task into a multiple-choice task with different input data formats. In order to objectively evaluate human-annotated explanations, we define a new evaluation metric based on the Simulatability score that complements both drawbacks of Simulatability by considering the helpfulness of explanations both at fine-tuning and inference with the help of a unified structure to minimize the impact of task differences. Other works (Carton et al., 2020) attempted to evaluate and categorize different characteristics of explanations, but many of them (Chan et al., 2022a; DeYoung et al., 2020) still treat human-annotated explanations as the gold standard. ## 2.3 Usage Of Explanations For Sota Models ![2_Image_1.Png](2_Image_1.Png) Existing works have been exploring circumstances in which explanations could improve model performance; for example, Hase and Bansal (2021) argues that explanations are most suitable for use as model input for predicting, and Kumar and Talukdar (2020) proposed a system to generate label-specific explanations for the NLI task specifically. Some recent works have tried to generate better explanations with a self-rationalization setting (Wiegreffe et al., 2020; Marasovic et al. ´ , 2021), where a model is asked to generate the prediction label and explanation simultaneously. We conduct a preliminary experiment to find the best model setting to leverage explanations in Section 4.1. There exists many recent works (Paranjape et al., 2021; Liu et al., 2021; Chen et al., 2022) that explore the usage of prompts to complete explanations, generate additional information for the original task, or examine whether generated explanations can provide robustness to adversarial attacks. Ye and Durrett (2022) showed that simply plugging explanations into a prompt does not always boost the in-context learning performance, and modelgenerated explanations can be unreliable for fewshot learning. Another related line of research focuses on extracting or generating explanations with a unified framework (Chan et al., 2022b) or with a teachable reasoning system that generates chains of reasoning (Dalvi et al., 2022). ![3_image_0.png](3_image_0.png) ## 3 Unified Structure While popular metrics like BLEU and ROUGE can evaluate text coherence and similarity, one critical aspect of explanations is how beneficial they can be. Thus, we want to develop a metric that objectively evaluates explanations' utility towards model performance. Furthermore, we expect that such a metric can systematically demonstrate how good or bad the explanations are; for example, it could objectively measure what 'noisy' means in a human study (e.g., from previous works on CoS-E). With the advantage of sequence-to-sequence models like T5 that can map different types of language tasks into generation tasks, we can control and minimize the influence of varying task formats on model performance while evaluating the helpfulness of explanations by leveraging a unified data format. We realize that existing datasets with human-annotated explanations are mostly either multiple-choice tasks or classification tasks. The classification task could be viewed as a multiplechoice task where the labels are indeed choices. Inspired by several previous works that manipulated prompts for sequence-to-sequence models (Marasovic et al. ´ , 2021; Liu et al., 2021), we incorporate a few well-defined words as template-based prompts for the unified data structure to indicate the task content and corresponding explanations. Examples shown in Figure 1 explain how we map various tasks into a unified multiple-choice generation task. We propose two settings: no explanations (Baseline ) and explanations as additional input (Infusion ). Here we explain how each prompt addresses a different part of the data content: 1) '*explain:*' is followed by the question content, 2) '*choice-n:*' is followed by each candidate answer, and 3) a special token '<sep>' separates the explanations from the task content, while the explanations in Infusion are led by '*because*' so that the model knows that the explanation text explains the task content. For datasets like CoS-E and ECQA, we leverage the original task as the question content. On the other hand, we define fixed question prompts for e-SNLI: "*what is the relation between [Premise] and [Hypothesis]?*", and for ComVE: "*which sentence is against commonsense?*" to specify corresponding tasks to models. ## 4 Preliminary Experiment 4.1 Utilizing Explanations As Part Of Input Vs Part Of Output As described in Section 2.3, recent works have been exploring various circumstances that humanannotated explanations could help in different aspects. We hypothesize that leveraging explanations as additional input with the original task input allows models to use explanations for better prediction, while the self-rationalization (Marasovic et al. ´ , 2021) setting, which generate explanations along with labels, complicates the prediction task for the models and may lead to a performance decrease. In addition, the generated explanations from self-rationalization systems are not explicitly being used for label prediction. To justify our hypothesis, we conduct a preliminary experiment on CoS-E v1.0 and ECQA datasets. We fine-tune three T5-base models on each dataset with three different settings: Baseline , Infusion , and explanations as additional output (*Self-Rationalization* hereinafter). For each model, we maintain the same setting during fine-tuning and inference. For example, the model fine-tuned with Infusion will also take data under Infusion during inference. We leverage the unified structure for Baseline and Infusion shown in Figure 1 and make minor adjustments for the self-rationalization setting accordingly (shown in Appendix A). The experiment results are shown in Table 2. We notice that the self-rationalization setting performs worse than the Baseline , which is aligned with our assumption. On the other hand, the Infusion setting surprisingly achieves significant improvement on CoS-E, which was considered 'noisy' by previous works, demonstrating that the CoS-E explanations are indeed helpful toward models. The Infusion setting also approaches nearly complete correctness on the ECQA dataset. ## 4.2 Explanations As Partial Input During Fine-Tuning To examine the utility of explanations to the models during fine-tuning, we perform an in-depth experiment with the Baseline and Infusion setting while varying the amounts of training data used ![4_image_0.png](4_image_0.png) for fine-tuning. First, we randomly select nine subdatasets with amounts of data ranging from 10% to 90% of the training data in each dataset used in the first preliminary experiment. Then, for each subdataset, we fine-tune three models with different random seeds for sampling and fine-tuning, then acquire the averaged prediction performance. As a result, for each CoS-E v1.0 and ECQA dataset, we get 60 models fine-tuned with varying amounts of data for both the Baseline and Infusion setting, including the models fine-tuned on full training data, then perform prediction with the Baseline and Infusion settings. We maintain the same hyper-parameters across the models fine-tuned for this experiment and report them in Appendix B.1. The two diagrams in Figure 2 show the experiment results on two datasets (detailed results in Table 4 in the appendix). Different colors denote different fine-tuning and inference settings. We conclude with a few interesting observations: 1. By looking at **yellow** (model fine-tuned with Infusion and predict with Baseline ) and **green** (model fine-tuned and predict with Infusion ) line, we notice adding more training data during fine-tuning does not significantly improve model performance, suggesting that the fine-tuning pro- Treu =*Accu*(MInfusion Infusion) − *Accu*(MBaseline Baseline)) +*Accu*(MInfusion Baseline) − *Accu*(MBaseline Baseline)) Figure 3: The formula of our Treu metric. M denotes a model and the subscript/superscript denotes M predict setting f inetune setting. The Simulatability score only considers the second part within our formula. ## Cess Is Not Teaching The Model With New Knowledge That Is Conveyed In The Explanations. 2. By comparing **yellow** and **blue** (model finetuned and predict with Baseline ) line in each diagram, we notice the models fine-tuned with Infusion perform worse than baseline models without explanations during inference, demonstrating that **fine-tuning with** Infusion **teaches the** models to rely on the explanations to predict. 3. By comparing red (model fine-tuned with Baseline and predict with Infusion ) and **blue** line in each diagram, we observe the baseline models for CoS-E perform worse while predicting with explanations. In contrast, the baseline models for ECQA consistently exceed baseline performance significantly, which demonstrates that **the helpfulness of explanations on baseline models in** CoS-E is much worse than the ones in ECQA, which is aligned with some previous works. 4. By comparing **green** and **blue** lines in both diagrams, we notice that explanations in CoS-E can contribute to substantial improvement during inference on models fine-tuned with Infusion setting. This observation shows that **explanations in** CoS-E are able to provide helpfulness to models during fine-tuning, even though they were considered 'noisy' by humans in previous works. 5. By comparing red and **green** lines in both diagrams, we can observe that in order to take full advantage of explanations, **it is beneficial to finetune a model even with a small amount of data** that incorporates the explanations. Such finetuning can lead to a substantial improvement. This experiment shows that explanations provide different degrees of utility during fine-tuning and inference. Thus, we should consider both situations while evaluating the helpfulness of explanations. ## 5 Our Metric And Evaluation 5.1 Our Treu **Metric** Based on our observations from the preliminary experiments, we propose a novel evaluation metric that extends the Simulatability score. Figure 3 shows the formula of our Treu metric: it evaluates the helpfulness of explanations with the sum of two parts: at fine-tuning, where two models are fine-tuned with Baseline and Infusion settings correspondingly, we calculate the prediction accuracy difference using the same data format that was used during fine-tuning for each model; and at inference, we fine-tune only one model with Baseline setting and calculate the prediction accuracy difference between Infusion and Baseline settings. The second part of our metric is indeed the Simulatability metric. We observe that finetuning a model with data that incorporates explanations can provide substantial benefits. However, the Simulatability score fails to account for this component and only considers the model performance improvement that uses explanations at inference without fine-tuning first. For the models fine-tuned with Baseline setting, we believe pretrained SOTA large-scale models have the ability to understand the additional content at the input to a certain extent. The addition of explanations at input during inference will show whether it can provide helpfulness to a baseline model without additional supervision, while the models fine-tuned with Infusion setting will rely more on the explanation part of the input for inference. A positive score demonstrates that the explanations can provide overall helpfulness for better prediction, while a negative score does not necessarily mean the explanations are not helpful. Instead, a negative score indicates that the explanations lead to the model's performance drop in at least one part of the evaluation. Researchers can further analyze the intermediate score for each part. As a result, the score ranges theoretically from -2 to 2. ## 5.2 Evaluation We evaluate human-annotated natural language explanations across five popular datasets using our Treu metric and the Simulatability score. To justify that our metric is less biased by different model architectures and to examine the influence of models fine-tuned with different settings towards the prediction performance, we perform experiments on both T5 and BART models. The proposed unified data format is applied to the experiments for our metric and the Simulatability score to make it a more robust baseline. We maintain the same fine-tuning hyperparameters for all the experiments (details in Appendix B.2). The only exception is for the e-SNLI dataset, which has about 10x the size (549,367 data instances) of training data compared to the other datasets. Therefore, we only fine-tune models on the e-SNLI dataset with two epochs. Furthermore, we leverage the special token '<s>' for BART that was already used during the pre-training process instead of using and adding the special token '<sep>' to BART tokenizer during fine-tuning. We present the evaluation results in Table 3. ## 5.3 Findings Our results justify the intuition that humanannotated explanations can still provide benefits toward model prediction, even if they were evaluated as low-quality by humans in prior literature. By first comparing the models' prediction results over two architectures, the result shows all models fine-tuned on T5-base outperform those fine-tuned on BART-base with the same setting, mainly with a significant margin. Despite apparent performance differences between model architectures, by looking at the orderings of datasets in both tables, which are based on our Treu score, We can easily observe that Treu score provides the same ranking result for the quality of explanations in 5 datasets over two model architectures. Our Treu score (Table 3) ranks the explanation quality of the five datasets in the following order regardless of model architectures: ## Ecqa > Cos-E V1.11 > Cos-E V1.0 > E-Snli > **Comve** According to the Treu score, explanations in ECQA have the best quality among the five datasets. Especially, explanations in ECQA are much better than the ones in both CoS-E datasets, which is consistent with previous works' consensus. It is worth noticing that both CoS-E datasets achieve positive Treu scores, though significantly lower than the ones for ECQA, demonstrating that explanations in CoS-E datasets still have positive overall helpfulness for models' prediction performance even though they are considered 'low quality and noisy' from human experiments (Sun et al., 2022). Our Treu **score can rank explanation quality** consistently across all five datasets on two models while the Simulatability **falls short.** On the other hand, the Simulatability score cannot provide a consistent ranking of explanation quality on the two models. Instead, the Simulatability score provides two distinct rankings: | Simulatability | | | | | | |--------------------|---------------------------------------|--------------------|--------|--------------------|--------| | T5-base | M predict+Baseline f inetune+Baseline | M predict+Infusion | Score | M predict+Infusion | | | f inetune+Baseline | f inetune+Infusion | | | | | | ECQA | 0.572 | 0.746 | 0.174 | 0.989 | 0.591 | | CoS-E v1.11 | 0.608 | 0.610 | 0.002 | 0.803 | 0.197 | | CoS-E v1.0 | 0.695 | 0.645 | -0.05 | 0.878 | 0.133 | | e-SNLI | 0.907 | 0.676 | -0.231 | 0.981 | -0.157 | | ComVE | 0.88 | 0.527 | -0.353 | 0.949 | -0.284 | | Simulatability | | | | | | | BART-base | M predict+Baseline f inetune+Baseline | M predict+Infusion | Score | M predict+Infusion | | | f inetune+Baseline | f inetune+Infusion | | | | | | ECQA | 0.428 | 0.438 | 0.010 | 0.901 | 0.483 | | CoS-E v1.11 | 0.443 | 0.449 | 0.006 | 0.700 | 0.263 | | CoS-E v1.0 | 0.512 | 0.486 | -0.026 | 0.790 | 0.252 | | e-SNLI | 0.888 | 0.658 | -0.23 | 0.978 | -0.14 | | ComVE | 0.812 | 0.596 | -0.216 | 0.864 | -0.164 | Table 3: Evaluation results of human-annotated explanations in 5 datasets with our Treu score and Simulatability score. The tables above and below correspond to models fine-tuned on T5-base and BART-base, respectively. The Simulatability score only considers M predict+Baseline f inetune+Baseline and M predict+Infusion f inetune+Baseline, while our Treu score considers M predict+Infusion f inetune+Infusion additionally. ## T5-Base: $$\mathbf{E}\,\mathbf{v1.11}>$$ ECQA > **CoS-E v1.11** > **CoS-E v1.0 $>$ e-SNLI $>$** **ComVE.** ## Bart-Base: $$\mathbf{ECQA}>\mathbf{CoS\textrm{-}E\,v1.11}>$$ **CoS-E v1.0 $>$** **CornVE $>$ e-SNLI** From Table 3, the Simulatability score ranks e-SNLI and ComVE reversely on BART compared with T5 models, indicating Simulatability score could be more affected by different model architectures even with the unified data structure. One advantage of using our Treu score to evaluate the quality of explanations is that we can analyze the score by class or intermediate results from fine-tuning or inference. For instance, we observe that the Treu scores for e-SNLI with T5 and BART models are both negative, indicating that the helpfulness of explanations in e-SNLI could be limited. However, by looking into the intermediate results, though the baseline models perform significantly worse while predicting with Infusion than with Baseline setting, the models that are finetuned with Infusion still outperform the baseline models while predicting with Infusion , justifying the explanations indeed provide improvements under this setting. When we further decompose the Treu score of e-SNLI by category, we acquire 0.13/-0.483/*0.094* on T5-base and *0.015*/- 0.227/*-0.271* on BART-base corresponds to entailment/neutral/*contradiction*. We speculate that the helpfulness of humanannotated explanations to models highly depends on the task (e.g., the 'contradiction' label categories) and the explanation format (e.g., counter-factorial styles). We notice that the models fine-tuned on T5 and BART have more than a 40% prediction accuracy drop on data with 'neutral' labels when they are fine-tuned with Baseline and predicted with Infusion . In addition, we observe that the fine-tuned BART models have about a 40% prediction accuracy drop on data with ground-truth 'contradiction' labels. We suspect human annotators behave differently while providing explanations for different categories in e-SNLI. For instance, humans tend to provide counter-factorial explanations or use negation connotations to explain why two sentences are 'neutral' or 'contradiction' categories. Some representative examples for each class are provided in Appendix 5. Such behavior's tendency to use negation connotations in explanations for specific categories may increase the difficulty for the models to interpret the information and lead to false predictions eventually. From Table 3, ComVE ranks worst among the five datasets in both tables, indicating the explanations in ComVE are the least helpful for the models to either fine-tune or predict with. Since the ComVE task asks models to predict which sentence is more likely *against* commonsense, the question itself implies a negation connotation. Likewise, many ComVE explanations contain negation, such as the one in Figure 1. The concept of negation has always been a complex concept for machines. Although both T5 and BART models fine-tuned with the Baseline setting can perform relatively well on ComVE, the addition of explanations that largely contain negation during inference is likely to create more difficulties for the models to understand and eventually lead to false prediction. Our hypothesis on counter-examples or negation annotations in human-annotated explanations can find support from many recent works. A recent analysis (Joshi et al., 2022) claimed that negation connotations have high necessity but low sufficiency to describe the relation between features and labels. In addition, counterfactuallyaugmented data may prevent models from learning unperturbed robust features and exacerbate spurious correlations (Joshi and He, 2021). Therefore, we suggest human annotators avoid using counterexamples while providing explanations. Instead, using precise words to describe the degree of relations between concepts will be preferable and provide better helpfulness to models. Nevertheless, these models can correctly understand explanations for all categories after being fine-tuned with the Infusion setting. Worth pointing out that ECQA explanations are summarized from positive and negative properties for each candidate choice which also contains negation words. However, those negation words mostly appear in negative properties for wrong choices. As a result, we notice the pre-trained baseline models can leverage ECQA explanations with Infusion during the predicting process and achieve performance improvement. Since we are the first to discover such a class-level drop on e-SNLI by using Treu score, we only propose our hypothetical assumption and leave a definitive study for future work. ## 6 Conclusion In this paper, we objectively evaluate humanannotated natural language explanations from the perspective of measuring their helpfulness towards models' prediction. We conduct two preliminary experiments and based on the findings from the preliminary study, we define an evaluation metric that considers the explanations' helpfulness at both fine-tuning and inference stages; We also propose a unified prompt-based data format that minimizes the influence of task differences by mapping various tasks into a unified multiple-choice generation task. Our experiment with human-annotated explanations in five popular large-scale datasets over two sequence-to-sequence model architectures demonstrates that our metric can consistently reflect the relative ranking of explanation qualities among five datasets while the Simulatability score falls short. Our work lays a stepstone towards a high-quality human-AI collaboration future for data annotation job (Wang et al., 2019a), and we recommend researchers perform similar quality checks while collecting human-annotated explanations in the future. ## 7 Limitations In this paper, we evaluate the quality of humanannotated natural language explanations towards the models' prediction performance on multiple datasets. Although it is a natural step that our evaluation metric could be generalized to evaluate the helpfulness of model-generated explanations, we would like to caution that: our metric and evaluation experiment requires the models to generate explanations for the train split data, then use the data with generated explanations to fine-tune the second model with the Infusion setting, which may not be suitable for those systems that are trained on train split data. In addition, we acknowledge that the human-annotated explanations are very expensive to collect, thus, a better mechanism (e.g., Active-Learning approaches (Yao et al., 2023)) is needed to improve human annotators' performance. ## 8 Ethics Statement We do not see potential ethical concerns or misuse of the proposed evaluation method. One potential risk, though minimal, could be the misinterpretation of the findings of this paper. We would like to caution readers that a higher score of our metric may not necessarily reflect a higher quality perceived by humans, as the evaluation metric only measures the explanation's benefit from the modeling perspective, and it is only one of the many possible ways of automatically evaluating the quality of natural language explanations. ## Acknowledgements This work was supported by the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons). ## References Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, and Dinesh Garg. 2021. Explanations for commonsenseqa: New dataset and models. In *Workshop on Commonsense Reasoning and Knowledge Bases*. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Faeze Brahman, Vered Shwartz, Rachel Rudinger, and Yejin Choi. 2021. Learning to rationalize for nonmonotonic reasoning with distant supervision. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12592–12601. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. *Advances in Neural Information Processing* Systems, 31. Samuel Carton, Anirudh Rathore, and Chenhao Tan. 2020. Evaluating and characterizing human rationales. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 9294–9307, Online. Association for Computational Linguistics. Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, and Xiang Ren. 2022a. Frame: Evaluating simulatability metrics for free-text rationales. arXiv preprint arXiv:2207.00779. Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, and Hamed Firooz. 2022b. Unirex: A unified learning framework for language model rationale extraction. In *International Conference on Machine Learning*, pages 2867–2889. PMLR. Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, and Devi Parikh. 2018. Do explanations make vqa models more predictable to a human? *arXiv preprint arXiv:1810.12366*. Howard Chen, Jacqueline He, Karthik Narasimhan, and Danqi Chen. 2022. Can rationalization improve robustness? *arXiv preprint arXiv:2204.11790*. Quan Ze Chen, Daniel S Weld, and Amy X Zhang. 2021. Goldilocks: Consistent crowdsourced scalar annotations with relative uncertainty. *Proceedings of the* ACM on Human-Computer Interaction, 5(CSCW2):1– 25. Bhavana Dalvi, Oyvind Tafjord, and Peter Clark. 2022. Towards teachable reasoning systems. *arXiv preprint* arXiv:2204.13074. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443–4458, Online. Association for Computational Linguistics. Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 866– 874. Simret Araya Gebreegziabher, Zheng Zhang, Xiaohang Tang, Yihao Meng, Elena L Glassman, and Toby JiaJun Li. 2023. Patat: Human-ai collaborative qualitative coding with explainable interactive rule synthesis. In *Proceedings of the 2023 CHI Conference on* Human Factors in Computing Systems, pages 1–19. Peter Hase and Mohit Bansal. 2021. When can models learn from explanations? a formal framework for understanding the roles of explanation data. arXiv preprint arXiv:2102.02201. Peter Hase, Shiyue Zhang, Harry Xie, and Mohit Bansal. 2020. Leakage-adjusted simulatability: Can models generate non-trivial explanations of their behavior in natural language? *arXiv preprint arXiv:2010.04119*. Nitish Joshi and He He. 2021. An investigation of the (in) effectiveness of counterfactually augmented data. arXiv preprint arXiv:2107.00753. Nitish Joshi, Xiang Pan, and Hengxing He. 2022. Are all spurious features in natural language alike? an analysis through a causal lens. *ArXiv*, abs/2210.14011. Tomáš Kocisk ˇ y, Jonathan Schwarz, Phil Blunsom, Chris ` Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. *Transactions of the Association for Computational Linguistics*, 6:317–328. Sawan Kumar and Partha Talukdar. 2020. NILE : Natural language inference with faithful natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8730–8742, Online. Association for Computational Linguistics. Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines GerardUrsin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, et al. 2022. Evaluating human-language model interaction. *arXiv preprint arXiv:2212.09746*. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81. Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, and Hannaneh Hajishirzi. 2021. Generated knowledge prompting for commonsense reasoning. *arXiv preprint* arXiv:2110.08387. Ana Marasovic, Iz Beltagy, Doug Downey, and ´ Matthew E Peters. 2021. Few-shot selfrationalization with natural language prompts. arXiv preprint arXiv:2111.08284. Xiangyang Mou, Chenghao Yang, Mo Yu, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, and Hui Su. 2021. Narrative question answering with cutting-edge opendomain QA techniques: A comprehensive study. Transactions of the Association for Computational Linguistics, 9:1032–1046. Xiangyang Mou, Mo Yu, Bingsheng Yao, Chenghao Yang, Xiaoxiao Guo, Saloni Potdar, and Hui Su. 2020. Frustratingly hard evidence retrieval for QA over books. In Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events, pages 108–113, Online. Association for Computational Linguistics. Michael Muller, Christine T Wolf, Josh Andres, Michael Desmond, Narendra Nath Joshi, Zahra Ashktorab, Aabhas Sharma, Kristina Brimijoin, Qian Pan, Evelyn Duesterwald, et al. 2021. Designing ground truth and the social life of labels. In *Proceedings of the* 2021 CHI conference on human factors in computing systems, pages 1–16. Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. 2020. Wt5?! training text-to-text models to explain their predictions. *arXiv preprint arXiv:2004.14546*. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2021. Prompting contrastive explanations for commonsense reasoning tasks. arXiv preprint arXiv:2106.06823. Forough Poursabzi-Sangdeh, Daniel G Goldstein, Jake M Hofman, Jennifer Wortman Wortman Vaughan, and Hanna Wallach. 2021. Manipulating and measuring model interpretability. In Proceedings of the 2021 CHI conference on human factors in computing systems, pages 1–52. Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is chatgpt a general-purpose natural language processing task solver? *arXiv preprint arXiv:2302.06476*. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Dheeraj Rajagopal, Vidhisha Balachandran, Eduard Hovy, and Yulia Tsvetkov. 2021. Selfexplain: A self-explaining architecture for neural text classifiers. arXiv preprint arXiv:2103.12279. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. *arXiv preprint arXiv:1906.02361*. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Rachel Rudinger, Vered Shwartz, Jena D Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A Smith, and Yejin Choi. 2020. Thinking like a skeptic: Defeasible inference in natural language. In *Findings of the Association for Computational* Linguistics: EMNLP 2020, pages 4661–4675. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A Smith, and Yejin Choi. 2019. Social bias frames: Reasoning about social and power implications of language. *arXiv preprint arXiv:1911.03891*. Jiao Sun, Swabha Swayamdipta, Jonathan May, and Xuezhe Ma. 2022. Investigating the benefits of freeform rationales. *arXiv preprint arXiv:2206.11083*. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. *arXiv preprint arXiv:1811.00937*. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, and Yue Zhang. 2020. Semeval2020 task 4: Commonsense validation and explanation. *arXiv preprint arXiv:2007.00236*. Dakuo Wang, Justin D Weisz, Michael Muller, Parikshit Ram, Werner Geyer, Casey Dugan, Yla Tausczik, Horst Samulowitz, and Alexander Gray. 2019a. Human-ai collaboration in data science: Exploring data scientists' perceptions of automated ai. *Proceedings of the ACM on human-computer interaction*, 3(CSCW):1–24. Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y Lim. 2019b. Designing theory-driven user-centric explainable ai. In *Proceedings of the 2019 CHI conference on human factors in computing systems*, pages 1–15. Sarah Wiegreffe and Ana Marasovic. 2021. Teach me to explain: A review of datasets for explainable natural language processing. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1). Sarah Wiegreffe, Ana Marasovic, and Noah A Smith. ´ 2020. Measuring association between labels and freetext rationales. *arXiv preprint arXiv:2010.12762*. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and Mark Warschauer. 2022. Fantastic questions and where to find them: FairytaleQA - an authentic dataset for narrative comprehension. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 447–460, Dublin, Ireland. Association for Computational Linguistics. Bingsheng Yao, Ishan Jindal, Lucian Popa, Yannis Katsis, Sayan Ghosh, Lihong He, Yuxuan Lu, Shashank Srivastava, James Hendler, and Dakuo Wang. 2023. Beyond labels: Empowering human with natural language explanations through a novel active-learning architecture. *arXiv preprint*. Bingsheng Yao, Dakuo Wang, Tongshuang Wu, Zheng Zhang, Toby Li, Mo Yu, and Ying Xu. 2022. It is AI's turn to ask humans a question: Questionanswer pair generation for children's story books. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 731–744, Dublin, Ireland. Association for Computational Linguistics. Xi Ye and Greg Durrett. 2022. The unreliability of explanations in few-shot prompting for textual reasoning. *Advances in neural information processing* systems. Arnold Yeung, Shalmali Joshi, Joseph Jay Williams, and Frank Rudzicz. 2020. Sequential explanations with mental model-based policies. arXiv preprint arXiv:2007.09028. Zhenjie Zhao, Yufang Hou, Dakuo Wang, Mo Yu, Chengzhong Liu, and Xiaojuan Ma. 2022. Educational question generation of children storybooks via question type distribution learning and event-centric summarization. *arXiv preprint arXiv:2203.14187*. Figure 4: The unified structure of Baseline , Infusion , and self-rationalization settings. Bold text are fixed ![11_image_0.png](11_image_0.png) prompts for each dataset. ## Appendix A Implementation Of Self-Rationalization Format We show the implementation of the selfrationalization setting proposed by Marasovic et al. ´ (2021) and put it together in Figure 4 with our proposed unified structure of the Baseline and Infusion setting. ## B Experiment Hyper-Parameters We perform all the computational experiments on a Google Colab instance with a single Nvidia V100 GPU and 50 Gigabytes of RAM. ## B.1 Hyper-Parameter For Preliminary Experiment For the preliminary experiment of utilizing explanations as part of input V.S. part of the output, we leverage the following hyper-parameters for all models with different data structures: max_len : 512, target_max_len : 64, train_batch_*size* : 1, learning_*rate* : 5e−5, num_train_*epochs* : 12. For the preliminary experiment of explanations as partial input during fine-tuning, we maintain the following hyper-parameters for all models finetuned with partial/full train data of CoS-E and ECQA datasets: max_len : 512, target_max_len : 16, train_batch_*size* : 1, learning_*rate* : 1e−4, num_train_*epochs* : 6. ## B.2 Hyper-Parameter For Explanation Evaluation With Five Datasets For the evaluation of human-annotated explanations on 5 different datasets, we maintain the following hyper-parameters for all the models: max_len : 512, *target*_max_len : 64, train_batch_*size* : 1, learning_*rate* : 5e−5, num_train_*epochs* : 12. The only exception is the e-SNLI dataset, which has about 10x the size (549,367 data instances) of training data compared to the other datasets. Therefore, we only fine-tune models on the e-SNLI dataset with two epochs. ## C Results For Preliminary Experiment - Explanations As Partial Input During Fine-Tuning We randomly shuffle three seeds to select the subset of data and fine-tune the model for the preliminary experiment of explanations as partial input during fine-tuning. The detailed results of each experiment and average accuracy are reported in Table 4. ## D Examples Of Diff**Erent Explanations For** Each Category In E-Snli Dataset From our evaluation results, we suspect human annotators behave differently while explaining data with various categories in e-SNLI. For instance, human annotators may explain why two sentences are 'entailment' by describing the shared information or similarities conveyed by both sentences, which is easy for models to understand. However, humans tend to provide counter-examples or negations to explain why two sentences are unrelated (neutral) or contradictory rather than explaining their reasoning in a positive way. In Table 5, we show representative examples of data with corresponding explanations for each class. | Fine-tune with Baseline on CoS-E v1.0 | | | | | | | | | | | |---------------------------------------------------------------------------------------------------------------|---------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 1 | | | 0.583 | 0.656 | 0.638 | 0.658 | 0.661 | 0.670 | 0.674 | 0.678 | 0.697 | 0.676 | | | 0.550 | 0.644 | 0.664 | 0.650 | 0.666 | 0.667 | 0.667 | 0.682 | 0.668 | 0.682 | | | 0.584 | 0.64 | 0.64 | 0.655 | 0.670 | 0.675 | 0.677 | 0.66 | 0.674 | 0.68 | | | Average | 0.572 | 0.647 | 0.647 | 0.655 | 0.665 | 0.671 | 0.673 | 0.673 | 0.680 | 0.679 | | Predict | | | | | | | | | | | | Baseline | 0.586 | 0.586 | 0.625 | 0.633 | 0.596 | 0.621 | 0.663 | 0.655 | 0.649 | 0.676 | | 0.561 | 0.591 | 0.642 | 0.609 | 0.656 | 0.630 | 0.618 | 0.650 | 0.641 | 0.652 | | | 0.525 | 0.6 | 0.631 | 0.62 | 0.631 | 0.614 | 0.658 | 0.595 | 0.647 | 0.665 | | | Average | 0.545 | 0.592 | 0.632 | 0.621 | 0.628 | 0.622 | 0.647 | 0.634 | 0.645 | 0.664 | | Predict | | | | | | | | | | | | Infusion | Fine-tune with Infusion on CoS-E v1.0 | | | | | | | | | | | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 1 | | | 0.588 | 0.622 | 0.617 | 0.613 | 0.635 | 0.616 | 0.615 | 0.625 | 0.652 | 0.629 | | | 0.592 | 0.614 | 0.573 | 0.610 | 0.650 | 0.592 | 0.632 | 0.64 | 0.610 | 0.64 | | | 0.601 | 0.609 | 0.615 | 0.618 | 0.631 | 0.629 | 0.641 | 0.635 | 0.652 | 0.634 | | | Average | 0.594 | 0.615 | 0.602 | 0.614 | 0.639 | 0.612 | 0.629 | 0.633 | 0.638 | 0.634 | | Predict | | | | | | | | | | | | Baseline | 0.867 | 0.874 | 0.884 | 0.889 | 0.902 | 0.894 | 0.890 | 0.886 | 0.910 | 0.904 | | 0.875 | 0.888 | 0.881 | 0.890 | 0.898 | 0.901 | 0.9 | 0.901 | 0.896 | 0.895 | | | 0.877 | 0.885 | 0.887 | 0.887 | 0.903 | 0.907 | 0.898 | 0.910 | 0.894 | 0.908 | | | Average | 0.873 | 0.882 | 0.884 | 0.889 | 0.901 | 0.901 | 0.896 | 0.899 | 0.900 | 0.902 | | Predict | | | | | | | | | | | | Infusion | Fine-tune with Baseline on ECQA | | | | | | | | | | | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 1 | | | 0.495 | 0.522 | 0.528 | 0.553 | 0.550 | 0.550 | 0.554 | 0.569 | 0.561 | 0.562 | | | 0.471 | 0.505 | 0.525 | 0.533 | 0.549 | 0.561 | 0.558 | 0.572 | 0.572 | 0.572 | | | 0.469 | 0.511 | 0.533 | 0.541 | 0.553 | 0.545 | 0.569 | 0.564 | 0.566 | 0.565 | | | Average | 0.478 | 0.513 | 0.529 | 0.542 | 0.551 | 0.552 | 0.560 | 0.568 | 0.566 | 0.566 | | Predict | | | | | | | | | | | | Baseline | 0.664 | 0.672 | 0.710 | 0.716 | 0.692 | 0.702 | 0.708 | 0.722 | 0.684 | 0.701 | | 0.685 | 0.682 | 0.673 | 0.697 | 0.681 | 0.682 | 0.694 | 0.677 | 0.699 | 0.641 | | | 0.678 | 0.715 | 0.693 | 0.648 | 0.706 | 0.713 | 0.686 | 0.685 | 0.688 | 0.711 | | | Average | 0.675 | 0.690 | 0.692 | 0.687 | 0.693 | 0.699 | 0.696 | 0.695 | 0.690 | 0.684 | | Predict | | | | | | | | | | | | Infusion | Fine-tune with Infusion on ECQA | | | | | | | | | | | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 1 | | | 0.417 | 0.406 | 0.402 | 0.395 | 0.381 | 0.379 | 0.365 | 0.379 | 0.375 | 0.374 | | | 0.381 | 0.363 | 0.367 | 0.366 | 0.368 | 0.400 | 0.385 | 0.349 | 0.368 | 0.371 | | | 0.381 | 0.386 | 0.345 | 0.341 | 0.369 | 0.376 | 0.361 | 0.359 | 0.386 | 0.334 | | | Average | 0.393 | 0.385 | 0.371 | 0.367 | 0.373 | 0.385 | 0.370 | 0.362 | 0.376 | 0.360 | | Predict | | | | | | | | | | | | Baseline | 0.974 | 0.983 | 0.983 | 0.989 | 0.985 | 0.988 | 0.989 | 0.984 | 0.990 | 0.992 | | 0.984 | 0.985 | 0.983 | 0.981 | 0.990 | 0.989 | 0.991 | 0.985 | 0.990 | 0.983 | | | 0.984 | 0.982 | 0.984 | 0.981 | 0.989 | 0.987 | 0.988 | 0.989 | 0.989 | 0.989 | | | Average | 0.980 | 0.983 | 0.983 | 0.984 | 0.988 | 0.988 | 0.989 | 0.986 | 0.990 | 0.988 | | Predict | | | | | | | | | | | | Infusion | | | | | | | | | | | | Table 4: Detailed results for the preliminary experiment of explanations as partial input during fine-tuning. | | | | | | | | | | | | Category | Premise | Hypothesis | Explanation | |----------------------------------------------------------------------------------------------------|-----------------------------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------------------------------| | entailment | A young family enjoys feeling ocean waves lap at their feet. | A family is at the beach. | Ocean waves implies the beach. | | An old man with a package poses in front of an advertisement. | A man poses in front of an ad. | The word " ad " is short for the word " advertisement ". | | | A man reads the paper in a bar with green lighting. | The man is inside. | In a bar means the man could be inside. | | | neutral | An old man with a package poses | A man poses in front of an ad for beer. | Not all advertisements are ad for beer. | | in front of an advertisement. A woman with a green headscarf, blue shirt and a very big grin. | The woman is young. | the woman could've been old rather than young | | | A man reads the paper in a bar with green lighting. | The man is reading the sportspage. | The man could be reading something other than the sportspage. | | | contradiction | A woman with a green headscarf, blue shirt and a very big grin. | The woman has been shot. | There can be either a woman with a very big grin or a woman who has been shot. | | A man playing an electric guitar on stage. | A man playing banjo on the floor. | The man can't play on stage if he is on the floor. | | | A couple walk hand in hand down a street. | A couple is sitting on a bench. | The couple cannot be walking and sitting a the same time. | | | Table 5: Representative examples of data with corresponding explanations for each class in e-SNLI. | | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✓ A2. Did you discuss any potential risks of your work? 6, 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank. ✓ B1. Did you cite the creators of artifacts you used? 2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 2 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 2 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 2 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 2 ## C ✓ **Did You Run Computational Experiments?** 4, 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4, 5, Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4, 5, Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4, 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
yoo-etal-2023-rethinking
Rethinking Annotation: Can Language Learners Contribute?
https://aclanthology.org/2023.acl-long.822
Researchers have traditionally recruited native speakers to provide annotations for the widely used benchmark datasets. But there are languages for which recruiting native speakers is difficult, and it would help to get learners of those languages to annotate the data. In this paper, we investigate whether language learners can contribute annotations to the benchmark datasets. In a carefully controlled annotation experiment, we recruit 36 language learners, provide two types of additional resources (dictionaries and machine-translated sentences), and perform mini-tests to measure their language proficiency. We target three languages, English, Korean, and Indonesian, and four NLP tasks, sentiment analysis, natural language inference, named entity recognition, and machine reading comprehension. We find that language learners, especially those with intermediate or advanced language proficiency, are able to provide fairly accurate labels with the help of additional resources. Moreover, we show that data annotation improves learners{'} language proficiency in terms of vocabulary and grammar. The implication of our findings is that broadening the annotation task to include language learners can open up the opportunity to build benchmark datasets for languages for which it is difficult to recruit native speakers.
# Rethinking Annotation: Can Language Learners Contribute? Haneul Yoo1, Rifki Afina Putri1, Changyoon Lee1**, Youngin Lee**1, So-Yeon Ahn1, Dongyeop Kang2**, Alice Oh**1 1KAIST, South Korea, 2University of Minnesota, USA {haneul.yoo, rifkiaputri, cyoon47, conviette}@kaist.ac.kr, ahnsoyeon@kaist.ac.kr, dongyeop@umn.edu, alice.oh@kaist.edu ## Abstract Researchers have traditionally recruited native speakers to provide annotations for widely used benchmark datasets. However, there are languages for which recruiting native speakers can be difficult, and it would help to find learners of those languages to annotate the data. In this paper, we investigate whether language learners can contribute annotations to benchmark datasets. In a carefully controlled annotation experiment, we recruit 36 language learners, provide two types of additional resources (dictionaries and machine-translated sentences), and perform mini-tests to measure their language proficiency. We target three languages, English, Korean, and Indonesian, and the four NLP tasks of sentiment analysis, natural language inference, named entity recognition, and machine reading comprehension. We find that language learners, especially those with intermediate or advanced levels of language proficiency, are able to provide fairly accurate labels with the help of additional resources. Moreover, we show that data annotation improves learners' language proficiency in terms of vocabulary and grammar. One implication of our findings is that broadening the annotation task to include language learners can open up the opportunity to build benchmark datasets for languages for which it is difficult to recruit native speakers. ## 1 Introduction Data annotation is important, and in NLP, it has been customary to recruit native speakers of the target languages, even though it is difficult to recruit native speakers for many languages. Meanwhile, there are many people learning another language, for instance, Duolingo claims that 1.8 billion people are learning a foreign language using their app. 1 In this paper, we examine whether language learners can annotate data as well as native speak-1https://www.duolingo.com/ ![0_image_0.png](0_image_0.png) ers and whether their annotations can be used to train language models. We explore this question with five control variables that may affect the quality of language learner annotations. These are the language, task, learners' language proficiency, difficulty of the annotation questions, and additional resources that learners can consult. We recruited learners at various levels of proficiency in English (high-resource), Korean (mid-resource), and Indonesian (low-resource). They annotated data on four tasks, sentiment analysis (SA), natural language inference (NLI), named entity recognition (NER), and machine reading comprehension (MRC). We ask three levels of learners to complete multiple sessions of data annotation given with the help of a dictionary or machine-translated texts. Our major findings, both in terms of the quality and learning effect of learners' annotations, are summarized as follows: We measure the degree of inter-annotator agreement between learners and ground truth labels, and show that *language learners can annotate data at a fairly accurate level*, especially for the simpler tasks of SA and NER, and for easy- to medium-level questions. Language learners consulting dictionaries generate more accurate labels than learners consulting machine-translated sentences. Language models trained on data generated from the distribution of the learners' annotations achieved performance comparable to those of models trained on ground truth labels, demonstrating the efficacy of learner-annotated data. We also observe that *learners' language proficiency in vocabulary and grammar tends to improve* as they carry out the annotation tasks. We measure their proficiency by conducting pre- and post-tests before and after the annotation. Learners perceive that their language proficiency improved during data annotation, and most were willing to re-participate in the process. We hope this paper allows researchers to question the necessity of recruiting native speakers for data annotation and call on other NLP researchers carefully to consider the criteria by which to recruit crowdworkers for data annotation carefully. ## 2 Related Work We can group annotators of NLP datasets into language learners, non-speakers, and non-experts. Language learners are people who are learning the target language, while non-speakers are those who have never learned the target language. Nonexperts are people who have no expertise in NLP tasks or data annotations. We look at previous work with these three annotator groups. Language Learner Annotation. There are several tools for both language learning and crowdsourcing that create linguistic resources. The early motivation of Duolingo was to translate the web with language learners (von Ahn, 2013). Hladká et al. (2014) introduced a pilot experiment on Czech, the aim of which was both data annotation and the teaching of grammar. Sangati et al. (2015) proposed a web-based platform similar to that of Duolingo that undertakes POS tagging with grammar exercises through interactions between a teacher's validation and students' annotations. Nicolas et al. (2020) employed language learners to extend existing language resources (ConceptNet (Liu and Singh, 2004)), showing that this method also has educational values. However, they did not explicitly mention the details of their experimental settings, including the number of participants, and there was no study that recruited and employed language learners in NLP tasks with a comprehensive empirical analysis of diverse factors. Non-speaker Annotation. A recent study employed non-speakers on specific NLP tasks and provided tools for non-speaker annotators, but that study mainly focused on easy tasks such as NER and binary classification tasks. Tsygankova et al. (2021) employed non-speakers as annotators to build a NER dataset and model for Indonesian, Russian, and Hindi and compared their performances with those of fluent speakers'. The non-speakers produced meaningful results for NER in Indonesian on a combination of an easy task and an easy language written in the Latin alphabet with simple grammar. Mayhew et al. (2020); Kreutzer et al. (2022) also employed non-speakers for some easy tasks such as NER along with native or fluent speakers. Despite these efforts, it remains unclear as to whether non-speakers can undertake annotation on more complex tasks such as MRC with a paragraph to read, and NLI, requiring a comprehensive understanding of the premise and hypothesis sentences to infer the connection between the sentences correctly. Hermjakob et al. (2018); Mayhew and Roth (2018); Lin et al. (2018); Costello et al. (2020) devised assisting tools for non-speaker annotation, providing English translation, romanization, dictionary matching, and grammar-related descriptions. We expect that English translation and dictionary matching may also be helpful to language learners and adopt the same setup. However, neither romanization nor grammar-related descriptions may help the learners because they already have some background knowledge of the target language, unlike the non-speakers. Non-expert Annotation. Snow et al. (2008) suggested using a collection of non-expert annotations rather than expensive expert annotations. They analyzed and compared those two types of annotations on several NLP tasks. Only relatively few non-expert annotations are necessary to equal the performance of an expert annotator for certain simple tasks. Madge et al. (2019) suggest the training of non-expert annotators via progression in a language annotation game considering the linguistic ability of crowdworkers and the readability level of documents. ## 3 Study Design This section describes how we carefully design our controlled experiments with diverse factors that may affect the quality of learners' annotations and ## 3.1 Control Variables Table 2 shows a summary of the different control variables considered in our experiments with the corresponding values. We should take these control variables into account when simulating learners' annotations in real-world scenarios and use diverse combinations of them. We set the major control variables based on previous work on NLP data annotation (Joshi et al., 2020; Wang et al., 2018; Lin et al., 2018) and language learning (Lee and Muncie, 2006; Crossley et al., 2008; Shieh and Freiermuth, 2010). Language Selection. We choose three target languages, English (EN), Korean (KO), and Indonesian (ID), based on the availability of gold-label data, the availability of native speakers to evaluate, and the difficulty of the language. English is the highest-resource language, while Korean and Indonesian are mid- to low-resource languages, respectively (Joshi et al., 2020). Korean uses its own alphabet, while Indonesian adopts the Latin alphabet. The Foreign Service Institute (FSI) 2categorizes languages into five categories based on the amount of time it takes to learn them considering several variables, including grammar, vocabulary, pronunciation, writing system, idiomatic expressions, distance from English, dialects, and learning resources. According to the FSI ranking, Indonesian is in category 2, requiring around 36 weeks or 900 class hours, Korean is in category 4, requiring 88 weeks or 2200 class hours to reach B2/C1 level in CEFR, and English is in category 0. Task and Data. We choose four tasks from each common task type in the GLUE benchmark (Wang et al., 2018): sentiment analysis (SA) for single sentence classification, natural language inference (NLI) for sentence pair classification, named entity recognition (NER) for sequence tagging, and machine reading comprehension (MRC) for span prediction. Table 1 presents a list of the datasets used in our study. SA has two options (positive and negative), and NLI has three options (entailment, neutral, and contradict) for all languages. The NER datasets have different categories of named entities among the languages, while all languages have person and location entities. 2https://www.state.gov/foreign-languag e-training/ Participant Selection. We adopt and revise the CEFR3criteria to categorize learners into three levels: basic (A1-A2), intermediate (B1-B2), and advanced (C1-C2). Table 3 shows our recruiting criteria with respect to language fluency. We do not request official test scores for basic-level learners, as they may not have taken official language proficiency tests. We assign the learners at each level to annotate questions to facilitate majority voting among three responses from different levels of participants. All annotators in our experiments are non-experts in NLP data annotations, and three annotators are allocated to each task and each additional resource. Participants are asked to do two tasks: SA and MRC, or NER and NLI. The study involved participants with ages ranging from 19 to 44 (average 31.5, median 24) at the time of the experiment. They are primarily undergraduate or graduate students, with some office workers and unemployed individuals. Additional Resources. Lin et al. (2018) observed that additional resources such as dictionary matching or English translation may assist nonspeakers with annotation tasks. We divide the participants into two groups with the additional resources at their disposal, in this case a dictionary and translations provided by a commercial MT system. We only provide texts in the target language and ask participants to consult online or offline dictionaries if they need any help in the dictionary setting. Otherwise, we provide both the texts in the target language and corresponding translations created by the Google Translate API on our website and ask the participants not to use any other external resources. Annotation Sample Selection. We randomly sample 120 annotation samples for each task from the source datasets and categorize them into five groups based on their difficulty level. The sentencelevel difficulty score is calculated using a macro average of several linguistic features from CohMetrix (Graesser et al., 2004), a metric for calculating the coherence and cohesion of texts. The linguistic features that we use in our experiment are the lexical diversity, *syntactic complexity*, and descriptive measure. Lexical diversity is computed by the type-token ratio, syntactic complexity is computed according to the number of conjunction 3Common European Framework of Reference for Languages (https://www.coe.int/en/web/common-e uropean-framework-reference-languages) ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) Table 1: Source dataset for each language and task. ![3_image_2.png](3_image_2.png) words, and descriptive measure is computed by the sentence character length, the number of words, and the mean of the number of word syllables. We add additional metrics for MRC tasks that contain a paragraph, in this case the number of sentences in the paragraph, the character length of the answer span, and the number of unique answers. The paragraph-level difficulty score is calculated by taking the average of the sentence-level scores in the paragraph. Test Question Selection. Pre- and post-tests are used, consisting of five questions from official language proficiency tests and ten questions asking about the meanings of words appearing in annotation samples that they will solve in the same session. Standardized test questions explore whether participating in the annotation improves the learners' overall language proficiency over several days, while word meaning questions aim to inspect whether participating in the annotation helps them learn some vocabulary. We use TOPIK 8for Korean, UKBI 9and BIPA 10 for Indonesian, and TOEIC 11 and GRE 12 for English. We chose nouns and verbs from annotation questions and created multiple-choice questions whose answers are the nouns or the verbs in the annotation questions. ## 3.2 Workflow Step 1: Pre-survey As shown in Figure 2, we use a survey to ask participants about their selfrated language fluency, language background, and learning experience before the main experiments. We describe the CEFR criteria and ask participants to self-evaluate their language proficiency in general, colloquial, and formal texts and choose which of the colloquial and formal texts they are more familiar with. Step 2: Experiment Our experiments consist of a series of multiple sessions over six days. Each session consists of three steps, and we ask participants to do two sessions per task per day and 8Test Of Proficiency In Korean (https://www.topi k.go.kr/) 9Uji Kemahiran Berbahasa Indonesia (https://ukbi .kemdikbud.go.id/) 10Bahasa Indonesia untuk Penutur Asing (https://bi pa.ut.ac.id/) 11Test Of English for International Communication (http s://www.ets.org/toeic) 12Graduate Record Examination (https://www.ets. org/gre) | Basic | Intermediate | Advanced | | |---------|--------------------------------------------|--------------------------|-----------------------| | EN | Self report A | TOEFL 457-109 | TOEFL ≥ 110 | | KO | Learning experience < 1 yr & Self report A | TOPIK Level 2-4 | TOPIK ≥ Level 5 | | ID | Learning experience < 1 yr & Self report A | OPI 5≤ IH || FLEX 6≈ 600 | OPI ≥ AL || TIBA 7≥ 4 | Table 3: Learner level criteria | Accuracy | Inter-Annotator Agreement | Time (min) | | | |-------------------|-----------------------------|--------------|-----------|-----------| | Native Speakers | - | 8.53±0.09 | 0.77±0.02 | 4.07±0.78 | | Language Learners | Dictionary | 7.72±0.09 | 0.70±0.01 | 6.92±0.70 | | Translation | 7.31±0.09 | 0.67±0.01 | 6.49±0.36 | | Table 4: Annotation comparison between native speakers and learners (with dictionary and translation settings). Accuracy means the number of correct questions compared to the ground truth labels out of 10. Inter-Annotator agreement means pairwise F1-score. Time means how long annotating 10 samples takes in minutes. repeat this for six consecutive days. Before starting the main experiment, we provide a pilot session to check whether the participants fully understand our instructions. All of the experimental processes are done on our research website, and we measure the time spent by the participants on each step. Step 2.1: Pre-test Participants solve 15 test questions to check their language proficiency level. All test questions are multiple-choice types and include the "*I don't know*" option. Step 2.2: Annotation Participants annotate ten questions with the help of the additional resources assigned. Step 2.3: Post-test After completing the annotation, participants solve the same 15 test questions they solved in the pre-test. This step investigates whether data annotation has any learning effect. Step 3: Post-survey After the experiments, participants complete a post-survey about their thoughts on annotation and self-rated language proficiency. They answer the questions below for each task on a five-point Likert scale from "strongly disagree" to "*strongly agree*". ## 4 Experimental Results We discuss the results of our experiments with respect to two research questions: 1. Can we obtain a reliable dataset from learners' annotations? Which design setting would be most helpful? We answer this question via quality assessment (§4.1), training simulation (§4.2), and error analysis (§5.1). 2. Do learners improve their language proficiency while annotating the NLP tasks (§5.2)? All findings we discuss in this section were shown to be statistically significant at p level of < 0.05 using ANOVA. Specifically, comparisons for annotation accuracy, annotation time, and survey responses were analyzed with four-way ANOVA over the four between-subject factors of task, language, additional resources, and learner level. Comparisons between pre-test and post-test results were done with a mixed two-way ANOVA with learner level and additional resources as between-subject factors. Pairwise t-tests were conducted for all factors with Bonferroni corrections. ## 4.1 Annotation Quality Accuracy and Agreement. Table 4 shows the results of annotations generated by language learners compared to native speakers. Language learners made correct annotations to 7.48 questions among 10 questions on average 13, taking 6.68 minutes. They generated 1.05 less accurate labels and took 2.6 minutes longer time than the native speakers. Learners assisted by dictionaries can produce more reliable labels than learners using MT system. Meanwhile, majority voting among native speakers generated 19 incorrect labels out of 120 questions, compared to learners' 21.5 incorrect labels (Table 11 in Appendix). This shows that language learners' annotations can be aggregated by majority voting to be nearly as accurate as those of native speakers. 13Annotation accuracy was computed by a weighted averaged F1 score compared to the ground truth label on NER and MRC. The average of the weighted-averaged F1 score was used for some samples in MRC with multi-choice answers. ![5_image_0.png](5_image_0.png) Languages and Tasks. Figure 3 (a) and (b) show the task difficulty with respect to time versus annotation accuracy and inter-annotator agreement, respectively. SA and NER are easier for language learners than NLI and MRC, considering both accuracy and time. MRC, which requires paragraph comprehension, unlike sentence-level tasks, may be difficult for learners. Nonetheless, they achieved high accuracy, and most of their answer spans overlapped with the ground truth answers. Detailed results and further analysis of the outcomes in Figure 3 can be found in Appendix B. We measure inter-annotator agreement using the pairwise F1 scores. Table 10 (b) shows the level of agreement and the standard error for each language and task. Both NLI and NER show high agreement, while the token-based task MRC shows relatively low agreement compared to the other tasks. Korean SA shows low agreement, most likely due to some noisy samples in the NSMC dataset. The NSMC dataset is a movie review dataset whose negative labels come from the reviews with ratings of 1-4, and where the positive labels come from those with ratings of 9-10, respectively. This dataset contains noisy samples whose gold labels are unreliable or whose labels cannot be determined only with the text, requiring some metadata. MRC in Korean shows low agreement, and we assume this stems from the fact that Korean is a morpheme-based language while the others use word-based tokenization. The F1 score was computed based on the corresponding word overlaps in both English and Indonesian. Korean uses character-based overlap, which is stricter. It may be more complicated for annotators to clearly distinguish the answer span at the character level rather than at the word level. ![5_image_1.png](5_image_1.png) Language Proficiency and Question Difficulty. Figure 4 shows the percentage and the standard error of obtaining a correct answer for each question difficulty and learner fluency. Both intermediate and advanced learners show similar levels of accuracy regardless of question difficulty level, while basic-level learners tend to fail on complex questions. The mean number of correct questions out of 10 increases to 7.66 without basic-level learners. This implies that the intermediate level is sufficient to understand the sentences in general NLP datasets and suggests the feasibility of recruiting learners as annotators in place of native speakers, especially on easy-to-medium tasks and questions. ## 4.2 Training Simulation With Learners' Annotations In order to show the reliability of learners' annotations used as training labels for language models, we compare the performance of models trained on learners' annotations across SA and NLI to the models trained on native speakers' annotations. Because we only have a small number of learners' annotations, we generate synthetic data following the distribution of learners' annotations. We randomly select 10K samples from the training data of the original datasets and change the labels into the generated synthetic labels. We aggregate learners' annotations using a majority vote. We ran the Shapiro-Wilk test and found that the distribution of labels is Gaussian (p-value < 0.05). We then fit the probability distributions of labels for each class and generate synthetic labels for existing NLP datasets based on those distributions. The same process is used to build synthetic data representing native speakers' annotations. We set two baselines as the upper and lower bounds of LMs: models trained on the original ground truth labels (Ground Truth) and models trained on machine-translated texts of | SA | NLI | | | | | | | |-------------------|------------|------------|------------|------------|------------|------------|------------| | EN | KO | ID | EN | KO | ID | | | | Ground Truth | - | 89.56±1.11 | 85.29±0.79 | 97.20±0.86 | 79.05±1.44 | 79.00±2.48 | 68.20±1.32 | | MT Dataset | - | 79.25±1.25 | 75.27±1.33 | 87.19±1.39 | 56.78±2.26 | 47.06±1.26 | 52.35±1.35 | | Native Speakers | - | 87.59±1.30 | 89.18±1.62 | 94.18±0.31 | 71.86±1.43 | 74.09±1.41 | 67.21±1.23 | | All | 89.09±1.87 | 89.26±1.41 | 94.26±0.73 | 72.16±1.91 | 71.82±1.27 | 70.39±2.17 | | | Language Learners | Dictionary | 86.64±0.40 | 87.61±1.06 | 92.61±1.44 | 70.40±1.09 | 74.22±1.98 | 66.70±1.48 | | Translation | 85.39±1.14 | 87.47±1.65 | 92.47±1.46 | 74.69±1.63 | 73.03±1.02 | 69.84±2.49 | | | Task | Top-3 Failure Reasons - Unreliable gold label - Lack of background information - Ungrammatical sentence 14 - Task ambiguity - Unreliable gold label - Domain-specific genre and expression - Culturally-nuanced expression - Ambiguous questions with multiple answers - Low overlaps in answer span | |--------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Table 6: Main failure reasons on each task other languages (MT Dataset). We fine-tuned BERTBASE (Devlin et al., 2019), KLUE-BERTBASE (Park et al., 2021), and IndoBERTBASE (Wilie et al., 2020) for English, Korean, and Indonesian, respectively. Table 5 shows the experimental results of the LMs trained on different synthetic labels, averaged for each language. Ground Truth indicates LMs trained on the original label, which was annotated by native speakers and merged into one by majority vote. Models trained on synthetic labels representing learners' annotations significantly outperformed the MT Dataset. This implies that building datasets with learners' annotation can produce more reliable labels than the baseline method of using machine-translated high-resource language datasets. ## 5 Discussion 5.1 Qualitative Analysis On Learners' Annotation We analyze the annotation result of each sample, especially the samples on which learners or native 14e.g., missing period, missing spacing and blank, nominalization, and use of slang speakers failed, i.e., those that were incorrectly labeled or for which "*I don't know*" was selected as the answer. Table 6 shows the main failure reasons why learners failed to make correct annotations on each task for the samples that at most one learner correctly labeled. The number of samples for which all learners failed ranges from zero to three for all tasks, except for NER, where no sample was incorrectly predicted by all learners; i.e., there was at least one learner who answered correctly for each question for all 120 samples. We found that the incorrectly labeled samples in SA mostly occurred due to the unreliable gold label in the dataset. With regard to NLI, all incorrect samples resulted from ambiguities in the task itself. Some NLI and MRC samples are tricky for learners in that they can create correct labels only when they fully understand both the hypothesis and the premise or both the context and the question. Fluency in English may affect failures by Indonesian learners in the translation setting, considering that the provided translations were in English. Very difficult examples in MRC occasionally include difficult and culturally-nuanced phrases and require background knowledge, which can be difficult for learners. A detailed explanation of the failure reason analyses results is provided in Table 20 in the Appendix. For instance, a missing period between two short sentences, 스토리가 어려움 *(The story is difficult)* and 볼만함 *([but it's] worth watching.)*, in Table 20 (a) leads to misunderstandings among learners. Also, an ambiguity of NLI whether "*people*" and "*some people*" in premise (*People standing at street corner in France.*) and hypothesis (*Some people are taking a tour of the factory.*) are indicating the same leads all learners and native | Basic | Intermediate | Advanced | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|------------|-----------| | pre-test | 2.72±0.09 | 3.68±0.08 | 3.99±0.07 | | post-test | 2.76±0.09 | 3.62±0.08 | 4.01±0.06 | | (a) Number of correct standardized test questions out of 5 Basic Intermediate Advanced pre-test 7.29±0.12 8.93±0.08 9.32±0.06 post-test 8.41±0.11 9.27±0.07 9.42±0.06 (b) Number of correct word meaning questions out of 10 | | | | speakers to get confused between neutral and contradiction, which is an ambiguity of NLI itself (Table 20 (b)). ## 5.2 Learning Effect Standardized Test Questions. We compared pre- and post-test scores for the standardized questions in Table 7 (a). There was no significant difference, implying that annotating several questions had little impact on learning grammar, structure, or general language skills in the short term. Word Meaning Questions. Table 7 (b) shows the scores of the pre-/post-tests on the word meaning questions out of 10 questions. The learning effect on vocabulary was maximized with beginnerlevel learners. Both intermediate and advanced learners achieved a mean score of about 9 out of 10 on the pre-test, implying that words used in the data annotation sentences were accessible and understandable enough for them. Long-term Learning Effect. The pre-test score for the last session is higher than that for the first session by about 4% and 7% each on both standardized test questions and word meaning questions, respectively (Table 8). The increase in the standardized test question scores implies learners' improvement on general language proficiency factors, including structure and grammar. Also, we can surmise that the vocabulary or expressions used in the NLP datasets are primarily redundant and repetitive, considering that only a few sessions can lead to an increase in pre-test scores. ## 5.3 Concerns About Learners' Annotation In Low-Resource Languages This paper suggests recruiting language learners as crowdworkers in data annotation in low-resourced | Basic | Intermediate | Advanced | | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|------------|-----------| | 1st | 3.23±0.02 | 3.26±0.02 | 3.30±0.02 | | last | 3.43±0.03 | 3.46±0.03 | 3.53±0.02 | | (a) Number of correct standardized test questions out of 5 Basic Intermediate Advanced 1st 8.20±0.03 8.23±0.03 8.30±0.03 last 8.91±0.03 8.95±0.03 9.00±0.03 (b) Number of correct word meaning questions out of 10 | | | | Table 8: Pre-test score of the first and the last session languages by proving the quality of learners' labels. There are clearly many low-resource languages for which the absolute number of native speakers is exceptionally small compared to learners or for which it is almost impossible to find native speakers in the locations where NLP research is active. For instance, we can think of endangered languages such as Irish, which has no monolingual native speaker and extremely few daily-using L1 speakers (73K) but more than 1M learners. We can also count local languages, such as Sundanese in Indonesia and Jejueo in Korea, that are spoken by the elderly in the community, with the younger speakers who are not fluent but who are much more accessible to the researchers for annotation. We may use either MT systems such as Google Translate considering that it supports 133 languages including several low-resource languages 15 or dictionaries for extremely low-resource languages such as Ojibwe People's Dictionary 16. For low-resource languages, it is necessary to scrape together whatever resources are accessible, regardless of whether these are (incomplete) dictionaries, semi-fluent speakers, and/or anyone willing to learn and annotate in that language. ## 6 Conclusion This study provides interesting results both for the actual dataset annotation as well as understanding the non-native speakers' annotation capabilities. We show (1) labels provided by language learners are nearly as accurate, especially for easier tasks, (2) with additional experiments of aggregating their labels, learners' are almost on par with native speakers, and (3) language models trained 15https://github.com/RichardLitt/low-r esource-languages 16https://ojibwe.lib.umn.edu/ | Time / Session (min) | Expected Hourly Wage | | | |------------------------|------------------------|------------|------------| | Native Speakers | - | 8.08±0.89 | KRW 9,282 | | Language Learners | Dictionary | 14.76±1.29 | KRW 20,325 | | Translation | 13.20±0.73 | KRW 22,727 | | on learners' less accurate labels achieved 94.44% of ground truth performance. By showing that NLP annotation does not require finding native speakers, we show the possibility of broadening NLP research for more languages, as it is very challenging to recruit native speakers for many languages. Requiring native speakers for annotation can mean traveling to remote locations and working with an older, less-technology-savvy population. We show that it is possible to work with language learners to hurdle geographic and technological barriers when attempting to build annotated NLP datasets. We believe learners with high motivations and learning effects are more likely to be engaged in data annotation. ## Limitations This paper covers only four NLP tasks. Certain other tasks requiring more background knowledge may show different results. We suggest recruiting language learners when native speakers are not available, but recruiting learners may also be difficult for languages that are not popular for learners. Our results are based on a relatively low number of participants, as we chose to cover three different languages to show generalizability across languages. Many factors that may contribute to the results remain, such as the order of the batch of annotation questions with respect to the question difficulty level. ## Ethics Statement All studies in this research project were performed under KAIST Institutional Review Board (IRB) approval. We consider ethical issues in our experiments with language learners and native speakers. The first consideration is fair wages. We estimated the average time per session (Step 2.1 to 2.3) based on a small pilot study and set the wage per session to be above the minimum wage in the Republic of Korea (KRW 9,160 ≈ USD 7.04) 17. Table 9 shows that the expected hourly wages of all experiments exceed the minimum wage. We estimated the time for watching the orientation video and reading the instruction manual as one hour and provided compensation for this time of KRW 10,000. There was no discrimination when recruiting and selecting the participants for the experiment, including all minority groups and factors such as age, ethnicity, disability, and gender. We used the sentences from publicly available datasets and manually excluded samples that may contain toxic and/or controversial contents. ## Acknowledgements This work was supported by Institute of Information communications Technology Planning Evaluation (IITP) grant funded by the Korea government(MSIT) (No. 2022-0-00184, Development and Study of AI Technologies to Inexpensively Conform to Evolving Policy on Ethics). This work was supported by a grant of the KAIST-KT joint research project through AI2XL Laboratory, Institute of convergence Technology, funded by KT [G01220613, Investigating the completion of tasks and enhancing UX]. Rifki Afina Putri was supported by Hyundai Motor Chung Mong-Koo Global Scholarship. ## References Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. *Transactions of the Association for Computational Linguistics*, 8:454–470. Cash Costello, Shelby Anderson, Caitlyn Bishop, James Mayfield, and Paul McNamee. 2020. Dragonfly: Advances in non-speaker annotation for low resource languages. In *Proceedings of the 12th Language* 17https://www.minimumwage.go.kr/ Resources and Evaluation Conference, pages 6983– 6987, Marseille, France. European Language Resources Association. Scott A. Crossley, Jerry Greenfield, and Danielle S. McNamara. 2008. Assessing text readability using cognitively based indices. *TESOL Quarterly*, 42(3):475– 493. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Arthur C Graesser, Danielle S McNamara, Max M Louwerse, and Zhiqiang Cai. 2004. Coh-metrix: Analysis of text on cohesion and language. *Behavior research methods, instruments, & computers*, 36(2):193–202. Ulf Hermjakob, Jonathan May, Michael Pust, and Kevin Knight. 2018. Translating a language you don't know in the Chinese room. In *Proceedings of ACL 2018,* System Demonstrations, pages 62–67, Melbourne, Australia. Association for Computational Linguistics. Barbora Hladká, Jirka Hana, and Ivana Lukšová. 2014. Crowdsourcing in language classes can help natural language processing. *Proceedings of the AAAI Conference on Human Computation and Crowdsourcing*, 2(1):71–72. Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics. Fajri Koto, Afshin Rahimi, Jey Han Lau, and Timothy Baldwin. 2020. IndoLEM and IndoBERT: A benchmark dataset and pre-trained language model for Indonesian NLP. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 757–770, Barcelona, Spain (Online). International Committee on Computational Linguistics. Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2022. Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. *Transactions of the Association for Computational Linguistics*, 10:50–72. Siok H. Lee and James Muncie. 2006. From receptive to productive: Improving ESL learners' use of vocabulary in a postreading composition task. *TESOL* Quarterly, 40(2):295–320. Ying Lin, Cash Costello, Boliang Zhang, Di Lu, Heng Ji, James Mayfield, and Paul McNamee. 2018. Platforms for non-speakers annotating names in any language. In *Proceedings of ACL 2018, System Demonstrations*, pages 1–6, Melbourne, Australia. Association for Computational Linguistics. H. Liu and P. Singh. 2004. Conceptnet - a practical commonsense reasoning tool-kit. *BT Technology* Journal, 22(4):211–226. Chris Madge, Juntao Yu, Jon Chamberlain, Udo Kruschwitz, Silviu Paun, and Massimo Poesio. 2019. Progression in a language annotation game with a purpose. *Proceedings of the AAAI Conference on Human Computation and Crowdsourcing*, 7(1):77–85. Rahmad Mahendra, Alham Fikri Aji, Samuel Louvan, Fahrurrozi Rahman, and Clara Vania. 2021. IndoNLI: A natural language inference dataset for Indonesian. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 10511–10527, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Stephen Mayhew, Klinton Bicknell, Chris Brust, Bill McDowell, Will Monroe, and Burr Settles. 2020. Simultaneous translation and paraphrase for language education. In *Proceedings of the Fourth Workshop on* Neural Generation and Translation, pages 232–243, Online. Association for Computational Linguistics. Stephen Mayhew and Dan Roth. 2018. TALEN: Tool for annotation of low-resource ENtities. In *Proceedings of ACL 2018, System Demonstrations*, pages 80–86, Melbourne, Australia. Association for Computational Linguistics. Lionel Nicolas, Verena Lyding, Claudia Borg, Corina Forascu, Karën Fort, Katerina Zdravkova, Iztok Kosem, Jaka Cibej, Špela Arhar Holdt, Alice Millour, ˇ Alexander König, Christos Rodosthenous, Federico Sangati, Umair ul Hassan, Anisia Katinskaia, Anabela Barreiro, Lavinia Aparaschivei, and Yaakov HaCohen-Kerner. 2020. Creating expert knowledge by relying on language learners: a generic approach for mass-producing language resources by combining implicit crowdsourcing and language learning. In *Proceedings of the 12th Language Resources and* Evaluation Conference, pages 268–278, Marseille, France. European Language Resources Association. Lucy Park. 2016. Naver sentiment movie corpus. Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Ji Yoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Lyu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, Jung-Woo Ha, and Kyunghyun Cho. 2021. KLUE: Korean language understanding evaluation. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks* Track (Round 2). Federico Sangati, Stefano Merlo, and Giovanni Moretti. 2015. School-tagging: interactive language exercises in classrooms. In *LTLT@ SLaTE*, pages 16–19. Wenyuh Shieh and Mark R. Freiermuth. 2010. Using the dash method to measure reading comprehension. TESOL Quarterly, 44(1):110–128. Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast - but is it good? evaluating non-expert annotations for natural language tasks. In *Proceedings of the 2008 Conference* on Empirical Methods in Natural Language Processing, pages 254–263, Honolulu, Hawaii. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142– 147. Tatiana Tsygankova, Francesca Marini, Stephen Mayhew, and Dan Roth. 2021. Building low-resource NER models using non-speaker annotations. In Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances, pages 62–69, Online. Association for Computational Linguistics. Luis von Ahn. 2013. Duolingo: Learn a language for free while helping to translate the web. In *Proceedings of the 2013 International Conference on Intelligent User Interfaces*, page 1–2. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of the* 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Bryan Wilie, Karissa Vincentio, Genta Indra Winata, Samuel Cahyawijaya, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, and Ayu Purwarianti. 2020. IndoNLU: Benchmark and resources for evaluating Indonesian natural language understanding. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 843–857, Suzhou, China. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. ## Appendix A Experiment Setup A.1 Workflow Post-survey Questions - This task is difficult for me. - I think my vocabulary skills have improved after doing this task. - I think my grammar/structure skills have improved after doing this task. - I consulted the additional resources often. - Additional resources are helpful for completing the task. - I am willing to participate in this task again. ## A.2 Experiment Platform All experiments were done on the website that we made and all responses and time taken are recorded. Figure 5 shows the screenshots of the pre-/post-test (a) and annotation (b) steps. ## B Further Results B.1 Annotation Quality Languages and Tasks. Table 10 shows task difficulty with respect to four aspects: annotation accuracy, inter-annotator agreement, time, and perceived difficulty. Majority Voted Labels. Table 11 shows the statistics of aggregated labels using majority vote. The number of splits means how many samples are not able to be aggregated in a single label (e.g., all annotators picked different labels, some annotators answered as *I don't know* so that it was too few to be aggregated), and the number of incorrect samples means how many samples are different to the ground truth labels. Additional Resources. Table 12 shows whether the types of additional resources that learners consult affect annotation accuracy among three languages. English learners with the translation setting showed slightly better performance than those with the dictionary setting, while vice versa in Korean and Indonesian. It implies that it would be better to provide translations in high-resource languages with reliable machine translation systems, while mid- to low-resource language learners should consult dictionaries. | EN | KO | ID | | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|-----------|-----------| | SA | 7.87±0.30 | 7.59±0.21 | 8.35±0.16 | | NLI | 6.81±0.20 | 6.54±0.18 | 6.46±0.26 | | NER | 8.59±0.10 | 8.44±0.12 | 7.12±0.31 | | MRC | 7.11±0.28 | 7.18±0.19 | 7.95±0.11 | | (a) Annotation accuracy EN KO | ID | | | | SA | 0.60±0.03 | 0.62±0.02 | 0.71±0.03 | | NLI | 0.84±0.01 | 0.85±0.01 | 0.56±0.01 | | NER | 0.76±0.03 | 0.76±0.03 | 0.85±0.03 | | MRC | 0.63±0.03 | 0.51±0.03 | 0.53±0.04 | | (b) Inter-annotator agreement measured by pairwise F1 EN KO ID SA 3.34±0.34 3.28±0.25 2.43±0.18 NLI 4.48±0.74 9.72±2.03 8.50±1.33 NER 4.50±0.38 8.61±0.88 8.82±1.01 MRC 6.29±0.57 9.58±0.72 7.27±0.41 (c) Time spent (minutes) EN KO ID SA 2.33±0.88 2.83±0.31 2.33±0.42 NLI 2.60±0.60 3.17±0.48 4.00±1.00 NER 3.40±0.60 3.17±0.48 3.50±0.50 MRC 1.67±0.33 3.83±0.31 3.00±0.52 (d) Perceived difficulty from 1 (very easy) to 5 (very hard) | | | | Table 10: Difficulty according to language and task ## Language Proficiency And Native Speakers. We recruited three native speakers of each language and asked them to do the same experiments (pretest, annotation, and post-test). Table 13 shows the number of correct questions out of 10 and the time duration by each level of language learners and native speakers. Native speakers achieved the highest accuracy across all tasks taking the shortest time. It implies that there are some questions that native speakers can solve but learners cannot. We discuss those samples in Section 5.1. Time duration shows a significant gap between learners and native speakers, especially on NLI, but the gap was minimized at NER whose task requires annotators to tag all sequences. ## B.2 Training Simulation With Learners' Annotation ![12_image_0.png](12_image_0.png) (b) Annotation Soft-labeled Synthetic Data. We tried training simulations with BERT-based models on synthetic data generated using soft labeling. We used soft labeling instead of majority voting to consider the variance among the annotators. Table 14 shows experimental results of BERT-based models on synthetic data whose data distributions come from the soft-labeled aggregations. It delivers similar findings to Table 5, while showing some noises. Models trained on native speakers' synthetic labels sometimes achieved similar performance to the Ground Truth while sometimes achieving the poorest performance such as EN-SA, EN-NLI, and KO-NLI. Our native annotators showed low interannotator agreement in those languages and tasks, so the synthetic labels based on native speakers' annotations were noisy. | SA | NLI | | | | | | | |------------------|------------|--------|--------|--------|---------|---------|--------| | EN | KO | ID | EN | KO | ID | | | | Native Speaker | - | 0 / 12 | 0 / 12 | 0 / 7 | 7 / 35 | 11 / 38 | 1 / 10 | | All | 0 / 16 | 0 / 20 | 0 / 13 | 0 / 31 | 0 / 21 | 0 / 28 | | | Language Learner | Dictionary | 0 / 23 | 0 / 16 | 0 / 13 | 4 / 39 | 2 / 27 | 0 / 31 | | Translation | 0 / 14 | 0 / 26 | 0 / 14 | 3 / 29 | 11 / 39 | 11 / 45 | | | NER | MRC | | | | | | | | EN | KO | ID | EN | KO | ID | | | | Native Speaker | - | 3 / 21 | 1 / 19 | 2 / 18 | 4 / 23 | 6 / 19 | 3 / 20 | | All | 4 / 17 | 6 / 21 | 3 / 20 | 5 / 22 | 7 / 24 | 6 / 25 | | | Language Learner | Dictionary | 2 / 21 | 3 / 19 | 4 / 14 | 3 / 18 | 6 / 20 | 4 / 25 | | Translation | 5 / 19 | 3 / 15 | 2 / 17 | 6 / 27 | 4 / 14 | 5 / 16 | | | Dictionary | Translation | | |--------------|---------------|-----------| | EN | 7.21±0.24 | 7.84±0.12 | | KO | 7.77±0.12 | 7.11±0.15 | | ID | 8.14±0.10 | 7.05±0.17 | Few-shot Learning using mT5. We also tried few-shot learning with mT5BASE (Xue et al., 2021), a large-scale multilingual pretrained model which covers 101 languages including our target languages: English, Korean, and Indonesian. Table 15 shows that all models achieved comparable results to the baseline model within the margin of error. The gap among all models was relieved and we suppose that large-scale LMs with massive training data, including mT5, can perform too well on our common NLP tasks and our labeled data were too small to affect those models. ## C Further Discussions C.1 Learning Effect Additional Resources. Table 19 (b) shows that both additional resources helped learners to remind or learn vocabulary used in the annotation samples. Perceived Learning Effect. Table 17 shows similar trends to the previous results that basic-level learners perceived more learning effects on both vocabulary and grammar. They tend to show more willingness to re-participate in data annotation. Advanced-level learners show a high willingness to re-participate in data annotation, and this is because it was hard to improve their language proficiency. However, the sentences in data annotation were easy enough for them. Table 18 shows self-rated language proficiency before and after the experiments when the description of CEFR criteria was given. Basic-level learners felt that their language proficiency had improved, while other levels of learners did not show a significant difference. Advanced-level learners tend to underestimate their language proficiency humbly. ## Language Proficiency And Additional Resources. Table 19 (a) shows annotation accuracy compared to the ground truth labels concerning the learners' language proficiency level and the additional resources they used. There was no significant difference between the two settings with the learners either in the intermediate or the advanced level, while basic level learners achieved higher accuracy in dictionary settings. We suppose that basic-level learners might not be able to fill the gap of the wrong spans in the machine-translated sentence. Table 19 (b)-(c) show users' responses on how frequently they consult additional resources and how helpful they were in data annotation. The frequency that the learners consult the additional resources and how the additional resources are helpful go together. All levels of learners replied that the dictionary setting was more helpful than the translation setting. Most basic-level learners in all | Language Learners | Native Speakers | | | | |-------------------------|-------------------|-----------|------------|-----------| | Basic | Intermediate | Advanced | | | | SA | 6.96±0.25 | 8.26±0.16 | 8.43±0.16 | 9.00±0.14 | | NLI | 6.96±0.18 | 6.38±0.21 | 6.64±0.21 | 7.59±0.30 | | NER | 7.98±0.12 | 8.48±0.09 | 7.75±0.28 | 8.99±0.07 | | MRC | 6.56±0.26 | 7.88±0.12 | 7.71±0.13 | 8.51±0.14 | | (a) Annotation accuracy | | | | | | Language Learners | Native Speakers | | | | | Basic | Intermediate | Advanced | | | | SA | 2.84±0.24 | 3.74±0.26 | 2.36±0.18 | 0.92±0.07 | | NLI | 8.58±0.69 | 5.78±1.23 | 10.92±3.46 | 3.38±0.59 | | NER | 7.95±0.51 | 5.83±0.50 | 7.68±1.12 | 7.23±2.90 | | MRC | 7.03±0.51 | 9.53±0.67 | 7.13±0.58 | 4.85±1.10 | | (b) Time duration | | | | | | SA | NLI | | | | | | | |-------------------|------------|------------|------------|------------|------------|------------|------------| | EN | KO | ID | EN | KO | ID | | | | Ground Truth | - | 89.56±1.11 | 85.29±0.79 | 97.20±0.86 | 79.05±1.44 | 79.00±2.48 | 68.20±1.32 | | MT Dataset | - | 79.25±1.25 | 75.27±1.33 | 87.19±1.39 | 56.78±2.26 | 47.06±1.26 | 52.35±1.35 | | Native Speakers | - | 70.66±1.60 | 84.66±1.40 | 96.48±0.76 | 67.67±1.60 | 56.18±1.55 | 67.12±1.61 | | All | 85.75±2.21 | 80.22±0.92 | 92.37±1.45 | 78.38±2.05 | 72.51±1.22 | 61.99±3.18 | | | Language Learners | Dictionary | 77.35±1.92 | 82.94±1.09 | 91.04±0.65 | 62.40±2.86 | 70.27±1.89 | 63.33±2.24 | | Translation | 85.29±1.28 | 72.98±1.74 | 90.40±1.07 | 68.88±1.68 | 65.61±1.06 | 56.54±3.86 | | Table 14: Experimental results of BERT-based models trained on labels generated or synthesized by each group using soft-labeling Ground Truth - 89.07±3.45 89.19±2.85 94.07±3.62 78.34±2.47 80.64±3.20 68.34±2.82 MT Dataset - 85.13±2.12 84.57±3.18 90.10±2.53 74.48±3.79 77.92±2.29 63.20±2.95 Native Speakers - 88.64±3.11 88.67±3.54 93.64±2.12 77.45±2.85 79.36±3.13 66.45±3.58 Language Learners All 87.26±3.10 87.32±2.56 93.26±3.13 78.41±3.46 80.82±2.70 68.41±3.15 Dictionary 88.16±3.55 88.28±3.53 94.16±2.13 76.64±3.39 81.08±3.75 69.64±2.76 Translation 85.39±2.71 87.47±2.34 92.47±2.88 74.69±2.99 73.03±2.65 69.84±3.19 SA NLI EN KO ID EN KO ID Table 15: Experimental results of Few-shot Learning using mT5 languages consult and rely on additional resources. There was no significant trend in the learners' frequency of consulting the additional resources concerning language and types of additional resources. Still, learners of all languages replied that the dictionary setting was more helpful for data annotation than the translation setting. ## C.2 Feedback From Participants Table 10 (c) shows perceived difficulty based on users' responses on post-survey. Participants responded that NER was the most complicated task | Dictionary | Translation | | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|-----------| | pre-test | 3.63±0.07 | 3.32±0.07 | | post-test | 3.53±0.07 | 3.41±0.06 | | (a) Number of correct standardized test questions out of 5 Dictionary Translation pre-test 8.81±0.08 8.47±0.08 post-test 9.29±0.07 8.92±0.07 (b) Number of correct word meaning questions out of 10 | | | Table 16: Effect of additional resources in language learning with respect to language proficiency Table 17: Users' responses on post-survey in terms of learning effect on vocabulary and grammar and willingness to re-participate Table 18: Self-rated language proficiency before and after data annotation experiment and SA was the easiest. This result looks awkward considering that language learners achieved the highest accuracy in NER. Learners replied that exactly distinguishing the start and the end of the named entity was confused in NER, and some named entities were unfamiliar with them if they were not used to the domain. All learners in the translation-provided setting on NLI replied that the machine-translated sentences were incorrect and even disturbing to infer the textual entailment between two sentences. Most Indonesian learners on SA replied that the sentences usually contain multiple sentiments, representing that some points are good, but others are bad, so they are unsure about their labels. This is probably due to the characteristics of IndoLEM (Koto et al., 2020) whose sentences come from Hotel reviews with multiple features. Learners should read a passage in MRC so that it helps to improve their language proficiency, while advanced-level learners who are fluent in the target language replied that they do Table 19: Effect of additional resources with respect to language proficiency not have to read the whole passage but read the sentence that contains the answer span. ## D Qualitative Analysis D.1 Failure Reason Analysis On Learners' Annotation Table 20 shows the examples of three failure reasons: ungrammatical sentence, task ambiguity, and culturally-nuanced expression. Missing period between two short sentences in the SA sample (a) leads to misunderstandings among learners. Ambiguity, whether "*people*" and "*some people*" in premise and hypothesis are indicating the same in (b), leads all learners and native speakers to get confused between neutral and contradiction, which is an ambiguity of NLI itself. "*ajaran yang* dipercayai" in questions in the MRC sample (c) literally means "*teachings believed by*" in Indonesian, but its correct translation is "*belief* " or "*religion*". Learners failed to interpret those difficult and culturally-nuanced expressions correctly and generated wrong labels, while all native speakers found the same answer. | Dictionary | Translation | | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|-----------| | Basic | 7.40±0.18 | 6.90±0.17 | | Intermediate | 7.86±0.12 | 7.60±0.13 | | Advanced | 7.99±0.14 | 7.31±0.16 | | (a) Annotation accuracy Dictionary Translation | | | | Basic | 4.67±0.21 | 3.25±0.59 | | Intermediate | 2.75±0.45 | 2.70±0.42 | | Advanced | 2.75±0.41 | 2.75±0.39 | | (b) Frequency of consulting additional resources Dictionary Translation Basic 4.67±0.21 3.75±0.53 Intermediate 3.42±0.47 3.30±0.37 Advanced 3.62±0.46 3.25±0.41 (c) Help of additional resources | | | | Basic | Intermediate | Advanced | | |-------------|----------------|------------|-----------| | vocab | 4.21±0.13 | 3.41±0.13 | 3.40±0.13 | | grammar | 3.36±0.13 | 2.77±0.13 | 2.65±0.13 | | willingness | 3.93±0.21 | 2.95±0.21 | 3.30±0.21 | ## D.2 Qualitative Analysis On Pre-/Post-Test We analyze the characteristics of pre- and post-test questions that the learners get wrong. For English, two questions that every learner got wrong were | Basic | Intermediate | Advanced | | |-------------|----------------|------------|-----------| | pre-survey | 0.57±0.14 | 2.45±0.17 | 3.20±0.14 | | post-survey | 1.29±0.19 | 2.55±0.14 | 3.20±0.17 | GRE questions, which are notably difficult even for native speakers. Many learners picked the "I don't know" option for GRE questions as well. For Korean, there was no question that every learner got wrong. However, for A-level learners, a large number of them answered 'Arrange the sentences in the correct order' questions incorrectly. The difficulty may stem from their insufficient knowledge of transition signals and the logical flow in the target language. Also, learners chose "*I don't know*" option a lot for questions requiring an understanding of newspaper titles. For Indonesian, learners mostly fail on questions related to prepositions, prefixes and suffixes, and formal word formation. Most of the questions that most learners answered incorrectly require an understanding of the context and the grammatical structure. These aspects of language are difficult to learn within a short time, attributing to the insignificant difference in the scores between the pre- and post-tests. | Ungrammatical sentence | con neu neu Task ambiguity | | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------| | Lang. Level Type Sentence Ground Truth Language Learners Native Speakers Failure Reason | Culturallynuanced expression agama Jawa | | | | pos neg pos | | | | | Original 스토리가 어려움 볼만함 (a) KO 1 | I don't know; spiritualitas; etc agama Jawa | [Context] ... They built the community by clinging to spirituality as the foundation for their teachings formation. Not infrequently, they called their belief as Java religion. Through this belief, they re-dig trust and the spiritual values of the past Javanese society, especially during the pre-patrimonial period [Question] What is the teachings believed by (belief/religion) the Dayak Hindu Buddha Bumi Segandu Indramayu? | Table 20: Example annotation questions that all learners fail | | Correct Trans | | | | | Original [Context] ... Mereka membangun komunitas dengan berpegang teguh pada spiritu- alitas sebagai dasar pembentukan ajarannya. Tidak jarang pula mereka menyebut kepercayaannya sebagai agama Jawa. Melalui kepercayaan ini, mereka melakukan penggalian kembali kepercayaan dan nilai-nilai spiritualitas masyarakat Jawa masa lalu, terutama pada masa prapatrimonial (c) ID 5 [Question] Apakah ajaran yang dipercayai Suku Dayak Hindu Budha Bumi Segandu Indramayu? [Context] ... They built the community by clinging to spirituality as the basis for the formation of his teachings. Not infrequently also they mentioned his belief as Java religion.Through this belief, they re-excavated trust and the spirituality values of the past Javanese society, especially during the praprimonial period [Question] What is the teachings believed by the Hindu Buddhist Bumi Division of Indramayu? Machine Trans | | | | | 습니다. 여행하고 | | | | | 있 | | | | | 장을 | | | | | 사람들은 공 | | | | | 떤 | | | | | [Hypothesis] 어 | | 습니다. 관광하고 | | | 있 | | | | | 장을 | | | | | 사람들은 공 | | | | | 떤 | | | | | [Hypothesis] 어 | | | | | [Hypothesis] Some people are taking a tour of the factory. | 사람들. 는 | | | | 퉁이에 서있 랑스의 거리 모 [Premise] 프 Machine Trans | 사람들. 는 | | | | 퉁이에 서있 랑스의 거리 모 Correct Trans [Premise] 프 | | | | | Original [Premise] People standing at street corner in France. | (b) EN 2 | | | | Machine Trans Story is difficult to see Correct Trans The story if difficult, [but it's] worth watching. | 14731 | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section <Limitations> ✓ A2. Did you discuss any potential risks of your work? Section <Limitations> ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section <1. Introduction> ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section <3. Study Design>, <4. Experimental Results>, <5. Discussion> ✓ B1. Did you cite the creators of artifacts you used? Section <References> ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We only used scientific artifacts from research papers that are publicly available. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section <3. Study Design>, <Appendix> ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section <3. Study Design>, <4. Experimental Results>, <5. Discussion> <Appendix> ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section <3. Study Design>, <4. Experimental Results>, <5. Discussion> <Appendix> ## C ✓ **Did You Run Computational Experiments?** <Section 4.2. Training Simulation with Learners' Annotations> C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. Left blank. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? <Section 4.2. Training Simulation with Learners' Annotations> C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 3 <Study Design> ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section <Appendix> ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 9 <Ethics Statement> ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 9 <Ethics Statement> ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Section 9 <Ethics Statement> ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3 <Study Design>
wu-etal-2023-information
Information Screening whilst Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling
https://aclanthology.org/2023.acl-long.823
Existing research on multimodal relation extraction (MRE) faces two co-existing challenges, internal-information over-utilization and external-information under-exploitation. To combat that, we propose a novel framework that simultaneously implements the idea of internal-information screening and external-information exploiting. First, we represent the fine-grained semantic structures of the input image and text with the visual and textual scene graphs, which are further fused into a unified cross-modal graph (CMG). Based on CMG, we perform structure refinement with the guidance of the graph information bottleneck principle, actively denoising the less-informative features. Next, we perform topic modeling over the input image and text, incorporating latent multimodal topic features to enrich the contexts. On the benchmark MRE dataset, our system outperforms the current best model significantly. With further in-depth analyses, we reveal the great potential of our method for the MRE task.
## Information Screening Whilst Exploiting! **Multimodal Relation Extraction** With Feature Denoising And Multimodal Topic Modeling Shengqiong Wu1**, Hao Fei**1∗ , Yixin Cao2, Lidong Bing3**, Tat-Seng Chua**1 1 Sea-NExT Joint Lab, School of Computing, National University of Singapore 2 Singapore Management University, 3 DAMO Academy, Alibaba Group swu@u.nus.edu haofei37@nus.edu.sg caoyixin2011@gmail.com l.bing@alibaba-inc.com dcscts@nus.edu.sg ## Abstract Existing research on multimodal relation extraction (MRE) faces two co-existing challenges, *internal-information over-utilization* and *external-information under-exploitation*. To combat that, we propose a novel framework that simultaneously implements the idea of *internal-information screening* and *externalinformation exploiting*. First, we represent the fine-grained semantic structures of the input image and text with the visual and textual scene graphs, which are further fused into a unified cross-modal graph (CMG). Based on CMG, we perform structure refinement with the guidance of the graph information bottleneck principle, actively denoising the less-informative features. Next, we perform topic modeling over the input image and text, incorporating latent multimodal topic features to enrich the contexts. On the benchmark MRE dataset, our system outperforms the current best model significantly. With further in-depth analyses, we reveal the great potential of our method for the MRE task. Our codes are open at https://github.com/ChocoWu/MRE-ISE. ## 1 Introduction Relation extraction (RE), determining the semantic relation between a pair of subject and object entities in a given text (Yu et al., 2020), has played a vital role in many downstream natural language processing (NLP) applications, e.g., knowledge graph construction (Wang et al., 2019; Mondal et al., 2021), question answering (Cao et al., 2022). But in realistic scenarios (i.e., social media), data is often in various forms and modalities (i.e., texts, images), rather than pure texts. Thus, multimodal relation extraction has been introduced recently (Zheng et al., 2021b), where additional visual sources are added to the textual RE as an enhancement to the relation inference. The essence of a successful MRE lies in the effective utilization of multimodal ∗Corresponding author: Hao Fei ![0_image_0.png](0_image_0.png) information. Certain efforts have been made in existing MRE work and achieved promising performances, where delicate interaction and fusion mechanisms are designed for encoding the multimodal features (Zheng et al., 2021a; Chen et al., 2022b,a). Nevertheless, current methods still fail to sufficiently harness the feature sources from two information perspectives, which may hinder further task development. Internal-information over-utilization. On the one hand, most existing MRE methods progressively incorporate full-scale textual and visual sources into the learning, under the assumption that all the input information certainly contributes to the task. In fact, prior textual RE research extensively shows that only parts of the texts are useful to the relation inference (Yu et al., 2020), and accordingly propose to prune over input sentences (Zhang et al., 2018). The case is more severe for the visual inputs, as not all and always the visual sources play positive roles, especially on the social media data. As revealed by Vempala and Preo¸tiucPietro (2019), as high as 33.8% of visual information serves no context or even noise in MRE. Xu et al. (2022) thus propose to selectively remove im14734 ages from the input image-text pairs. Unfortunately, such coarse-grained instance-level filtering largely hurts the utility of visual features. We argue that a fine-grained feature screening over both the internal image and text features is needed. Taking the example \#1 in Fig. 1, the textual expressions '*Congratulations to Angela and Mark Salmons*' and the visual objects of '*gift*' and '*roses*' are valid clues to infer the '*couple*' relation between ' *Angela*' and '*Mark Salmons*', while the rest of text and visual information is essentially the task-irrelevant noise. External-information under-exploitation. On the other hand, although compensating the text inputs with visual sources, there can be still information deficiency in MRE, in particular when the visual features serve less (or even negative) utility. This is especially the case for social media data, where the contents are less-informative due to the short text lengths and low-relevant images (Baly et al., 2020). For the example \#2 in Fig. 1, due to the lack of necessary contextual information, it is tricky to infer the relation '*present in*' between 'Hot summer' (an album name) and '*Migos*' (a singer name) based on both the image and text sources. In this regard, more external information should be considered and exploited for MRE. Fortunately, the topic modeling technique offers a promising solution, which has been shown to enrich the semantics of the raw data, and thus facilitate NLP applications broadly (Zeng et al., 2018). For the above same example, if an additional '*music*' topic feature is leveraged into the context, the relation inference can be greatly eased. Taking into account the above two observations, in this work, we propose a novel framework to improve MRE. As shown in Fig. 4, we first employ the scene graphs (SGs) (Johnson et al., 2015) to represent the input vision and text, where SGs advance in intrinsically depicting the fine-grained semantic structures of texts or images. We fuse both the visual and textual SGs into a cross-modal graph (CMG) as our backbone structure. Next, we reach the first goal of internal-information screening by adjusting the CMG structure via the graph information bottleneck (GIB) principle (Wu et al., 2020), i.e., *GIB-guided feature refinement*, during which the less-informative features are filtered and the task-relevant structures will be highlighted. Then, to realize the second goal of external-information exploiting, we perform *multimodal topic integration*. We devise a latent multimodal topic module ![1_image_0.png](1_image_0.png) to produce both the textual and visual topic features based on the multimodal inputs. The multimodal topic keywords are integrated into the CMG to enrich the overall contexts, based on which we conduct the final reasoning of relation for input. We perform experiments on the benchmark MRE dataset (Zheng et al., 2021a), where the results show that our framework significantly boosts the current state of the art. Further analyses demonstrate that the GIB-guided feature refinement helps in effective input denoising, and the latent multimodal topic module induces rich taskmeaningful visual&textual topic features as extended contexts. We finally reveal that the idea of internal-information screening is especially important to the scenario of higher text-vision relevance, while the external-information exploiting particularly works for the lower text-vision relevance case. To sum up, this work contributes by introducing a novel idea of simultaneous information subtraction and addition for multimodal relation extraction. The internal-information over-utilization and external-information under-exploitation are two common co-existing issues in many multimodal applications, to which our method can be broadly applied without much effort. ## 2 Preliminary 2.1 Textual And Visual Scene Graph There have been the visual scene graph (VSG) (Johnson et al., 2015) and textual scene graph (TSG) (Wang et al., 2018), where both of them include three types of nodes: object node, *attribute* node, and *relationship node*. All the nodes come with a specific label text, as illustrated in Fig. 2. In an SG, object and attribute nodes are connected with other objects via pairwise relations. As intrinsically describing the semantic structures of scene contexts for the given texts or images, SGs are widely utilized as types of external features integrated into downstream applications for enhancements, e.g., image retrieval (Johnson et al., 2015), image generation (Johnson et al., 2018) and image captioning (Yang et al., 2019). We also take advantage of these SG structures for better cross-modal semantic feature learning. Formally, we define a scene graph as G=(*V, E*), where V is the set of nodes, and E is the set of edges. ## 2.2 Graph Information Bottleneck Principle The information bottleneck (IB) principle (Alemi et al., 2017) is designed for information compression. Technically, IB learns a minimal feature Z to represent the raw input X that is sufficient to infer the task label Y . Further, the graph-based IB has been introduced for the graph data modeling (Wu et al., 2020), i.e., by refining a raw graph G into an informative yet compact one G−, by optimizing: min G− [−I(G −, Y ) + β · I(G −, G)] , (1) where I(G−, G) minimizes the mutual information between G and G− such that G− learns to be the minimal and compact one of G. I(G−, Y ) is the prediction objective, which encourages G− to be informative enough to predict the label Y . β is a Lagrangian coefficient. We will employ the GIB principle for internal-information screening. ## 2.3 Latent Multimodal Topic Modeling We introduce a latent multimodal topic (LAMO) model. Technically, we first represent the input text T with a bag-of-word (BoW) feature b T, and represent image I with a visual BoW (VBoW)1 b I. The topic generative process is described as follows: - Draw a topic distribution θ ∼ N (µ,σ). - For each word token w T iand visual token w I j : ◦ Draw w T i ∼ *Multinomial*(χ, θ), ◦ Draw w I j ∼ *Multinomial*(ψ, θ). where µ and σ are the mean and variance vector for the posterior probability p(θ|*T, I*). χ ∈ R K×UT and ψ ∈ R K×UIare the probability matrices of textual topic-word and *visual topic-word*, respectively. K is the pre-defined topic numbers, and U T and U Iare textual and visual vocabulary size. As depicted in Fig. 3, µ and σ are produced from a cross-modal feature encoder upon T and I. The topic distribution is yielded via θ=Softmax(µ+σ · ε), where ε ∈ N (0, I). Then, we autoregressively reconstruct the input b Tand ![2_image_0.png](2_image_0.png) Figure 3: The schematic of our latent multimodal topic (LAMO) model. $\mathbf{b}^{I}$ based on $\mathbf{\theta}$: $$p(\mathbf{b}_{i}^{T}|\mathbf{\chi},\mathbf{\theta})=\text{Softmax}(\mathbf{\theta}\cdot\mathbf{\chi}|\mathbf{b}_{<i}^{T})\,,\tag{2}$$ $$p(\mathbf{b}_{i}^{I}|\mathbf{\psi},\mathbf{\theta})=\text{Softmax}(\mathbf{\theta}\cdot\mathbf{\psi}|\mathbf{b}_{<i}^{I})\,.\tag{3}$$ Then, with the activated k-th topic (via argmax over θ), we obtain the distributions of the textual and visual topic words by slicing the χ[k, :] ∈ R UT and ψ[k, :] ∈ R UI. As shown in Fig. 3, the objective of topic modeling is derived as follows: LLAMO = LKL + LRecT + LRecI . (4) Appendix §A.5 extends the description of LAMO. ## 3 Mre Framework As shown in Fig. 4, our overall framework consists of five tiers. First, the model takes as input an image I and text T, as well as the subject vs and object entity vo. We represent I and T with the corresponding VSG and TSG. Then, the VSG and TSG are assembled as a cross-modal graph, which is further modeled via a graph encoder. Next, we perform GIB-guided feature refinement over the CMG for internal-information screening, which results in a structurally compact backbone graph. Afterwards, the multimodal topic features induced from the latent multimodal topic model are integrated into the previously obtained feature representation for external-information exploitation. Finally, the decoder predicts the relation label Y based on the enriched features. ## 3.1 Scene Graph Generation We employ the off-the-shelf parsers to generate the VSG (i.e., GI=(V I, EI)) and TSG (i.e., GT =(V T, ET)), respectively. We denote the representations of VSG nodes as XI={x I1 , *· · ·* , x In}, where each node embedding x I i is the concatenation of the object region representations and the corresponding node label embeddings. We directly 1Note that visual topic words are visual objects. ![3_image_0.png](3_image_0.png) represent the TSG nodes as XT ={x T 1 , *· · ·* , x Tm}, where each x T j is the contextualized word embedding. Note that both visual objects and text token representations are obtained from the CLIP (Radford et al., 2021) encoder, which ensures an identical embedding space across the two modalities. More details are provided in the appendix §A.1&§A.2. ## 3.2 Cross-Modal Graph Construction Next, we consider merging the VSG and TSG into one unified backbone cross-modal graph (CMG). Let's denote CMG as G=(*V, E*), where V (=V T ∪ V I) is the union of V Iand V T. E(=ET ∪ EI ∪ E×) is the set of edges, including the *intra-modal* edges (EIand ET), and *inter-modal* hyper-edges E×. To build the cross-modal hyper-edges between each pair of VSG node v I i and TSG node v T j , we measure the relevance score s in between: sv I i ,vT j = cos(x I i, x T j ). (5) A hyper-edges e × i,j is created if sv I i ,vT j is larger than a pre-defined threshold λ. Node representations from VSG and TSG are copied as the CMG's node representations, i.e., X=XT ∪ XI. We denote each edge ei,j (∈ E)=1 if there is an edge between nodes, and ei,j=0 and vice versa. Next, a graph attention model (GAT; Velickovic et al., 2018) is used to fully propagate the CMG: H = {h1, *· · ·* , hm+n} = GAT(G, X). (6) ## 3.3 Gib-Guided Feature Refinement In this step, we propose a GIB-guided feature refinement (GENE) module to optimize the initial CMG structure such that we fine-grainedly prune the input image and text features. Specifically, with the GIB guidance, we 1) filter out those taskirrelevant nodes, and 2) adjust the edges based on their relatedness to the task inference. Node Filtering We assign a 0 or 1 value ρ v i to a node viindicating whether to prune or keep vi, i.e., via ρ v i ⊙vi. We sample the value from the *Bernoulli* distribution, i.e., ρ v i ∈{0, 1}∼Bernoulli(π v i ), where π v i ∈(0, 1) is a parameter. While the sampling is a discrete process, we make it differentiable via the concrete relaxation method (Jang et al., 2017): $$\rho_{i}^{v}=\mathrm{Sigmoid}(\frac{1}{\tau}(\log\frac{\pi_{i}^{v}}{1-\pi_{i}^{v}}+\log\frac{\epsilon}{1-\epsilon}))\,,$$ v )), (7) where τ is the temperature, ϵ ∼ Uniform(0, 1). We estimate π v i by considering both the vi's l-order context and the influence of target entity pair: r v i = Att(vi, φ(vi); H), π v i = Sigmoid(FFN([r v i;hs; ho])),(8) where Att(·) is an attention operation, φ(vi) is the l-order neighbor nodes of vi, hs and ho are the representations of the subject and object entity. Edge Adjusting Similarly, we take the same sampling operation (Eq. 7) to generate a signal ρ e i,j for any edge ei,j , during which we also consider the l*-order context* features and the target entity pair: r e i,j = Att(vi, φ(vi), vj *, φ(v*j ); H), π e i,j = Sigmoid(FFN([r e i,j ;hs; ho])),(9) where φ(vi) and φ(vj ) are the l-order neighbor nodes of vi and vj . Instead of directly determining the existence of ei,j with ρ e i,j , we also need to take into account the existences of vi and vj , i.e., (ρ e i,j · ρ v i· ρ v j ) ⊙ ei,j . Because even if ρ e i,j=1, an edge is non-existent when its affiliated nodes are deleted. Thereafter, we obtain an adjusted CMG, i.e., G−, which is further updated via the GAT encoder, resulting in new node representations H−. We apply pooling operation on H− to obtain the overall graph presentation g, which is concatenated with two entity representations as the context feature a: a = [g; hs; ho] . (10) GIB Optimization To ensure that the aboveadjusted graph G− is sufficiently informative (i.e., not wrongly pruned), we consider a GIB-guided optimization. We denote z as the compact information of the resulting G−, which is sampled from a Gaussian distribution parameterized by a. Then, we rephrase the raw GIB objective (Eq. 1) as: LGIB = min z[−I(z, Y ) + β · I(z*, G)]* . (11) The first term $-I(\boldsymbol{z},Y)$ can be expanded as: $$-I(\boldsymbol{z},Y)\leq-\int p(Y,\boldsymbol{z})\log q(Y|\boldsymbol{z})dYd\boldsymbol{z}+H(Y)$$ $$:=\mathcal{L}_{\text{CE}}(q(Y|\boldsymbol{z}),Y)\,,$$ where $(Y|\boldsymbol{z})$ is a matrix whose matrix is $\boldsymbol{z}$ $$(12)$$ where q(Y |z) is a variational approximation of the true posterior p(Y, z). For the second term I(z, G), we estimate its upper bound via reparameterization trick (Kingma and Welling, 2014): I(z, G) ≤ Zp(z|G) log p(z|G) r(z) dzdG := KL(p(z|G)||r(z)). $$(13)$$ We run GENE several iterations for sufficient refinement. In Appendix §A.4 we detail all the technical processes of GIB-guided feature refinement. ## 3.4 Multimodal Topic Integration We further enrich the compressed CMG features with more semantic contexts, i.e., the multimodal topic features. As depicted in Sec. §2.3, our LAMO module takes as input the backbone CMG representation H and induces both the visual and textual topic keywords that are semantically relevant to the input content. Note that we only retrieve the associated top-L textual and visual keywords, separately. Technically, we devise an attention operation to integrate the embeddings of the multimodal topic words (u Tand u I, from CLIP encoder) into the resulting feature representation z of GENE: $$\alpha_{i}^{T/I}=\frac{\exp(\mathrm{FFN}([\mathbf{u}_{i}^{T/I};\mathbf{z}]))}{\sum_{i}^{L}\exp(\mathrm{FFN}([\mathbf{u}_{i}^{T/I};\mathbf{z}]))}\,,$$ $$\mathbf{o}^{T/I}=\sum_{i}^{L}\alpha_{i}^{T/I}\mathbf{u}_{i}^{T/I}\,.$$ i $$(14)$$ We finally summarize these three representations as the final feature: s = [z; o T; o I] . (15) ## 3.5 Inference And Learning Based on s, a softmax function predicts the relation label Yˆ for the entity pair vs&vo. The training of our overall framework is based on a warm-start strategy. First, GENE is trained via LGIB (Eq. 11) for learning the sufficient multimodal fused representation in CMG, and refined features from compacted CMG. Then LAMO module is unsupervisedly pre-trained separately via LLAMO (Eq. 4) on the well-learned multimodal fused representations | Train | Develop | Test | Total | | |-----------|-----------|--------|---------|--------| | #Sentence | 7,356 | 931 | 914 | 9,201 | | #Instance | 12,247 | 1,624 | 1,614 | 15,485 | | #Entity | 16,863 | 2,174 | 2,143 | 21,180 | | #Relation | 12,247 | 1,624 | 1,614 | 15,485 | | #Image | 7,356 | 931 | 914 | 9,201 | so as to efficiently capture the task-related topic. Once the two modules have converged, we train our overall framework with the final cross-entropy task loss LCE(*Y , Y* ˆ ), together with the above two learning loss: L = LCE + η1LGIB + η2LLAMO . (16) ## 4 Experiment 4.1 Setting We experiment with the MRE dataset2, which contains 9,201 text-image pairs and 15,485 entity pairs with 23 relation categories. The statistical information of the MRE dataset is listed in Table 1. Note that a sentence may contain several entity pairs, and thus a text-image pair can be divided into several instances, each with only one entity pair. We follow the same split of training, development, and testing, as set in Zheng et al. (2021a). We compare our method with baselines in two categories: 1) Text-based RE methods that traditionally leverage merely the texts of MRE data, including, *BERT* (Devlin et al., 2019), *PCNN* (Zeng et al., 2015), MTB (Soares et al., 2019), and *DP-GCN* (Yu et al., 2020). **2) Multimodal RE methods** as in this work, including, *BERT+SG* (Zheng et al., 2021a), MEGA (Zheng et al., 2021a), *VisualBERT* (Li et al., 2019), *ViLBERT* (Lu et al., 2019), RDS (Xu et al., 2022), *MKGformer* (Chen et al., 2022a), and *HVPNet* (Chen et al., 2022b). We use the pre-trained language-vision model CLIP (vit-base-patch32) to encode the visual and textual inputs. We set the learning rate as 2e-5 for pre-trained parameters, and 2e-4 for the other parameters. The threshold value λ is set to 0.25; the temperature τ is 0.1; and β is set to 0.01. All the dimensions of node representations and GAT hidden sizes are set as 768-d. We utilize the 2order (i.e., l = 2) context of each node to refine the nodes and edges of CMG. For the latent topic modeling, we pre-define the number of topics as 10, and then we choose the Top-10 textual and visual keywords to enhance the semantic contexts 2https://github.com/thecharm/Mega | Acc. | Pre. | Rec. | F1 | | |----------------------------------------|--------|--------|-------|-------| | - Text-based Methods BERT† | - | 63.85 | 55.79 | 59.55 | | PCNN† | 72.67 | 62.85 | 49.69 | 55.49 | | MTB† | 72.73 | 64.46 | 57.81 | 60.86 | | DP-GCN♭ | 74.60 | 64.04 | 58.44 | 61.11 | | - Multimodal Methods BERT(Text+Image)♭ | 74.59 | 63.07 | 59.53 | 61.25 | | BERT+SG† | 74.09 | 62.95 | 62.65 | 62.80 | | MEGA† | 76.15 | 64.51 | 68.44 | 66.41 | | VisualBERT† base | - | 57.15 | 59.48 | 58.30 | | ViLBERT† base | - | 64.50 | 61.86 | 63.16 | | RDS† | - | 66.83 | 65.47 | 66.14 | | HVPNeT† | - | 83.64 | 80.78 | 81.85 | | MKGformer† | 92.31 | 82.67 | 81.25 | 81.95 | | Ours | 94.06 | 84.69 | 83.38 | 84.03 | | w/o GENE (Eq. 11) | 92.42 | 82.41 | 81.83 | 82.12 | | w/o I(z, G) (Eq. 13) | 93.64 | 83.61 | 82.34 | 82.97 | | w/o LAMO (Eq. 4) | 92.86 | 82.97 | 81.22 | 82.09 | | T | 93.05 | 83.95 | 82.53 | 83.23 | | w/o o w/o o I | 93.63 | 84.03 | 83.18 | 83.60 | | w/o VSG&TSG | 93.12 | 83.51 | 82.67 | 83.09 | | w/o CMG | 93.97 | 84.38 | 83.20 | 83.78 | of compressed CMG. All models are trained and evaluated using the NVIDIA A100 Tensor Core GPU. Following existing MRE work, we adopt accuracy (Acc.), precision (Pre.), recall (Rec.), and F1 as the major evaluation metrics. ## 4.2 Main Results Table 2 shows the overall results. First, compared to the traditional text-based RE, multimodal methods, by leveraging the additional visual features, exhibit higher performances consistently. But without carefully navigating the visual information into the task, most MRE baselines merely obtain incremental improvements over text-based ones. By designing delicate text-vision interactions, HVPNeT and MKGformer achieve the current state-of-theart (SoTA) results. Most importantly, our model boosts the SoTA with a very significant margin, i.e., with improvements of 1.75%(=94.06-92.31) in accuracy and 2.08%(=84.03-81.95) in F1. This validates the efficacy of our method. Model Ablation In the lower part of Table 2, we also study the efficacy of each part of our designs. First of all, we see that both the GENE and ![5_image_0.png](5_image_0.png) LAMO modules show big impacts on the results, i.e., exhibiting a drop in F1 by 1.91% and 1.94% F1, respectively. This confirms their fundamental contributions to the whole system. More specifically, the GIB guidance is key to the information refinement in GENE, while both the textual and visual topic features are key to LAMO. Also, it is critical to employ the SG for the structural modeling of the multimodal inputs. And the proposal of the cross-modal graph is also helpful to task modeling. ## 4.3 Analysis And Discussion To gain a deeper understanding of how our proposed methods succeed, we conduct further analyses to answer the following questions. ▶RQ1: *Does* GENE helps by really denoising the input features? A: We first investigate the working mechanism of GENE on internal-information screening. We plot the trajectories of the node filtering and the edge adjusting, during which we show the changing trends of overall performances and the mutual information I(G−, G) between the raw CMG (G) and the pruned one (G−, i.e., z). As shown in Fig. 5, along with the training process both the number of nodes and edges decrease gradually, while the task performance climbs steadily as I(G−, G) declines. These clearly show the efficacy of the task-specific information denoising by GENE. ▶RQ2: Are LAMO induced task-relevant topic features beneficial to the end task? A: Now, we consider visualizing the learned contextual features in our system, including the z without integrating the topic features, and the s with rich topic information injected. We separately | Topic | Textual keywords | Visual keywords (ID) | |----------|-------------------------------------------------------------------------------------|------------------------| | #Politic | trump, president, world, new, china, leader, summit, meet, korean, senate | #1388, #1068 | | #Music | tour, concert, video, live, billboard, album, styles, singer, taylor, dj | #1446, #1891 | | #Love | wife, wedding, engaged, ring, son, baby. girl, love, rose, annie | #434, #1091 | | #Leisure | photo, best, beach, lake, island, bridge, view, florida, photograph, great | #679, #895 | | #Idol | metgala, hailey, justin, taylor, rihanna, hit, show, annual, pope, shawn | #1021, #352 | | #Scene | contain, near, comes, american, in, spotted, travel, to, from, residents | #535, #167 | | #Sports | team, man, world, cup, nike, nba, football, join, play, chelsea | #1700, #109 | | #Social | google, retweet, twitter, youtube, netflix, acebook, flight, butler, series, art | #1043, #1178 | | #Show | show, presents, dress, interview, shot, speech, performing, attend, portray, appear | #477, #930 | | #Life | good, life, please, family, dog, female, people, boy, soon, daily | #613, #83 | ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) ![6_image_2.png](6_image_2.png) project z and s into the ground-truth relational labels of the MRE task, as shown in Fig. 6. We see that both z and s have divided the feature space into several clusters clearly, thanks to the GIB-guided information screening. However, there are still some wrongly-placed or entangled instances in z, largely due to the input feature deficiency. By supplementing more contexts with topic features, the patterns in s become much clearer, and the errors reduce. This indicates that LAMO induces topic ![6_image_3.png](6_image_3.png) information beneficial to the task. Meanwhile, we demonstrate what latent topics LAMO can induce. In Table 3 we show the top 10 latent topics with both the textual and visual keywords, where we notice that the latent topic information is precisely captured and modeled by LAMO. Further, we study the variance of the latent topics in two modalities, exploring the different contributions of each type. Technically, we analyze the numbers of the imported topic keywords of textual and visual ones respectively, by observing the attention weights α T /I i(Eq. 14). In Fig. 7 we plot the distributions. It can be found that the ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) model tends to make use of more textual contexts, compared with the visual ones. ▶RQ3: *How do* GENE and LAMO *collaborate to* solve the end task? A: As demonstrated previously, the GENE is able to relieve the issue of noisy information, and LAMO can produce latent topics to offer additional clues for relation inference. Now we study how these two modules cooperate together to reach the best results. First, we use the learned feature c∗to calculate task entropy −Pp(Y |c∗) log p(Y |c∗), where lower entropy means more confidence of the correct predictions. We compute the entropy using H (initial context feature), using z (with denoised context feature) and using s (with feature denoising and topic enriched context), respectively, which represents the three stages of our system, as shown in Fig. 9. As seen, after the information denoising and enriching by GENE and LAMO respectively, the task entropy drops step by step, indicating an effective learning process with the two modules. We further empirically perform a case study to gain an intuitive understanding of how the two modules come to play. In Fig. 8 we illustrate the two testing instances, where we visualize the constructed cross-model graph structures, the refined graphs (G−) and then the imported multimodal topic features. We see that GENE has fine-grainedly removed those noisy and redundant nodes, and adjusted the node connections that are more knowledgeable for the relation prediction. ![7_image_2.png](7_image_2.png) Figure 10: Results under varying text-image relevance. For example, in the refined graph, the task-noisy visual nodes, 'man' and textual nodes, 'in', '*plans*' are removed, and the newly-generated edges (e.g., '*Trump*'→'US', and 'Broncos'→'*football*') allow more efficient information propagation. Also, the model correctly paid attention to the topic words retrieved from LAMO that are useful to infer the relation, such as '*president*', '*leader*' in case \#1, and *team*', and '*football*' in case \#2. ▶RQ4: *Under what circumstances do the internalinformation screening and external-information exploiting help?* A: In realistic scenarios, a wide range of multimodal tasks is likely to face the issues of internal-information over-utilization and externalinformation under-exploitation (or simultaneously). Especially for the data collected from the online web, the vision and text pairs are not well correlated. Finally, we take one step further, exploring when our idea of internal-information screening and external-information exploiting aids the tasks in such cases. Technically, we first measure the vision-language relevance Ψ of each image-text pair by matching the correspondence of the VSG and TSG structures. And then, we group the instances by their relevance scores, and finally make predictions for different groups. From Fig. 10, it is observed that for the inputs with higher textvision relevance, the GENE plays a greater role than LAMO, while under the case with less crossmodal feature relevance, LAMO contributes more significantly than GENE. This is reasonable because most of the high cross-modal relevance input features come with rich yet even redundant information, where the internal-information screening is needed for denoising. When the input text-vision sources are irrelevant, the exploitation of external features (i.e., latent topic) can be particularly useful to bridge the gaps between the two modalities. On the contrary, **MKGformer** performs quite badly especially when facing with data in low visionlanguage relevance. Integrating both the LAMO and GENE, our system can perform consistently well under any case. 5 Related Works As one of the key subtasks of the information extraction track, relation extraction (RE) has attracted much research attention (Yu et al., 2020; Chen et al., 2022c; Tan et al., 2022; Guo et al., 2023). The recent trend of RE has shifted from the traditional textual RE to the recent multimodal RE, where the latter additionally adds the image inputs in the former one for better performances, under the intuition that the visual information can offer complementary features to the purely textual input from other modalities. Zheng et al. (2021b) pioneers the MRE task with a benchmark dataset, which is collected from the social media posts that come with rich vision-language sources. Later, more delicate and sophisticated methods are proposed to enhance the interactions between the input texts and images, and achieve promising results Zheng et al. (2021a); Chen et al. (2022b,a). On the other hand, increasing attention has been paid to exploring the role of different information in the RE task. As extensively revealed in prior RE studies, only a few parts of the input sentence can provide real clues for the relation inference (Xu et al., 2015; Yu et al., 2020), which inspires the proposal of textual feature pruning methods (Zhang et al., 2018; Jin et al., 2022). More recently, Vempala and Preo¸tiuc-Pietro (2019); Li et al. (2022) have shown that not always the visual inputs serve positive contributions in existing MRE models, as the social media data contains many noises. Xu et al. (2022) thus introduce an instance-level filtering approach to directly drop out those images less-informative to the task. However, such coarse-grained aggressive data deletion will inevitably abandon certain useful visual features. In this work we propose screening the noisy information from both the visual and textual input features, in a fine-grained and more controllable manner, i.e., structure denoising via graph information bottleneck technique (Wu et al., 2020). Also, we adopt the scene graph structures to model both the vision and language features, which partially inherits the success from Zheng et al. (2021a) that uses visual scene graphs to represent input images. Due to the sparse and noisy characteristics of social media data, as well as the cross-modal information detachment, MRE also suffers from feature deficiency problems. We thus propose modeling the latent topic information as additional context features to enrich the inputs. Multimodal topic modeling has received considerable explorations (Chu et al., 2016; Chen et al., 2021), which extends the triumph of the textual latent topic models as in NLP applications (Zhu et al., 2021; Fu et al., 2020; Xie et al., 2022). We however note that existing state-of-the-art latent multimodal models (An et al., 2020; Zosa and Pivovarova, 2022) fail to navigate the text and image into a unified feature space, which leads to irrelevant vision-text topic induction. We thus propose an effective latent multimodal model that learns coherent topics across two modalities. To our knowledge, we are the first to attempt to integrate the multimodal topic features for MRE. 6 Conclusion In this paper, we solve the internal-information over-utilization issue and the external-information under-exploitation issue in multimodal relation extraction. We first represent the input images and texts with the visual and textual scene graph structures, and fuse them into the cross-modal graphs. We then perform structure refinement with the guidance of the graph information bottleneck principle. Next, we induce latent multimodal topic features to enrich the feature contexts. Our overall system achieves huge improvement over the existing best model on the benchmark data. Further in-depth analyses offer a deep understanding of how our method advances the task. ## Acknowledgments The work is substantially supported by Alibaba Group through the Alibaba Innovative Research (AIR) Program, and is also partially supported by the Sea-NExT Joint Lab at the National University of Singapore. ## Limitiations The main limitations of our work lie in the following two aspects: First, we take sufficient advantage of the scene graph (SG) structures, which are obtained by external SG parsers. Therefore, the overall performance of our system is subject to the quality of the SG parser to some extent. However, our system, by equipping with the refinement mechanism, is capable of resisting the quality degradation of SG parsers to a certain extent. Second, the performance of the latent multimodal topic model largely relies on the availability of large-scale textimage pairs. However, the size of the dataset of MRE is limited, which may limit the topic model in achieving the best effect. ## References Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. 2017. Deep variational information bottleneck. In *Proceedings of the ICLR*. Minghui An, Jingjing Wang, Shoushan Li, and Guodong Zhou. 2020. Multimodal topic-enriched auxiliary learning for depression detection. In Proceedings of the COLING, pages 1078–1089. Ramy Baly, Georgi Karadzhov, Jisun An, Haewoon Kwak, Yoan Dinkov, Ahmed Ali, James Glass, and Preslav Nakov. 2020. What was written vs. who read it: News media profiling using text analysis and social media context. In *Proceedings of the ACL*, pages 3364–3374. Federico Bianchi, Silvia Terragni, Dirk Hovy, Debora Nozza, and Elisabetta Fersini. 2021. Cross-lingual contextualized topic models with zero-shot learning. In *Proceedings of the EACL*, pages 1676–1683. Shulin Cao, Jiaxin Shi, Zijun Yao, Xin Lv, Jifan Yu, Lei Hou, Juanzi Li, Zhiyuan Liu, and Jinghui Xiao. 2022. Program transfer for answering complex questions over knowledge bases. In *Proceedings of the ACL*, pages 8128–8140. Jiaxin Chen, Zekai Wu, Zhenguo Yang, Haoran Xie, Fu Lee Wang, and Wenyin Liu. 2021. Multimodal fusion network with latent topic memory for rumor detection. In *Proceedings of the ICME*, pages 1–6. Xiang Chen, Ningyu Zhang, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, and Huajun Chen. 2022a. Hybrid transformer with multi-level fusion for multimodal knowledge graph completion. In *Proceedings of the SIGIR*, pages 904– 915. Xiang Chen, Ningyu Zhang, Lei Li, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022b. Good visual guidance make A better extractor: Hierarchical visual prefix for multimodal entity and relation extraction. In Proceedings of the NAACL Findings, pages 1607–1618. Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022c. Knowprompt: Knowledgeaware prompt-tuning with synergistic optimization for relation extraction. In *Proceedings of the WWW*, pages 2778–2788. Lingyang Chu, Yanyan Zhang, Guorong Li, Shuhui Wang, Weigang Zhang, and Qingming Huang. 2016. Effective multimodality fusion framework for crossmedia topic detection. IEEE Transactions on Circuits and Systems for Video Technology, 26(3):556–569. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the NAACL*, pages 4171– 4186. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *Proceedings of the ICLR*. Zihao Fu, Lidong Bing, Wai Lam, and Shoaib Jameel. 2020. Dynamic topic tracker for kb-to-text generation. In *Proceedings of COLING*, pages 2369–2380. Jiuxiang Gu, Shafiq R. Joty, Jianfei Cai, Handong Zhao, Xu Yang, and Gang Wang. 2019. Unpaired image captioning via scene graph alignments. In *Proceedings of the ICCV*, pages 10322–10331. Jia Guo, Stanley Kok, and Lidong Bing. 2023. Towards integration of discriminability and robustness for document-level relation extraction. In *Proceedings of the EACL*, pages 2598–2609. John R. Hershey and Peder A. Olsen. 2007. Approximating the kullback leibler divergence between gaussian mixture models. In *Proceedings of the ICASSP*, pages 317–320. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In Proceedings of the ICLR. Yifan Jin, Jiangmeng Li, Zheng Lian, Chengbo Jiao, and Xiaohui Hu. 2022. Supporting medical relation extraction via causality-pruned semantic dependency forest. In *Proceedings of the COLING*, pages 2450– 2460. Justin Johnson, Agrim Gupta, and Li Fei-Fei. 2018. Image generation from scene graphs. In *Proceedings* of the CVPR, pages 1219–1228. Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2015. Image retrieval using scene graphs. In *Proceedings of the CVPR*, pages 3668–3678. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In *Proceedings of the* ICLR. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32– 73. Lei Li, Xiang Chen, Shuofei Qiao, Feiyu Xiong, Huajun Chen, and Ningyu Zhang. 2022. On analyzing the role of image for visual-enhanced relation extraction. CoRR, abs/2211.07504. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. CoRR. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Proceedings of the NIPS, pages 13–23. Ishani Mondal, Yufang Hou, and Charles Jochim. 2021. End-to-end construction of NLP knowledge graph. In *Proceedings of the ACL-IJCNLP Findings*, pages 1885–1895. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In *Proceedings of the ICML*, pages 8748–8763. Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In *Proceedings of the NIPS*, pages 91–99. Sebastian Schuster, Ranjay Krishna, Angel X. Chang, Li Fei-Fei, and Christopher D. Manning. 2015. Generating semantically precise scene graphs from textual descriptions for improved image retrieval. In Proceedings of the EMNLP, pages 70–80. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In *Proceedings of the ACL*, pages 2895–2905. Qingyun Sun, Jianxin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, and Philip S. Yu. 2022. Graph structure learning with variational information bottleneck. In Proceedings of the AAAI, pages 4165–4174. Qingyu Tan, Ruidan He, Lidong Bing, and Hwee Tou Ng. 2022. Document-level relation extraction with adaptive focal loss and knowledge distillation. In Findings of the ACL, pages 1672–1681. Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. 2020. What makes for good views for contrastive learning? In Proceedings of the NIPS. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In *Proceedings of* the ICLR. Alakananda Vempala and Daniel Preo¸tiuc-Pietro. 2019. Categorizing and inferring the relationship between the text and image of Twitter posts. In *Proceedings* of the ACL, pages 2830–2840. Yu-Siang Wang, Chenxi Liu, Xiaohui Zeng, and Alan Yuille. 2018. Scene graph parsing as dependency parsing. In *Proceedings of the NAACL*, pages 397– 407. Zihao Wang, Kwun Ping Lai, Piji Li, Lidong Bing, and Wai Lam. 2019. Tackling long-tailed relations and uncommon entities in knowledge graph completion. In *Proceedings of the EMNLP*, pages 250–260. Tailin Wu, Hongyu Ren, Pan Li, and Jure Leskovec. 2020. Graph information bottleneck. In Proceedings of the NIPS, pages 20437–20448. Qianqian Xie, Jimin Huang, Tulika Saha, and Sophia Ananiadou. 2022. GRETEL: graph contrastive topic enhanced language model for long document extractive summarization. In *Proceedings of the COLING*, pages 6259–6269. Bo Xu, Shizhou Huang, Ming Du, Hongya Wang, Hui Song, Chaofeng Sha, and Yanghua Xiao. 2022. Different data, different modalities! reinforced data splitting for effective multimodal information extraction from social media posts. In *Proceedings of the COLING*, pages 1855–1864. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In *Proceedings of the EMNLP*, pages 1785– 1794. Xu Yang, Kaihua Tang, Hanwang Zhang, and Jianfei Cai. 2019. Auto-encoding scene graphs for image captioning. In *Proceedings of the CVPR*, pages 10685–10694. Bowen Yu, Mengge Xue, Zhenyu Zhang, Tingwen Liu, Yubin Wang, and Bin Wang. 2020. Learning to prune dependency trees with rethinking for neural relation extraction. In *Proceedings of the COLING*, pages 3842–3852. Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. 2018. Neural motifs: Scene graph parsing with global context. In *Proceedings of the CVPR*, pages 5831–5840. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In *Proceedings of the EMNLP*, pages 1753–1762. Jichuan Zeng, Jing Li, Yan Song, Cuiyun Gao, Michael R. Lyu, and Irwin King. 2018. Topic memory networks for short text classification. In *Proceedings of the EMNLP*, pages 3120–3131. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the EMNLP, pages 2205–2215. Changmeng Zheng, Junhao Feng, Ze Fu, Yi Cai, Qing Li, and Tao Wang. 2021a. Multimodal relation extraction with efficient graph alignment. In Proceedings of the MM, pages 5298–5306. Changmeng Zheng, Zhiwei Wu, Junhao Feng, Ze Fu, and Yi Cai. 2021b. MNRE: A challenge multimodal dataset for neural relation extraction with visual evidence in social media posts. In *Proceedings of the* ICME, pages 1–6. Lixing Zhu, Gabriele Pergola, Lin Gui, Deyu Zhou, and Yulan He. 2021. Topic-driven and knowledgeaware transformer for dialogue emotion detection. In Proceedings of the ACL/IJCNLP, pages 1571–1582. Elaine Zosa and Lidia Pivovarova. 2022. Multilingual and multimodal topic modelling with pretrained embeddings. In *Proceedings of the COLING*, pages 4037–4048. ## A Extended Method Specification A.1 Scene Graph Generating A.2 Node Embedding We mainly follow the prior practice of SG applications (Yang et al., 2019; Gu et al., 2019) to acquire the visual scene graph (VSG) and textual scene graph (TSG). A VSG or TSG contains three types of nodes, including the object, attribute, and relation nodes. For VSG, we employ the FasterRCNN (Ren et al., 2015) as an object detector to obtain all the object nodes, and use MOTIFS (Zellers et al., 2018) as a relation classifier to obtain the relation labels (nodes) as well as the relational edges, which is trained using the Visual Genome (VG) dataset (Krishna et al., 2017). We then use an attribute classifier to obtain attribute nodes. For TSG generation, we first convert the sentences into dependency trees with a dependency parser, which is then transformed into the scene graph based on the rules defined at Schuster et al. (2015). Note that the object nodes in VSG are image regions, while the object nodes in TSG are textual tokens. In Section 3.1, we directly give the representations of nodes in VSG and TSG. Here, we provide the encoding process in detail. Visual Node Embedding In VSG, the visual feature vector of an object node is extracted from its corresponding image region; the feature of the attribute node is the same as its connected object, while the visual feature vector of a relationship node is extracted from the union image region of the two related object nodes. Specifically, for each visual node, we first rescale it to 224-d × 224-d. Subsequently, following Dosovitskiy et al. (2021), each visual node is split into a sequence of fixed-size non-overlapping patches {pk ∈ R P ×P }, where P × P is the patch size. Then, we map all patches of i-th visual node to a d-dimensional vector XP C i with a trainable linear projection. For each sequence of image patches, a [CLS] token embedding xCLS ∈ R d1is appended for the sequence of embedded patches, and an absolute position embeddings XP OS ialso added to retain positional information. The visual region of i-th node is represented as: Zi = [xCLS; XP C i] + X P OS i, (17) where [; ] denotes a concatenation. Then, we feed the input matrix Ziinto the CLIP vision encoder to acquire the representation xˆ I i . Note that the [CLS] token is utilized to serve as a representation of an entire image region: xˆ I i = CLIP(Zi)[CLS]. (18) where xˆ I i ∈ R d1. Since the category label of each node can provide the auxiliary semantic information, a label embedding layer is built to embed the word label of each node into a feature vector. Given the one-hot vectors of the category label of each node, we first map it into an embedded feature vector x¯ I i by an embedding matrix W*label* ∈ R d2×C*label* , where is initialized by Glove embedding (i.e., d2 = 300), C*label* is the number of categories. And then, the embedding features of the category label corresponding to the node are fused to the visual features to obtain the final visual node embedding: **note embedding**: $$\begin{array}{c}\mathbf{x}_{i}^{I}=\mbox{Tanh}(\mathbf{W}_{1}[\hat{\mathbf{x}}_{i}^{I};\mathbf{x}_{i}^{I}])\,.\end{array}$$ where $\mathbf{W}_{1}\in\mathbb{R}^{d_{1}\times(d_{1}+d_{2})}$. i]). (19) Textual Node Embedding In TSG, we utilize CLIP as the underlying encoder to yield the basic contextualized word representations for each textual node: {x T 1, *· · ·* , x T m} = CLIP({v1, · · · , vm}), (20) where x T i ∈ R d1. ## A.3 Graph Encoding In Section 3.2 and Section 3.3, we introduce a graph attention model (GAT) to encode the crossmodal graph (CMG) and refined graph (G−). Here, we provide a detail. Technically, given a graph G = (*V, E*), where V is the set of nodes, and E is the set of edges. And the feature matrix X ∈ R|V |×d1 of V with d1-dimensions. The hidden state hi of i-th node will be updated as follows: $\alpha_{i,j}=\frac{\exp(\text{LeakReLU}(\mathbf{W}_{2}[\mathbf{x}_{i};\mathbf{x}_{j}]))}{\sum_{k\in\mathcal{N}(i)}\exp(\text{LeakReLU}(\mathbf{W}_{2}[\mathbf{x}_{i};\mathbf{x}_{k}]))}$, $\mathbf{h}_{i}=\text{ReLU}(\sum_{j}^{m+n}\alpha_{i,j}(\mathbf{W}_{3}\mathbf{x}_{j}))$, (21) $\binom{22}{2}$ (22) ... where N (i) denotes the neighbors of i-th node, W2 and W3 are learnable parameters. In short, we denote the graph encoding as follows: H = GAT(G, X). (23) ## A.4 Detailed Gib-Guided Feature Refinement Introduction to GIB Here, we provide more background information about the GIB principle. Given the original graph G, and the target Y , the goal of representation learning is to obtain the compressed graph G− which is maximally informative ![12_image_0.png](12_image_0.png) $$(19)$$ w.r.t Y (i.e., sufficiency, I(*G, Y* ) = I(G−, Y )), and without any noisy information (i.e., minimality, I(G−, G) − I(G−, Y ) = 0), as indicated in Fig. 11. To encourage the information compressing process to focus on the target information, GIB was proposed to enforce an upper bound Ic to the information flow from the original graph to the compressed graph, by maximizing the following objectives: max G− I(G −, Y ) *s.t. I(G* −, G) ≤ Ic . (24) Eq. (24) implies that a compressed graph can improve the generalization ability by ignoring irrelevant distractors in the original graph. By using a Lagrangian objective, GIB allows the G− to be maximally expressive about Y while being maximally compressive about G by: max G− I(G −, Y ) − *βI(G* −*, G),* (25) where β is the Lagrange multiplier. For the sake of consistency with the main body of the paper, the objective can be rewritten to: min G−−I(G −, Y ) + βI(G −*, G).* (26) However, the GIB objective in Eq. (26) is notoriously hard to optimize due to the intractability of mutual information and the discrete nature of irregular graph data. By assuming that there is no information loss in the encoding process (Tian et al., 2020), the graph representation z of G− is utilized to optimize the GIB objective in Eq. (1), leading to −I(G−, Y ) ∼ −I(z, G), I(G−, G) ∼ I(z, Y ). Therefore, the Eq. (26) can be computed as: −I(G −, Y ) + βI(G −, G) ∼ −I(z, Y ) + βI(z*, G).* (27) Attention Operation for Node Filtering and Edge Adjusting In Section 3.3, we utilize the l-order context to determine whether a node should be filtered or an edge should be adjusted since the nodes and edges in a graph have local dependence, ![13_image_0.png](13_image_0.png) as shown in Fig. 12. Here, we give a detail of the calculation for the Att(·) operation in Eq. (8) and Eq. (9). In Eq. (8), the attention operation can be computed as: $$\alpha_{i,k}^{v}=\frac{\exp(\mathbf{W}_{4}[\mathbf{h}_{i};\mathbf{h}_{k}])}{\sum_{c\in\Phi(\{v_{i},\varphi(v_{i})\})}\exp(\mathbf{W}_{4}[\mathbf{h}_{i};\mathbf{h}_{c}])}\tag{28}$$ $$\mathbf{r}_{i}^{v}=\text{Tanh}(\sum_{k\in\Phi(\{v_{i},\varphi(v_{i})\})}\alpha_{i,k}^{v}(\mathbf{W}_{5}\mathbf{h}_{k}))$$ where $\Phi(\{v_{i},\varphi(v_{i})\})$ is a function to retrieve the where Φ({vi, φ(vi)}) is a function to retrieve the index of a node in a set. Similarly, we consider the $l$-order context to calculate the $\mathbf{r}_{i,j}^{e}$ in Eq.(9): $$\alpha_{i,j,k}^{e}=\frac{\exp(\mathbf{W}_{6}[\mathbf{h}_{i};\mathbf{h}_{j};\mathbf{h}_{k}])}{\exp(\mathbf{W}_{6}[\mathbf{h}_{i};\mathbf{h}_{j};\mathbf{h}_{c}])}\tag{29}$$ $$\sum_{c\in\Phi(\{v_{i},\varphi(v_{i}),v_{j},\varphi(v_{j})\})}\alpha_{i,j,k}^{e}(\mathbf{W}_{7}\mathbf{h}_{k}))$$ $$\mathbf{r}_{i,j}^{v}=\text{Tanh}(\sum_{k\in\Phi(\{v_{i},\varphi(v_{i}),v_{j},\varphi(v_{j})\})}\alpha_{i,j,k}^{e}(\mathbf{W}_{7}\mathbf{h}_{k}))$$ Detailed GIB Optimization First, we examine the second term I(z, G) in Eq. (11). Same as Sun et al. (2022), we employ variational inference to compute a variational upper bound for I(z, G) as follow: $$I(\mathbf{z},G)\leq\int p(\mathbf{z}|G)\log{\frac{p(\mathbf{z}|G)}{r(\mathbf{z})}}d\mathbf{z}d G\,,$$ dz*dG ,* (30) where r(z) is the variational approximation to the prior distribution p(z) of z, which is treated as a fixed d1-dimensional spherical Gaussian as in Alemi et al. (2017), i.e., r(z) = N(z|0, I). We use reparameterization trick ((Kingma and Welling, 2014)) to sample z from the latent distribution according to p(z|G), i.e., p(z|G) = N(µz,σz), where µz and σz is the mean vector and the diagonal co-variance matrix of z, which can be computed as: µz = FFN(a) ; σz = Softplus(FFN(a), (31) where a is the context feature of G− obtained from Eq.(10). z is sampled by z = µz + σz · ε, where ε ∈ N(0, I). We could reach the following optimization to approximate I(z, G): I(z, G) = KL(p(z|G)||r(z)), (32) where KL(*·||·*) is the Kullback Leibler (KL) divergence (Hershey and Olsen, 2007). Then, we examine the first term in Eq. (11), which encourages z to be informative to Y . We expand I(z, Y ) as: $$-I(\mathbf{z},Y)\leq-\int p(Y,\mathbf{z})\log q(Y|\mathbf{z})dYd\mathbf{z}+H(Y)\tag{33}$$ $$:=\mathcal{L}_{\mathbf{CE}}(q(Y|\mathbf{z}),Y)\,,$$ where $q(Y|\mathbf{z})$ is the variational approximation of where q(Y |z) is the variational approximation of the true posterior p(Y, z). Eq. (33) indicates that minimizing −I(z, Y ) is achieved by minimization of the classification loss between Y and z, we model it as an MLP classifier with parameters. The MLP classifier takes z as input and outputs the predicted label. ![13_image_1.png](13_image_1.png) $\downarrow$ . ## A.5 Detailed Latent Multimodal Topic Modeling Visual BoW Feature Extraction As mentioned in Section 2.3, we represent image I with visual BoW (VBoW) features. Here, we introduce how to extract VBoW features from an image. We compute the objective-level visual words in the following four steps, as shown in Fig. 13: - **Step 1: Detecting Objective Proposal**. We first employ a Faster-RCNN (Ren et al., 2015) as an objective detector to extract all the objective proposals in the training dataset. - **Step 2: Featuring Objective Proposal**: We use a pre-trained vision language model to obtain the feature descriptors (vectors) of each objective proposal. - **Step 3: Building the Codebook**: After obtaining the feature vectors, these feature vectors are clustered by a kmeans algorithm, where the number of clusters is set to 2,000. Cluster centroids are taken as visual words. - **Step 4: Representing Images**: Similar to the extraction of Bag-of-word (BoW) features for text representation, we build the Visual Bagof-Word (VBoW) features for images. Specifically, using this codebook, each feature vector of the objective proposal in an image is replaced with the id of the nearest learned visual word. Detailed Latent Topic Modelling Optimization In Section 2.3, we directly provide the optimal objective. In the following, we introduce how to optimize LAMO concretely. First of all, the prior parameters of θ, µ and σ are estimated from the input data and defined as: µ = fµ(f(H)), logσ = fσ(f(H)), (34) where H is the contextualized representation obtained from CMG, f(·) is an aggregation function, and f∗(·) is a neural perceptron that linearly transforms inputs, activated by a non-linear transformation. Note that we can generate the latent topic variable ϖ from p(θ|*T, I*) by sampling, i.e., ϖ = µ + σ · ε, where ε ∈ N (0, I). Then we employ Gaussian softmax to draw topic distribution θ: θ = Softmax(FFN(ϖ)) (35) Similar to previous neural topic models only for handling text (Bianchi et al., 2021), we consider autoregressively reconstructing the textual and visual BoW features of input by learned topic distribution θ: $$p({\boldsymbol{b}}_{i}^{T}|{\boldsymbol{\chi}},{\boldsymbol{\theta}})=\mathrm{Softmax}({\boldsymbol{\theta}}\cdot{\boldsymbol{\chi}}|{\boldsymbol{b}}_{<i}^{T})\,,$$ <i), (36) $$p({\boldsymbol{b}}_{i}^{I}|{\boldsymbol{\psi}},{\boldsymbol{\theta}})=\mathrm{Softmax}({\boldsymbol{\theta}}\cdot{\boldsymbol{\psi}}|{\boldsymbol{b}}_{<i}^{I})\,.$$ <i). (37) The objective function of latent multimodal topic modeling is to maximize the evidence lower bound (ELBO), as derived as follows: L*LAMO* =KL(q(θ)||p(θ|*T, I*)) − Eq(θ)[p(b T|θ, χ)] (38) − Eq(θ)[p(b I|θ, ψ)] =LKL + LRecT + L*RecI* , where q(θ) is the prior probability of θ, set as a standard Normal prior N (0, I). ## B Extended Experiments Setting B.1 Baselines We compare our model with two categories of baseline systems. Text-based Methods, which only leverage the texts of MRE data. - **BERT** (Devlin et al., 2019) is only fine-tuned on the dateset by Zheng et al. (2021a). - **PCNN** (Zeng et al., 2015) leverages external knowledge graphs to extract relations in a distantly supervised manner, which is employed in MRE dataset by Zheng et al. (2021a). - MTB (Soares et al., 2019) is a RE-oriented pretraining model based on BERT, which is applied in MRE dataset by Zheng et al. (2021a). - **DP-GCN** (Yu et al., 2020) propose dynamical pruning GCN for relation extraction, we reimplemented the framework and apply it to the MRE dataset. Multimodal Methods, which utilize the additional visual information to enhance the textual RE. - **BERT+SG** (Zheng et al., 2021a) simply concatenate the textual representation with visual features extracted. - **MEGA** (Zheng et al., 2021a) leverage the alignment between textual and visual graphs to learn better semantic representation for MRE. - **VisualBERT** (Li et al., 2019) is a singlestream structure via self-attention to discover implicit alignments between language and vision, which is then fine-tuned on the MRE dataset by Chen et al. (2022a). - **ViLBERT** (Lu et al., 2019) consider employing two parallel streams for visual and language processing, which is then fine-tuned on the MRE dataset by Chen et al. (2022a). - **HVPNet** (Chen et al., 2022b) propose to incorporate visual features into each self-attention layer of BERT. - **MKGformer** (Chen et al., 2022a) introduce a hybrid transformer architecture, in which the underlying two encoders are utilized to capture basic textual and visual features, and the upper encoder to model the interaction features between image and text. - RDS (Xu et al., 2022) design a data discriminator via reinforcement learning to determine whether data should utilize additional visual information for the relation inference. ## B.2 Calculating Text-Image Relevance In Fig. 10 we measure the relevance of input textimage pairs. Technically, we adopt the CLIP model to yield a vision-language matching score. Instead of directly feeding the whole picture and sentence into CLIP, we take a finer-grained method. Because in the MRE data, the picture and sentence pair collected from social media sources comes with low correlations, and if directly measuring their relevance at the instance level, our preliminary experiment shows that the highest text-image relevance score by CLIP is only 45%. Thus, we measure the picture and sentence pair by matching their correspondence of the VSG and TSG structures. We take their object nodes and the attribute nodes at the treatment targets, and calculate the vision-language pairs with CLIP at the node level: Ψ(*I, T*) = 1Z X i,j CLIP(x I i, x T j|G I, GT), (39) where Z is the normalization term. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 ✓ A2. Did you discuss any potential risks of your work? 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4 ✓ B1. Did you cite the creators of artifacts you used? 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 4 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 4 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4.2 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4.1 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
shi-huang-2023-multiemo
{M}ulti{EMO}: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations
https://aclanthology.org/2023.acl-long.824
Emotion Recognition in Conversations (ERC) is an increasingly popular task in the Natural Language Processing community, which seeks to achieve accurate emotion classifications of utterances expressed by speakers during a conversation. Most existing approaches focus on modeling speaker and contextual information based on the textual modality, while the complementarity of multimodal information has not been well leveraged, few current methods have sufficiently captured the complex correlations and mapping relationships across different modalities. Furthermore, existing state-of-the-art ERC models have difficulty classifying minority and semantically similar emotion categories. To address these challenges, we propose a novel attention-based correlation-aware multimodal fusion framework named MultiEMO, which effectively integrates multimodal cues by capturing cross-modal mapping relationships across textual, audio and visual modalities based on bidirectional multi-head cross-attention layers. The difficulty of recognizing minority and semantically hard-to-distinguish emotion classes is alleviated by our proposed Sample-Weighted Focal Contrastive (SWFC) loss. Extensive experiments on two benchmark ERC datasets demonstrate that our MultiEMO framework consistently outperforms existing state-of-the-art approaches in all emotion categories on both datasets, the improvements in minority and semantically similar emotions are especially significant.
# Multiemo: An Attention-Based Correlation-Aware Multimodal Fusion Framework For Emotion Recognition In Conversations Tao Shi Tsinghua Shenzhen International Graduate School, Tsinghua University shitao21@mails.tsinghua.edu.cn ## Abstract Emotion Recognition in Conversations (ERC) is an increasingly popular task in the Natural Language Processing community, which seeks to achieve accurate emotion classifications of utterances expressed by speakers during a conversation. Most existing approaches focus on modeling speaker and contextual information based on the textual modality, while the complementarity of multimodal information has not been well leveraged, few current methods have sufficiently captured the complex correlations and mapping relationships across different modalities. Furthermore, existing state-ofthe-art ERC models have difficulty classifying minority and semantically similar emotion categories. To address these challenges, we propose a novel attention-based correlation-aware multimodal fusion framework named MultiEMO, which effectively integrates multimodal cues by capturing cross-modal mapping relationships across textual, audio and visual modalities based on bidirectional multi-head crossattention layers. The difficulty of recognizing minority and semantically hard-to-distinguish emotion classes is alleviated by our proposed Sample-Weighted Focal Contrastive (SWFC) loss. Extensive experiments on two benchmark ERC datasets demonstrate that our MultiEMO framework consistently outperforms existing state-of-the-art approaches in all emotion categories on both datasets, the improvements in minority and semantically similar emotions are especially significant. ## 1 Introduction Emotion Recognition in Conversations (ERC) is an emerging task in the field of Natural Language Processing (NLP), which aims to identify the emotion of each utterance in a conversation based on textual, audio and visual cues of the speaker. ERC has attracted an enormous amount of attention from both academia and industry, due to its widespread ∗Corresponding author. Shao-Lun Huang∗ Tsinghua Shenzhen International Graduate School, Tsinghua University shaolun.huang@sz.tsinghua.edu.cn potentials in social media analysis (Chatterjee et al., 2019), health care services (Hu et al., 2021b), empathetic systems (Jiao et al., 2020), and so on. To solve the problem of ERC, numerous approaches have been proposed. The majority of existing works concentrate on modeling speaker dependencies and conversational contexts (Poria et al., 2017; Hazarika et al., 2018a,c; Majumder et al., 2019; Ghosal et al., 2019, 2020; Shen et al., 2021; Hu et al., 2021a,b; Li et al., 2021a; Joshi et al., 2022; Lee and Lee, 2022), while there still exist several unsolved challenges: (1) **The complementarity of multimodal information has not** been well exploited. Apart from rich information contained in the textual modality, the tone and intonation of the speaker can indicate the intensity of the emotion, facial expressions of interlocutors are also able to explicitly reveal emotional tendencies (Li et al., 2022). Figure 1 shows an example where the complementarity of acoustic and visual signals in addition to the textual modality is essential for an accurate emotion classification. Nevertheless, most existing approaches focus on the textual modality of utterances or simply utilize feature concatenation as the multimodal fusion mechanism (Poria et al., 2017; Hazarika et al., 2018a,c; Majumder et al., 2019; Zhang and Chai, 2021; Li et al., 2022) without modeling the complicated correlations and mapping relationships across textual, audio and visual modalities, which results in an inadequate integration of multimodal cues. (2) **Unsatisfactory** performances in minority emotion classes. Existing benchmark datasets in ERC, such as IEMOCAP (Busso et al., 2008) and MELD (Poria et al., 2019), suffer from the problem of imbalanced classes. As illustrated in Figure 2, both MELD and IEMOCAP are class-imbalanced, especially in MELD, where the majority class *neutral* takes up a much larger proportion than minority classes *disgust* and *fear*. Current state-of-the-art approaches fail to solve the class imbalance problem and have poor perfor14752 mances in minority emotions. (3) **The difficulty** of distinguishing between semantically similar emotions. It remains to be a challenging task to correctly classify different emotions that are semantically related, such as *disgust* and *anger* in MELD, since they share similar underlying cognitive, affective and physiological features, and tend to be expressed by speakers in similar contexts. To address the above problems, in this paper, we propose a novel attention-based correlationaware multimodal fusion framework named MultiEMO. Firstly, unimodal feature extraction and context modeling are performed for each modality, in which we introduce a visual feature extractor named VisExtNet based on a Multi-task Cascaded Convolutional Network (MTCNN) (Zhang et al., 2016) and a VGGFace2 (Cao et al., 2018) pretrained ResNet-101 (He et al., 2016). VisExtNet accurately captures visual cues of utterance videos by extracting emotion-rich facial expressions of interlocutors without modeling redundant scenerelated visual information. Secondly, we propose a multimodal fusion model called MultiAttn to effectively integrate multimodal information based on bidirectional multi-head cross-attention layers (Vaswani et al., 2017), which successfully captures complex cross-modal correlations and mapping relationships across contextualized textual, audio and visual features. Thirdly, in order to mitigate the difficulty of classifying minority and semantically similar emotion classes, enlightened by Focal Contrastive loss (Zhang et al., 2021), a Sample-Weighted Focal Contrastive (SWFC) loss is proposed, in which we assign more focus to hard-to-classify minority classes and make sample pairs with different emotion labels mutually exclusive with each other such that semantically similar emotions can be better distinguished. In addition, we utilize a Soft Hirschfeld-GebeleinRényi (Soft-HGR) loss (Wang et al., 2019) to maximize the correlations across multimodal-fused textual, audio and visual feature representations extracted from MultiAttn. Finally, extensive experiments are conducted on two ERC benchmark datasets, MELD and IEMOCAP. Experimental results demonstrate the effectiveness and superiority of our proposed MultiEMO framework compared with existing state-of-the-art approaches, the improvements in minority and semantically similar emotion categories are especially remarkable. The main contributions of this work can be sum- ![1_image_0.png](1_image_0.png) marized as follows: - We propose a novel visual feature extraction network named VisExtNet, which effectively captures visual cues of interlocutors without modeling redundant scene information. - We design a multimodal fusion model called MultiAttn based on bidirectional multi-head cross-attention layers, which successfully models the complicated correlations across textual, audio and visual modalities. - We innovatively introduce a SWFC loss to address the difficulty of classifying minority and semantically similar emotion classes. - We conduct extensive experiments on MELD and IEMOCAP, results show that our proposed MultiEMO framework achieves stateof-the-art performances on both datasets, the improvements in minority and semantically similar emotions are especially notable. ## 2 Related Work 2.1 Recurrence-Based Models (Poria et al., 2017) proposes a Long Short-Term Memory (LSTM) based network named BCLSTM to extract contextual information from dialogues. Interactive Conversational memory Network (ICON) is proposed by (Hazarika et al., 2018b), which models self- and inter-speaker influences based on gated recurrent units (GRUs). (Majumder et al., 2019) introduces a DialogueRNN to model speaker states and contextual information using GRUs. (Lu et al., 2020) proposes a GRUbased Iterative Emotion Interaction Network (IterativeERC), which models emotion interactions by iteratively using predicted emotion labels. (Ma et al., 2022) designs a Multi-View Network (MVN) to model emotion representations of queries from both word- and utterance-level views based on the attention mechanism and bidirectional GRUs. ![2_image_0.png](2_image_0.png) ## 2.2 Graph-Based Models (Ghosal et al., 2019) proposes a Dialogue Graph Convolutional Network (DialogueGCN) to model the conversational context with a directed graph. (Zhang et al., 2019) designs a graph-based model named ConGCN to capture both context- and speaker-sensitive dependencies. (Shen et al., 2021) introduces a Directed Acyclic Neural Network (DAG-ERC) to capture intrinsic structures of conversations using a directed acyclic graph (DAG). A contextualized Graph Neural Network (GNN) based model named COGMEN is proposed by (Joshi et al., 2022), which exploits both local- and global-level information in a conversation. ## 2.3 Transformer-Based Models (Li et al., 2020) introduces a transformer-based context-sensitive model named HiTrans based on two hierarchical transformers. (Li et al., 2022) designs a transformer-based model called EmoCaps to extract emotional tendencies from multimodal features. CoMPM is introduced by (Lee and Lee, 2022), which consists of a transformer-encoder based context embedding module (CoM) and a pretrained memory module (PM). ## 2.4 Multimodal-Based Models Multimodal Fused Graph Convolutional Network (MMGCN) is proposed by (Hu et al., 2021b), which leverages both multimodal information and long-distance contexts. (Li et al., 2021b) introduces a quantum-like framework named QMNN to jointly perform multimodal fusion and conversational context modeling. (Chudasama et al., 2022) designs a multimodal fusion network named M2FNet based on multi-head attention layers to capture crossmodal interactions. A unified framework named UniMSE is proposed by (Hu et al., 2022), in which multimodal representations are fused by injecting acoustic and visual signals into the T5 model. ![2_image_1.png](2_image_1.png) ## 3 Methodology 3.1 Problem Definition Given a dialogue which consists of n utterances u1, u2*, . . . ,* un uttered by speakers Su1 , Su2 , . . . , Sun , the goal of ERC is to predict the emotion label of each utterance in the dialogue from the pre-defined k-class emotion category set Y. Each utterance has its corresponding textual (t), audio (a) and visual (v) modalities, which can be illustrated as follows: $$\mathbf{u}_{i}=\{\mathbf{u}_{i}^{t},\mathbf{u}_{i}^{a},\mathbf{u}_{i}^{v}\},i\in\{1,\ldots,n\}\tag{1}$$ ## 3.2 Model Overview The overall framework of MultiEMO is illustrated in Figure 3, which is made up of four key components: Unimodal Feature Extraction, Context Modeling, Multimodal Fusion and Emotion Classification. In the subsequent subsections, we discuss each module in detail. ## 3.3 **Unimodal Feature Extraction And Context** Modeling 3.3.1 Textual Modality Existing research often adopts two different paradigms to extract contextualized textual features: (1) **Two-stage paradigm** (Li et al., 2020; Chudasama et al., 2022): Text sequences are first fed into a pre-trained language model to learn utterance-level local textual representations and then to another transformer to generate dialoguelevel global textual features by incorporating contextual information in the conversation. (2) **Onestage paradigm** (Kim and Vossen, 2021; Lee and Lee, 2022): Local utterance-level information and global dialogue-level conversational contexts are jointly captured through fine-tuning a single pretrained language model. We have explored both approaches and experimental results demonstrate ![3_image_0.png](3_image_0.png) that the one-stage paradigm slightly outperforms the two-stage paradigm. For the sake of computational efficiency, we adopt the one-stage paradigm. To be specific, following (Kim and Vossen, 2021), each textual utterance u t i is prefixed with the speaker name of the utterance Sui , such that speaker information can be effectively encoded. Then, the input sequence for the i-th utterance is composed of three segments to incorporate contextual information: preceding contextual utterances {u t1 , . . . , u t i−1}, current utterance u t i , and succeeding contextual utterances {u t i+1*, . . . ,* u tn}. These three segments are concatenated and separated by [SEP] before being fed into a pre-trained RoBERTa model and a subsequent fully-connected layer, with the embedding of the first hidden state [CLS] utilized as the learned contextualized 256dimensional textual representation c t i for u t i . ## 3.3.2 Audio Modality Audio Feature Extraction: We follow (Majumder et al., 2019) and use a OpenSMILE (Eyben et al., 2010) to extract a 6373-dimensional feature representation for each utterance audio, then a fully-connected layer is adopted to obtain a 512dimensional feature h a i for each input audio u a i . Audio Context Modeling: After unimodal audio feature extraction, we employ a DialogueRNN (Majumder et al., 2019) to capture a contextualized ![3_image_1.png](3_image_1.png) audio feature c a i with 256 dimensions for each audio clip. The speaker-modeling nature of DialogueRNN makes it effective in integrating audio cues from different speakers (Li et al., 2022). ## 3.3.3 Visual Modality Visual Feature Extraction: Most existing works (Hazarika et al., 2018a,c; Majumder et al., 2019; Zhang and Chai, 2021; Li et al., 2022) utilize a 3D-CNN (Tran et al., 2015) to capture visual features from video clips. Recently, (Chudasama et al., 2022) proposes a dual network based on a Multitask Cascaded Convolutional Network (MTCNN) (Zhang et al., 2016), which demonstrates to be effective. Both approaches encode not only facial expressions of interlocutors but also scene-related information for each utterance clip. However, we ![4_image_0.png](4_image_0.png) argue that these two approaches are flawed because visual surrounding information is redundant. Firstly, there are no explicit correlations between scene information and the emotion of the speaker, because dialogues that happen in the same scene do not tend to share similar emotional tendencies. To illustrate, a large proportion of conversations in MELD take place at home, but the emotions of these conversations vary significantly. In addition, the scene normally remains unchanged throughout the conversation. Therefore, capturing scenerelated visual information for each utterance is unnecessary and may lead to a wrong understanding of the speaker's actual emotional tendency due to the influence of irrelevant scene information. To address this problem, we propose a novel visual feature extractor named VisExtNet, which is made up of a MTCNN and a ResNet-101 (He et al., 2016) pre-trained on VGGFace2 (Cao et al., 2018). The architecture of VisExtNet is shown in Figure 4. VisExtNet aims to effectively capture visual cues by integrating facial expressions of interlocutors from multiple frames without encoding redundant scene-related information. For an utterance video u v i , visual feature extraction is performed on 20 frames of the utterance clip, with each frame selected using a step of number of frames 20 . Specifically, each frame is first sent into a MTCNN to accurately detect the faces of all interlocutors present in the scene at that frame, each detected face is then passed through a VGGFace2 pretrained ResNet-101 to extract a emotion-rich visual feature vector. The concatenation of facial expression features from all participants is regarded as the visual representation of that frame. The same process is repeated for each of the 20 frames, after which the output features of all frames are average pooled over the frame axis to obtain a 1000dimensional visual feature vector h v i . Visual Context Modeling: Similar to audio context modeling, after visual feature extraction, we utilize another DialogueRNN to learn a 256dimensional contextualized visual representation c v i for each video clip. ## 3.4 Multimodal Fusion Existing literature fails to effectively integrate multimodal information, the complex correlations and mapping relationships across multiple modalities have not been well captured. To tackle this issue, inspired by (Chudasama et al., 2022), we propose a novel multimodal fusion network named MultiAttn based on the bidirectional multi-head cross-attention mechanism (Vaswani et al., 2017), in which querys are generated from one modality while keys and values come from a different modality, and both preceding and succeeding contexts are leveraged when calculating attention distributions. The architecture of MultiAttn is shown in Figure 5. MultiAttn is made up of three components: MultiAttntext, MultiAttnaudio and MultiAttnvisual, each of which aims to integrate one modality with complementary information from the other two modalities. As illustrated in Figure 5, MultiAttntext, MultiAttnaudio and MultiAttnvisual share the same building blocks and only differ in terms of input Query, Key and Value. Thus, for the sake of brevity, we use MultiAttntext to illustrate how multimodal fusion works. MultiAttntext effectively incorporates the textual modality with audio and visual cues through a three-stage approach: (1) MultiAttntext first learns cross-modal correlations and mapping relationships between textual and audio modalities by treating the textual modality as Query and the audio modality as Key and Value for the bidirectional multi-head cross-attention operation; (2) The learned output from the first stage is then utilized as the new Query while the visual modality is regarded as Key and Value for another bidirectional multi-head cross-attention layer to fuse the textual modality with visual cues; (3) Finally, a feed-forward network consisting of two fully-connected layers with a Rectified Linear Unit (ReLU) is adopted, which operates as a key-value memory (Geva et al., 2021). In addition, we employ residual connection and layer normalization over the output of each stage to facilitate the training process. To construct a deeper and more powerful network, we utilize T layers of MultiAttntext, MultiAttnaudio and MultiAttnvisual as the full model architecture of MultiAttn, where the output of each layer is fed into the next layer as the new Query. Given the Queries of all utterances F t (j−1)= [f t (j−1) 1*, . . . ,* f t (j−1) n] T learned from layer j − 1 , audio and visual features C a = [c a 1 , . . . , c an] T and C v = [c v1 , . . . , c vn] T, the calculation of MultiAttntext at layer j is illustrated as follows: $$[\mathbf{Q}_{h}^{ta^{(j)}},\mathbf{K}_{h}^{ta^{(j)}},\mathbf{V}_{h}^{ta^{(j)}}]=[\mathbf{F}^{t^{(j-1)}}\mathbf{W}\mathbf{Q}_{h}^{ta^{(j)}},$$ $$\mathbf{C}^{a}\mathbf{W}\mathbf{K}_{h}^{ta^{(j)}},\mathbf{C}^{a}\mathbf{W}\mathbf{V}_{h}^{ta^{(j)}}],h\in\{1,\ldots,H\}\tag{2}$$ $$\mathbf{A}_{h}^{ta^{(j)}}=\text{Softmax}(\frac{\mathbf{Q}_{h}^{ta^{(j)}}\mathbf{K}_{h}^{ta^{(j)}}}{\sqrt{d_{\mathbf{K}_{h}^{ta^{(j)}}}}})\mathbf{V}_{h}^{ta^{(j)}},$$ $$h\in\{1,\ldots,H\}\tag{3}$$ MHta(j)= Cat(A ta(j) 1*, . . . ,* A ta(j) H )WOta(j)(4) F ta(j)= LayerNorm(F t (j−1)+ MHta(j)) (5) [Q tav(j) h, K tav(j) h, V tav(j) h] = [F ta(j)WQtav(j) h , C vWKtav(j) h , C vWV tav(j) h ], h ∈ {1*, . . . , H*} (6) A tav(j) h = Softmax( Q tav(j) h K tav(j) h T )V tav(j) h, qdKtav(j) h h ∈ {1*, . . . , H*} (7) MHtav(j)= Cat(A tav(j) 1*, . . . ,* A tav(j) H )WOtav(j) (8) F tav(j)= LayerNorm(F ta(j)+ MHtav(j)) (9) FFNt (j) 1 = max(0, F tav(j)WFFNt (j) 1 + bFFNt (j) 1 ) (10) FFNt (j) 2 = FFNt (j) 1 WFFNt (j) 2 + bFFNt (j) 2 (11) F t (j)= LayerNorm(F tav(j)+ FFNt (j) 2) (12) Where WQta(j) h , WKta(j) h , WV ta(j) h , WOta(j), WQtav(j) h , WKtav(j) h , WV tav(j) h , WOtav(j)(h ∈ {1*, . . . , H*}), WFFNt (j) 1 , WFFNt (j) 2 are projection matrices, bFFNt (j) 1 and bFFNt (j) 2 are bias parameters, H is the number of attention heads, Cat stands for concatenation. ## 3.5 Emotion Classification After multimodal fusion, the learned multimodalfused textual, audio and visual feature representations f t i , f a i and f v i are concatenated and then sent into a fully-connected layer and a subsequent 2layer Multilayer Perceptron (MLP) with a ReLU. Finally, a Softmax layer is utilized to compute a probability distribution over the emotion category set, where the emotion label with the highest probability is chosen as the prediction yˆi for the i-th utterance. The calculation is illustrated as follows: $$\mathbf{f}_{i}=\mathbf{f}_{i}^{t}\oplus\mathbf{f}_{i}^{a}\oplus\mathbf{f}_{i}^{v}\tag{13}$$ $$\mathbf{z}_{i}=\mathbf{W}^{z}\mathbf{f}_{i}+\mathbf{b}_{z}$$ (14) $$\mathbf{l}_{i}=\max(0,\mathbf{W}^{l}\mathbf{z}_{i}+\mathbf{b}_{l})$$ (15) $$\mathbf{p}_{i}=\text{Softmax}(\mathbf{W}^{smax}\mathbf{l}_{i}+\mathbf{b}_{smax})$$ (16) $$\hat{y}_{i}=\text{argmax}(\mathbf{p}_{i}[t])\tag{17}$$ Where ⊕ denotes concatenation, Wz, Wland W*smax* are weight matrices, bz, bl and b*smax* are bias parameters. Models IEMOCAP Happiness Sadness Neutral Anger Excitement Frustration **Weighted-F1** BC-LSTM 34.43 60.87 51.81 56.73 57.95 58.92 54.95 DialogueRNN 33.18 78.80 59.21 65.28 71.86 58.91 62.75 DialogueGCN 51.87 76.76 56.76 62.26 72.71 58.04 63.16 IterativeERC 53.17 77.19 61.31 61.45 69.23 60.92 64.37 QMNN 39.71 68.30 55.29 62.58 66.71 62.19 59.88 MMGCN 42.34 78.67 61.73 69.00 74.33 62.32 66.22 MVN 55.75 73.30 61.88 65.96 69.50 64.21 65.44 UniMSE - - - - - - 70.66 MultiEMOw/o VisExtNet 65.06 84.80 66.13 67.98 76.16 69.66 71.72 MultiEMOw/o MultiAttn 55.18 78.29 62.06 63.84 73.11 63.98 66.57 MultiEMOw/o SWFC loss 59.88 83.96 66.57 67.03 75.35 70.04 71.08 MultiEMO **65.77 85.49 67.08 69.88 77.31 70.98 72.84** ## 3.6 Training Objectives Given a batch of N samples consisting of M dialogues, where the i-th dialogue contains C(i) utterances, training objectives are defined as follows: SWFC Loss: To alleviate the difficulty of classifying minority and semantically similar emotions, we propose a novel loss function named SampleWeighted Focal Contrastive (SWFC) loss based on the Focal Contrastive loss (Zhang et al., 2021) loss by introducing a sample-weight term and a focusing parameter, in which we assign more importance to hard-to-classify minority classes during the training phase, and make sample pairs with different emotion labels mutually exclusive with each other to maximize inter-class distances, such that semantically similar emotions can be better distinguished. The SWFC loss is defined as follows: $$s_{j,g}^{(i)}=\frac{\exp\left(\mathbf{z}_{i,j}^{\mathrm{T}}\mathbf{z}_{i,g}/\tau\right)}{\sum_{\mathbf{z}_{i,s}\in A_{i,j}}\exp\left(\mathbf{z}_{i,j}^{\mathrm{T}}\mathbf{z}_{i,s}/\tau\right)}\tag{18}$$ $$L_{\mathrm{SWFC}}=-\sum_{i=1}^{M}\sum_{j=1}^{C(i)}(\frac{N}{n_{y_{i,j}}})^{\alpha}\frac{1}{|R_{i,j}|}\sum_{\mathbf{z}_{i,g}\in R_{i,j}}$$ $$(1-s_{j,g}^{(i)})^{\gamma}\log s_{j,g}^{(i)}\tag{19}$$ Where zi,j is the output of the fully-connected layer (Equation 14) for utterance j in dialogue i, Ai,j is the set of features in the batch other than zi,j , yi,j is the label of utterance j in dialogue i, Ri,j = {zi,g ∈ Ai,j |yi,g = yi,j} is the set of positive features that share the same label as zi,j , nyi,j is the count of label yi,j in the batch, α is a sampleweight parameter that controls the degree of focus on minority classes, τ is a temperature parameter that controls the strength of penalties on negative samples, γ is a focusing parameter which forces the model to focus on hard-to-classify examples. Soft-HGR Loss: We utilize a Soft HirschfeldGebelein-Rényi (Soft-HGR) loss (Wang et al., 2019) to maximize the correlations across multimodal-fused textual, audio and visual features extracted from MultiAttn. The Soft-HGR loss is defined as follows: **Lemma 1**.: _Let $\mathbf{P}$ be a positive integer. Then $\mathbf{P}$ is a positive integer._ Proof.: Let $\mathbf{P}$ be a positive integer. Then $\mathbf{P}$ is a positive integer. Proof.: Let $\mathbf{P}$ be a positive integer. Then $\mathbf{P}$ is a positive integer. Proof.: Let $\mathbf{P}$ be a positive integer. Then $\mathbf{P}$ is a positive integer. Proof.: Let $\mathbf{P}$ be a positive integer. Then $\mathbf{P}$ is a positive integer. Proof.: Let $\mathbf{P}$ be a positive integer. Then $\mathbf{P}$ is a positive integer. Proof.: Let $\mathbf{P}$ be a positive integer. Then $\mathbf{P}$ is a positive integer. Proof.: Let $\mathbf{P}$ be a positive integer. Then $\mathbf{P}$ is a positive integer. $$\mathbf{\Sigma}^{8)}$$ sample means and sample covariances. Cross-Entropy Loss: In addition, we adopt a Cross-entropy loss to measure the difference between predicted probabilities and true labels: $$L_{\mathrm{CE}}=-\sum_{i=1}^{M}\sum_{j=1}^{C(i)}\log{\bf p}_{i,j}[y_{i,j}]\qquad(21)$$ Where pi,j is the probability distribution over the emotion classes for utterance j in dialogue i, yi,j is the ground-truth label of utterance j in dialogue i. Full Loss Function: A linear combination of SWFC loss, Soft-HGR loss and Cross-entropy loss is leveraged as the full loss function: $$L_{\text{Train}}=\frac{1}{N}(\mu_1L_{\text{SWFC}}+\mu_2L_{\text{Soft-HGR}}$$ $$+(1-\mu_1-\mu_2)L_{\text{CE}})+\lambda||\theta||_2^2,\ \mu_1,\mu_2\in[0,1]\tag{22}$$ 8. Models MELD Neutral Surprise Fear Sadness Joy Disgust Angry **Weighted-F1** BC-LSTM 73.80 47.70 5.40 25.10 51.30 5.20 38.40 55.90 DialogueRNN 76.23 49.59 0.00 26.33 54.55 0.81 46.76 58.73 DialogueGCN 76.02 46.37 0.98 24.32 53.62 1.22 43.03 57.52 IterativeERC 77.52 53.65 3.31 23.62 56.63 19.38 48.88 60.72 QMNN 77.00 49.76 0.00 16.50 52.08 0.00 43.17 58.00 MMGCN - - - - - - - 58.65 MVN 76.65 53.18 11.70 21.82 53.62 21.86 42.55 59.03 UniMSE - - - - - - - 65.51 MultiEMOw/o VisExtNet 79.16 58.22 24.80 37.61 60.65 31.73 52.08 64.89 MultiEMOw/o MultiAttn 77.72 54.05 21.76 33.10 58.28 24.80 49.98 62.50 MultiEMOw/o SWFC loss 79.51 56.54 20.59 32.96 58.52 25.81 51.23 63.83 MultiEMO **79.95 60.98 29.67 41.51 62.82 36.75 54.41 66.74** Table 2: Experimental results on MELD. The best results are highlighted in bold. "-" means that the results are unavailable from the original paper. Table 3: Experimental results of MultiEMO with different modality settings on IEMOCAP and MELD. Where µ1 and µ2 are tunable hyperparameters, λ is the L2 regularization weight, θ is the set of all trainable parameters. ## 4 Experimental Settings | Modality | IEMOCAP | MELD | |-----------------------|-----------|--------| | Text | 64.48 | 61.23 | | Audio | 38.89 | 33.55 | | Visual | 35.37 | 33.16 | | Text + Audio | 69.18 | 64.21 | | Text + Visual | 67.86 | 63.78 | | Text + Audio + Visual | 72.84 | 66.74 | ## 4.1 Datasets IEMOCAP (Busso et al., 2008): IEMOCAP contains approximately 12 hours of videos of dyadic conversations, which are segmented into 7433 utterances and 151 dialogues. Each utterance is annotated with one of six emotion labels: happiness, sadness, neutral, anger, excitement and frustration. MELD (Poria et al., 2019): MELD is a multi-party dataset with 13708 utterances and 1433 dialogues from the TV series *Friends*. Each utterance is annotated with one of seven emotion categories: anger, disgust, fear, joy, neutral, sadness and surprise. ## 4.2 Baseline Methods BC-LSTM (Poria et al., 2017): BC-LSTM models conversational contexts through bidirectional LSTMs without differentiating different speakers. DialogueRNN (Majumder et al., 2019): DialogueRNN models contextual information and speaker states through three distinct GRUs. DialogueGCN (Ghosal et al., 2019): DialogueGCN captures the context by modeling conversations using a directed graph. IterativeERC (Lu et al., 2020): IterativeERC iteratively uses predicted emotion labels instead of gold emotion labels to model emotion interactions. QMNN (Li et al., 2021b): QMNN captures conversational contexts and conducts multimodal fusion from a novel quantum perspective. MMGCN (Hu et al., 2021b): MMGCN models long-distance conversational contexts by leveraging Graph Convolutional Networks (GCNs). MVN (Ma et al., 2022): MVN effectively captures emotion representations of queries from both wordand utterance-level views. UniMSE (Hu et al., 2022): UniMSE leverages the similarities and complementaries between emotions to achieve better predictions. ## 4.3 Implementation Details Modality Setting: We utilize textual, audio and visual modalities of utterances to conduct experiments on both MELD and IEMOCAP. Hyperparameter Settings: (1) Dataset-specific settings: Since MELD is significantly more classimbalanced than IEMOCAP, the batch size is designed to be 64 on IEMOCAP and 100 on MELD. (2) Dataset-generic settings: The number of training epochs is 100, the optimizer is Adam (Kingma and Ba, 2015) with β1 = 0.9 and β2 = 0.99, the learning rate is initialized with 0.0001 and decays by 0.95 after every 10 epochs, the L2 regularization weight λ is 0.00001. To avoid overfitting, we apply Dropout (Srivastava et al., 2014) layers with a dropout rate of 0.1. (3) Hyperparameters in MultiEMO: The number of layers T in MultiAttn is tuned to be 6, the temperature parameter τ , the sample-weight parameter α and the focusing parameter γ in the SWFC loss are designed to be 0.8, 0.8 and 2 respectively, the combining coefficients µ1 and µ2 in the full training loss function LTrain are tuned to be 0.4 and 0.3 respectively. Evaluation Metrics: We use the Weighted-average F1 score (Weighted-F1) for model evaluations. ## 5 Results And Analysis 5.1 Comparison With Baseline Models The comparisons between MultiEMO and existing state-of-the-art approaches on IEMOCAP and MELD are shown in Table 1 and Table 2 respectively. Experimental results demonstrate that MultiEMO achieves the new state-of-the-art performances on both datasets and outperforms existing approaches across all emotion categories, with significant improvements in minority and semantically similar classes. Specifically, on IEMOCAP, MultiEMO surpasses MVN by 17.97% Weighted-F1 in the minority class *Happiness* and achieves relative Weighted-F1 improvements of 8.49% and 10.54% in two similar classes *Sadness* and *Frustration* respectively; On MELD, MultiEMO gains a remarkable 153.59% relative improvement in minority emotion *Fear*, and outperforms the previous best baselines in semantically-similar emotion pairs Anger and *Disgust* by 11.31% and 68.12%. ## 5.2 Different Modality Settings The comparison of MultiEMO with different modality settings on IEMOCAP and MELD is illustrated in Table 3. From Table 3 we can see that the textual modality of utterances plays a major role in ERC, while the complementary cues from audio and visual modalities can bring considerable improvements over the text-based MultiEMO. ## 5.3 Ablation Study To study the contributions of different components in MultiEMO to model performances, we conduct ablation studies on both IEMOCAP and MELD, the results are shown in Table 1 and Table 2. Impact of VisExtNet: To study the effect of VisExtNet, we implement MultiEMOw/o VisExtNet, in which the proposed VisExtNet is replaced by a 3D-CNN. Experimental results show that the performances of MultiEMOw/o VisExtNet decrease in all emotion categories on both IEMOCAP and MELD, with a more notable decline on MELD, since the complicated multi-party conversations in MELD make it more challenging for a 3D-CNN to accurately capture visual cues. The inferior performances of MultiEMOw/o VisExtNet on both datasets prove the effectiveness of VisExtNet. Impact of MultiAttn: To analyze the impact of MultiAttn, we implement MultiEMOw/o MultiAttn, where we replace MultiAttn with feature concatenation to fuse contextualized multimodal features. As shown in Table 1 and Table 2, the performances of MultiEMOw/o MultiAttn fall sharply in all emotion classes of IEMOCAP and MELD, which proves the importance and superiority of capturing crossmodal correlations and dependencies across textual, audio and visual modalities using MultiAttn. Impact of SWFC Loss: To study the contribution of SWFC loss, we implement another variant MultiEMOw/o SWFC loss by removing the SWFC loss part from the training loss function. Experimental results demonstrate that the performances of MultiEMOw/o SWFC loss drop considerably on both IEMOCAP and MELD, the declines in minority and semantically similar emotion classes are remarkably striking, while the decreases in majority classes are merely marginal. In addition, the degree of decline is more noticeable on MELD since MELD is significantly more class-imbalanced than IEMOCAP. The results of MultiEMOw/o SWFC loss prove the effectiveness of SWFC loss in mitigating the difficulty of classifying minority and semantically similar emotion categories. ## 5.4 Case Study A case study is illustrated in Appendix A.1. ## 6 Conclusion In this paper, we propose a novel attention-based correlation-aware multimodal fusion framework named MultiEMO for the task of ERC, in which we design a visual feature extractor VisExtNet to accurately capture emotion-rich visual cues and introduce a multimodal fusion model MultiAttn to effectively model the cross-modal interactions and mapping relationships across multiple modalities. Furthermore, the difficulty of classifying minority and semantically similar emotions is mitigated by our proposed SWFC loss. Extensive experiments on IEMOCAP and MELD demonstrate the effectiveness and superiority of MultiEMO. ## Limitations Although our proposed MultiEMO framework has achieved state-of-the-art performances on both IEMOCAP and MELD, there are some limitations with this work: - Our proposed visual feature extractor VisExtNet does not distinguish between speakers and irrelevant people in the scene, which can be problematic in some scenarios. For instance, one scene in MELD is the cafeteria, where a lot of background actors sit and drink coffee. The facial expressions of these background people have no impact on the emotion of the speaker since they do not participant in the conversation. However, VisExtNet captures visual features of everyone appeared in the cafeteria with no differentiation, which may lead to a wrong comprehension of the speaker's emotional tendency due to the effects of facial expressions from irrelevant people. We plan to explore effective ways to distinguish between interlocutors and irrelevant people in the scene in our future work. - The effects of hyperparameters in the SWFC loss (temperature parameter τ , sample-weight parameter α and focusing parameter γ) on model performances have not been fully studied, which will be thoroughly analyzed in our future research. - Due to the class imbalanced issue with MELD, the SWFC loss requires a large batch size on MELD to ensure that for each training sample there exists at least one positive pair in the batch, which can be computationally expensive. We will investigate effective approaches to tackle this challenge in our future research. - Even though MultiEMO has achieved remarkable improvements in minority emotion categories, the performances of MultiEMO in minority emotions are still worse than majority classes. How to further improve performances in low-resource emotion classes will be explored in the future. ## Ethics Statement The significant improvements in classifying minority emotion categories brought by our method can make MultiEMO a powerful tool in psychopathological fields such as depression detection, where minority emotions sadness, *fear* and *anger* are important early indicators of depression (O'Connor et al., 2002). ## Acknowledgements The research of Shao-Lun Huang is supported in part by National Key R&D Program of China under Grant 2021YFA0715202 and the Shenzhen Science and Technology Program under Grant KQTD20170810150821146. ## References Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. *Language resources* and evaluation, 42(4):335–359. Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. 2018. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pages 67–74. IEEE. Ankush Chatterjee, Kedhar Nath Narahari, Meghana Joshi, and Puneet Agrawal. 2019. SemEval-2019 task 3: EmoContext contextual emotion detection in text. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 39–48, Minneapolis, Minnesota, USA. Association for Computational Linguistics. Vishal Chudasama, Purbayan Kar, Ashish Gudmalwar, Nirmesh Shah, Pankaj Wasnik, and Naoyuki Onoe. 2022. M2fnet: multi-modal fusion network for emotion recognition in conversation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4652–4661. Florian Eyben, Martin Wöllmer, and Björn Schuller. 2010. Opensmile: The munich versatile and fast open-source audio feature extractor. In Proceedings of the 18th ACM International Conference on Multimedia, MM '10, page 1459–1462, New York, NY, USA. Association for Computing Machinery. Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are keyvalue memories. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5484–5495, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Deepanway Ghosal, Navonil Majumder, Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020. COSMIC: COmmonSense knowledge for eMotion identification in conversations. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2470–2481, Online. Association for Computational Linguistics. Deepanway Ghosal, Navonil Majumder, Soujanya Poria, Niyati Chhaya, and Alexander Gelbukh. 2019. DialogueGCN: A graph convolutional neural network for emotion recognition in conversation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 154–164, Hong Kong, China. Association for Computational Linguistics. Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, Erik Cambria, and Roger Zimmermann. 2018a. Icon: Interactive conversational memory network for multimodal emotion detection. In Proceedings of the 2018 conference on empirical methods in natural language processing, pages 2594–2604. Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, Erik Cambria, and Roger Zimmermann. 2018b. ICON: Interactive conversational memory network for multimodal emotion detection. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2594–2604, Brussels, Belgium. Association for Computational Linguistics. Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. 2018c. Conversational memory network for emotion recognition in dyadic dialogue videos. In Proceedings of the conference. Association for Computational Linguistics. North American Chapter. Meeting, volume 2018, page 2122. NIH Public Access. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on* computer vision and pattern recognition, pages 770– 778. Dou Hu, Lingwei Wei, and Xiaoyong Huai. 2021a. DialogueCRN: Contextual reasoning networks for emotion recognition in conversations. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7042–7052, Online. Association for Computational Linguistics. Guimin Hu, Ting-En Lin, Yi Zhao, Guangming Lu, Yuchuan Wu, and Yongbin Li. 2022. UniMSE: Towards unified multimodal sentiment analysis and emotion recognition. In *Proceedings of the 2022* Conference on Empirical Methods in Natural Language Processing, pages 7837–7851, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jingwen Hu, Yuchen Liu, Jinming Zhao, and Qin Jin. 2021b. MMGCN: Multimodal fusion via deep graph convolution network for emotion recognition in conversation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5666–5675, Online. Association for Computational Linguistics. Wenxiang Jiao, Michael Lyu, and Irwin King. 2020. Real-time emotion recognition via attention gated hierarchical memory network. *Proceedings of the* AAAI Conference on Artificial Intelligence, 34:8002– 8009. Abhinav Joshi, Ashwani Bhat, Ayush Jain, Atin Singh, and Ashutosh Modi. 2022. COGMEN: COntextualized GNN based multimodal emotion recognitioN. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4148–4164, Seattle, United States. Association for Computational Linguistics. Taewoon Kim and Piek Vossen. 2021. Emoberta: Speaker-aware emotion recognition in conversation with roberta. *arXiv preprint arXiv:2108.12009*. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Joosung Lee and Wooin Lee. 2022. CoMPM: Context modeling with speaker's pre-trained memory tracking for emotion recognition in conversation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5669–5679, Seattle, United States. Association for Computational Linguistics. Jiangnan Li, Zheng Lin, Peng Fu, and Weiping Wang. 2021a. Past, present, and future: Conversational emotion recognition through structural modeling of psychological knowledge. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 1204–1214, Punta Cana, Dominican Republic. Association for Computational Linguistics. Jingye Li, Donghong Ji, Fei Li, Meishan Zhang, and Yijiang Liu. 2020. HiTrans: A transformer-based context- and speaker-sensitive model for emotion detection in conversations. In *Proceedings of the 28th* International Conference on Computational Linguistics, pages 4190–4200, Barcelona, Spain (Online). International Committee on Computational Linguistics. Qiuchi Li, Dimitris Gkoumas, Alessandro Sordoni, JianYun Nie, and Massimo Melucci. 2021b. Quantuminspired neural network for conversational emotion recognition. *Proceedings of the AAAI Conference on* Artificial Intelligence, 35(15):13270–13278. Zaijing Li, Fengxiao Tang, Ming Zhao, and Yusen Zhu. 2022. EmoCaps: Emotion capsule based model for conversational emotion recognition. In *Findings of* the Association for Computational Linguistics: ACL 2022, pages 1610–1618, Dublin, Ireland. Association for Computational Linguistics. Xin Lu, Yanyan Zhao, Yang Wu, Yijian Tian, Huipeng Chen, and Bing Qin. 2020. An iterative emotion interaction network for emotion recognition in conversations. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 4078– 4088, Barcelona, Spain (Online). International Committee on Computational Linguistics. Hui Ma, Jian Wang, Hongfei Lin, Xuejun Pan, Yijia Zhang, and Zhihao Yang. 2022. A multi-view network for real-time emotion recognition in conversations. *Knowledge-Based Systems*, 236:107751. Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. Dialoguernn: An attentive rnn for emotion detection in conversations. In *Proceedings* of the AAAI conference on artificial intelligence, volume 33, pages 6818–6825. Lynn E O'Connor, Jack W Berry, Joseph Weiss, and Paul Gilbert. 2002. Guilt, fear, submission, and empathy in depression. *Journal of affective disorders*, 71(1-3):19–27. Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017. Context-dependent sentiment analysis in user-generated videos. In *Proceedings of the* 55th annual meeting of the association for computational linguistics (volume 1: Long papers), pages 873–883. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 527– 536, Florence, Italy. Association for Computational Linguistics. Weizhou Shen, Siyue Wu, Yunyi Yang, and Xiaojun Quan. 2021. Directed acyclic graph network for conversational emotion recognition. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1551–1560, Online. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958. Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. 2015. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 4489–4497. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Lichen Wang, Jiaxiang Wu, Shao-Lun Huang, Lizhong Zheng, Xiangxiang Xu, Lin Zhang, and Junzhou Huang. 2019. An efficient approach to informative feature extraction from multimodal data. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pages 5281–5288. Zhiwei Yang, Jing Ma, Hechang Chen, Yunke Zhang, and Yi Chang. 2021. HiTRANS: A hierarchical transformer network for nested named entity recognition. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 124–132, Punta Cana, Dominican Republic. Association for Computational Linguistics. Dong Zhang, Liangqing Wu, Changlong Sun, Shoushan Li, Qiaoming Zhu, and Guodong Zhou. 2019. Modeling both context- and speaker-sensitive dependence for emotion detection in multi-speaker conversations. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5415–5421. International Joint Conferences on Artificial Intelligence Organization. Haidong Zhang and Yekun Chai. 2021. COIN: Conversational interactive networks for emotion recognition in conversation. In *Proceedings of the Third Workshop on Multimodal Artificial Intelligence*, pages 12– 18, Mexico City, Mexico. Association for Computational Linguistics. Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE signal processing letters, 23(10):1499–1503. Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, and Jiashi Feng. 2021. Unleashing the power of contrastive self-supervised visual models via contrastregularized fine-tuning. *Advances in Neural Information Processing Systems*, 34:29848–29860. ![12_image_0.png](12_image_0.png) ## A Appendix A.1 Case Study Of Multiemo Since the one-stage paradigm (Section 3.3.1) simultaneously performs unimodal textual feature extraction and textual context modeling, to better illustrate the role of context modeling to emotion classification, in the section of case study, the textual modality of the selected utterance is processed using a two-stage paradigm (Yang et al., 2021): unimodal feature extraction with a pretrained RoBERTa and context modeling with another transformer 1, such that the impact of context modeling on the textual modality can be analyzed in conjunction with audio and visual modalities. Figure 6 depicts a visualization of a prone-tomisclassification utterance in MELD, in which the textual modality "*Chandler is a great name!*" appears to be positive while the true connotation of the utterance actually implies anger. The heatmaps of the utterance's textual, audio and visual modalities on the left are obtained after unimodal feature extractions, from which we can see that: (1) Textual modality: the word "great" plays a major role in the text, revealing a strong positive emotion; (2) Audio modality: the higher intensity in the latter part of the audio indicates a flat-to-sharp tone; (3) Visual modality: the frown in the speaker's face implies a negative emotion. The asynchronization of emotional tendencies from different modalities makes it challenging to identify the actual emotion of this utterance. However, by modeling contextual information and capturing complex cross-modal correlations across contextualized textual, audio and visual modalities, MultiEMO learns a highly representative feature for this utterance, as shown in the heatmap on the right of Figure 6. The learned multimodal-fused feature can be easily classified 1As mentioned in Section 3.3.1, the performance of MultiEMO with a two-stage paradigm is merely marginally outperformed by the one-stage paradigm, both approaches can learn good contextualized textual representations. to the correct emotion class since it preserves useful emotional cues while discarding irrelevant information through selectively focusing on highlycorrelated information across contextualized textual, audio and visual modalities. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? The section of Limitations is after the Conclusion section. ✗ A2. Did you discuss any potential risks of your work? We do not discuss potential risks of our work because of the page limit. ✓ A3. Do the abstract and introduction summarize the paper's main claims? The sections of Abstract and section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 4 And Section 5. ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We do not report the information mentioned in the question because of the page limit. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4. ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We do not report the information mentioned in the question because of the page limit. ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We do not report the information mentioned in the question because of the page limit. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
pires-etal-2023-learning
Learning Language-Specific Layers for Multilingual Machine Translation
https://aclanthology.org/2023.acl-long.825
Multilingual Machine Translation promises to improve translation quality between non-English languages. This is advantageous for several reasons, namely lower latency (no need to translate twice), and reduced error cascades (e.g., avoiding losing gender and formality information when translating through English).On the downside, adding more languages reduces model capacity per language, which is usually countered by increasing the overall model size, making training harder and inference slower. In this work, we introduce Language-Specific Transformer Layers (LSLs), which allow us to increase model capacity, while keeping the amount of computation and the number of parameters used in the forward pass constant. The key idea is to have some layers of the encoder be source or target language-specific, while keeping the remaining layers shared. We study the best way to place these layers using a neural architecture search inspired approach, and achieve an improvement of 1.3 chrF (1.5 spBLEU) points over not using LSLs on a separate decoder architecture, and 1.9 chrF (2.2 spBLEU) on a shared decoder one.
# Learning Language-Specific Layers For Multilingual Machine Translation Telmo Pessoa Pires Robin M. Schmidt Yi-Hsiu Liao Stephan Peitz Apple {telmo, robin_schmidt, yihsiu_liao, speitz}@apple.com ## Abstract Multilingual Machine Translation promises to improve translation quality between non-English languages. This is advantageous for several reasons, namely lower latency (no need to translate twice), and reduced error cascades (e.g., avoiding losing gender and formality information when translating through English). On the downside, adding more languages reduces model capacity per language, which is usually countered by increasing the overall model size, making training harder and inference slower. In this work, we introduce Language-Specific Transformer Layers (LSLs), which allow us to increase model capacity, while keeping the amount of computation and the number of parameters used in the forward pass constant. The key idea is to have some layers of the encoder be source or target language-specific, while keeping the remaining layers shared. We study the best way to place these layers using a neural architecture search inspired approach, and achieve an improvement of 1.3 CHRF (1.5 SPBLEU) points over not using LSLs on a separate decoder architecture, and 1.9 CHRF (2.2 SPBLEU) on a shared decoder one. ## 1 Introduction Multilingual Neural Machine Translation (MNMT) has received much attention from the Machine Translation community in recent years (Johnson et al., 2017; Aharoni et al., 2019; Freitag and Firat, 2020; Zhang et al., 2020; Fan et al., 2021; Yang et al., 2021; Tran et al., 2021). This interest is based on the many advantages it provides: Scalability Instead of having one model per language pair, a single model suffices, significantly reducing maintenance efforts as well as the combined model size across all languages. Inference Speed and Less Error Cascading Due to the availability of data, most production systems are English-centric, meaning translation between two non-English languages naïvely involves translating twice (i.e. pivoting), once to English, and once from English. This approach increases latency and contributes to error cascades, since the translation to or from English usually implies information loss, e.g. missing gender or formality distinctions that do not exist similarly in English. Low-Resource Improvements Having a single model capable of handling multiple languages, means it can generalize across language boundaries and utilize characteristics of closely related translation directions to improve the translation quality for low-resource language-pairs (i.e. knowledge transfer). Although achieving good zero-shot translation quality remains a challenging task, MNMT has been shown to help (Johnson et al., 2017). Despite the above advantages, training high quality multilingual models is a challenging task: as more languages are added, the more they compete for the model's parameters (Sachan and Neubig, 2018). A common solution is to increase the model size, but blindly doing so comes with its own troubles, as training becomes harder, inference slower, and the storage requirements increase, which makes them challenging to deploy to portable devices. In this work, our goal is to increase the model capacity per language pair, while at the same time, letting the model share knowledge between languages, and without increasing the inference cost. To this end, and combined with the observation from Kudugunta et al. (2019) that the translation process in Transformer models starts in the top encoder layers, we propose an architecture with shared and language-specific weights. Figure 2 shows one such architecture, where layers 3 and 4 1 are source language-specific, layers 13, 14, and 15 are target language-specific, the remaining layers are shared across all languages, and the decoder is 1Throughout the paper, we use layer indices starting at 1. 14767 ![1_image_1.png](1_image_1.png) ![1_image_0.png](1_image_0.png) ![1_image_2.png](1_image_2.png) also target language-specific. For the non-shared layers, we propose using Language-Specific Transformer Layers (LSLs), illustrated in Figure 1b. Quite simply, LSLs are a combination (i.e., a dictionary) of regular Transformer layers (Figure 1a), where the sub-layer used depends on the chosen language. We consider two cases: source-indexed LSLs, and target-indexed LSLs, distinguished by whether we use the source or the target language to select the appropriate sub-layer. The main contributions of this work are: 1. We propose a way to increase the model capacity per language, without changing the inference speed. 2. We show that the model benefits from having both language-specific and shared com- ponents, as well as from having source and target language-specific components. 3. We propose a technique to aid in learning the best architecture, rather than relying purely on manual trial-and-error. ## 2 Related Work There exists a vast literature investigating parameter sharing mechanisms for MNMT. Particularly relevant is the shared-encoder, separate-decoder architecture proposed by Dong et al. (2015) which we use as the base for some of our experiments. Several works analyze which weights should be shared between languages (Sachan and Neubig, 2018; Blackwood et al., 2018; Platanios et al., 2018; Zhu et al., 2020; Wang et al., 2019, 2018). Regardless, most closely related to the presented work are the studies by Zhang et al. (2021) and Purason and Tättar (2022). Zhang et al. (2021) propose adding Conditional Language-Specific Routing (CLSR) layers inside the encoder and decoder Transformer layers. They learn to mix between language-specific and shared weights, and do this on a word by word basis. Our approach does not use learned routing but uses the same components for the whole sentence per language-pair, instead of computing a mixed representation. We also do not add extra parameters to the layer, meaning we have the same inference time complexity as regular Transformer layers. The approach in Purason and Tättar (2022) is similar to ours in the sense that they use language-specific Transformer layers on the encoder side, and also look into sharing weights on a language-family basis. In contrast to our approach, they focus on source-indexed language-specific layers, while we investigate selecting the layers based on the source or the target language. Besides, we propose a systematic method for deciding which layers to share, and which to be language specific. Connection to Adapter Layers Adapter Layers (Houlsby et al., 2019; Bapna and Firat, 2019; He et al., 2022) are a lightweight technique to finetune a pre-trained encoder model by injecting taskspecific sub-modules into the existing architecture. In contrast, LSLs are designed to be trained from scratch, and replace shared by language-specific components, rather than adding new ones, keeping the overall computational costs constant. Connection to Mixture-of-Experts LSLs enable the introduction of source- and target-specific parameters in the encoder and increase the model capacity, while at the same time keeping the inference cost and effective parameter count for the forward-pass constant (see Figure 1). As such, they are similar in nature to sparsely activated mixture-of-experts layers (MOEs, Shazeer et al., 2017; Roller et al., 2021; Lepikhin et al., 2021) but with the important differences that 1) there is no need for learning a balanced routing module; 2) sub-layer utilization is enforced by design, which tends to be a problem for MOE layers (Dua et al., 2022); and 3) sentences are always routed to the same conditional compute based on the indexinglanguage, enabling smaller binaries for on-device downloading of model weights as well as consecutive downloads to extend the on-device capabilities to new languages. In fact, Kudugunta et al. (2021) have shown that the final encoder MOE layers also learn target language-specific utilization where a subset of experts is used when translating e.g. X→EN. However, since it is commonly not strictly enforced, downloading all experts is required, increasing the download size for end users. ## 3 Methods In this section we describe our proposed LanguageSpecific Transformer Layer, as well as a way to select whether to use shared or language-specific weights for each layer. ## 3.1 Language-Specific Transformer Layer The idea of LSLs is simple: instead of sharing the same parameters across all languages, have the weights for the layer be language-specific as illustrated in Figure 1. LSLs are composed of one "regular" Transformer encoder layer *per language*. The input is routed to the appropriate sub-layer depending on the source or target language, and at any time only one of the sub-layers is used. Simply replacing all layers in the Transformer with LSLs would significantly increase the number of parameters, and reduce the sharing between languages. For example, if all LSLs are indexed by the source (or target) language it would be identical to a "separate encoder separate decoder" architecture. Instead, we propose a mix of LSLs and regular Transformer layers, which allows the model to learn languagespecific and shared weights. See Figure 2 for one such architecture. A sample implementation for FAIRSEQ (Ott et al., 2019) is given in Appendix A. ## 3.2 Learning The Architecture Intuitively, we expect the bottom layers of the encoder to require more source language knowledge, while the top ones should already capture target language information as found by Kudugunta et al. (2019). This observation motivates using sourceindexed LSLs in the bottom encoder layers, targetindexed LSLs in the top ones, and keeping the remaining layers shared as illustrated in Figure 2. This type of reasoning quickly gets out of hand, as the number of possible architectures is exponential in the numbers of layers. To avoid having to manually select which layers should be shared, and which should be source- or target-indexed LSLs, we propose a Neural Architecture Search (Elsken et al., 2019) inspired approach. For each layer in the encoder, we learn a shared layer as well as one LSL, which can be source- and target-indexed, and 3 scalar mixing weights: hi = w shared i· layershared i(hi−1) + w src i· LSLi(hi−1, src) + (1) w tgt i· LSLi(hi−1, tgt) , where hi−1 and hi are the outputs of layers i − 1 and i, respectively, and w shared i +w src i +w tgt i = 1. LSLi(hi−1*, src*) means we select the LSL weights by the source language, while LSLi(hi−1*, tgt*) corresponds to using the target weights. As there is no constraint on the mixing weights, other than that they sum to 1 and are non-negative2, the model is incentivized to use all the sub-layers, 2We implement this constraint by applying the softmax function to the 3 scalar parameters. resulting in a huge increase in the number of parameters. If we have L different languages, then each layer will have as many parameters as L + 1 "regular" Transformer layers.3 The amount of computation increases by a factor of 3, as we compute three intermediate representations: a shared one, one using the source language sub-layer, and another using the target language sub-layer, which we then mix according to Equation (1). To keep the inference time unaffected and the model size reasonable, only one of the components should be used, i.e., the mixing weights should be sparse. In this work, we propose a simple but effective approach: for each layer, we select the component with the largest converged weight. For example, if the largest weight for layer i is w tgt i, then layer i will be a target-indexed LSL. After selecting the architecture, we train it from scratch. ## 3.3 Dense Pre-Training Inspired by Dua et al. (2022), we found that initializing all encoder weights (both shared and LSLs) from a pre-trained architecture consisting only of "regular" Transformer layers helped achieve better performance. In our experiments, we copy the pretrained weights from the respective layers to the language-specific modules for initialization. The pre-trained weights come from our baseline architectures, shared-encoder models with only "regular" Transformer Layers. We use the separate decoder baseline's weights for the separate decoder models (e.g., LSL-NAS), and the shared decoder baseline's weights for the shared decoder models (e.g., LSL-NAS-SD). This procedure has multiple advantages: 1) It maximizes cross-lingual transfer by training a general representation across languages first and minimizes language interference during fine-tuning; 2) It mitigates under-trained languagespecific components for low-resource languages as they usually see significantly less data and the naïve approach of training with higher sampling temperatures typically degrades performance on high resource languages (Arivazhagan et al., 2019; Wang et al., 2020); and 3) it improves convergence speed for architectures with LSLs. ## 4 Results In the following, we will describe our experiments and discussion regarding the effectiveness of LSLs. 3Plus, of course, the mixing weights, but they amount to only 3 extra parameters per layer. ## 4.1 Experimental Setup Data In our experiments, we focus on the following 10 languages: German (DE), English (EN), Spanish (ES), French (FR), Italian (IT), Japanese (JA), Korean (KO), Portuguese (PT), Swahili (SW), and Chinese (ZH). We collect data for these languages from the WMT21 news translation task sources (composed of Europarl v10, ParaCrawl v7.1, ParaCrawl v8, Common Crawl, News Commentary v16, Wiki Titles v3, UN Parallel Corpus V1.0, Tilde Rapid, WikiMatrix, Back-translated news, Japanese-English Subtitle Corpus, The Kyoto Free Translation Task Corpus, and TED Talks) as well as Opus-100 (Zhang et al., 2020), Tatoeba (Tiedemann, 2012), and CCMatrix (Schwenk et al., 2021). We deduplicate the data, and preprocess it using the M2M-100 (Fan et al., 2021) scripts.4 The final dataset sizes can be seen in Appendix B. Since CCMatrix is a large yet low quality data source, we found it helpful to downsample it relative to the other sources using temperature sampling. For more details, see Appendix B. Evaluation For evaluation, we use the dev and devtest splits of the Flores-101 dataset (Goyal et al., 2022) as our validation and test sets, respectively. Except when stated otherwise, the reported numbers are on the test set. We report both CHRF (Popovic´, 2015) and SPBLEU (Goyal et al., 2022), a SENTENCEPIECE-based BLEU computed using the Flores-101 tokenizer, with sacreBLEU5 version 2.3.1. The evaluation signatures are nrefs:1 | case:mixed | eff:no | tok:flores101 | smooth:exp for SPBLEU, and nrefs:1 | case:mixed | eff:yes | nc:6 | nw:0 | space:no for CHRF. All our results are from a single training run of each architecture, and we perform statistical significance tests using paired bootstrap resampling (Koehn, 2004). We run the significance tests for CHRF for all language directions, using a significance level of 5%. We also provide COMET scores (Rei et al., 2020) 6for selected models in Appendix G. Tokenization We use SENTENCEPIECE (Kudo and Richardson, 2018), with a vocabulary size of 250k, and a character coverage of 0.9995. We balance the data for SENTENCEPIECE training by randomly sampling 1.5M sentences per language. Tagging We found it helpful to make the model aware of the corpus by training with corpus labels. Similarly to NLLB Team et al. (2022), we add a tag (e.g. <HQ> or <LQ>) to the beginning of the source sentence, so that the model can learn to distinguish between higher quality (WMT21, Opus-100, and Tatoeba) and lower quality examples (CCMatrix). During inference, we always use the high quality (<HQ>) tag. Additionally, we append source and target language tags to the end of the sentence. Architecture In our experiments, we use a deep encoder, shallow decoder architecture (Kasai et al., 2021) with 16 encoder layers and 3 decoder layers. We share token embeddings between the encoder, decoder, and output layer (Press and Wolf, 2017). In our experiments we consider two kinds of models: those with target language-specific decoders, following Dong et al. (2015), on which we conduct most of our experiments, and those with a shared decoder. The encoder is always shared, with the exception of the LSLs. In the baseline models, the encoder consists only of "regular" Transformer Layers, and so it is fully shared. In this work, we only consider adding LSLs to the encoder. In preliminary experiments with LSLs in the decoder, our selection criteria picked target-specific LSLs for all decoder layers, effectively choosing a separate decoder architecture. We tried different placements of the layers in the decoder, but did not achieve any improvements. We leave a deeper analysis to future work. Hyperparameters All experiments are implemented using FAIRSEQ (Ott et al., 2019). We use ADAM (Kingma and Ba, 2015) for optimization, due to its robustness (Schmidt et al., 2021) and popularity, with a learning rate of 0.0004. We train for 150k steps, by which point our models had converged, with 4000 warm-up steps, and an inverse square root learning rate scheduler (Vaswani et al., 2017). Due to the abundance of data, adding regularization in the form of dropout or weight decay did not help in our initial experiments, so we do not use any regularization in the remaining experiments. The layer and embedding sizes are 512, the hidden size of the feed-forward layers is 2048, and we use 8 attention heads. All models are trained using fp16 (Ott et al., 2018). ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) ## 4.2 Architecture Search As described in Section 3.2, we train a separatedecoder model where all encoder layers are a mix of shared, source, and target weights. This architecture used a total of 804 million (M) parameters, and achieved a score of 46.6 CHRF points (27.4 SPBLEU), averaged over all language pairs. We plot the mixing coefficients of the model in Figure 3, averaged over 3 runs. We can see clear trends here: the model gives a higher weight to the source-specific sub-layers near the bottom of the encoder, while the target-specific sub-layers get a higher weight near the top. This is in line with previous studies as lower encoder layers usually capture low-level information about the source (Tenney et al., 2019), while the top encoder layers are known to already capture target language information (Kudugunta et al., 2019). Interestingly, the mixing coefficients for the shared weights are relatively stable across layers, making them dominant for the middle layers of the model. Taking the arg max of the mixing coefficients, we select the architecture in Figure 2, where layers 3 and 4 are source-indexed LSLs 7, layers 13, 14, 15 are target-indexed LSLs, and the remaining layers are "regular" Transformer encoder layers (Figure 1a). From here onward, we will refer to this architecture as LSL-NAS. We use the architecture selection method only to select the architecture, and the selected architecture is trained from scratch (**not pruned**) in the upcoming experiments. To 7For these layers there is some uncertainty in the source weights, but they are the largest weights by a small margin. Performance is improved by selecting source layers, as can be attested by comparing to LSL (SRC=∅ & TGT={13, 14, 15}). 14771 | Model | CHRF SPBLEU | |θ| | |θeff| | |---------------------------------------------|---------------|-----------|-----------| | Separate Decoder Baseline | 45.5 | 26.0 | 299M 186M | | + hidden dim 640 | 45.9 | 26.6 | 399M 240M | | + hidden dim 704 | 46.2 | 26.9 | 453M 268M | | + hidden dim 768 | 46.5 | 27.3 | 509M 296M | | Language Adapters ENC 128 | 45.8 | 26.4 | 321M 207M | | Language Adapters ENC 256 | 45.7 | 26.3 | 342M 228M | | Language Adapters ENC 512 | 45.6 | 26.3 | 384M 270M | | Language Adapters ENC (SRC+TGT) 128 | 45.7 | 26.3 | 321M 207M | | Language Adapters ENC (SRC+TGT) 256 | 46.1 | 26.7 | 342M 228M | | Language Adapters ENC (SRC+TGT) 512 | 46.0 | 26.7 | 384M 270M | | LSL-NAS | 46.4 | 27.2 | 441M 186M | | + Dense Pre-training | 46.8 | 27.5 | 441M 186M | | LSL (SRC={1, 2} & TGT={15, 16}) | 46.3 | 27.0 | 413M 186M | | LSL (SRC={1, 2, 3} & TGT={14, 15, 16}) 46.2 | 27.0 | 470M 186M | | | LSL (SRC=∅ & TGT={13, 14, 15}) | 45.8 | 26.5 | 385M 186M | | LSL (SRC={3, 4} & TGT=∅) | 46.1 | 26.7 | 356M 186M | | LSL (SRC={13, 14, 15} & TGT={3, 4}) | 45.2 | 25.7 | 441M 186M | simplify the text, we will also use the notation LSL (SRC={1, 2} & TGT={15, 16}) to refer to an architecture with source-indexed LSLs in layers 1 and 2, and target-indexed LSLs in layers 15 and 16. ## 4.3 Learned Architecture Comparison In Table 1, we compare our baseline separate decoder architecture (with a fully shared 16 layer encoder) with the learned architecture from the architecture search (LSL-NAS), and additional variants. We share CHRF and SPBLEU scores averaged over all language pairs, as well as the number of total (|θ|) and effective (|θeff|) parameters used during inference for each architecture. For the baseline models, |θ| and |θeff| differ due to the separate decoders. For an accurate comparison of CPU and GPU speed, see Appendix H. Our learned architecture (LSL-NAS in Table 1) achieves a 0.9 CHRF (1.2 SPBLEU) improvement over the baseline, which we can be further increased to 1.3 with dense pre-training, reaching a total of 46.8 CHRF (27.5 SPBLEU). These improvements are statistically significant (p < 0.05) for all but 6 of the 90 translation directions. In Table 2, we summarize the averaged results for translating to and from each language, i.e. X→ DE is the average CHRF score for translating into German from all other languages. For the full results (per language pair) on the validation and test sets, see Appendix C. Our approach gives substantial gains for both high resource languages, such as English and German, which improve by more than 1 CHRF point, as well as lower resource, such as Korean, with close to 2 CHRF points improvement for both directions, or Swahili, which improves by over 1.5 CHRF points in both directions. Although the effective number of parameters is the same for this architecture and our baseline (186M), it can be argued that this comparison is unfair, since our model is bigger. To alleviate this concern, and to show that the gains we achieve are not just due to the higher parameter count, but rather, the better way we allocate the extra parameters, we trained three bigger baselines: with hidden sizes of 640, 704, and 768. As expected, these models also show an improvement over the original baseline, but even the biggest model, with a total of 509M parameters (15% more than ours) and a higher inference cost than our method, is not able to match our performance (only 46.5 CHRF and 27.3 SPBLEU). Adapters Following Philip et al. (2020), we insert one adapter block after each Transformer layer. In our experiments, inserting adapters into a pretrained model either provided no improvement over training from scratch or suffered from numerical instabilities, even after tuning the initialization gain (Houlsby et al., 2019). For this reason we report numbers for models trained from scratch, similar to Baziotis et al. (2022). Since our models have separate decoders, we inserted adapters only on the encoder. For completeness, results using adapters on the decoder are reported in Appendix D. We consider two kinds of adapters: source language adapters (Language Adapters ENC), following Philip et al. (2020), or source language adapters in the bottom half of the encoder and target language language adapters in the top half (Language Adapters ENC (SRC+TGT)). We show the results for different bottleneck dimensions (128, 256, and 512) in Table 1. Our proposal of using source and target adapters on the encoder outperforms using only source adapters (for the same model size). The best performing model, Language Adapters ENC (SRC+TGT), achieves a score of 46.1 CHRF points, 0.3 (0.7) points lower than our model without (with) dense pre-training.These improvements are statistically significant (p < 0.05) for 38 (62) of the 90 translation directions. Results for languagepair adapters (Bapna and Firat, 2019) are shown in Appendix D, but they lag behind language adapters. | Model | DE | EN | ES | FR | IT | JA | KO | PT | SW | ZH | |-------------------------------------|------|------|------|------|------|------|------|------|------|------| | Translating into the language (X → | ) | | | | | | | | | | | Separate Decoder Baseline | 52.7 | 60.4 | 49.1 | 57.0 | 50.6 | 29.0 | 25.2 | 54.9 | 47.9 | 28.7 | | Language Adapters ENC (SRC+TGT) 256 | 53.2 | 60.9 | 49.5 | 57.7 | 50.9 | 29.9 | 25.0 | 55.4 | 49.1 | 29.1 | | LSL-NAS | 53.9 | 61.5 | 49.9 | 58.1 | 51.4 | 31.0 | 27.0 | 55.6 | 49.4 | 29.9 | | Translating from the language ( | → X) | | | | | | | | | | | Separate Decoder Baseline | 47.7 | 52.7 | 44.9 | 48.1 | 46.4 | 42.1 | 40.0 | 49.4 | 40.1 | 44.0 | | Language Adapters ENC (SRC+TGT) 256 | 48.1 | 53.0 | 45.4 | 48.5 | 46.7 | 42.8 | 41.2 | 49.9 | 40.8 | 44.5 | | LSL-NAS | 48.7 | 53.6 | 45.8 | 48.9 | 47.3 | 43.5 | 42.1 | 50.4 | 42.1 | 45.3 | | Model | CHRF | SPBLEU | |-------------------------------------|--------|----------| | LSL-NAS | 46.4 | 27.2 | | LSL (SRC={3, 4} & TGT={14, 15, 16}) | 46.4 | 27.0 | | LSL (SRC={2, 3} & TGT={13, 14, 15}) | 46.4 | 27.1 | | LSL (SRC={1, 2} & TGT={13, 14, 15}) | 46.2 | 26.9 | | LSL (SRC={1, 2} & TGT={14, 15, 16}) | 46.2 | 26.9 | ## Importance Of Bottom And Top Shared Layers LSL-NAS uses two shared layers on the bottom and one shared layer on the top of the encoder. In Table 3, we analyze the effect of removing these layers, i.e., moving the LSLs up or down. When comparing SPBLEU there is a small drop when removing either the top shared layer (row "LSL (SRC={3, 4} & TGT={14, 15, 16})") or the bottom-most shared layer (row "LSL (SRC={2, 3} & TGT={13, 14, 15})"), but the difference is negligible when comparing CHRF. In fact the difference is only statistically significant for 15 of the 90 translation directions. When removing the bottom shared layers (row "LSL (SRC={1, 2} & TGT={13, 14, 15})") or all the shared layers (row "LSL (SRC={1, 2} & TGT={14, 15, 16})"), there is a bigger difference, but it is only statistically significant for less than 1/3 of the translation directions, mostly low resource pairs including either Swahili or Korean. For an analysis regarding the number of LSLs, please refer to Appendix E. ## Alternative Configurations Additionally, We look at different configurations of LSLs. In particular, we compare using only source-specific layers LSL (SRC={3, 4} & TGT=∅) or target-specific layers LSL (SRC=∅ & TGT={13, 14, 15}) in Table 1. In both cases, the configuration is worse than LSLNAS, thus showing the importance of having both source and target-specific layers. For completeness, row LSL (SRC={13, 14, 15} & TGT={3, 4}) shows the opposite of our configuration (i.e., swapping the source and target layers), with considerably degraded performance, showing that the position of source-specific and target-specific languages is very important. In particular, it shows that forcing the model to learn source-specific representations at higher encoder layers and target language representations on the lower layers hinders learning. | Model | CHRF | SPBLEU | |θ| | |θeff| | |--------------|--------|----------|-------|----------| | LSL-NAS | 46.4 | 27.2 | 441M | 186M | | LS-FFN | 46.3 | 26.8 | 394M | 186M | | LS-ATTENTION | 45.9 | 26.5 | 347M | 186M | Layer Component Ablation We analyze the effect of using full language-specific layers (LSL), with having only language-specific feed-forward (LS-FFN), or only language-specific attention (LSATTENTION) on the LSL-NAS architecture in Table 4. We observe a small degradation of 0.1 CHRF (0.4 SPBLEU) when switching from LSL to LS-FFN, which is statistically significant for 35/90 translation directions, and a degradation of 0.5 CHRF (0.7 SPBLEU) when switching to LSATTENTION, which is significant for 49 directions. These results imply that both the language-specific feed-forward and attention are important, with the biggest contribution coming from the feed-forward part, where most of the parameters are located. ## 4.4 Shared Decoder So far we have focused on the separate-decoder architecture. In this section, we turn to a shareddecoder setup (see Table 5). As in Section 4.2, we ran an architecture search experiment and selected the following architecture: LSL (SRC={4} & TGT={12, 13, 14, 15, 16}), or LSL-NAS-SD for short. The mixing weights follow a trend similar to Figure 3. With the shared decoder, we benefit from placing more target-specific layers at the top of the encoder. Our intuition is that these layers compensate the lack of a separate decoder. As in Section 4.3, we compare against shared decoder baseline models (i.e., without LSLs) of increasing sizes, as well as models with Adapter Blocks. For the latter, we insert one block after each Transformer layer, both on the encoder and the decoder. Following Philip et al. (2020), we insert source adapters on the encoder, and target adapters on the decoder. As expected, shared-decoder models perform worse than their separate-decoder models, which have a higher parameter count. Despite this, our proposed architecture, LSL-NAS-SD, outperforms the remaining models by a wide margin, and is even better than the separate-decoder baseline (26.0 SPBLEU). The improvements of our LSL-NAS-SD model with pre-training over the shared decoder baseline are statistically significant for 86/90 translation directions. The improvements over the best adapter model (bottleneck size 512) are significant for 76/90 directions. We also show the performance for LSL (SRC={4} & TGT={13 − 16}), an architecture similar to LSL-NAS-SD, but with one less targetspecific LSL. This architecture performs worse than our selection, but has fewer parameters, which might make it a preferable candidate for deployment. This highlights a limitation of our selection approach: it does not take model complexity (i.e., model size) into account. We tried adding a prior on the mixing weights to make LSLs more costly than shared layers, but obtained mixed results, and we leave further investigation to future work. ## 4.5 Zero-Shot Translation In the previous experiments, we used training data for all language directions. We now consider a different scenario: we limit our training data to English directions (i.e., X-EN and EN-X) and languages in the same language group8. We then evaluate our models on zero shot performance for the directions between groups. 8We consider 3 groups: European, CJK, and Swahili. We use data where both the source and target languages are in the same group. Model CHRF SPBLEU |θ| |θeff| Separate Decoder Baseline 45.5 26.0 299M 186M LSL-NAS (separate decoder) 46.4 27.2 441M 186M Shared Decoder Baseline 44.7 24.9 186M 186M + hidden dim 640 45.1 25.5 240M 240M + hidden dim 704 45.8 26.2 268M 268M + hidden dim 768 45.8 26.3 296M 296M Shared Decoder Adapters 128 44.6 24.8 211M 189M Shared Decoder Adapters 256 44.9 25.0 236M 191M Shared Decoder Adapters 512 45.3 25.6 286M 196M Shared Decoder Adapters 640 45.3 25.5 311M 199M LSL-NAS-SD 46.3 26.7 356M 186M + Dense Pre-training 46.**6 27**.1 356M 186M LSL (SRC={4} & TGT={13 − 16}) 46.1 26.5 328M 186M Table 5: Results on the shared-decoder architecture. Overall Average 39.9 41.8 41.4 Overall Average (w/o *→SW) 40.8 42.5 44.7 Zero-shot Average 29.6 32.4 31.9 Zero-shot Average (w/o *→SW) 29.3 32.0 36.2 EUR → CJK 23.8 18.5 27.6 EUR → SW 34.2 37.3 11.7 CJK → EUR 41.4 45.1 45.5 CJK → SW 24.8 28.9 11.7 SW → EUR 23.5 44.9 46.4 SW → CJK 6.70 12.3 18.7 Direction Baseline Adapters LSL-NAS-SD In our initial experiments, separate decoder models performed poorly on zero-shot directions, so we focused our evaluation on shared decoder models. Table 6 shows the zero-shot results for 3 architectures: the shared decoder baseline, the best performing (shared decoder) adapter model (Shared Decoder Adapters 512), and LSL-NAS-SD. Our approach gives improvements for most zero-shot directions, except when translating *into* SW. Translating *from* SW works well, though. Our intuition is that this degradation is caused by the SW targetspecific LSLs being overfitted to EN, and thus failing to transfer to other languages. In LSL-NASSD, the top 5 encoder layers are target LSLs, and in the zero-shot scenario, the SW layers are only trained for EN-SW, which is relatively small. Indeed, if we exclude the * →SW pairs, both the overall and the zero-shot average scores increase. ## 5 Conclusion In this work, we studied how to increase the capacity of MNMT models using LSLs. We showed that LSLs are effective at increasing the model capacity per language, while keeping the computation requirements constant. We proposed a method for selecting the placement of LSLs, and showed the importance of having shared as well as source and target language-specific parameters on the encoder. ## Limitations In this work, we focused our exploration of LSLs on the encoder. Although we ran some initial explorations on the decoder side, further investigation is needed. Another venue for research is how LSLs affect language expansion. Since our approach tries to limit the language-specific weights to just a few layers, *in theory*, it should be possible to add new languages by only expanding and training the LSLs. However, blindly doing so might not work well and the interactions between languages from different families needs further studying. Lastly, it is unclear whether our arg max approach to selecting where to place LSLs is optimal, how dataset dependent it is, and if there exist alternative approaches that can lead to better results. The fact that it does not take model complexity (i.e., model size) into account can be a disadvantage in practice. ## Ethics Statement Our work uses existing datasets, so it inherits some of the risks associated with them, namely gender bias (Cho et al., 2019), or privacy leakage (Carlini et al., 2021), and mitigation strategies such as Vanmassenhove et al. (2018) may be necessary. However, replacing bilingual translation systems with multilingual systems should help reduce gender bias caused by pivoting through English. Another consideration is the energy consumption for model training, which results in green-house emissions (Strubell et al., 2019). Our proposed architectures result in smaller (and faster to train) models, than similarly-performing baselines, increasing the efficiency of translation systems. ## Acknowledgements We would like to thank Sarthak Garg, Luke Carlson, António V. Lopes, and Matthias Sperber for their comments and suggestions, which significantly improved the final work. ## References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874–3884, Minneapolis, Minnesota. Association for Computational Linguistics. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George F. Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. *CoRR*, abs/1907.05019. Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538– 1548, Hong Kong, China. Association for Computational Linguistics. Christos Baziotis, Mikel Artetxe, James Cross, and Shruti Bhosale. 2022. Multilingual Machine Translation with Hyper-Adapters. *ArXiv*, abs/2205.10835. Graeme Blackwood, Miguel Ballesteros, and Todd Ward. 2018. Multilingual neural machine translation with task-specific attention. In *Proceedings of the* 27th International Conference on Computational Linguistics, pages 3112–3122, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pages 2633–2650. USENIX Association. Won Ik Cho, Ji Won Kim, Seok Min Kim, and Nam Soo Kim. 2019. On measuring gender bias in translation of gender-neutral pronouns. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 173–181, Florence, Italy. Association for Computational Linguistics. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In *Proceedings of the 53rd* Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723–1732, Beijing, China. Association for Computational Linguistics. Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, and Angela Fan. 2022. Tricks for training sparse translation models. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3340–3345, Seattle, United States. Association for Computational Linguistics. Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. 2019. Neural architecture search: A survey. Journal of Machine Learning Research, 20(55):1–21. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Michael Auli, and Armand Joulin. 2021. Beyond english-centric multilingual machine translation. *Journal of Machine Learning Research*, 22(107):1–48. Markus Freitag and Orhan Firat. 2020. Complete multilingual neural machine translation. In *Proceedings of* the Fifth Conference on Machine Translation, pages 550–560, Online. Association for Computational Linguistics. Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The Flores-101 evaluation benchmark for low-resource and multilingual machine translation. Transactions of the Association for Computational Linguistics, 10:522–538. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning. In *10th International Conference on Learning* Representations, ICLR, virtual. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML, volume 97 of *Proceedings* of Machine Learning Research, pages 2790–2799, Long Beach, CA, USA. PMLR. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. *Transactions of the Association for Computational Linguistics*, 5:339–351. Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A. Smith. 2021. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In *9th International Conference* on Learning Representations, ICLR, virtual. OpenReview.net. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Sneha Kudugunta, Ankur Bapna, Isaac Caswell, and Orhan Firat. 2019. Investigating multilingual NMT representations at scale. In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1565–1575, Hong Kong, China. Association for Computational Linguistics. Sneha Kudugunta, Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin, Minh-Thang Luong, and Orhan Firat. 2021. Beyond distillation: Task-level mixture-of-experts for efficient inference. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 3577–3599, Punta Cana, Dominican Republic. Association for Computational Linguistics. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2021. Gshard: Scaling giant models with conditional computation and automatic sharding. In *9th International* Conference on Learning Representations, ICLR, virtual. OpenReview.net. NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loïc Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation. *CoRR*, abs/2207.04672. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Brussels, Belgium. Association for Computational Linguistics. Jerin Philip, Alexandre Berard, Matthias Gallé, and Laurent Besacier. 2020. Monolingual adapters for zero-shot neural machine translation. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 4465–4470, Online. Association for Computational Linguistics. Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 425–435, Brussels, Belgium. Association for Computational Linguistics. Maja Popovic. 2015. ´ chrF: character n-gram F-score for automatic MT evaluation. In *Proceedings of the* Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Association for Computational Linguistics. Taido Purason and Andre Tättar. 2022. Multilingual neural machine translation with the right amount of sharing. In Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, pages 91–100, Ghent, Belgium. European Association for Machine Translation. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, and Jason Weston. 2021. Hash layers for large sparse models. In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems*, pages 17555–17566, virtual. Devendra Sachan and Graham Neubig. 2018. Parameter sharing methods for multilingual self-attentional translation models. In *Proceedings of the Third Conference on Machine Translation: Research Papers*, pages 261–271, Brussels, Belgium. Association for Computational Linguistics. Robin M. Schmidt, Frank Schneider, and Philipp Hennig. 2021. Descending through a crowded valley - benchmarking deep learning optimizers. In Proceedings of the 38th International Conference on Machine Learning, ICML, volume 139 of *Proceedings of Machine Learning Research*, pages 9367–9376, virtual. PMLR. Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021. CCMatrix: Mining billions of high-quality parallel sentences on the web. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6490–6500, Online. Association for Computational Linguistics. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In 5th International Conference on Learning Representations, ICLR, Toulon, France. OpenReview.net. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey. European Language Resources Association (ELRA). Chau Tran, Shruti Bhosale, James Cross, Philipp Koehn, Sergey Edunov, and Angela Fan. 2021. Facebook AI's WMT21 news translation task submission. In Proceedings of the Sixth Conference on Machine Translation, pages 205–215, Online. Association for Computational Linguistics. Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003–3008, Brussels, Belgium. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Xinyi Wang, Yulia Tsvetkov, and Graham Neubig. 2020. Balancing training for multilingual neural machine translation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8526–8537, Online. Association for Computational Linguistics. Yining Wang, Jiajun Zhang, Feifei Zhai, Jingfang Xu, and Chengqing Zong. 2018. Three strategies to improve one-to-many multilingual translation. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2955– 2960, Brussels, Belgium. Association for Computational Linguistics. Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu, and Chengqing Zong. 2019. A compact and language-sensitive multilingual translation method. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 1213–1223, Florence, Italy. Association for Computational Linguistics. Jian Yang, Shuming Ma, Haoyang Huang, Dongdong Zhang, Li Dong, Shaohan Huang, Alexandre Muzio, Saksham Singhal, Hany Hassan, Xia Song, and Furu Wei. 2021. Multilingual machine translation systems from Microsoft for WMT21 shared task. In *Proceedings of the Sixth Conference on Machine Translation*, pages 446–455, Online. Association for Computational Linguistics. Biao Zhang, Ankur Bapna, Rico Sennrich, and Orhan Firat. 2021. Share or not? learning to schedule language-specific capacity for multilingual translation. In *9th International Conference on Learning* Representations, ICLR, virtual. OpenReview.net. Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628– 1639, Online. Association for Computational Linguistics. Changfeng Zhu, Heng Yu, Shanbo Cheng, and Weihua Luo. 2020. Language-aware interlingua for multilingual neural machine translation. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1650–1655, Online. Association for Computational Linguistics. ## A Lsls In **Fairseq** Listing 1 shows our implementation of LSL in FAIRSEQ. The implementation is straightforward, and consists of a dictionary that selects the appropriate language depending on the lang_pair attribute, which FAIRSEQ dynamically sets, and is guaranteed to match that of the input. ## B Dataset Sizes For most language pairs, CCMatrix is the largest data source, and it is also the lowest quality one. To compensate for this quality imbalance, we apply temperature sampling (Arivazhagan et al., 2019) to balance the different sources, using a temperature of 5, which worked well in our experiments. In our initial experiments, we considered two approaches to apply this temperature re-sampling: either upsampling the higher quality sources (WMT21, Opus-100, and Tatoeba), or downsampling CCMatrix. The results between these two approaches were similar, and since the downsampling runs were faster and more stable, we used the downsampling for all our experiments. To avoid discarding too much data, we capped the maximum downsampling to a factor of 10. Table 7 shows the number of sentence pairs for each language direction, after de-duplication, cleaning, and downsampling CCMatrix. ## C Full Results Table 8 shows the CHRF scores on the Flores-101 test set for all language directions, of both our shared-encoder, separate-decoder baseline model and our proposed LSL-NAS architecture with pretraining. Statistically non-significant results (p ≥ 0.05) are marked with †(in total 6 of the 90 language pairs). The results on the validation set can be found in Table 9. ## D Results On Different Configurations In Table 10 we show the results of further experiments with Adapter Blocks. Besides encoder source language adapters (Language Adapters ENC) and source adapters in the bottom half of the encoder and target adapters in the top half (Language Adapters ENC (SRC+TGT), we include source adapters on the encoder and target adapters on the decoder (Language Adapters ENC+DEC, like Philip et al. (2020), and language-pair adapters ![12_image_0.png](12_image_0.png) on the encoder (Bapna and Firat, 2019) (LanguagePair Adapters ENC). Our proposed architecture, LSL-NAS, outperforms all other techniques while introducing no extra computation at inference time (i.e., it keeps |θeff| constant). ## E Number Of Lsls We look at the effect of changing the number of LSLs, illustrated in Figure 4. To this end, we change the number of LSLs from 0 to 16, in increments of 2, and, for each point, we place an additional LSL on the bottom and on the top of the encoder, using the source and target languages to index them, respectively. For example, 4 LSLs corresponds to LSL (SRC={1, 2} & TGT={15, 16}). We see that adding more LSLs helps performance, but only up to a point (in this case, 4 layers), and that afterwards, performance degrades, except for an outlier at 12 LSLs. This implies that while the language-specific layers boost performance, having shared layers is crucial for knowledge transfer. ## F Per Language Results In Table 11, we show aggregate scores for each language group: European (DE, EN, ES, FR, IT, PT), CJK (ZH, JA, KO), and SW (isolated, since it is the only language in its family). Here, we see a similar trend, with our approach showing clear improvements both within groups, and between different groups. ![13_image_0.png](13_image_0.png) 5 all_languages = sorted(set(self.get_lang(lp) for lp in args.lang_pairs)) 6 self.models = nn.ModuleDict({lang: TransformerEncoderLayer(args, layer) for lang in all_languages}) 7 14 **else**: 15 raise **ValueError**(f"Invalid language `{self.index_language}`.") 16 17 def forward(self, x, encoder_padding_mask, attn_mask: Optional[Tensor] = **None**): 19 **return** self.models[self.get_lang(self.lang_pair)].forward(x, encoder_padding_mask, attn_mask) Listing 1: Sample implementation of a Language-Specific Transformer Layer in FAIRSEQ. ![13_image_1.png](13_image_1.png) | EN | ES | FR | IT | JA | KO | PT | SW | ZH | | |------|------|-------|-------|-------|-------|------|-------|------|-------| | DE | 213M | 11.6M | 36.6M | 7.2M | 1.2M | 708K | 5.4M | 2.4M | 1.8M | | EN | − | 230M | 286M | 96.3M | 36.5M | 2.3M | 78.6M | 708K | 88.9M | | ES | − | − | 49.4M | 14.9M | 1.3M | 772K | 22.3M | 6.9M | 6.9M | | FR | − | − | − | 14.9M | 1.2M | 752K | 12.9M | 8M | 25.3M | | IT | − | − | − | − | 736K | 382K | 7M | 1.1M | 964K | | JA | − | − | − | − | − | 511K | 764K | 820K | 897K | | KO | − | − | − | − | − | − | 756K | 536K | 3M | | PT | − | − | − | − | − | − | − | 3.6M | 1.1M | | SW | − | − | − | − | − | − | − | − | 962K | Table 7: Number of training sentence pairs for each language pair, after data de-duplication, cleaning, and downsampling CCMatrix. We report only one language direction, as the data is the same for both directions. | DE | EN | ES | FR | IT | JA | KO | PT | SW | ZH | |---------------------------------------------------|------|------------------------------------------------|------|------|------|------|------|------|------| | DE → | − | 67.4 52.1 61.0 54.2 30.1 26.1 59.0 † 48.1 31.1 | | | | | | | | | EN → 64.6 | − | 55.6 70.7 58.5 † 35.0 28.8 70.6 55.4 34.9 | | | | | | | | | ES → 53.1 59.2 | − | 57.7 52.6 28.0 23.4 54.7 † 47.7 27.7 | | | | | | | | | FR → 57.8 68.3 52.7 | − | 55.6 31.1 25.9 60.5 † 50.1 † 31.1 | | | | | | | | | IT → 54.9 61.4 52.3 60.3 | − | 28.8 24.7 56.6 49.2 29.7 | | | | | | | | | JA → 46.1 53.3 44.2 50.0 45.1 | − | 26.1 47.5 42.3 25.2 | | | | | | | | | KO → 43.5 49.6 41.2 46.6 42.0 28.0 | − | 45.3 40.5 23.4 | | | | | | | | | PT → 58.6 71.1 53.8 † 64.0 55.5 30.5 26.4 | − | 53.1 31.8 | | | | | | | | | SW→ 47.0 57.3 43.6 51.0 44.5 23.0 20.5 50.0 | − | 23.9 | | | | | | | | | ZH → 48.2 56.3 46.3 52.7 47.0 26.8 24.5 49.6 44.3 | − | | | | | | | | | DE EN ES FR IT JA KO PT SW ZH DE → − 67.9 52.8 62.2 55.0 32.2 28.0 59.1 †49.2 32.0 EN → 65.1 − 56.2 71.2 58.6 †37.9 30.0 71.1 56.6 35.9 ES → 54.0 59.7 − 58.3 52.9 29.5 25.2 54.9 †49.1 29.1 FR → 58.6 68.9 53.0 − 56.2 32.0 28.0 60.8 †50.4 †32.1 IT → 55.7 61.7 53.0 60.7 − 29.9 26.4 57.1 50.6 30.2 JA → 47.9 55.0 45.1 50.6 45.9 − 27.3 48.6 44.4 26.7 KO → 45.4 52.1 42.9 48.2 43.9 30.7 − 47.2 43.2 25.6 PT → 59.5 71.5 54.0 †64.7 55.9 31.6 28.9 − 54.6 32.7 SW→ 49.2 59.7 45.2 52.9 46.1 25.8 22.7 52.0 − 24.9 ZH → 49.7 57.1 47.3 53.9 47.9 29.1 26.5 50.0 46.5 − Table 8: Comparison of the baseline model (*left*) and our learned architecture LSL-NAS with dense pre-training (*right*) for each language pair, on the Flores-101 test set. Our approach gives *significant* CHRF gains for most language pairs. Statistically *non-significant* improvements using paired bootstrap resampling are marked with †for p ≥ 0.05 (in total 6 of the 90 language pairs). Table 9: Comparison of the baseline model (*left*) and our learned architecture LSL-NAS with dense pre-training (*right*) for each language pair, on the Flores-101 validation set. | DE | EN | ES | FR | IT | JA | KO | PT | SW | ZH | |---------------------------------------------------|------|----------------------------------------------|------|------|------|------|------|------|------| | DE → | − | 67.4 50.9 60.6 53.6 31.6 26.2 58.5 48.3 30.3 | | | | | | | | | EN → 64.0 | − | 55.4 70.8 58.4 35.7 29.0 70.0 55.5 33.3 | | | | | | | | | ES → 52.4 59.7 | − | 57.6 52.6 28.5 24.0 54.2 48.4 27.8 | | | | | | | | | FR → 57.6 68.8 52.3 | − | 55.5 30.6 25.5 60.0 50.1 29.7 | | | | | | | | | IT → 54.3 61.8 51.4 59.7 | − | 29.6 24.0 55.8 49.1 29.1 | | | | | | | | | JA → 46.0 54.0 43.5 49.3 45.4 | − | 26.4 47.2 42.4 25.2 | | | | | | | | | KO → 43.1 50.1 40.6 45.9 42.2 27.8 | − | 44.7 40.3 22.7 | | | | | | | | | PT → 57.9 71.2 53.1 63.8 54.9 31.0 26.9 | − | 52.5 30.9 | | | | | | | | | SW→ 47.2 58.4 43.7 50.9 44.8 23.9 20.7 50.3 | − | 23.9 | | | | | | | | | ZH → 48.0 56.6 45.9 52.4 47.0 27.6 24.5 49.0 44.2 | − | | | | | | | | | | DE | EN | ES | FR | IT | JA | KO | PT | SW | ZH | |---------------------------------------------------|------|----------------------------------------------|------|------|------|------|------|------|------| | DE → | − | 67.9 51.1 61.2 54.3 32.9 27.8 58.3 48.4 31.5 | | | | | | | | | EN → 65.1 | − | 55.5 71.1 58.9 37.8 31.0 70.4 57.2 34.5 | | | | | | | | | ES → 53.5 59.9 | − | 58.2 52.8 29.4 25.7 54.4 49.3 28.8 | | | | | | | | | FR → 58.4 69.5 52.4 | − | 56.1 32.5 28.0 59.9 50.1 30.9 | | | | | | | | | IT → 55.1 62.0 51.7 60.0 | − | 30.6 26.8 56.1 50.5 30.1 | | | | | | | | | JA → 47.1 55.2 44.5 51.1 46.3 | − | 27.5 48.5 44.3 26.2 | | | | | | | | | KO → 45.1 52.2 42.0 47.6 43.6 30.7 | − | 45.8 43.0 24.6 | | | | | | | | | PT → 58.8 71.7 53.4 64.6 55.3 32.5 29.3 | − | 53.9 31.8 | | | | | | | | | SW→ 49.0 61.0 45.0 52.9 46.5 26.4 23.7 52.6 | − | 24.2 | | | | | | | | | ZH → 49.2 57.7 46.4 53.3 47.7 29.8 26.3 50.1 46.4 | − | | | | | | | | | | Model | CHRF SPBLEU | |θ| | |θeff| | |------------------------------------------|---------------|-----------|------------| | Separate Decoder Baseline | 45.5 | 26.0 | 299M 186M | | LSL-NAS | 46.4 | 27.2 | 441M 186M | | + Dense Pre-training | 46.8 | 27.5 | 441M 186M | | Language Adapters ENC 128 | 45.8 | 26.4 | 321M 207M | | + hidden dim 640 | 46.2 | 26.9 | 426M 261M | | Language Adapters ENC 256 | 45.7 | 26.3 | 342M 228M | | + hidden dim 640 | 46.3 | 27.1 | 452M 282M | | Language Adapters ENC 512 | 45.6 | 26.3 | 384M 270M | | Language Adapters ENC (SRC+TGT) 128 45.7 | 26.3 | 321M 207M | | | Language Adapters ENC (SRC+TGT) 256 46.1 | 26.7 | 342M 228M | | | Language Adapters ENC (SRC+TGT) 512 46.0 | 26.7 | 384M 270M | | | Language-Pair Adapters ENC 128 | 45.2 | 25.7 | 491M 207M | | Language-Pair Adapters ENC 256 | 45.3 | 25.8 | 680M 228M | | Language-Pair Adapters ENC 512 | 45.3 | 25.9 | 1057M 270M | | Language Adapters ENC+DEC 256 | 45.5 | 26.0 | 350M 236M | | Language Adapters ENC+DEC 512 | 46.0 | 26.4 | 400M 286M | | Language Adapters ENC+DEC 768 | 46.1 | 26.6 | 449M 336M | | Language Adapters ENC+DEC 1024 | 46.2 | 26.7 | 499M 385M | | Direction | Sep. Decoder | Ours | ∆ | |-------------|----------------|--------|------| | EUR → EUR | 59.1 | 59.7 | +0.6 | | EUR → CJK | 29.2 | 30.6 | +1.4 | | EUR → SW | 50.6 | 51.7 | +1.1 | | CJK→ EUR | 47.4 | 48.8 | +1.4 | | CJK→ CJK | 25.7 | 27.7 | +2.0 | | CJK→ SW | 42.4 | 44.7 | +2.3 | | SW → EUR | 48.9 | 50.9 | +2.0 | | SW → CJK | 22.4 | 24.4 | +2.0 | Table 10: Comparison of different Adapter Blocks configurations on the separate decoder architecture. ## G Comet **Results** We show COMET, CHRF, and SPBLEU scores, averaged over all language pairs in Table 12. We show the scores for the baseline (i.e., non-LSL), our LSL model, and the best Adapter model for both the separate decoder and the shared decoder architectures. In all metrics, our proposed architectures outperform the remaining models. ## H Inference Speed We report the inference times for the various architectures we considered in Table 13. We report tokens/second on the DE-EN test set9, averaged over 5 runs. Our latency measurements were collected using a single NVIDIA V100 GPU (Speed GPU) or a single-threaded Intel Xeon Platinum 8275CL CPU @ 3.00GHz (Speed CPU), both with batch 9We repeated these measurements for language pairs, such as EN-ZH, with similar results. Table 12: COMET, CHRF, and SPBLEU scores for the (non-LSL) baseline, our LSL models, and the best adapter model for the separate decoder and shared decoder architectures. These scores are averaged over all language pairs. | Model | COMET | CHRF | SPBLEU | |---------------------------------------------|---------|--------|----------| | Separate Decoder Baseline | 0.45285 | 45.5 | 26.0 | | LSL-NAS | 0.49577 | 46.4 | 27.2 | | + Dense Pre-training | 0.50759 | 46.8 | 27.5 | | Language Adapters ENC (SRC+TGT) 256 0.48265 | 46.1 | 26.7 | | | Shared Decoder Baseline | 0.36975 | 44.7 | 24.9 | | LSL-NAS-SD | 0.46542 | 46.3 | 26.7 | | + Dense Pre-training | 0.48357 | 46.6 | 27.1 | | Shared Decoder Adapters 512 | 0.41849 | 45.3 | 25.6 | size of 1, which faithfully captures the inference on a deployed neural machine translation model. As expected, the latency of shared decoder models is the same as that of similar separate decoder models (since only one of the decoders is used at inference time) so, for succinctness, we only report the separate decoder numbers. A couple of comments regarding the Adapter models: 1) we do not report speed numbers for the "Language Adapters ENC (SRC+TGT)" as the architecture is the same as "Language Adapters ENC"; 2) inference speed does not change significantly when adding encoder adapters, but only when adding adapters to the decoder. | Architecture | Speed GPU | Speed CPU | |-------------------------------|-------------|-------------| | Shared Decoder Baseline | 195.2 ± 2.6 | 61.4 ± 0.3 | | Separate Decoder Baseline | 194.3 ± 1.4 | 61.7 ± 0.2 | | + hidden dim 640 | 191.9 ± 1.6 | 54.0 ± 0.2 | | + hidden dim 704 | 189.8 ± 1.7 | 51.6 ± 0.3 | | + hidden dim 768 | 187.7 ± 2.1 | 48.4 ± 0.2 | | Language Adapters ENC 128 | 188.1 ± 1.8 | 61.2 ± 0.3 | | Language Adapters ENC 256 | 186.0 ± 1.6 | 61.1 ± 0.3 | | Language Adapters ENC 512 | 187.6 ± 1.1 | 61.0 ± 0.2 | | Language Adapters ENC+DEC 256 | 165.2 ± 2.4 | 57.6 ± 0.3 | | Language Adapters ENC+DEC 512 | 165.1 ± 4.5 | 57.2 ± 0.2 | | Language Adapters ENC+DEC 768 | 164.4 ± 2.1 | 56.9 ± 0.3 | | LSL-NAS | 195.0 ± 1.1 | 61.3 ± 0.2 | | LSL-NAS-SD | 195.5 ± 4.7 | 61.4 ± 0.3 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? We have a "Limitations" section without section number, present after conclusion. ✓ A2. Did you discuss any potential risks of your work? We have an "Ethics Statement" section without section number, present after conclusion. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract has no section number. Introduction is section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We share code in Appendix A, and list the datasets we use in section 4.1. ✓ B1. Did you cite the creators of artifacts you used? We use various datasets, and cite in section 4.1. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We use the splits provided by the dataset creators, and mention this fact in the paper (section 4.1). C ✓ **Did you run computational experiments?** Section 4. ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We provide parameter counts (section 4), as well as the inference infrastructure (Appendix H) and inference speed (Appendix H). The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.1. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4. We provide average scores (and say so in the text), and did significance tests for the results. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.1. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
yu-etal-2023-personality
Personality Understanding of Fictional Characters during Book Reading
https://aclanthology.org/2023.acl-long.826
Comprehending characters{'} personalities is a crucial aspect of story reading. As readers engage with a story, their understanding of a character evolves based on new events and information; and multiple fine-grained aspects of personalities can be perceived. This leads to a natural problem of situated and fine-grained personality understanding. The problem has not been studied in the NLP field, primarily due to the lack of appropriate datasets mimicking the process of book reading. We present the first labeled dataset PersoNet for this problem. Our novel annotation strategy involves annotating user notes from online reading apps as a proxy for the original books. Experiments and human studies indicate that our dataset construction is both efficient and accurate; and our task heavily relies on long-term context to achieve accurate predictions for both machines and humans.
# Personality Understanding Of Fictional Characters During Book Reading Mo Yu1∗ Jiangnan Li2∗ Shunyu Yao3 **Wenjie Pang**1 Xiaochen Zhou4 Xiao Zhou1 Fandong Meng1 **Jie Zhou**1 1Pattern Recognition Center, WeChat AI 2Institute of Information Engineering, Chinese Academy of Sciences 3Princeton University 4Syracuse University moyumyu@global.tencent.com lijiangnan@iie.ac.cn ## Abstract ![0_Image_0.Png](0_Image_0.Png) Comprehending characters' personalities is a crucial aspect of story reading. As readers engage with a story, their understanding of a character evolves based on new events and information; and multiple fine-grained aspects of personalities can be perceived. This leads to a natural problem of **situated and fine-grained** personality understanding. The problem has not been studied in the NLP field, primarily due to the lack of appropriate datasets mimicking the process of book reading. We present the first labeled dataset PERSONET for this problem. Our novel annotation strategy involves annotating user notes from online reading apps as a proxy for the original books. Experiments and human studies indicate that our dataset construction is both efficient and accurate; and our task heavily relies on long-term context to achieve accurate predictions for both machines and humans.1 ## 1 **Introduction** Lively fictional characters with distinct personalities are the first drive of the plotline developments. The authors shape the characters with various personality types, which distinguish a character from others and explain the motivations and behaviors of the characters. As a reverse process, the readers grasp the characters' personalities during reading a story, which helps to understand the logics of a plot and predict its future developments. The NLP community has also recognized the values of personality understanding; and conducted studies (Bamman et al., 2013; Flekova and Gurevych, 2015; Sang et al., 2022b) along this direction. In the problem definition of the existing tasks, the input is an entire book. By construction, they ask for the general impression of character personalities. Also for this reason, they only focus on coarse-grained personality types, *e.g.*, the four coarse MBTI types (Myers and McCaulley, 1988). ∗Authors contributed equally to this paper. 1Available at https://github.com/Gorov/personet_acl23. To make a personality prediction task more practical and useful, we consider two important aspects of character understanding in real life that have not been studied in the context of machine reading. First, we aim at predicting **fine-grained** personality types, with an exhaustive vocabulary of personality traits as the targets. Second and more importantly, we study the continuous-process nature of story reading - As people read, they form dynamic impressions of the characters and plots. We name this process **situated** comprehension. Specific to personality understanding, a character may have multi-faced personalities. In a certain point of the story, the character's behaviors can reflect one of them when faced the situation and events at the time. Human readers have the ability to use their knowledge of what has happened so far (*i.e.,* the history) to understand the character in the current situation. We hence propose to study **situated personality prediction**, which differs from the static prediction problem studied before. While the aforementioned two problems are practical and common in real life, they create new 14784 challenges in dataset creation, especically the latter. To accurately mimic the human reading process, annotators would need to read entire books, which is not practical due to the significant time required. We overcome this annotation difficulty and create a large-scale dataset for personality prediction in the reading processes. To achieve this goal, we propose a novel annotation strategy that utilizes publicly available book notes. Recent online reading apps such as WeRead2(shown in Figure 1) allow users to take notes while reading a book. As users read, they can add notes at the current reading position. These notes are linked to specific text in the book, which is highlighted with a dotted underline, referred to as **underlined texts** throughout this paper. This mechanism ensures that the notes accurately reflect the thoughts the user had while reading the surrounding text of the underlined text. Based on this resource, we propose our strategy of annotating *user notes as a delegate of the book* reading process. Specific to our task of personality prediction, this corresponds to (1) identifying if a user note discusses the personality trait of a character; and (2) associating the trait label to the context at the note location. We take user notes that contain at least a character name and a personality trait word, and ask human annotators to confirm if the trait is a modifier of the character in the note text (*i.e.*, the user note mentions that the character has the trait). The verified notes serve as nature labels of character personalities reflected by the surroundings of the underlined texts. By using this approach, we collect labeled data that only requires annotators to read short notes, without the need for knowledge about the books themselves. With our new strategy, we create our situated personality prediction dataset, PERSONET, that contains ∼32K instances from 33 books in the classic literature domain. We prove that our annotation strategy is *efficient* as each worker only requires a median of 23.7s to finish one sample. The whole annotation process costed in total $2,400 and 471.8 hours (distributed to 20 working days by 11 annotators). It is also *accurate* evidenced by both the over 88% inter-annotator agreement. In addition, we make the dataset bilingual in both English and Chinese with automatic book sentence alignment and manual character alignment. We conduct experiments with our dataset in two folds. First, we develop various improvement over the standard pre-trained models, including enabling the models to use different types of long contexts, equipping the models with oracle history trait information, and task-oriented unsupervised training. Second, we conduct extensive human studies with people who have read the books (*i.e.*, with the knowledge of the book history) and not. Our results show that (1) our task is challenging as humans with knowledge of book history can achieve more than 70% accuracy, compared to the best model accuracy of ∼45%; (2) our task heavily requires the long context modeling, as introducing characters' history information significantly improves the model accuracy; and humans without the book history can only perform on par with models. We make the following contributions: - A dataset, PERSONET, that is the first benchmark of *situated reading comprehension* and of fine-grained personality prediction on books. We prove that our dataset is a valid assessment to long context understanding for both machines and humans without significant shortcuts. - A novel dataset creation approach for book comprehension problems based on user notes, which is efficient and accurate. - Task-oriented unsupervised training and character history enhancement methods that improve on our task, providing insights to future work. ## 2 **Related Work** Story book understanding has been recognized as a challenging and rewarding direction (Piper et al., 2021; Sang et al., 2022a). Many evaluation benchmarks on various narrative understanding tasks have been developed, such as plot structure analysis (Saldias and Roy, 2020; Papalampidi et al., 2019), question answering (Richardson et al., 2013; Kocisk ˇ y et al. ` , 2018; Xu et al., 2022), summarization (Ladhak et al., 2020; Krysci ´ nski et al. ´ , 2021; Chen et al., 2022), character identification and relationship extraction (Elson et al., 2010; Elsner, 2012; Elangovan and Eisenstein, 2015; Iyyer et al., 2016; Chaturvedi et al., 2016; Kim and Klinger, 2019; Sang et al., 2022c). All of the prior work takes the entire long story as input to a model for predictions. None of them considers the *situated* reading process like ours. Existing strategies of dataset construction over long stories fall into the following categories: •A straightforward way is **to have labelers read the** entire stories. Because of the huge efforts, it only works for short stories for young children (Xu et al., 2022) or simpler tasks like coref (Bamman et al., 2019), NER (Bamman, 2020) and quotation attribution (Vishnubhotla et al., 2022). •**Using the** book summaries as proxy of the original stories, e.g., the creation of book-level question answering task (Kocisk ˇ y et al. ` , 2018). The created data usually only covers abstract and major events in the book, as shown in (Mou et al., 2021). Thus the types of comprehension skills that can be assessed with this strategy are limited. •**Exploiting Web resources created by fans or experts**. Flekova and Gurevych (2015) used fans' rated MBTI types to create a classification task for book characters; Ladhak et al. (2020); Krysci ´ nski et al. ´ (2021) created a book chapter summarization task based on summaries on the English learning websites; and Thai et al. (2022) created a book retrieval task based on quotes in literature reviews. The drawback of this strategy is that the tasks can be supported are limited by the available resources. •**Automatically** created cloze tests is a traditional strategy. With specifically designed techniques, the clozes can be made resolvable only with global context, *e.g.*, (Rae et al., 2020; Sang et al., 2022c; Yu et al., 2022). The problem of this method is that the created datasets usually have unclear assessment goals. The limitations of these strategies make them insufficient to create datasets for our task of situated personality understanding. ## 3 **Problem Definition** Our PERSONET is the first task on situated prediction of characters' personality traits in book contexts. That is, we aim to predict the traits reflected by a local snippet of book, given all the previous book content as the background (Figure 1). Formally, we consider a local book snippet S (i) = {sk (i) 1 , sk (i) 2 , ..., sk (i) J }. Each sk (i) j is a sentence from the book, with k (i) jthe absolute position of the sentence in the book. Each S in our task depicts a character's personality. Therefore, it is associated with a pair of (*c, t*), where c is a character name or alias and t is the personality trait of c that reflected by S. Note that different pairs may share a same snippet. Our task is then to predict: $$P(y=t|c,{\mathcal{S}}^{(i)},{\mathcal{H}}^{(i)}=s_{1:k_{1}^{(i)}}),$$ ), (1) where s1:k (i) 1 refers to all the sentences before S (i) in the book. We split the books into training, dev and test sets, so that the evaluation characters are unseen during training. For evaluation, we adopt a multi-choice setting. For each instance, we sampled 4 negative candidates, two from the top-20 most frequent traits and the rest from the whole list. Combining the negative choices with t, we have a candidate set T . Our data thus form a tuple (S, H = s1:t (i) k , c, t, T ). ## 4 Our Personet **Dataset** 4.1 **Data Source** List of Personality Traits Following previous work (Shuster et al., 2019), we use the list of 818 English personality traits from the MIT Ideonomy project.3 We translate the traits into Chinese with Youdao dictionary,4then ask human annotators to select all the translated meanings that depict personality in Chinese. There are 499 English traits and 565 Chinese traits left that are bilingually aligned. Books and Notes We collect 100 public books available in the Gutenberg project. For each book, we find all its Chinese-translated versions on the WeRead app and collect all their user notes. We kept notes that (1) contain any traits, (2) contain any person names5and (3) with lengths less than 100 words (relatively shorter notes can improve human annotation efficiency). We filtered out the books with less than 100 notes left, leaving 33 books and 194 of their Chinese translations. These books have 110,114 notes that contain 140,268 traits in total. Note Clustering It is common for multiple users to comment on the same part of a book, discussing the same character. When these users express similar opinions about a character, it leads to duplication. To remove this duplication for data annotating, we group the notes according to their positions, defined as the center token offset of its associated snippet S (i)(*i.e.*, its underlined text). Notes with distances smaller than 100 tokens are grouped, leading to 27,678 note clusters. We take the unique traits within each cluster for human labeling, which corresponds to 113,026 samples as defined in Section 3. The notes are anonymized for human annotation. Extension of the Snippets The lengths of underlined texts can vary significantly, which means they may not always provide a representative context 3http://ideonomy.mit.edu/essays/traits.html. 4https://cidian.youdao.com/. 5We use Spacy (zh_core_web_lg) for NER. for reflecting a character's personality, particularly when the texts are very short. We address this issue by extending each S (i)from the underlined text to a window of 480 tokens. This window is generally large enough to encompass a scene and ensures that the context relevant to the user note is included. The reason for choosing this window size is that it is typically longer than one page displayed by the WeRead App (as shown in Figure 1) - users often write notes on the same page while reading the context, rather than flipping through previous or subsequent pages.6 ## 4.2 **Dataset Construction** Our dataset construction consists of two major steps: (1) human annotation of user notes; (2) projection of labeled data from Chinese to English. In addition, we show that (3) our data construction strategy enables to build an accurate note classifier for automatic weakly-supervised data labeling. Step 1: Human Annotation This step requires the annotators to read each user note, and determine if it discusses the personality of a character. We present the annotators with notes that contain at least one trait word in our vocabulary in Section 4.1. The note is paired with the *underlined book content*, which is optional to read, if they think the note itself is ambiguous. The annotators are then asked to (1) judge if the note is indeed about a certain character's trait; then (2) marked the target character name with the trait from the note. The first step takes most of the human efforts. We wrote concrete guidelines (Figure 4 in Appendix A) for the decision making process. The annotators are citizens in China who have received at least high school education (which, in the Chinese education system, covers most of the general knowledge about classic literature). Therefore it is more convenient for them to work in Chinese; and Figure 4 lists both the original guidelines in Chinese and their English translations. Our annotation interface (with English translations) is shown in Appendix A. Once the annotators confirm that the given trait word describes some characters, they are required to annotate the character name by dragging from the note text. If not, the character name will be left empty. Step 2: Bilingual Projection The human annotation step has created a personality prediction dataset in Chinese. Next we project the data to English. Since the same English book may have multiple translated books in Chinese, their labeled data scattered. By projecting the labeled data to English books, the book version is unified and the annotations become dense. According to Section 3, to create an English version of our dataset, we only need to project the traits t, the characters c and the snippets (positions) S. The trait t is already from a bilingual vocabulary, so we only need to focus on the latter two. •**Book Alignment** The projection of S is equivalent to finding each labeled instance's sentence positions in the English book, which is essentially a sentence alignment problem. Specifically, we sentencize the books firstly with Spacy; then utilize the *vecalign* (Thompson and Koehn, 2019) toolkit to achieve sentence alignments among books. We represent each sentence with the default number (10) of its consecutive sentences, and employ the multilingual sentence embedding *LASER*7to embed the sentences. After that, *vecalign* performans sentence alignments with dynamic programming based on the embeddings. With bilingual sentence alignment, the position of each labeled instance can be mapped to the corresponding position in the English book, *i.e.*, Sen = {a(s)|∀s *∈ S}*, where a(s) refers to aligned position of the Chinese sentence s in the English book. For most of the S in our dataset, we can find consecutive Sen as the aligned results. There are a few instances mapped to empty. We excluded these cases in our English dataset. There are also a few instances mapped to inconsecutive English sentences, sometimes in a wide range. For this situation, we take the median position of the mapped English sentences and include the consecutive context in a window as the projection. •**Character Name Projection** We manually project the list of 377 frequent (appear >10 times in our labeled data) character names to English. We askeds two annotators to find the English names of these characters; and resolved all the inconsistency after they complete their own annotation jobs. Step 3: Weakly-Supervised Data Our method reduces the problem of annotation over books to annotation over notes. This makes it possible to build a note classifier for automatic data augmentation. We collect another 65,521 notes from the same book collection that consists of at least one trait 7https://github.com/facebookresearch/LASER. word and one person name. By pairing traits with names within the same notes, we create 154,030 examples. Then we train a binary *roberta-wwmext* (Cui et al., 2020) classifier over our humanlabeled data to determine if the note discusses the character's trait, *i.e.*, the same task in human annotation but without the need of marking target characters. For each human annotated note, if the note is recognized as describing a trait of a character, it is used as a positive example. For those labeled as irrelevant to character traits, *i.e.*, no characters are annotated, we denote them as negative examples. Cross-validation on the human-labeled data shows that our classifier is accurate: 91.1% and 90.2% on the dev and test set. Applying our classifier to these unlabeled examples, we recognize 31,346 examples as describing characters' traits. ## 4.3 **Quality Of The Annotated Data** This section proves the accuracy of our data construction method via human study. Correctness of Book Notes First of all, we need to prove that the user notes are indeed an accurate delegate of books. That is, when a note mentions a personality of a character, whether it is highly consistent to what the book content reflects. This study requires annotators who have read the books to make the correct judgement. We selected four books with two annotators who have read and are familiar with them. Each annotator labeled two books. We sampled in total 431 notes from these books. The annotators are required to judge if the note is accurate about the character or not. We present the corresponding underlined content along with the note, so that the annotators can identify which part the note is commenting. The results in Table 1 show that 89.1% of the notes are accurate understanding of the books. There are 9.7% *ambiguous* examples, meaning the annotated traits are implied by the current place of the books, but might be falsified later, *e.g.*, the authors may intend to mislead the readers to create surprisal or tension. These ambiguous labels give valid data for our problem of *dynamic personality prediction*, according to our description at the beginning of Section 1 and Eq. (1). Accuracy of Human Labels Next, we proved that our annotation process leads to accurate human labels. This accuracy is verified in two ways. First, we compute the inter-annotator agreement, with a duplicated set of 3,000 notes during an- ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) Table 1: Notes (%) that consistently reflect the character personalities in the stories. Table 2: Human study: quality of bilingual alignment. | Set | #Instance | | | | |-------------|-------------|---------|---------|--------| | #Books | #Chars | English | Chinese | | | Train | 17 | 148 | 18,190 | 18,273 | | Weakly Sup | 26,244 | 26,331 | | | | Development | 6 | 54 | 3,745 | 3,751 | | Test | 10 | 72 | 3,624 | 3,647 | | Total | 33 | 274 | 51,803 | 52,002 | notation. 88.67% of the duplicated samples receive consistent labels. The Cohen's Kappa (Cohen, 1960) is 0.849, which indicates nearly perfect agreement (Viera et al., 2005). Second, as shown in the Step 3 in Section 4.2, a fairly accurate note classifier can be trained on our human-labeled data (91.1% and 90.2% accuracy on dev and test). Both tests confirm the accuracy of our annotation strategy. Considering the relevance of the book notes (Table 1), this gives an estimation of overall accuracy around 87.6∼89.1%. The two endpoints are computed with inter-annotator agreement and classifier accuracy accordingly. It confirms that our dataset is overall accurate. Table 7 in Appendix B gives some difficult examples that created disagreements. There are two major sources of difficulties: (1) the trait word has multiple meanings in Chinese and the usage does not represent the sense of the trait; (2) a trait word is used to recall the general impression or history behavior of a character in an implicit way. Accuracy of Cross-Lingual Alignment Finally, we evaluate the quality of the bilingual alignment. We randomly sampled 200 labeled instances for human study. We present to the annotators the snippet S of each instance in the Chinese book and their aligned sentences from the English books. The human annotators were asked to rate the alignments into four grades: perfect/high overlap/low overlap/no match, *i.e.*, all/>50%/<=50%/none of the Chinese sentences have their translations in the paired English sentences. Table 2 show that >97% of the cases fall into the *perfect* and *high overlap* categories. When taking texts from the median (a) Dantès (*i.e.*, Count Monte Cristo) (b) Albert (*The Count of Monte Cristo*) (c) Plots of sentiments of traits along time ![5_image_1.png](5_image_1.png) Figure 2: Word clouds and plots of sentiments of traits along time for the characters. position of the sentences for model inputs, these categories both can make accurate projections of annotations to the English books. ## 4.4 **Data Statistics And Visualization** Data Statistics Table 3 shows the statistics of our PERSONET. We give the full list of books in Appendix C. We can also see that our dataset contains a wide range of book characters. In the annotated training set, approximately 41% of the notes are about positive traits, 36% are about negative traits, and 23% are about neutral traits. This distribution reveals a slight bias, which can be attributed to the fact that users are more inclined to write notes when they have strong sentiments or opinions about a character. Visualization of Our Dataset Figure 2 visualizes the major traits and the polarity of traits over time for two of the most popular characters. It can be found that the major traits match readers' common impressions; and the trends well align with the common feelings of readers during reading. This further confirms the quality of our data. Detailed explanations of the figures and more examples can be found in Appendix D and Figure 7. ## 5 **Models For Persona Prediction** We design models based on two different types of pre-trained models, BERT (Devlin et al., 2019) and Longformer (Beltagy et al., 2020). We use the latter model to investigate the strength of models that are pre-trained to handle long contexts. ## 5.1 **Input To The Reader Models** Our data instance consists of a tuple (S, H*, c, t,* T ). Here S is a book snippet that expresses a personality trait t of a character c. H is the previous history of S in the book. T is a set of candidate traits with t as an element. The task is to rank t to the top within T given (S, H) and c. We represent the input (S, H, c) with the following format options: •**No history** represents the input as x = [c [SEP] S], ![5_image_0.png](5_image_0.png) i.e., does not use the history H. •**Extended history:** x = [c [SEP] S [SEP] Hprev], where Hprev ⊂ H includes sentences that are adjacent to S, truncated by models' length limited. •**Character history:** x = [c [SEP] S [SEP] Sc]. Sc ⊂ H includes snippets to the left of S that contains the character c in our dataset. ## 5.2 **Model Architectures** Our methods compute the score of an input x having a trait t, based on the siamese model. Text Encoding Firstly, we use a pre-trained LM (PLM, either BERT or Longformer) to encode x and t to the embedding space. The encoded contextualized embeddings of input and output are denoted as X = PLM(x) ∈ R lx×d, where lx is the length of x and d is the size of hidden states; and T = PLM(t) ∈ R lt×d, with ltthe length of t. Baseline Siamese Model As our baseline models, we compute a weighted sum over X to get a vector representation of the input. Specifically, we use a linear model to compute the attention score over each token of x: A = Attention(H), α = Softmax(A). The attention αx is then used to summarize the hidden states X a vector x = XTα. The sequence of a trait t is usually short (*e.g.*, a single word's BPE tokenization). Therefore we simply take the average t = mean(T). The model makes prediction with t = arg maxt∈T <x, t>. Contextualization with History When the input x contains the extended or character history as defined in Section 5.1, we need to separate the information of the history from the current context. We maintain a mask H ∈ R lx×1, such that H[j] = 1 if the j-th word belongs to the appended history and 0 otherwise. Two attention vectors are computed for the current snippet and the history: αs = Softmax(A ⊙ (1 − *H)), α*h = Softmax(A ⊙ H). The corresponding summaried vectors are s = XTαs and h = XTαh. The prediction function is then modified with a gating function σ(s) added: t = arg max t∈T σ(s) < s, t > +(1 − σ(s)) < h, t > . (2) ## 5.3 **Unsupervised Training** Finally, we propose an unsupervised training task to improve personality prediction. The unsupervised task is used to pre-train the classifiers, before they are fine-tuned on our labeled data. The task mimics the problem definition in Section 3 and constructs tuples of (S, t). We first extract sentences that contain trait words. If a sentence sj contains a trait t, we keep a local window of it as the book snippet, with the sentence itself removed. That is, S (i) = {sj−w, · · · , sj−1, sj+1, · · · , sj+w}. Intuitively, since S provides the context of sj , it is informative for inferring the appearance of the trait described in sj . Therefore this unsupervised task helps to find narrative clues of traits thus can help to better pre-train the encoders. The method has the limitation of not being character-specific, hence not compatible with our character-history-based models. We leave it to future work. ## 6 **Experiments** 6.1 **Experimental Settings** We use *bert-base-uncased* and *longformer-base4096* as backbones for English experiments; and Roberta-wwm-ext for the Chinese experiments. Hyperparameters For our siamese models with and without history, the most important hyperparameter is the lengths of S and H. We set the maximal length of S to 480 tokens for most of the models. For models with history we set the maximum of |S| + |H|=1,600. To show the better performance of our usage of history, we also compare with Longformer with a maximum |S|=2K tokens (the best a single A100 GPU can handle). The batch size is 40 for BERT-based models; and 8 for Longformer-based models with gradient accumulation every 5 batches. Each epoch of BERT and Longformer models takes ∼7 and ∼40 minutes respectively on a single A100 GPU. We set the learning rate to 2e−5. We conduct early-stopping on the dev set; and run 5 times to compute the average and stand derivation for all the methods. Additional Baselines Besides the models in Section 5, we further compared with the follows: •**Models with Oracle Traits in History**, which uses the character's history traits in replace of the history texts. For each instance, we take its target character c's other instances prior to it, and concatenate their groundtruth traits as a sequence to replace H in the model of Eq. (2). •**Char-Majority**, which always predicts the most frequent trait for a character. This is used to show the diversity of traits for the same character (*i.e.*, necessity of situated prediction). •**GPT-davinci** (text-davinci003), the few-shot instruct-GPT (Ouyang et al., 2022). •**ChatGPT**, which conduct zero-shot prediction on our task thus can take longer inputs. We test |S|=480 and 1.6K as in our experiments with trained models. •**Humans:** we present the same format of our instances with maximal |S|=480 to humans to get their performance. Furthermore, we added LoRA (Hu et al., 2022) fine-tuning of the **LLaMA** (Touvron et al., 2023) and **WeLM** (Su et al., 2022) on our PERSONET as additional baselines. The fine-tuning of large language models and the usage of ChatGPT reflect the latest state-of-the-arts in concurrence with our work. ## 6.2 **Overall Results** Our main results are shown in Table 4. First, all the three models without the usages of history achieve similar results. The Longformer with a 2K window does not give better performance, showing that simply increasing the length of input without including useful history information is not helpful for our task. Second, our model with character history achieves the best results. Replacing the character history with extended history slightly reduces the dev performance but lead to significant test performance drop (according to the standard derivation). Among all the supervised-only methods, this model is the only on that maintains consistent dev and test accuracy. Third, our unsupervised training significantly improve the accuracy for all the models. Fourth, the oracle history traits improve the supervised accuracy with a large margin. Yet for Longformer, adding character history and unsupervised training makes the gap smaller. Finally, the best human performance with knowledge of story history greatly outperforms all the models with and without oracle information with 20∼23%, showing the challenges and great potential of our PERSONET. These results highlight the importance of incorporating history information in solving our | System | Len | Accuracy | | |--------------------------|-------|------------|------------| | Dev | Test | | | | Random | - | 20.00 | 20.00 | | Frequent Traits | - | 14.10 | 12.75 | | BERT (no-hist) | 480 | 45.01±0.64 | 42.96±1.07 | | + unsup | 480 | 46.18±0.49 | 44.93±1.01 | | Longformer (no-hist) | 480 | 45.02±0.45 | 42.75±0.97 | | Longformer (no-hist) | 2K | 45.00±0.44 | 42.42±0.39 | | Char-Hist-Longformer | 1.6K | 45.50±0.54 | 45.33±1.11 | | + unsup | 1.6K | 46.39±0.63 | 45.85±0.72 | | w/ extend-hist | 1.6K | 45.46±0.67 | 43.44±0.72 | | + unsup | 1.6K | 45.93±0.52 | 44.54±1.49 | | w/ Oracle Information | | | | | BERT + hist traits | 480 | 50.15±1.01 | 50.02±1.03 | | Longformer + hist traits | 2K | 48.66±0.96 | 48.11±1.21 | | Char-Majority | - | 16.10 | 17.25 | | GPT-davinci 5-shot∗ | 480 | 34.88 | 31.51 | | ChatGPT 0-shot∗ | 480 | 33.72 | 42.47 | | ChatGPT 0-shot∗ | 1.6K | 36.05 | 36.99 | | LLaMA + LoRA-sft∗ | 1.6K | 47.67 | 49.32 | | Human w/o history∗ | 480 | 44.19 | 40.54 | | Human w/ history∗ | 480 | 69.77 | 65.75 | Table 4: Overall performance (%) on our PERSONET-en task. (*) Results were conducted on a subset of the dataset. task; and reveal that characters exhibit dynamic personalities that evolve over time, thus solely relying on history traits (even oracle) is limited. The two methods based on large language models, namely GPT-davinci and ChatGPT, performed worse than the models trained on our dataset. This indicates that our task is still a challenge for these general-purpose models. Moreover, although ChatGPT performed better than GPT-davinci, it was not better overall to use the longer context length of 1.6K as compared to using shorter contexts. This suggests that ChatGPT may not have been trained to effectively utilize long context in our situated reading setting. Chinese Task Performance Table 5 shows results on the Chinese version of PERSONET. The results are in general higher than those in the English setting for two reasons: (1) during annotation we have the semantic space of traits in Chinese, so their English translations may not be the most commonly used words. (2) the user notes tend to reuse words in the books, so there is higher change that some traits explicitly appear in Chinese books. Performance of Fine-Tuned LLMs To fine-tune the LLMs, we adopt the same setup in the ChatGPT experiments, where the same prompts serve as inputs and the ground truth answers are used as outputs. The optimization focuses on minimizing perplexity concerning the outputs. Regarding | System | Dev | Test | |-----------------------|-------|--------| | BERT Reader | 49.72 | 48.70 | | Multi-Row BERT Reader | 50.25 | 49.25 | | BERT w/ Trait-History | 53.29 | 51.25 | | GPT-davinci 5-shot∗ | 33.72 | 32.78 | | ChatGPT 0-shot∗ | 34.88 | 41.89 | | WeLM + LoRA-sft∗ | 51.16 | 54.05 | | Human w/ history∗ | 73.26 | 68.92 | hyperparameter tuning, we specifically adjust the rank r, weight α, and number of training epochs. For model selection, we rely on the accuracy on the development subset utilized in our human study, which sets r = 8, α = 1 and 10 training epochs. The results in Table 4 and 5 show that the finetuned LLM achieves slightly better results compared to our proposed baselines. However, it still significantly lags behind human performance by a considerable margin. Interestingly, unlike the other models and humans, the fine-tuned LLM perform better on the testing subset compared to the development one. Our hypothesis is that the testing book Notre-Dame de Paris is more popular on the Internet, thus may be more sufficiently trained during the pre-training stages of LLaMA and WeLM. The LLM fine-tuning results can be potentially improved by employing a contrastive training approach similar to our proposed models. We leave this to future study. ## 6.3 **Human Study** We conduct human study to understand the challenges of our task. We sampled instances from the two books that have most instances from the development and testing sets; and asked human annotators (who are co-authors of the paper but have not seen the labeled data before) to complete our multi-choice task. There are two types of annotators: Type-I who have not read the books before (*human w/o history*); and Type-II who have read the books (*human w/ history*). We have annotated in total 160 samples. Each sample is guaranteed to be annotated by two humans, one with history and one without history. Ratio of Ambiguous Instances Sometimes an event in a book can depict multiple aspects of personality. When the sampled negative choices share similarity to these personality traits, it leads to ambiguous cases with more than one correct answers. To investigate these cases, we require the Type-II annotators to mark the instances that they believe have ambiguous labels.8 There are 41 ambiguous samples recognized, *i.e.*, ∼25% of the cases have more than one correct answers. This indicates an ∼**87.2% approximated upperbound** accuracy of our task, if we consider each ambiguous instance has two choices that are correct. In the future, we can leverage our note clusters to mitigate this ambiguity by ensuring that negative candidates do not appear in the cluster from which the snippet originates. Main Findings The knowledge of book history is not only important to models, but also to humans. Table 6 compares humans performance with and without history. There is an ∼25% performance gap. Furthermore, human performance without history is only comparable to the best model performance (selected according to dev accuracy, which performs 47.18% and 47.21% on the full dev and test data). These results confirm that our task raises the core challenge of long context understanding. Detailed results show that the Type-I annotators labeled ∼35% of cases that they believe unsolvable because of their lacking of the book history. After verification by Type-II annotators, there are 37 cases left for close examination. It reveals that the history information is critical for these cases for two major reasons: (1) there are multiple possible answers given the snippets but with the knowledge of the characters' history behavior the incorrect traits can be resolved (17 of 37); (2) the plots in the snippets cannot be understood and linked to any personality without book history (11 of 37). There is a third difficult category (9 of 37), where reasoning is required to draw connections, *e.g.*, consequence or analogy between the current snippet and a character's previously demonstrated personality. Examples of these categories can be found in Table 10 in Appendix E. ## 6.4 **Analysis** Learning Curve Figure 3 plots the learning curve of our PERSONET task. The curves shows that the size of our dataset is large enough as the curves become flat after the point of 30K. More importantly, the results justify the accuracy of our data construction strategy. As adding weak supervision (all) significantly outperforms training with only human-labeled data (dotted lines). 8Because these people have memory of the books, they can accurately distinguish the ambiguous cases from those can be disambiguated by the history. | System | Data | | |--------------------|--------|-------| | All | Unamb | | | Best model | 48.75 | 49.58 | | GPT-davinci 5-shot | 33.33 | 38.46 | | ChatGPT zero-shot | 37.74 | 41.03 | | LLaMA + LoRA-sft | 48.43 | 52.63 | | Human w/o history | 42.50 | 50.42 | | Human w/ history | 67.92 | 73.50 | ![8_image_0.png](8_image_0.png) Difficult Trait Types We examine the traits that appear more than 20 times in the dev set. The most difficult types include *Confident* (0.00%), *Mature* (5.56%), *Liberal* (7.41%), *Humorous* (7.69%), *Impressionable* (8.82%), *Gentle* (9.09%), *Optimistic* (10.81%), *Rational* (11.36%), *Imprudent* (14.29%) and *Insincere* (16.00%). It can be found that most of the difficulty types are abstract, which are usually not explicit depicted in the books but require reasoning from characters' behaviors. ## 7 **Conclusion** We propose a dataset PERSONET for the new problem of situated personality understanding of book characters. We overcome the difficulty in dataset construction with a new strategy of annotating the user notes as a proxy for the original books. Our dataset constuction method maintains both efficiency and accuracy. Experiments show that the task raised challenges of long-text understanding for both humans and machines. ## Limitations Our propose annotation strategy can be applied to labeling other MRC problems, no matter situated comprehension ones or not. However, when generalizing to other problems other than personality prediction we studied here, the accuracy of the user notes may vary with the difficulty of tasks. Additional human verification on the correctness of notes like in our Section 4.3 need to be conducted. Our unsupervised training technique does not support the Longformer reader with character history (Char-Hist Longformer) yet. Therefore, the improvement from unsupervised training for our this model is smaller. While Longformer is common in benchmarking for long story understanding tasks. There are other families of models (Rae et al., 2020; Izacard and Grave, 2021; Ainslie et al., 2020; Xiong et al., 2021; Pang et al., 2022) handling long text encoding. We leave the comparison with these models to future work. Potential Risks Like the other work that based on the similar set of books (Bamman et al., 2019; Bamman, 2020; Vishnubhotla et al., 2022; Thai et al., 2022), the classic literature may be limited by the time of writing, thus raise fairness considerations. However, please note that our dataset construction strategy is not limited to these books, but can work with any books on WeRead to create a sampled book set without such biases. The main reason we stick with the current list of books is for reproducibility since they are publicly available. ## References Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. Etc: Encoding long and structured inputs in transformers. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 268–284. David Bamman. 2020. Litbank: Born-literary natural language processing. *Computational Humanites, Debates in Digital Humanities (2020, preprint)*. David Bamman, Brendan O'Connor, and Noah A Smith. 2013. Learning latent personas of film characters. In *Proceedings of the 51st Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 352–361. David Bamman, Sejal Popat, and Sheng Shen. 2019. An annotated dataset of literary entities. In *Proceedings* of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2138–2144. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. *arXiv* preprint arXiv:2004.05150. Snigdha Chaturvedi, Shashank Srivastava, Hal Daume III, and Chris Dyer. 2016. Modeling evolving relationships between characters in literary novels. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30. Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2022. Summscreen: A dataset for abstractive screenplay summarization. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8602–8615. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. *Educational and psychological measurement*, 20(1):37–46. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 657–668, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT 2019*, pages 4171–4186. Vinodh Krishnan Elangovan and Jacob Eisenstein. 2015. "you're mr. lebowski, i'm the dude": Inducing address term formality in signed social networks. In The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1616–1626. Micha Elsner. 2012. Character-based kernels for novelistic plot structure. In *Proceedings of the 13th Conference of the European Chapter of the Association* for Computational Linguistics, pages 634–644. David K. Elson, Nicholas Dames, and Kathleen R. McKeown. 2010. Extracting social networks from literary fiction. In ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 138–147. The Association for Computer Linguistics. Lucie Flekova and Iryna Gurevych. 2015. Personality profiling of fictional characters using sense-level links between lexical resources. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1805–1816. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daumé III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In *Proceedings* of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1534–1544. Gautier Izacard and Édouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. pages 874–880. Evgeny Kim and Roman Klinger. 2019. Frowning frodo, wincing leia, and a seriously great friendship: Learning to classify emotional relationships of fictional characters. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 647–653. Tomáš Kocisk ˇ y, Jonathan Schwarz, Phil Blunsom, Chris ` Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. *Transactions of the Association for Computational Linguistics*, 6:317–328. Wojciech Krysci ´ nski, Nazneen Rajani, Divyansh Agar- ´ wal, Caiming Xiong, and Dragomir Radev. 2021. Booksum: A collection of datasets for longform narrative summarization. arXiv preprint arXiv:2105.08209. Faisal Ladhak, Bryan Li, Yaser Al-Onaizan, and Kathleen McKeown. 2020. Exploring content selection in summarization of novel chapters. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5043–5054. Xiangyang Mou, Chenghao Yang, Mo Yu, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, and Hui Su. 2021. Narrative question answering with cutting-edge opendomain QA techniques: A comprehensive study. Trans. Assoc. Comput. Linguistics, 9:1032–1046. Isabel Briggs Myers and Mary H McCaulley. 1988. Myers-Briggs type indicator: MBTI. Consulting Psychologists Press Palo Alto. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155. Bo Pang, Erik Nijkamp, Wojciech Krysci ´ nski, Sil- ´ vio Savarese, Yingbo Zhou, and Caiming Xiong. 2022. Long document summarization with topdown and bottom-up inference. *arXiv preprint* arXiv:2203.07586. Pinelopi Papalampidi, Frank Keller, and Mirella Lapata. 2019. Movie plot analysis via turning point identification. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1707–1717. Andrew Piper, Richard Jean So, and David Bamman. 2021. Narrative theory for computational narrative understanding. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 298–311. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In *8th International Conference on Learning* Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 193– 203. Belén Saldias and Deb Roy. 2020. Exploring aspects of similarity between spoken personal narratives by disentangling them into narrative clause types. In Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events, NUSE@ACL 2020, Online, July 9, 2020, pages 78–86. Association for Computational Linguistics. Yisi Sang, Xiangyang Mou, Jing Li, Jeffrey Stanton, and Mo Yu. 2022a. A survey of machine narrative reading comprehension assessments. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 5580–5587. ijcai.org. Yisi Sang, Xiangyang Mou, Mo Yu, Dakuo Wang, Jing Li, and Jeffrey Stanton. 2022b. MBTI personality prediction for fictional characters using movie scripts. In *Findings of the Association for Computational* Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, pages 6715–6724. Association for Computational Linguistics. Yisi Sang, Xiangyang Mou, Mo Yu, Shunyu Yao, Jing Li, and Jeffrey Stanton. 2022c. Tvshowguess: Character comprehension in stories as speaker guessing. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4267–4287. Association for Computational Linguistics. Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, and Jason Weston. 2019. Engaging image captioning via personality. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12516–12526. Hui Su, Xiao Zhou, Houjin Yu, Yuwen Chen, Zilin Zhu, Yang Yu, and Jie Zhou. 2022. Welm: A wellread pre-trained language model for chinese. *CoRR*, abs/2209.10372. Katherine Thai, Yapei Chang, Kalpesh Krishna, and Mohit Iyyer. 2022. Relic: Retrieving evidence for literary claims. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Brian Thompson and Philipp Koehn. 2019. Vecalign: Improved sentence alignment in linear time and space. In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1342– 1348, Hong Kong, China. Association for Computational Linguistics. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. *CoRR*, abs/2302.13971. Anthony J Viera, Joanne M Garrett, et al. 2005. Understanding interobserver agreement: the kappa statistic. Fam med, 37(5):360–363. Krishnapriya Vishnubhotla, Adam Hammond, and Graeme Hirst. 2022. The project dialogism novel corpus: A dataset for quotation attribution in literary texts. In *Proceedings of the Thirteenth Language* Resources and Evaluation Conference, pages 5838– 5848. European Language Resources Association. Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. 2021. Nyströmformer: A nyström-based algorithm for approximating self-attention. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 35, pages 14138–14148. Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby JiaJun Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and Mark Warschauer. 2022. Fantastic questions and where to find them: Fairytaleqa - an authentic dataset for narrative comprehension. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 447–460. Association for Computational Linguistics. Mo Yu, Yisi Sang, Kangsheng Pu, Zekai Wei, Han Wang, Jing Li, Yue Yu, and Jie Zhou. 2022. Few-shot character understanding in movies as an assessment to meta-learning of theory-of-mind. *arXiv preprint* arXiv:2211.04684. ## A **Annotation Guidelines And Interface** We show our guidelines in Figure 4; and the annotation interface with translations in Figure 5. ## B **Notes That Are Difficult Or Ambiguous** To Label Table 7 in Appendix B gives some difficult examples that created disagreements. There are two majors sources of difficulties. First, the trait word has multiple meanings in Chinese and the usage does not represent the sense of the trait. In the first example, "可怕的敌人 (frightening enemy)" in Chinese usually means "an very-strong enemy that is hard/impossible to beat", *i.e.*, a terrible enemy. The enemy, here refers to the protagonist Dantès, does not necessary has the *frightening* personality. Similarly, in the second example, the annotators have disagreement because some people believe in Chinese, "非凡(extraordinary)" can be used as a personality trait only when a person possesses exceptional characteristics. While some annotators think the trait can also describe a person with exceptional abilities. Second, a trait word is used to recall the general impression or history behavior of a character in an implicit way. In the third example, the user wanted to expresses that Elizabeth used to be clear-headed but becomes a fool at the dance party. This recall of the general impression *clear-headed* is not explicit, but can be inferred from the next sentence that this note is commenting on a snippet of the dance party. Therefore the user aims to comment the *foolish* trait on the snippet instead of *clear-headed*. ## C **Full Book List** Table 9 shows the detailed information of each book included in our PERSONET. ## D **Visualization** Trait Clouds Figure 6 include more word clouds for different characters. Sentiment Plots Our trait vocabulary contains in total 818 traits with polarity annotations. Specifically, there are 234 positive traits, 292 neutral traits and 292 negative traits. Figure 7 visualizes readers' sentiments towards four popular characters through the lens of traits. We map the labeled traits to their sentiments, *i.e.*, positive or negative, and then plot the sentiment along time. Here the x-axis is the position of an note with trait label, normalized by ![12_image_0.png](12_image_0.png) Table 7: Example of a human mistake. | System | Slump | All | |----------------------|---------|-------| | BERT (no-hist) | 35.98 | 40.33 | | + unsup | 56.25 | 44.58 | | Char-Hist-Longformer | 46.53 | 45.75 | Table 8: Accuracy on the slump of Figure 2(c) for the character *Albert* (144 instances) versus on all (424) of the *Albert* instances. the lengths of the books. The curves are smoothed within a window of 50 for The Count of Monte Cristo and 20 for *Notre-Dame de Paris*, depending on the sparsity of the samples. The trends well align with the common feelings of readers during the reading process. For example, the character *Albert* is in general a brave and decent person. Most readers liked his personality until he recklessly challenged *Dantès* for a duel. Then the character's reputation is saved after he found out that his father framed many people including *Dantès* and decided to give up the duel and live off his father's ill-gotten gains. One the other hand, Claude Frollo received monotone decreased rates, who appeared first as a pious and highly knowledgeable man then turned to be evil and morbid because of his obsessive for *Esmeralda*. 标注目标: 在阅读故事的过程中,读者在阅读到某一位置时所写的评论(任务输入) , 是否在评论当前故事情节下 , 某个人物(任务输出)的某种人格特质(任务输入) . 任务内容 : ( 1 ) 对给出的人格表述词,找到 "评论文本 "中人格表述词的描述对象( 被描述人物名字 ). ( 3 ) 每个人格表述词可以选择多个符合要求的人物 , 但是每个人物只需要选择一次. ![13_image_0.png](13_image_0.png) If a reader writes a comment (task input) at a certain position of the book during the reading process, please judge whether the comment is commenting on a certain personality trait (task input) of a certain character (task output). Tasks: (1)For the given word expressing a personality, find the name of the character described by the personality-expressing word in the " Comment Text ". (2) Each personality-expressing word can select multiple qualified characters, but each character only needs to select once. (3) If the "Comment Text" does not contain any characters satisfying the requirement or even no character appears in it, you can pass the selection of character and submit this item directly. (4) (Optional reading) The task provides the "original book snippet" where the comments is located as a reference. ![13_image_1.png](13_image_1.png) ![13_image_2.png](13_image_2.png) Figure 5: Our annotation interface (with English translations in blue words ). A Case Study We assessed the model performance on the points where people's ratings of Albert have dramatic fluctuations (around x=0.8). Specifically, we compared three models: the baseline BERT model without any history, the BERT model enhanced with our unsupervised objective, and the Char-Hist Longformer, which can leverage longer historical information. The results are shown in Table 8. Our findings revealed that both the enhanced models—BERT with the unsupervised objective and Char-Hist Longformer—achieved a similar level of improvement over the BERT baseline when considering the entire evaluation set of *Albert*. These results align with our experimental observations from the comprehensive evaluation data. However, it is noteworthy that the model incorporating the unsupervised objective exhibited a significantly greater enhancement at the slump of the curve. As mentioned earlier in this section, the author explicitly portrayed Albert's reckless personality through his actions and dialogues in this particular case. Even without prior knowledge of the events leading up to this point, humans can intuitively grasp Albert's personality traits. Our unsupervised task aims to capture the correlation between personality and the external expressions manifested within the narrative. This is why it proves to be more effective in this specific case. ## E **Examples Of Cases That Require** History Information The cases where history information is necessary to solve can be roughly categorized into three types according to our human study in Section 6.3. We include examples for each type in Table 10. (d) Elizabeth (*Pride and Prejudice*) (e) Quasimodo (*Notre-Dame de Paris*) (f) Claude Froll (*Notre-Dame de Paris*) (a) Dantès (*i.e.*, Count Monte Cristo) (b) Albert (*The Count of Monte Cristo*) (c) Mr. Darcy (*Pride and Prejudice*) Figure 6: Word clouds for the characters. ![15_image_1.png](15_image_1.png) ![15_image_0.png](15_image_0.png) ![15_image_2.png](15_image_2.png) ![15_image_3.png](15_image_3.png) Table 9: Detailed information of books included in our PER-SONET. Category: *(1) multiple possible answers given the snippet without history* Target Character: *Dantès* **Groundtruth Trait:** *simple* Distractors: *insincere, dirty, impressionable, loquacious* Snippet: i am the abbe faria, and have been imprisoned as you know in this chateau d ' if since the year 1811 ; previously to which i had been confined for three years in the fortress of fenestrelle. in the year 1811 i was transferred to piedmont in france. it was at this period i learned that the destiny which seemed subservient to every wish formed by napoleon, had bestowed on him a son, named king of rome even in his cradle. i was very far then from expecting the change you have just informed me of ; namely, that four years afterwards, this colossus of power would be overthrown. then who reigns in france at this moment - napoleon ii.? " " no, louis xviii. " ... *dantes ' whole attention was riveted* on a man who could thus forget his own misfortunes while occupying himself with the destinies of others. *" yes, yes, "* continued he, " ' twill be the same as it was in england. after charles i., cromwell ; after cromwell, charles ii., and then james ii., and then some son - in - law or relation, some prince of orange, a stadtholder who becomes a king. then new concessions to the people, then a constitution, then liberty. ah, my friend! " said the abbe, turning towards dantes, and surveying him with the kindling gaze of a prophet, " you are young, you will see all this come to pass. ..." Category: *(2) plot cannot be understood without history* Target Character: *The elder* **Groundtruth Trait:** *intelligent* Distractors: *confident, breezy, single-minded, decadent* Snippet: the servant soon returned. the decanter and the glass were completely empty. noirtier made a sign that he wished to speak. " why are the glass and decanter empty? " asked he ; " valentine said she only drank half the glassful. " the translation of this new question occupied another five minutes. " i do not know, " said the servant, " but the housemaid is in mademoiselle valentine ' s room : perhaps she has emptied them. " " ask her, " said morrel, translating noirtier ' s thought this time by his look. the servant went out, but returned almost immediately. " mademoiselle valentine passed through the room to go to madame de villefort ' s, " said he ; " and in passing, as she was thirsty, she drank what remained in the glass ; as for the decanter, master edward had emptied that to make a pond for his ducks. " noirtier raised his eyes to heaven, as a gambler does who stakes his all on one stroke. from that moment the old man ' s eyes were fixed on the door, and did not quit it. *it was indeed madame danglars and her* daughter whom valentine had seen ; they had been ushered into madame de villefort ' s room, who had said she would receive them there. that is why valentine passed through her room, which was on a level with valentine ' s, and only separated from it by edward ' s. the two ladies entered the drawing - room with that sort of official stiffness which preludes a formal communication. among worldly people manner is contagious. madame de villefort received them with equal solemnity. valentine entered at this moment, and the formalities were resumed. ... Category: *(3) The current snippet can be associated to some previous plot where the character demonstrates a* personality trait Target Character: *Esmeralda* **Groundtruth Trait:** *simple* Distractors: *rational, mature, emotional, egocentric* Snippet: is she to be hung yonder? " " fool! t'is here that she is to make her apology in her shift! the good god is going to cough latin in her face! that is always done here, at midday. if'tis the gallows that you wish, go to the greve. " " i will go there, afterwards. " " tell me, la boucanbry? is it true that she has refused a confessor? " " it appears so, la bechaigne. " " you see what a pagan she is! " "'tis the custom, monsieur. the bailiff of the courts is bound to deliver the malefactor ready judged for execution if he be a layman, to the provost of paris ; if a clerk, to the official of the bishopric. " " thank you, sir. " " oh, god! " *said fleur - de - lys, " the poor creature! " this thought filled with* sadness the glance which she cast upon the populace. the captain, much more occupied with her than with that pack of the rabble, was amorously rumpling her girdle behind. she turned round, entreating and smiling. " please let me alone, phoebus! if my mother were to return, she would see your hand! " at that moment, midday rang slowly out from the clock of notre - dame. a murmur of satisfaction broke out in the crowd. the last vibration of the twelfth stroke had hardly died away when all heads surged like the waves beneath a squall, and an immense shout went up from the pavement, the windows, and the roofs, " there she is! " fleur - de - lys pressed her hands to her eyes, that she might not see. " charming girl, " said phoebus, " do you wish to withdraw? " " no, " she replied ... Table 10: Example of cases that require history information to solve. The red texts are the underlined text of the notes that used to construct the labeled instance. In the first example, according to the snippet, both *simple* and *impressionable* are possible traits to explain the character's behavior. Only from the history that *Dantes* is a brave and determined person, we can select *simple* as the correct answer. In the second example, only when the readers know that Noirtier (*The elder*) aims to help Valentine get immunity from the poisoned juicy, they can understand the character's wisdom. In the third example, *Esmeralda* is not present. However, the scene of love between Phoebus and Fleur-de-Lys is quite similar to her story with Phoebus, illustrating that she was easily deceived by the man. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8. ✓ A2. Did you discuss any potential risks of your work? Section 8. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4. B1. Did you cite the creators of artifacts you used? Not applicable. We create artifacts by ourselves. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 7. We will release our dataset for public research with CC-BY 4.0. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. We create artifacts by ourselves. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 4.1. We anonymized the data with user information. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1 and Appendix C. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.4. ## C ✓ **Did You Run Computational Experiments?** Section 6. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 6.1. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 6.1. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 6.2. We report mean-std results with 5 runs. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 4. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix A. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 1 and Section 4.2 Step 1. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 4.2 Step 1. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. The data is from social network thus the study follows IRB. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 4.2 Step 1.
zhu-etal-2023-storytrans
{S}tory{T}rans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing
https://aclanthology.org/2023.acl-long.827
Non-parallel text style transfer is an important task in natural language generation. However, previous studies concentrate on the token or sentence level, such as sentence sentiment and formality transfer, but neglect long style transfer at the discourse level. Long texts usually involve more complicated author linguistic preferences such as discourse structures than sentences. In this paper, we formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style while maintaining source semantics. To tackle this problem, we propose a generation model, named StoryTrans, which leverages discourse representations to capture source content information and transfer them to target styles with learnable style embeddings. We use an additional training objective to disentangle stylistic features from the learned discourse representation to prevent the model from degenerating to an auto-encoder. Moreover, to enhance content preservation, we design a mask-and-fill framework to explicitly fuse style-specific keywords of source texts into generation. Furthermore, we constructed new datasets for this task in Chinese and English, respectively. Extensive experiments show that our model outperforms strong baselines in overall performance of style transfer and content preservation.
# Storytrans: Non-Parallel Story Author-Style Transfer With Discourse Representations And Content Enhancing ## Xuekai Zhu2∗ , Jian Guan1∗, Minlie Huang1†**And Juan Liu**2 1The CoAI group, DCST, Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, 1Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China 2Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China {xuekaizhu,liujuan}@whu.edu.cn, j-guan19@mails.tsinghua.edu.cn, aihuang@tsinghua.edu.cn ## Abstract Non-parallel text style transfer is an important task in natural language generation. However, previous studies concentrate on the token or sentence level, such as sentence sentiment and formality transfer, but neglect long style transfer at the discourse level. Long texts usually involve more complicated author linguistic preferences such as discourse structures than sentences. In this paper, we formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style while maintaining source semantics. To tackle this problem, we propose a generation model, named StoryTrans, which leverages discourse representations to capture source content information and transfer them to target styles with learnable style embeddings. We use an additional training objective to disentangle stylistic features from the learned discourse representation to prevent the model from degenerating to an auto-encoder. Moreover, to enhance content preservation, we design a mask-and-fill framework to explicitly fuse style-specific keywords of source texts into generation. Furthermore, we constructed new datasets for this task in Chinese and English, respectively. Extensive experiments show that our model outperforms strong baselines in overall performance of style transfer and content preservation. ## 1 Introduction Text style transfer aims to endow a text with a different style while keeping its main semantic content unaltered. It has a wide range of applications, such as formality transfer (Jain et al., 2019), sentiment transfer (Shen et al., 2017) and author-style imitation (Tikhonov and Yamshchikov, 2018). Due to the lack of parallel corpora, recent works mainly focus on unsupervised transfer by selfreconstruction. Current methods proposed to dis- ![0_image_0.png](0_image_0.png) Table 1: An example that transfers a vernacular story to the martial arts style of JY generated by StyleLM. The orange sentence indicates missing content in source text. The rewritten token is underlined. The red highlights are supplementary short phrases or plots to align with the target style. The English texts below the Chinese are translated versions of the Chinese samples. entangle styles from contents by removing stylistic tokens from inputs explicitly (Huang et al., 2021) or reducing stylistic features from token-level hidden representations of inputs implicitly (Lee et al., 2021). This line of work has impressive performance on single-sentence sentiment and formality transfer. However, it is yet not investigated to transfer author styles of long texts such as stories, manifesting in the author's linguistic choices at the lexical, syntactic, and discourse levels. In this paper, we present the first study on story author-style transfer, which aims to rewrite a story incorporating source content and the target author style. **The first challenge** of this task lies in imitation of author's linguistic choices at the discourse level, such as narrative techniques (e.g., brief or detailed writing). As exemplified in Table 1, the generation text for the JinYong (JY)1style not only rewrites some tokens to the martial arts style (e.g., "白云" /"white cloud" to "白光一闪" /"light flashing") but also adds additional events in detail and enrich the storyline (e.g., the red highlights). In contrast to the transfer of token-level features like formality, it is more difficult to capture the intersentence relations correlated with author styles and disentangle them from contents. **The second challenge** is that the author styles tend to be highly associated with specific writing topics. Therefore, it is hard to transfer these style-specific contents to another style. For example, the topic "talented man" hardly shows up in the novels of JY, leading to the low content preservation of such contents, as shown in the orange text in Table 1. To alleviate the above issues, we propose a generation framework, named **StoryTrans**, which learns discourse representations from source texts and then combines these representations with learnable style embeddings to generate texts of target styles. Furthermore, we propose a new training objective to reduce stylistic features from the discourse representations, which aims to pull the representations derived from different texts close in the latent space. To enhance content preservation, we separate the generation process into two stages, which first transfers the source text with the style-specific content keywords masked and then generates the whole text by imposing these keywords explicitly. To support the evaluation of the proposed task, we collect new datasets in Chinese and English based on existing story corpora.2 We conduct extensive experiments to transfer fairy tales (in Chinese) or everyday stories (in English) to typical author styles, respectively. Automatic evaluation results show that our model achieves a better overall performance in style control and content preservation than strong baselines. The manual evaluation also confirms the efficacy of our model. We summarize the key contributions of this work as follows: I. To the best of our knowledge, we present the first study on story author style transfer. We construct new Chinese and English datasets for this task. II. We propose a new generation model named StoryTrans to tackle the new task, which implements content-style disentanglement and stylization based on discourse representations, then enhances content preservation by explicitly incorporating stylespecific keywords. III. Extensive experiments show that our model outperforms baselines in the overall performance of style transfer accuracy and content preservation. ## 2 Related Work 2.1 Style Transfer Recent studies concentrated mainly on token-level style transfer of single sentences, such as formality or sentiment transfer. We categorize these studies into three following paradigms. The first paradigm built a style transfer system without explicit disentanglement of style and content. This line of work used additional style signals or a multi-generator structure to control the style. Dai et al. (2019) added an extra style embedding in input for manipulating the style of texts. Yi et al. (2020) proposed a style instance encoding method for learning more discriminative and expressive style embeddings. The learnable style embedding is a flexible yet effective approach to providing style signals. Such a design helps better preserve source content. Syed et al. (2020) randomly dropped the input words, then reconstructed input for each author separately, which obtained multiple author-specific generators. The multi-generator structure is effective but also resource-consuming. However, this paradigm incurs unsatisfactory style transfer accuracy without explicit disentanglement. The second paradigm disentangled the content and style explicitly in latent space, then combined the target style signal. Zhu et al. (2021) diluted sentence-level information in style representations. John et al. (2019) incorporated style prediction and adversarial objectives for disentangling. Lee et al. (2021) removed style information of each token with reverse attention score (Bahdanau et al., 2015) , which is estimated by a pre-trained style classifier. This paradigm utilizes adversarial loss functions or a pre-trained estimator for disentanglement. And experiment results indicate that explicit disentanglement leads to satisfactory style transfer accuracy but poor content preservation. The final paradigm views style as localized features of tokens in a sentence, which locates styledependent words and replaces the target-style ones. Xu et al. (2018) employed an attention mechanism to identify style tokens and filter out such tokens. Wu et al. (2019) utilized a two-stage framework to mask all sentimental tokens and then infill them. Huang et al. (2021) aligned words of input and reference to achieve token-level transfer. To sum up, this paradigm maintains all word-level information, but it is hard to apply to the scenarios where styles are expressed beyond token level, e.g., author style. Absorbing ideas from paradigm 1 and 2, we ![2_image_0.png](2_image_0.png) apply explicit disentanglement by pulling close discourse representations, which is formulated into disentanglement loss. Furthermore, we design a fusion module to stylize the discourse representation. ## 2.2 High-Level Representation Prior works captured the hierarchical structure of natural language texts by learning high-level representations. Li et al. (2015) and Zhang et al. (2019) proposed to learn hierarchical embedding representations by reconstructing masked version of sentences or paragraphs. Reimers and Gurevych (2019) derived semantical sentence embeddings by fine-tuning BERT (Devlin et al., 2019) on downstream tasks. Lee et al. (2020); Guan et al. (2021b) inserted special tokens for each sentence and devised several pre-training tasks to learn sentencelevel representations. We are inspired to use a sentence order prediction task to learn high-level discourse representations. ## 2.3 Long Text Generation In order to generate coherent long texts, recent studies usually decomposed generation into multiple stages. Fan et al. (2018); Yao et al. (2019) generated a premise, then transformed it into a passage. Tan et al. (2021) first produced domain-specific content keywords and then progressively refines them into complete passages. Borrowing these ideas , we adopted a mask-and-fill framework to enhance content preservation in text style transfer. ## 3 Methodology 3.1 Task Definition And Model Overview We formulate the story author-style transfer task as follows: assuming that S is the set of all author-styles, given a multi-sentence input x = (x1, x2, · · · , xT ) of T tokens and its author-style label s ∈ S, the model should generate a multisentence text with a specified author-style sˆ ∈ S while keeping the main semantics of x. As illustrated in Figure 1, we split the generation process into two stages. We first identify stylespecific keywords k = (k1, k2, · · · , kl) from x, and then mask them with special tokens ⟨mask⟩. We denote the resulting masked version of x as xm = (x m 1 , xm 2 , · · · , xm T ). In the first generation stage, we perform discourse representation transfer on xm. In the second stage, we complete the masked tokens in the output of the first stage conditioned on k in a style-unrelated manner. Due to the lack of parallel data, typical style transfer models tend to optimize the selfreconstruction loss with the same inputs and outputs (Xiao et al., 2021; Lee et al., 2021). Obviously, training with only the self-reconstruction loss will make the model easily ignore the target style signals and simply repeat the source inputs. Therefore, in the first stage, we devise an additional training objective, to disentangle stylistic features from intermediate discourse representations {ri} n i=1, where n is the number of sentences. Then, we fused these style-independent discourse representations with the target style sˆ as a discourse- ![3_image_0.png](3_image_0.png) level guidance for the subsequent generation of the transferred text. As for discourse representations learning, we employ a sentence order prediction loss to capture inter-sentence discourse dependencies. And we use a style classifier loss to control the style of generated texts (Lee et al., 2021). In summary, the fist-stage model is trained using the following loss function: $${\mathcal{L}}_{1}={\mathcal{L}}_{\mathrm{self}}+\lambda_{1}{\mathcal{L}}_{\mathrm{dis}}+\lambda_{2}{\mathcal{L}}_{\mathrm{sop}}+\lambda_{3}{\mathcal{L}}_{\mathrm{style}},\quad(1)$$ where λ1, λ2 and λ3 are adjustable hyperparameters. Lself, Ldis, Lsop and Lstyle are the self-reconstruction loss, the disentanglement loss, the sequence order prediction loss and the style classifier loss, respectively. Figure 2 shows the workflow of learning objects. In the second stage, we use a denoising autoencoder (DAE) loss to train another encoderdecoder model for reconstructing x: $${\mathcal{L}}_{2}=-\sum_{t=1}^{T}\log P(x_{t}|x_{<t},\{k_{i}\}_{i=1}^{l},x^{m}).\quad(2)$$ This stage is unrelated to author styles, and helps achieve better content preservation. ## 3.2 Discourse Representations Transfer As described in Figure 2, we propose to learn discourse representations, and then reconstruct the texts from discourse representations. And we perform the disentanglement and stylizing operation based on discourse representations. Discourse Representations Supposing that xm consists of n sentences, we insert a special token ⟨Sen⟩ at the end of each sentence in xm (Reimers and Gurevych, 2019; Lee et al., 2020; Guan et al., 2021b). Let rn denote the hidden state of the encoder at the position of the n-th special token, {ri} n i=1 = Encoder(x m). And zn is the output of the fusion module corresponding to rn. Previous studies have demonstrated that correcting the order of shuffled sentences is a simple but effective way to learn meaningful discourse representations (Lee et al., 2020). As shown in Figure 1, we feed zn into a pointer network (Gong et al., 2016) to predict orders. During training, we shuffled the original sentence order and feed the perturbed text into the encoder for calculating Lsop. Fusion Module To provide signals of transfer direction, we concatenate the learned discourse representations {ri} n i=1 with the style embedding s and fuse them using a multi-head attention layer, as illustrated in Figure 1. To capture discourselevel features of texts with different author-styles, we set each style embedding to a vector with the same dimension as ri. Formally, we derive the style-aware discourse representations {zi} n+1 i=1 as follows: $$\{\mathbf{z}_{i}\}_{i=1}^{n+1}=\mathrm{MHA}(Q=K=V=\{\mathbf{s}\parallel\{\mathbf{r}_{i}\}_{i=1}^{n}\}),\tag{3}$$ where MHA is the multi-head attention layer, Q/K/V is the corresponding query/key/value, ∥ is the concatenation operation. Then, the decoder gets access to {zi} n+1 i=1 through the cross-attention layer, which serve as a discourse-level guidance for generating the transferred texts. Then, we feed {zi} n+1 i=1 into the decoder. Pointer Network Following Logeswaran et al. (2018); Lee et al. (2020), we use a pointer network to predict the original orders of the shuffled sentences. The each position probability of sentence order is formulated as follows: $$p_{i}=\mbox{softmax}(\{z_{i}\}_{i=1}^{n}Wz_{i}^{T}),\tag{4}$$ where $p_{i}$ is predicted probabilities of sentence $i$, $W$ is a trainable parameter. ## 3.3 First-Stage Training Objectives Self-Reconstruction Loss We formulate selfreconstruction loss as follows: $${\mathcal{L}}_{\mathrm{self}}=-\sum_{t=1}^{T}\log P(x_{t}^{m}|x_{<t}^{m},\{r_{i}\}_{i=1}^{n},s),\quad(5)$$ where s is the learnable embedding of s. During inference, we replace s with the embedding of the target style sˆ (i.e., sˆ), to achieve the style transfer. Disentanglement Loss We disentangle the style and content on discourse representations. Inspired by prior studies on structuring latent spaces (Gao et al., 2019; Zhu et al., 2021), we devise an additional loss function Ldis to pull close discourse representations from different examples in the same mini-batch, corresponding to different author styles. Ldis and Lself work as adversarial losses and lead the model to achieve a balance between content preservation and style transfer. We derive Ldis as follows: $\mathcal{L}_{\text{dis}}=\dfrac{1}{2b}\sum\limits_{i=1}^{b}\sum\limits_{j=1}^{b}\parallel\bar{\mathbf{r}_i}-\bar{\mathbf{r}_j}\parallel_2^2,$ $\bar{\mathbf{r}}=\dfrac{1}{n}{\sum\limits_{i=1}^{n}\mathbf{r}_i}$ $\bar{\mathbf{r}}$ is the size of wiki batch. $${\mathrm{~of~mini-batch}}.$$ where b is the size of mini-batch. Sentence Order Prediction Loss We formulate Lsop as the cross-entropy loss between the golden and predicted orders as follows: $${\mathcal{L}}_{\mathrm{sop}}=-{\frac{1}{n}}\sum_{i=1}^{n}o_{i}\log(p_{i}),\qquad\qquad(8)$$ where oiis a one-hot ground-truth vector of correct sentence position, and piis predicted probabilities. Style Classifier Loss We expect the transferred text to be of the target style. Hence we train a style classifier to derive the style transfer loss as follows: $${\mathcal{L}}_{\mathrm{style}}=-\mathbb{E}_{{\hat{\mathbf{x}}}^{m}\sim\mathrm{Decoder}}[\log P_{C}(s|{\hat{\mathbf{x}}}^{m})],$$ m)], (9) where PC is the conditional distribution over styles defined by the classifier. We train the classifier on the whole training set with the standard crossentropy loss. Then, we freeze the weights of style classifier for computing Lstyle. On the other hand, we follow Lee et al. (2021); Dai et al. (2019) to use soft sampling to allow gradient back-propagation. ## 3.4 Content Preservation Enhancing As aforementioned, author styles have a strong correlation with contents. It is difficult to transfer such style-specific contents to other styles directly. Since we train the model in an auto-encoder manner, it will have no idea how to transfer those content representations that have never seen other style embeddings during training. To address the issue, we propose to mask the style-specific keywords in the source text and perform style transfer on the | Dataset | Train | Val | Test | | | | |-----------|---------|-------------|--------|-------|------|-----| | Style | JY | LX | Tale | Tale | Tale | | | ZH | Size | 2,964 | 3,036 | 1,456 | 242 | 729 | | Avg Len | 344 | 168 | 175 | 175 | 176 | | | EN | Style | Shakespeare | ROC | ROC | ROC | | | Size | 1,161 | 1,161 | 290 | 290 | | | | Avg Len | 71 | 49 | 48 | 50 | | | (6) $\text{}$ (7) $\text{}$ (a) . masked text in the first generation stage. Then, we fill the masked tokens in the second stage. We follow Xiao et al. (2021) to use a frequencybased method to identify the style-specific keywords. Specifically, we extract style-specific keywords by (1) obtaining the top-10 words with the highest TF-IDF scores from each corpus, (2) retaining only people's names, place names, and proper nouns, (3) and filtering out those words with a high frequency in all corpora3. We denote the resulting word set as Dsfor the corpus with the style s. We extract the style-specific keywords k from the text x by selecting the words that are in Ds. We detail above operation and explain it in Appendix A. In the second stage, we train another model to fill the mask tokens in outputs of the first stage conditioned on the identified style-specific keywords in source inputs. During training, we concatenate the keywords in k with a special token ⟨Key⟩ and feed them into the encoder paired with x m, as shown in Figure 1. The training object is formulated as Equation 2. During inference, the decoder generates the transferred text xˆ conditioned on the output of the first stage xˆ m in an auto-regressive manner. ## 4 Experiments 4.1 Datasets We construct stylized story datasets in Chinese and English, respectively. The Chinese dataset consists of three styles of texts, including fairy tales from LOT (Guan et al., 2021a), LuXun (LX), and JinYong (JY). Specifically, LuXun writes realism novels while JinYong focuses on martial arts novels. These texts of different styles have a gap in lexical, syntactic, and semantic levels. Samples of different styles are detailed in Appendix C. In our experiments, we aim to transfer a fairy tale to the LX or JY style. The English dataset consists 3We set those words appearing in at least 10% samples in a corpus as high-frequency words. of two styles of texts, including everyday stories from ROCStories (Mostafazadeh et al., 2016) and fragments from Shakespeare's plays. We expect to transfer a five-sentence everyday story into the Shakespeare style. The statistics of datasets are shown in Table 2. The more details are described in Appendix B. ## 4.2 Implementation We take LongLMBASE (Guan et al., 2021a) and T5BASE (Raffel et al., 2020) as the backbone model of both generation stages for Chinese and English experiments, respectively. Furthermore, the fusion module and pointer network consist of two and one layers of randomly initialized bidirectional Transformer blocks (Vaswani et al., 2017), respectively. We conduct experiments on one RTX 6000 GPU. In addition, we build the style classifier based on the encoder of LongLMBASE and T5BASE for Chinese and English, respectively. We set λ1/λ2/λ3 in Equation 1 to 1/1/1, the batch size to 4, the learning rate to 5e-5, the maximum sequence length of the encoder and decoder to 512 for both generation stages in the Chinese experiments. And the hyper-parameters for English experiments are the same except that λ1/λ2/λ3 are set to 0.5/0.5/0.5 and the learning rate to 2.5e-5. More implementation details are presented in Appendix D. ## 4.3 Baselines Since no previous studies have focused on story author-style transfer, we build several baselines by adapting short-text style transfer models. For a fair comparison, we initialize all baselines using the same pre-trained parameters as our model. Specifically, we adopt the following baselines: Style Transformer: It adds an extra style embedding and a discriminator to provide style transfer rewards without disentangling content from styles (Dai et al., 2019). StyleLM: This baseline generates the target text conditioned on the given style token and corrupted version of the original text (Syed et al., 2020). Reverse Attention: It inserts a reverse attention module on the last layer of the encoder, which aims to negate the style information from the hidden states of the encoder (Lee et al., 2021). ## 4.4 Automatic Evaluation Evaluation Metrics Previous works evaluate style transfer systems mainly from three aspects including style transfer accuracy, content preservation, and sentence fluency. A good style transfer system needs to balance the contradiction between content preservation and transfer accuracy (Zhu et al., 2021; Niu and Bansal, 2018). We use a joint metric to evaluate the overall performance of models. On the other hand, previous studies usually use perplexity (PPL) of a pre-trained language model. However, in our experiments, we found that the PPL of model outputs is lower than human-written texts, suggesting that PPL is not reliable for evaluating the quality of stories. Therefore, we evaluate the fluency through manual evaluation. Specifically, we adopt the following automatic metrics: **(1) Style Transfer Accuracy:** We use two variants of style transfer accuracy following Krishna et al. (2021), absolute accuracy (a-Acc) and relative accuracy (r-Acc). We train a style classifier and regard the classifier score as the a-Acc. And r-Acc is a binary value to indicate whether the style classifier score the output higher than the input (1/0 for a higher/lower score). We train the classifier by fine-tuning the encoder of LongLMBASE and T5BASE on the Chinese and English training set, respectively. The classifier achieves a 99.6% and 99.41% accuracy on the Chinese and English test sets, respectively. **(2) Content Preservation:** We use BLEU-n (n=1,2) (Papineni et al., 2002) and BERTScore (BS) (Zhang* et al., 2020) between generated and input texts to measure their lexical and semantic similarity, respectively. And we report recall (BS-R), precision (BS-P) and F1 score (BS-F1) for BS. (3) Overall: We use the geometric mean of a-ACC and BLEU/BS-F1 score (BL-Overall/BS-Overall) to assess the overall performance of models (Krishna et al., 2020; Lee et al., 2021). Results on the Chinese Dataset We show the overall performance and individual metrics results in Table 3. In terms of overall performance, StoryTrans outperforms baselines, illustrating that StoryTrans can achieve a better balance between style transfer and content preservation. In terms of style accuracy, StoryTrans achieves the best style transfer accuracy (a-Acc) in LX and comparable performance in JY. The bad performance of baselines indicates the necessity to perform explicit disentanglement beyond the token level. In addition, manual inspection shows that Style Transformer tends to copy the input, accounting for the highest BLEU score and BERTScore. Target Styles Models r-Acc a-Acc BLEU-1 BLEU-2 BS-P BS-R BS-F1 **BL-Overall BS-Overall** ZH-LX Style Transformer 65.84 0.13 82.53 77.17 96.92 96.51 96.70 2.96 3.26 StyleLM 97.80 33.33 39.43 19.66 77.71 75.02 76.30 31.38 50.42 Reverse Attention 98.49 42.93 20.98 6.70 65.38 63.39 64.35 24.37 52.55 StoryTrans 97.66 59.94 32.19 14.44 68.53 70.48 69.45 **37.38 64.52** ZH-JY Style Transformer 46.77 0.13 83.24 77.85 97.15 96.82 96.97 3.23 3.55 StyleLM 79.97 51.16 36.72 18.01 74.20 75.19 74.62 37.41 61.78 Reverse Attention 94.51 66.39 21.15 6.32 64.05 65.08 64.54 30.19 65.45 StoryTrans 84.49 62.96 30.71 14.5 68.76 71.69 70.16 **37.72 66.46** EN-SP Style Transformer 0.34 0.01 99.88 99.88 87.10 95.43 90.78 3.31 3.16 StyleLM 57.93 3.44 37.05 19.40 84.72 90.53 87.30 9.85 17.32 Reverse Attention 20.68 0.01 96.90 96.16 86.93 95.27 90.61 3.25 3.15 StoryTrans 88.62 52.41 32.20 12.71 81.77 87.51 84.31 **34.31 66.47** Table 3: Automatic evaluation results on the test set of the Chinese and English datasets. Bold numbers indicate best performance. ZH-LX/ZH-JY is the Chinese author LuXun/JinYong, respectively. EN-SP is the English author Shakespeare. StoryTrans achieves the best overall performance (BL/BS-Overall), with a good trad-off between style accuracy (r/a-Acc) and content preservation (BLEU-1/2 and BS-P/R/F1). r-Acc a-Acc BLEU-1 BLEU-2 BS-P BS-R BS-F1 **BL-Overall BS-Overall** Proposed Model 88.62 52.41 32.20 12.71 81.77 87.51 84.31 34.31 66.47 (-) Ldis 75.86 31.37 33.49 14.52 82.38 88.07 84.92 27.44 51.61 (-) Lstyle 50.68 7.93 45.00 23.79 84.38 89.16 86.5 16.51 26.19 (-) Lsop 78.96 38.96 39.45 19.20 82.92 88.62 85.47 33.80 57.70 (-) CE 92.41 73.10 21.62 6.09 79.73 86.12 82.59 31.82 77.70 This means Style Transformer only takes the target style signals as noise, which may result from the stylistic features existing in the contents. StyleLM and Reverse Attention get better transfer accuracy than Style Transformer by removing such stylistic features from the contents. Moreover, Reverse Attention obtains better style accuracy but worse content preservation than StyleLM. Therefore, reweighting hidden states allows better control over style than deleting input words explicitly. In terms of content preservation, StoryTrans outperforms Reverse Attention. Additionally, StyleLM achieves better performance in content preservation, benefiting from inputting noisy versions of golden texts. But without disentanglement, it can't strip style information. This leads to a lower overall performance than StoryTrans. As for Style Transformer, the results demonstrate that only an attention-based model hardly removes style features in overwhelming tokens information, leading to degenerate into an auto-encoder. Results on the English Dataset Similarly, StoryTrans achieves the best overall performance on the English dataset, showing its effectiveness and generalization. And StoryTrans outperforms baselines significantly in terms of style transfer accuracy. As for content preservation, Style Transformer and Re- Models LX JY Sty. Con. Coh. κ **Sty. Con. Coh.** κ Style Transformer 1.02 **2.95** 2.91**** 0.80 1.00 **2.98** 2.94**** 0.89 StyleLM 1.61 1.99 1.58 0.20 1.7 1.92 1.94 0.23 Reverse Attention 1.69 1.25 1.64 0.21 2.07 1.25 1.92 0.20 StoryTrans **1.98**** 1.84 1.67 0.24 **2.43**** 1.69 1.91 0.23 verse Attention degenerate into an auto-encoder, and tend to copy the input even more than their performance on the Chinese dataset. Results on Ablation Study As shown in Table 4, we observe a significant drop in transfer accuracy without Ldis or Lstyle. Ldis works by disentangling stylistic features from the discourse representations, while Lstyle exerts direct supervision on styles of generated texts. Without Lsop, model can hardly capture discourse-level information and keeps more source tokens, leading to higher BLEU scores and lower accuracy. When removing the second stage, the lowers BLEU scores show the benefit of the mask-and-fill framework for content preservation. ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) ## 4.5 Manual Evaluation We randomly sampled 100 fairy tales from the Chinese dataset, and obtained 800 generated texts from StoryTrans and three baseline models. Then, we hire three Chinese native speakers to evaluate in three aspects including style transfer accuracy (**Sty.**), content preservation (**Con.**) and coherence (**Coh.**). We ask the annotators to judge each aspect from 1 (the worst) to 3 (the best). As illustrated in Table 6, our StoryTrans received the highest style accuracy and modest performance in content preservation and coherence. More details and analysis are presented in Appendix G. ## 4.6 Case Study Table 5 shows the cases generated by StoryTrans and the best baseline. StyleLM inserts many unrelated sentences, which overwhelm the original content and impact the coherence, further leading to the content loss of sentences 3 and 4. On the contrary, StoryTrans supplement several short phrases or plots (e.g., "纵身跃起" /"hurriedly jumped up") to enrich the storyline and maintain the main content. Furthermore, StoryTrans can rewrite most sentences with the target style and maintain source semantics. In addition, StyleLM tends to discard the source entities and use words which is specific in the target style (e.g., "郭靖" /"Guo Jing"), while StoryTrans dose not, suggesting the necessity of the mask-and-fill framework. ## 4.7 Stylistic Feature Visualization We follow Syed et al. (2020) to define several stylistic features and visualize the features of the golden texts and generated texts on the Chinese test set. The stylistic features include the type and number of punctuation marks, the number of sentences, and the number of words. As shown in Figure 3, the texts generated by Reverse Attention and StyleLM have similar stylistic features to source texts. In contrast, StoryTrans can better capture different stylistic features and transfer source texts to specified styles. More details are in Appendix F. ## 5 Conclusion In this paper, we present the first study for story author-style transfer and analyze the difficulties of this task. Accordingly, we propose a novel generation model, which explicitly disentangles the style information from high-level text representations to improve the style transfer accuracy, and achieve better content preservation by injecting style-specific contents. Automatic evaluations show StoryTrans outperform baselines on the overall performance. Further analysis shows StoryTrans has a better ability to capture linguistic features for style transfer. ## Limitations In style transfer, content preservation and style transfer are adversarial. Long texts have richer contents and more abstract stylistic features. We also notice that content preservation is the main disadvantage of StoryTrans in automatics evaluation results. Case studies also indicate that StoryTrans can maintain some entities and the relations between entities. However, strong discourse-level style transfer ability endangered content preservation. In contrast, baselines such as Style Transformer have better content preservation but hardly transfer the style. We believe that StoryTrans is still a good starting point for this important and challenging task. During preliminary experiments, we also manually inspected multiple author styles besides Shakespeare, such as Mark Twain. However, we found that their styles are not as obvious as Shakespeare, as shown in the following example. Therefore, we only selected authors with relatively distinct personal styles for our transfer experiments. In future work, we will expand our research and choose more authors with distinct styles for style transfer. For example, the style distinction between the following examples is not readily apparent. - Everyday story in our datatset: Ashley wanted to be a unicorn for Halloween. She looked all over for a unicorn costume. She wasn't able to find one. - "A Double Barrelled Detective Story" by Mark Twain: *You will go and find him. I have* known his hiding-place for eleven years; it cost me five years and more of inquiry. ## Ethics Statement We perform English and Chinese experiments on public datasets and corpora. Specifically, English datasets come from ROCstories and Project Gutenberg. Moreover, Chinese datasets include the LOT dataset and public corpora of JY and LX. Automatic and manual evaluation demonstrate that our model outperforms strong baselines on both Chinese and English datasets. In addition, our model can be easily applied to different languages by substituting specific pre-trained language models. As for manual evaluation, we hired three native Chinese speakers as annotators to evaluate generated texts and did not ask about personal privacy or collect the personal information of annotators. We pay 1.8 yuan (RMB) per sample in compliance with Chinese wage standards. Considering it would cost an average of 1 minute for an annotator to score a sample, the payment is reasonable. ## Acknowledgments This work was supported by the NSFC projects (Key project with No. 61936010 ). This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2020GQG0005. ## References Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. " O'Reilly Media, Inc.". Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. 2019. Style transformer: Unpaired text style transfer without disentangled latent representation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5997– 6007, Florence, Italy. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*, 76(5):378. Xiang Gao, Yizhe Zhang, Sungjin Lee, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2019. Structuring latent spaces for stylized response generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1814–1823, Hong Kong, China. Association for Computational Linguistics. Jingjing Gong, Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2016. End-to-end neural sentence ordering using pointer network. *arXiv preprint* arXiv:1611.04953. Jian Guan, Zhuoer Feng, Yamei Chen, Ruilin He, Xiaoxi Mao, Changjie Fan, and Minlie Huang. 2021a. Lot: A benchmark for evaluating chinese long text understanding and generation. Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021b. Long text generation by modeling sentence-level and discourselevel coherence. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6379–6393, Online. Association for Computational Linguistics. Geoffrey E Hinton and Sam Roweis. 2002. Stochastic neighbor embedding. *Advances in neural information processing systems*, 15. Fei Huang, Zikai Chen, Chen Henry Wu, Qihan Guo, Xiaoyan Zhu, and Minlie Huang. 2021. NAST: A non-autoregressive generator with word alignment for unsupervised text style transfer. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 1577–1590, Online. Association for Computational Linguistics. Parag Jain, Abhijit Mishra, Amar Prakash Azad, and Karthik Sankaranarayanan. 2019. Unsupervised controllable text formalization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6554–6561. Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 424–434, Florence, Italy. Association for Computational Linguistics. Kalpesh Krishna, Deepak Nathani, Xavier Garcia, Bidisha Samanta, and Partha Talukdar. 2021. Fewshot controllable style transfer for low-resource settings: A study in indian languages. arXiv preprint arXiv:2110.07385. Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as paraphrase generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 737–762, Online. Association for Computational Linguistics. Dongkyu Lee, Zhiliang Tian, Lanqing Xue, and Nevin L. Zhang. 2021. Enhancing content preservation in text style transfer using reverse attention and conditional layer normalization. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 93–102, Online. Association for Computational Linguistics. Haejun Lee, Drew A. Hudson, Kangwook Lee, and Christopher D. Manning. 2020. SLM: Learning a discourse language representation with sentence unshuffling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1551–1562, Online. Association for Computational Linguistics. Jiwei Li, Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1106–1115, Beijing, China. Association for Computational Linguistics. Lajanugen Logeswaran, Honglak Lee, and Dragomir Radev. 2018. Sentence ordering and coherence modeling using recurrent neural networks. In *Thirtysecond aaai conference on artificial intelligence*. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In *Proceedings of the 2016* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849, San Diego, California. Association for Computational Linguistics. Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. *Transactions of the* Association for Computational Linguistics, 6:373– 389. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 311–318, USA. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing. Association for Computational Linguistics. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, NIPS'17, page 6833–6844, Red Hook, NY, USA. Curran Associates Inc. Bakhtiyar Syed, Gaurav Verma, Balaji Vasan Srinivasan, Anandhavelu Natarajan, and Vasudeva Varma. 2020. Adapting language models for non-parallel authorstylized rewriting. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 9008–9015. Bowen Tan, Zichao Yang, Maruan Al-Shedivat, Eric Xing, and Zhiting Hu. 2021. Progressive generation of long text with pretrained language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4313–4324, Online. Association for Computational Linguistics. Alexey Tikhonov and Ivan P Yamshchikov. 2018. Guess who? multilingual approach for the automated generation of author-stylized poetry. In *2018 IEEE Spoken* Language Technology Workshop (SLT), pages 787– 794. IEEE. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Mask and infill: Applying masked language model for sentiment transfer. In *Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19*, pages 5271–5277. International Joint Conferences on Artificial Intelligence Organization. Fei Xiao, Liang Pang, Yanyan Lan, Yan Wang, Huawei Shen, and Xueqi Cheng. 2021. Transductive learning for unsupervised text style transfer. *arXiv preprint* arXiv:2109.07812. Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xuancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In *Proceedings of the 56th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers), pages 979–988, Melbourne, Australia. Association for Computational Linguistics. Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Planand-write: Towards better automatic storytelling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7378–7385. Xiaoyuan Yi, Zhenghao Liu, Wenhao Li, and Maosong Sun. 2020. Text style transfer via learning style instance supported latent space. In *Proceedings of the* Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 3801–3807. International Joint Conferences on Artificial Intelligence Organization. Main track. Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In *International* Conference on Learning Representations. Xingxing Zhang, Furu Wei, and Ming Zhou. 2019. HIBERT: Document level pre-training of hierarchical bidirectional transformers for document summarization. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 5059–5069, Florence, Italy. Association for Computational Linguistics. Qingfu Zhu, Wei-Nan Zhang, Ting Liu, and William Yang Wang. 2021. Neural stylistic response generation with disentangled latent variables. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 4391–4401, Online. Association for Computational Linguistics. ## A Style-Specific Contents We detail how we extract style-specific contents and explain how they are used from the following three aspects: What do we mean by "style-specific content"? We refer to "style-specific content" as those mainly used in texts with specific styles and should be retained after style transfer. For example, "Harry Potter" and "Horcrux" are style-specific since they are used only in J.K. Rowling-style stories. When transferring J.K. Rowling-style stories to other styles, style-specific tokens shouldn't be changed. However, existing models tend to drop style-specific tokens since they are not trained to learn these tokens conditioned on other styles. How do we extract "style-specific contents"? We extract style-specific contents by (1) obtaining top10 salient tokens using TF-IDF, (2) reserving only people names (e.g., "Harry Potter"), place names (e.g., "London"), and proper nouns (e.g., "Horcrux"), and (3) filtering out high-frequency tokens in all corpus (e.g., "London") since these tokens can be learned conditioned on every style. We regard the remaining tokens as style-specific contents. As mentioned before, we employ the TF-IDF algorithm on the corpus to obtain rough style-specific contents for different styles, respectively. The reason for using TF-IDF: it is necessary to ensure that the extracted tokens are salient to the story plots. We extract style-specific tokens from the salient tokens using the second and third steps. Then, we use a part-of-speech tagging toolkit (e.g., NLTK) to identify function words and prepositions to retain people's names, place names, and proper nouns. Note that the frequency is an empirical value observed from datasets. However, the TF-IDF algorithm chooses the important words corresponding to the special style based on word frequency. There may be some style-unrelated words that are important to the content. Therefore, we need to filter out style-unrelated words. Concretely, we use Jieba4/NLTK(Bird et al., 2009) to collect the word frequency for Chinese and English datasets, respectively. Moreover, we regard the words possessing a high frequency in all styles corpus as style-unrelated words. Specifically, We set tokens appearing in 10% samples in the dataset as highfrequency words. Then we filter out these words to obtain style-specific contents. The frequency value needs to be reset to apply the method to other datasets. How are the "style-specific contents" used? One challenge of long-text style transfer is transferring discourse-level author style while preserving the main characters and storylines. It's difficult for existing models to transfer style-specific contents since they are not trained to learn these tokens conditioned on other styles. Therefore, we extract "style-specific contents" before style transferring and replace them with the special token "<Mask>". 4https://github.com/fxsjy/jieba Then, the "style-specific contents" will be filled in the second stage, as shown in Figure 1. ## B Data Pre-Processing Due to lack of stylized author datasets, we collected several authors' corpus to construct new datasets. As for Chinese, we extracted paragraphs from 21 novels of LuXun (LX) and 15 novels of JinYong (JY), and fairy tales collected by Guan et al. (2021a). On the other hand, the English dataset consists of everyday stories from ROCStories (Mostafazadeh et al., 2016) and fragments from Shakespeare's plays. Each fragment of Shakespeare's plays comprises multiple consecutive sentences and as long as samples in ROCStories. We collect the Shakespeare-style texts from the Shakespeare corpus in Project Gutenberg5 under the Project Gutenberg License6. We use Jieba/NLTK (Bird et al., 2009) for word tokenization for the Chinese/English dataset in data pro-processing. In addition, these data are public corpora, and we also check the information for anonymization. Regarding to limitation of modern language models, the length of samples is also limited. We set the max length as 384 and 90 for Chinese and English, respectively. Each sample has 4 sentences at least. We choose above length to balance the data length of different styles. Additionally, we filtered the texts which are too long to generate or too short to unveil author writing style. As Figure 4 shows, texts in the Chinese dataset spans a diverse range of length. ## C Different Style Samples In process of constructing datasets, we try to collect different author corpus who have a gap in writing styles. As shown in Table 8, the JY-style texts mostly describe martial arts actions and construct interesting plots, while the LX-style texts focus on realism with profound descriptive and critical significance. And the fairy tales differ from these texts in terms of topical and discourse features. In the English datasets, the Shakespeare-style texts are flamboyant and contain elaborate metaphors and ingenious ideas, which the everyday stories are written in plain language and without rhetoric. | Source Text | 1一次,柯里教授在独自攀登凯恩果山脉的最高峰时发现,在他的身后不时传出巨大的脚步声,似乎有人在以大他三四倍的步伐紧跟其后,似乎还可以听到 巨人喘息的声音。2柯里教授立即站住左右张望,由于大雾什么也看不清,四周也摸不到任何东西。 3他只好迈开步子继续前进,可与此同时,那怪异的 脚步声也随之响起。4柯里教授禁不住毛骨悚然,不由自主地撒开两腿,一口气跑出很远很远。5从那以后,他再也不敢独自攀登班马克律山了。 1 Once, when Professor Curry was climbing the highest peak of the Cairngo Mountains alone, he heard some huge footsteps and panting behind him, like someone following him. 2 Professor Curry immediately stopped and looked around, but he couldn't see and touch anything due to the thick fog. 3 He had no choice but to take a step forward, but at the same time, the weird footsteps sounded. 4 Professor Curry couldn't help being terrified, and ran far far away in one breath. 5 Since then, he never dared to climb Ban Makrut Mountain alone. | |--------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | StyleTransformer | 1一次,柯里教授在独自攀登凯恩果山脉的最高峰时发现,在他的身后不时传出巨大的脚步声,似乎有人在以大他三四倍的步伐紧跟其后,似乎还可以听到巨人喘 息的声音。 2柯里习立即站住左右张望,由于大雾什么也看不清,四周也摸不到任何东西。 3他只好迈开步子继续前进,可与此同时,那怪异的脚步呼也随之响 起。 4柯里面教授禁不住毛骨悚然,不由自主地撒开两腿,一口气跑出很远很远。 5从那以后,他再也不敢独自攀擒班马克律山了。 1 Once, when Professor Curry was climbing the highest peak of the Cairngo Mountains alone, he heard some huge footsteps and panting behind him, like someone following him. 2 Professor Curry immediately stopped and looked around, but he couldn't see and touch anything due to the thick fog. 3 He had no choice but to take a step forward, but at the same time, the weird footsteps sounded. 4 Professor Curry couldn't help being terrified, and ran far far away in one breath. 5 Since then, he never dared to climb Ban Makrut Mountain alone. | | Reverser Attention | 1柯镇恶可不敢造次太过诡异,凝神留意周身呼吸与举动,待他凑近数步,全身便如猛地大了起来,只觉两脚似乎已经碰到一团棉花,立即低头缩住,似乎怕小沙弥 又重新动过。柯镇恶低声道:"莫十侠,那邪术似乎有些不对。"邪术沉声道:"不对,是真的。那边的东西,一定与那东西一模一样。"柯镇恶道:"那么就对 了。"4猛地窜起,飞身而前。柯镇恶身子尚未落地,一脚踏下,立即从窗口跃了出去,与那东西一模一样。那东西并非东西之生,却是东西之异,柯镇恶一怔之下, 随即纵身而起,站在高处。那东西站定脚步,立即转身,一口气转不过来,砰的一声巨响,在柯镇恶面前响了半天,同时远处传来一阵气喘吆喝之声。 1 Ke Zhen'e didnt dare to do anything wrong, it was too weird. Breathing carefully and moving of his body after a few steps, his whole body suddenly grew bigger and his feet touched a ball of cotton, then he immediately bowed his head and shrank back , seemed to be afraid that the little novice will move again. Ke Zhen'e said : Mo Shixia, there is something wrong." The sorcery said in a deep voice, "No, its true. The thing must be exactly the same." Ke Zhen'e said: "Thats right."4Jumping up suddenly, flying forward. He stepped down and immediately jumped out of the window, which was exactly the same as that thing. That was different from things, Ke Zhen'e was startled, then jumped. The thing stood still, turned around immediately, unable to turn around in one breath, then there was a loud bang in front of him, and at the same time, there was a sound of panting and shouting from a distance. | Table 7: More Chinese cases generated by baselines, which are transferred from the fairy tale style to the JY style. The number before the sentences indicate their corresponding sentences in the source text in semantics. The underline sentences indicate inserted content to align with target style. The English texts below the Chinese are translated versions of the Chinese samples. | Authors | Texts | |-------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | JY | 杨过左手抢过马缰,双腿一夹,小红马向前急冲,绝尘而去。郭芙只吓得手足酸软,慢慢走到墙角拾起长剑,剑身在墙角上猛力 碰撞,竟已弯得便如一把曲尺。以柔物施展刚劲,原是古墓派武功的精要所在,李莫愁便拂尘、小龙女使绸带,皆是这门功夫。 杨过此时内劲既强,袖子一拂,实不下于钢鞭巨杵之撞击。杨过抱了郭襄,骑着汗血宝马向北疾驰,不多时便已掠过襄阳,奔行 了数十里,因此黄蓉虽攀上树顶极目远眺,却瞧不见他的踪影。 Yang Guo grabbed the horse's reins with his left hand, clamping with his leg, and then little red horse rushed out of sight. Guo Fu was so frightened that his hands and feet were sore, and she slowly walked to the corner to pick up the long sword. Using soft objects to display strength was originally the essence of the ancient tomb school martial arts. Yang Guo's internal energy was strong at this moment, and a flick of his sleeve was no less than the impact of a giant steel whip. Yang Guo hugged Guo Xiang, and rode a sweaty horse to the north. After a while, he passed Xiangyang and ran for dozens of miles. Although Huang Rong climbed to the top of the tree and looked far into the distance, she could not see any trace of him. | | LX | 自《新青年》出版以来,一切应之而嘲骂改革,后来又赞成改革,后来又嘲骂改革者,现在拟态的制服早已破碎,显出自身的本 相来了,真所谓"事实胜于雄辩",又何待于纸笔喉舌的批评。所以我的应时的浅薄的文字,也应该置之不顾,一任其消灭的;但几 个朋友却以为现状和那时并没有大两样,也还可以存留,给我编辑起来了。这正是我所悲哀的。我以为凡对于时弊的攻击,文字 须与时弊同时灭亡,因为这正如白血轮之酿成疮疖一般,倘非自身也被排除,则当它的生命的存留中,也即证明着病菌尚在。 Since the publication of "New Youth", everyone has ridiculed the reform in response to it, later approved of it, and then ridiculed the reformers. Now the mimetic uniform has long been broken, showing its true nature. The so-called "facts speak louder than words", why should they be criticized by pen and paper mouthpieces. Therefore, my timely and superficial writing should also be ignored and wiped out. However, a few friends thought that the current situation was not much different from that at that time, and they could still be preserved, so they edited them for me. This is what I am saddened by. I think any attack on the evils of the times, the writing must perish at the same time as the evils of the times, because this is like the boils and boils caused by the white blood wheel. If it is not eliminated by itself, the existence of its life also proves that the germs are still there. | | Tale | 有个财主,非常喜欢自家的一棵橘子树。谁从树上摘下一个橘子,他就会诅咒人家下十八层地狱。这年,橘子又挂满了枝头。财 主的女儿馋的直流口水。忍不住摘了一个,刚尝了一口,就不省人事了。财主后悔不已,把树上的橘子都摘下来,分给邻居和路 人。最后一个橘子分完,女儿就苏醒了。财主再也不敢随便诅咒别人了。 There was a rich man who liked his orange tree very much. Whoever plucks an orange from the tree, he will curse him to eighteen levels of hell. This year, oranges are hanging on the branches again. The rich man's daughter was drooling. Then, she couldn't help picking one, and just after a bite, she was unconscious. The rich man was remorseful, so he plucked all the oranges from the tree and gave them to neighbors and passers-by. After the last orange was given, the daughter woke up. The rich man no longer dared to curse others casually. | | ROC | Garth has a chicken farm. Each morning he must wake up and gather eggs. Yesterday morning there were 33 eggs! After gathering the eggs, he feeds the chickens. Finally he gets to eat breakfast, and go to school. | | Shakespeare | King. Giue them the Foyles yong Osricke, Cousen Hamlet, you know the wager. Ham. Verie well my Lord, Your Grace hath laide the oddes a 'th' weaker side. King. I do not feare it, I haue seene you both: But since he is better'd, we haue therefore oddes. Laer. This is too heauy, Let me see another. | Table 8: Samples of different authors in Chinese and English datasets. The English texts below the Chinese are translated versions of the Chinese samples. ## D More Implementation Details In terms of selecting pre-trained model, LongLMbase and T5base are the generic base model for the Chinese and English generation, respectively. To optimize the models for these specific languages, we have fine-tuned them using different hyperparameter values (λ1/2/3). These values were determined based on the performance ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) observed on a validation set, which was created by pre-extracting 5% of the training data for this purpose. ## E More Ablation Study Results To explore the effect of the proposed component, we also conduct more ablation studies on Chinese datasets. As shown in Table 9, the ablation of Ldis leads to better style accuracy, which show the different trends comparing with English dataset. We conjecture that Ldis aims to maintain the content and reduces style information. Without Ldis, the powerful Lstyle leads the StoryTrans to degenerate to style conditional language model. Furthermore, the ablation of Lstyle also confirms the powerful ability of style control as in previous paper. And we find that when removing Lsop, the model loses the ability to transfer at the discourse level and has only learned token-level copy. ## F Style Analysis Of Transferred Texts In order to investigate whether our StoryTrans indeed rephrase the expression of texts, we employ surface elements of text to show author writing styles. And the surface element are associated with statistical observations. For example, the small average length of sentences show the author preference to write a short sentence, and more question marks indicate the author accustomed to using questions. To this end, we use the number of (1) commas, (2) colons, (3) sentences in a paragraph, (4) question mark (5) left quotation mark, (6) right quotation mark, and (7) average number of words in a sentence to quantify surface elements into a 7 dimension vector. Then we leverage the t-SNE to visualize the golden texts and transferred texts. As shown in Figure 3, different style distribute separately across the style space. This proves JY, LX and fairy tale in Chinese dataset have a gap in writing style. And Figure 5 shows the transferred texts fall in golden texts in style space, indicating StoryTrans successfully transferred the writing style. ## G More Details Of Manual Evaluation In addition to automatic evaluation, we conduct manual evaluation on generated texts. As mentioned before, we require the annotators to score each aspect from 1 (the worst) to 3 (the best). As for payment, we pay 1.8 yuan (RMB) per sample in compliance with Chinese wage standards. Our annotators consist of undergraduate students who are experienced in reading texts written in the styles of the respective authors (JY and LX). To ensure they fully understand the evaluation metrics, we conducted case analyses with them. Our scoring rubric assigns 1, 2, or 3 points to the transferred text based on the proportion of sentences meeting the following criteria (1/3, 2/3, or 3/3): - Style Accuracy: whether the transferred text conforms to the corresponding style. - Content Preservation: whether the source content, such as character names, are retained. - Coherence: whether the sentences in the transferred text are semantically connected. And we compute the final score of each text by averaging the scores of three annotators. As illustrated in manual evaluation, we observe that the results mainly conform with the automatic evaluation. Our StoryTrans obtained the highest score on the style accuracy in both transferred directions by a sign test compared to the other baselines, showing its stable ability of style control. Moreover, in terms of content preservation, the score Target Styles Model r-Acc a-Acc BLEU-1 BLEU-2 BS-P BS-R BS-F1 **BL-Overall BS-Overall** ZH-LX Proposed Model 97.66 59.94 32.19 14.44 68.53 70.48 69.45 37.38 64.52 (-) Ldis 99.86 92.59 20.36 5.45 63.37 62.96 63.14 34.56 76.46 (-) Lstyle 88.06 12.20 43.09 23.88 75.44 75.68 75.53 20.21 30.35 (-) Lsop 87.10 2.05 54.38 32.95 81.19 79.77 80.42 9.46 12.83 ZH-JY Proposed Model 84.49 62.96 30.71 14.5 68.76 71.69 70.16 37.72 66.46 (-) Ldis 97.53 92.59 18.49 4.85 62.17 65.42 63.73 32.87 76.81 (-) Lstyle 61.86 40.87 39.78 21.97 73.73 75.42 74.52 35.52 55.18 (-) Lsop 61.72 10.83 51.29 30.98 79.65 79.82 79.72 21.10 29.38 of StoryTrans is comparable with StyleLM and slightly higher than Reverse Attention, demonstrating that StoryTrans can keep the main semantics of input. In terms of coherence, the score of StoryTrans is also comparable with baselines, showing some room for improvement. As discussed before, Style Transformer tends to copy the input, leading to the highest performance in content preservation and coherence. In summary, human evaluation depicts the strength of StoryTrans not only on style control but also on overall performance, indicating a balance of these metrics. ## H More Case Studies We show more cases in Table 7. Comparing source text with Style Transformer, Style Transformer copies the input and only changes little tokens. This result also confirms with highest BLEU and BERTScore in automatic results. Like StyleLM, Reverse Attention also incorporates some target author content into generated texts. However, Reverse Attention inserts too much content that overwhelms original plots. Furthermore, some critical entities (e.g., character name, "柯里教授" /"Professor Curry" → "柯镇恶" /"Ke Zhen'e") are revised to the similar word on in target author corpus. To maintain the story coherence, these important entities should stay the same. In summary, the token-level transfer may destroy the essential plots and damage the coherence. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✓ A2. Did you discuss any potential risks of your work? 6 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4.2 ✓ B1. Did you cite the creators of artifacts you used? 4.2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix B ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix B ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Appendix B ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 4.1 and Appendix B ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We counted the details of the dataset and discussed the details in Section 4.1 ## C ✓ **Did You Run Computational Experiments?** 4.4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4.4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix B D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 4.5 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 4.5 ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 4.5 and Appendix F ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix B and Appendix F D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. We do not submit the protocol to an ethics review board because our country has not yet established an ethical committee at the national level. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 4.5 and Appendix F
tan-etal-2023-towards
Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models
https://aclanthology.org/2023.acl-long.828
Reasoning about time is of fundamental importance. Many facts are time-dependent. For example, athletes change teams from time to time, and different government officials are elected periodically. Previous time-dependent question answering (QA) datasets tend to be biased in either their coverage of time spans or question types. In this paper, we introduce a comprehensive probing dataset TempReason to evaluate the temporal reasoning capability of large language models. Our dataset includes questions of three temporal reasoning levels. In addition, we also propose a novel learning framework to improve the temporal reasoning capability of large language models, based on temporal span extraction and time-sensitive reinforcement learning. We conducted experiments in closed book QA, open book QA, and reasoning QA settings and demonstrated the effectiveness of our approach.
# Towards Benchmarking And Improving The Temporal Reasoning Capability Of Large Language Models Qingyu Tan ∗ 1, 2 Hwee Tou Ng† 2 **Lidong Bing**1 1DAMO Academy, Alibaba Group 2Department of Computer Science, National University of Singapore {qingyu.tan,l.bing}@alibaba-inc.com {qtan6,nght}@comp.nus.edu.sg ## Abstract Reasoning about time is of fundamental importance. Many facts are time-dependent. For example, athletes change teams from time to time, and different government officials are elected periodically. Previous time-dependent question answering (QA) datasets tend to be biased in either their coverage of time spans or question types. In this paper, we introduce a comprehensive probing dataset TEMPREASON to evaluate the temporal reasoning capability of large language models. Our dataset includes questions of three temporal reasoning levels. In addition, we also propose a novel learning framework to improve the temporal reasoning capability of large language models, based on temporal span extraction and time-sensitive reinforcement learning. We conducted experiments in closed book QA, open book QA, and reasoning QA settings and demonstrated the effectiveness of our approach1. ## 1 Introduction In recent years, large language models (LLMs) have achieved significant success in many natural language processing (NLP) tasks, such as natural language understanding (NLU) (Fei et al., 2023), information extraction (IE) (Ding et al., 2023), and question answering (QA) (Ye et al., 2023; Zhao et al., 2023). Many facts and answers are dependent on their related time scopes, such as 'What soccer club was Lionel Messi playing for?'. Chia et al. (2022) has pointed out around 48% of the qualifiers in the widely-used knowledge base Wikidata (Vrandeciˇ c and Krötzsch ´ , 2014) are time-related. That is, a significant number of the knowledge triples in the Wikidata KB have their expiry dates. Correct understanding of temporal concepts is crucial for language models to be successful in real-world applications. To examine the temporal reasoning capabilities of LLMs, the Time-Sensitive Question Answering (TSQA) task has been proposed and several evaluation datasets were published for research purposes. The Time-sensitive QA dataset (Chen et al., 2021) and the TEMPLAMA dataset (Dhingra et al., 2022) were constructed based on the Wikidata temporal KB. StreamingQA (Liska et al., 2022) was constructed by news article collections in English WMT challenges from 2007 to 2020. One consensus of prior work is that time-sensitive QA is a challenging task and its performance is still far below human performance. However, they did not provide a systematic analysis of LM's temporal reasoning capability. In this paper, we aim to systematically analyze such capability and identify the strengths and weaknesses of LMs on temporal reasoning. As shown in Figure 1, humans' understanding of temporal reasoning could be broken down into three levels: time-time (L1) relation, time-event (L2) relation, and event-event (L3) relation. For the understanding of time-time relations, humans can easily determine the relation between two timestamps t1 and t2 on the time axis. For example, when humans are asked 'What is the year after 2020?', they are able to answer this question without any external information. This level of temporal understanding could be regarded as a set of logic rules and is highly generalizable across different times, while this type of reasoning was overlooked in prior TSQA research (Ning et al., 2020; Chen et al., 2021; Dhingra et al., 2022). For time-event relations, the reasoning process requires grounding events to their specific time ranges. In this paper, the concept of events includes time-dependent facts. Humans either memorize a large number of timeevent pairs or need to rely on relevant contexts to deduce such relations. An example question is 'What soccer club was Lionel Messi playing for in 14820 ![1_image_0.png](1_image_0.png) Dec 2010?', where a time is specified in the question, and the answer changes based on the given time. If this question is posed to a person who is unfamiliar with sports, this person also needs external information to provide the answer. Answering this type of questions requires information retrieval and temporal grounding. For event-event relations, there are multiple reasoning paths to determine such relations. One possible path is to first identify the timestamps of different events and perform time-time reasoning. Another path is to search for the textual cues of relative relation, such as 'before', 'after', 'during', and 'simultaneous'. We first conducted a simple preliminary experiment for probing LLM's L1 temporal reasoning capability. We found that not only do LMs perform poorly on the time-time relation task, but they are also heavily biased in favor of contemporary years (2000 - 2020). This may be due to the imbalanced term frequencies in the pre-training corpora. Most LLMs (such as BERT, GPT, and T5) are pre-trained on raw texts from a snapshot at a specific timestamp, typically around 2018 to 2020. Therefore, the time expression vocabulary is highly dependent on term frequencies in the pre-training corpora. Typically, year tokens that occur frequently will have a smaller index in the vocabulary and the uncommon years generally have larger indices or will be split into subtokens. Take the T5 tokenizer as an example, the year '2014' is tokenized as '2014', however, the year '2021' is tokenized as '20' and '21'. This means that language models only learn the co-occurrences of time expressions and their context. Given such findings, we found that the recently proposed TSQA TEMPLAMA dataset has several main drawbacks. Firstly, the time span of the dataset is only from 2010 to 2020, which is a highly biased distribution in favor of LM. Secondly, it only focused on the questions of time-event relations. To overcome these shortcomings, we created a more comprehensive TSQA benchmark TEMPREASON, which spans a longer time range and all three types of temporal understanding. We conducted comprehensive experiments in closed book QA, open book QA, and reasoning QA settings. We found that the temporal reasoning capabilities of LLMs are highly variable with respect to the reference time in the question. LLMs perform well on the contemporary years and poorly on low-resource years. Moreover, we proposed a novel temporal learning framework based on temporal span extraction and time-sensitive reinforcement learning. Our proposed framework encourages LMs to generate temporally correct answers while penalizing predictions that do not satisfy the temporal constraints. Experimental results showed that our proposed benchmark TEMPREASON provides a more comprehensive evaluation for LM's temporal reasoning capability and our model consistently outperforms strong baselines. | Ref. Year | Question | Target | |-------------|---------------------------------------|----------| | 2011 | What is the year x years before 2011? | 2011 - x | | 2010 | What is the year before 2010? | 2009 | | 1949 | What is the year x years after 1949? | 1949 + x | | 1905 | What is the year after 1905? | 1906 | ## 2 Preliminaries We aim to examine the capability of LMs for simple year prediction. We first design a set of question templates that reflects the basic concepts of temporal prediction, as shown in Table 1. Questions of these kinds can be easily answered by humans and this understanding is highly generalizable across the years, and all the expected answers are years in numeric form. In order to have a more comprehensive understanding of temporal expressions, we divide 1900 to 2040 into seven 20-year time periods. Then, we randomly generate 400 questions for each 20-year time period. We then use three language models to make predictions on such questions. The first LM is T5-large model finetuned on the Natural Question dataset (T5-L-NQ, Kwiatkowski et al., 2019). This QA dataset is one of the largest open domain QA datasets. Roberts et al. (2020) has demonstrated that language models fine-tuned on such data can achieve competitive performance on the open domain QA task. The second LM is FLAN-T5-Large (Wei et al., 2022) model. This model is instruction-tuned on data of more than 60 NLP tasks. The fine-tuned model demonstrated competitive zero-shot reasoning capability, and achieved strong performance on many natural language understanding and generation tasks. The third model is the popular ChatGPT (Ouyang et al., 2022) model. To ensure that the predictions are consistent, we used the *gpt-3.5-0301* version of ChatGPT. We aim to evaluate the temporal reasoning capability of the three language models. We evaluate the answers using the following three metrics: (1) exact match (EM), which is a standard metric for QA. Besides, since the expected answers are numeric, we also evaluate the answers by (2) mean absolute error (MAE) and (3) trend accuracy (Trend Acc). Trend accuracy is calculated by whether the predicted year is before or after the reference year. If the trend is correct, the prediction is deemed to be correct. The experimental results on year prediction are shown in Table 2. We report the scores of T5-LNQ on the left, FLAN-T5-L in the middle, and ChatGPT on the right. From these experiments, we have several interesting observations: (1) The ChatGPT model is able to solve this problem with high accuracy (99.6 overall EM). However, it still made a few mistakes in the 1900-1940 time period. (2) The first two LMs (T5-L-NQ and FLAN-T5-L) are biased towards contemporary time ranges. We can clearly see that the EM scores between 2000 to 2020 are significantly higher than the rest of the time ranges. This could be the result of the higher term frequencies of the contemporary year tokens | Time Range | EM (↑) | MAE (↓) | Trend Acc (↑) | |--------------|---------------|---------------|-----------------| | 1900-1920 | 17.5/6.8/99.5 | 28.0/7.4/0.0 | 99.5/96.8/100 | | 1920-1940 | 31.5/1.8/98.9 | 16.4/11.9/0.1 | 94.5/94.5/100 | | 1940-1960 | 17.5/3.3/100 | 7.7/9.2/0.0 | 100/91.0/100 | | 1960-1980 | 22.5/3.5/100 | 17.1/7.5/0.0 | 94.0/92.0/100 | | 1980-2000 | 23.0/10.0/100 | 7.9/6.9/0.0 | 98.5/100/100 | | 2000-2020 | 47.5/20.0/100 | 51.2/2.3/0.0 | 97.0/100/100 | | 2020-2040 | 23.5/11.3/100 | 15.7/8.9/0.0 | 84.5/83.8/100 | | Average | 26.1/8.1/99.6 | 20.6/7.7/0.0 | 95.4/94.0/100 | in the pre-training corpora. Since many large LMs are trained and released after 2018, the pre-training corpora may contain more year expressions that are closer to that date. In contrast, the first two LMs perform significantly worse in the past (1900-2000) and the future (2020-2040) years. (3) The first two LMs **lack numeric reasoning ability** with respect to time. The answers provided by these LMs for the time prediction questions are in numeric form, indicating that the LMs understand what the questions are asking. However, the EM scores are all around the 20-30 range, except for T5-L-NQ in the 20002020 time range. This indicates that LMs have poor estimation of temporal concepts. Besides, we find that the FLAN-T5-L model has significantly lower EM scores compared to T5-L-NQ, but achieves lower MAE estimations across most of the time ranges. This indicates that instruction tuning implemented in FLAN has implicitly improved the numeric reasoning capability of T5. (4) On the other hand, **All LMs are good at catching (before/after) trends**, indicating that at least the LMs understand the concepts of before/after well. We can see that all LMs achieve over 90% performance across time ranges before 2020. However, for the first two LMs, this capability is not able to generalize to the future, as the performance in 2020-2040 is significantly worse than in other time periods. ## 3 Comprehensive Benchmark For Temporal Reasoning 3.1 Tempreason **Dataset** Based on the findings of the previous section, we found that the recently proposed TEMPLAMA TSQA dataset (Dhingra et al., 2022) has several | Train | Dev | Test | | |----------------|-----------|----------|----------| | Time Range | 1014-2022 | 634-2023 | 998-2023 | | L1-Questions | 400,000 | 4,000 | 4,000 | | L2-Questions | 16,017 | 5,521 | 5,397 | | L3-Questions | 13,014 | 4,437 | 4,426 | | Subjects | 3,000 | 1,000 | 1,000 | | Facts | 16,017 | 5,521 | 5,397 | | Facts/subjects | 5.3 | 5.5 | 5.4 | Table 3: Dataset statistics of TEMPREASON. major limitations. Firstly, it only contains questions from 2010 to 2020, which are highly in favor of LM's temporal reasoning biases. Secondly, the TEMPLAMA dataset is heavily biased towards long-duration facts, as 70.69% of the questions of TEMPLAMA have the most frequent answer for a given subject. That is, the TEMPLAMA dataset may encourage models to learn shortcuts to memorize the most frequent answers instead of learning temporal reasoning capability. If the research on time-sensitive question answering only focuses on adaptation to a short period of time, the maintenance and continual adaptation shall be highly expensive. As shown in the previous section, language models perform poorly on past and future time spans. If the language model is not able to understand the changes from the past to the present, it is highly difficult for this model to understand the evolution from the present to the future. In order to probe the temporal reasoning ability in a more systematic manner, we constructed a new comprehensive dataset TEMPREASON. For the L1 time-time relation reasoning, we extend the year prediction task to month prediction, since year prediction can be enumerated by several thousands of examples and LMs may simply memorize such examples. Specifically, we randomly pick a reference time t within a specific time range and then synthesize questions with respect to that time. The questions have the form of 'What is the date x years and y months before/after t?'. In this way, we can randomly generate L1 questions and answers within the time period. To avoid data leakage, we make sure each generated question is unique. We then randomly split the questions to train, dev, and test sets. To evaluate the generalizability of L1 temporal reasoning, we also create a future test set from 2022 to 2040. For L2 and L3 reasoning, similar to Dhingra et al. (2022) and Chen et al. (2021), we also leverage the Wikidata KB as the knowledge source. We first preprocess the 20 Nov 2022 dump of the Wikidata (Vrandeciˇ c and Krötzsch ´ , 2014) knowledge base (KB) to extract all time-dependent facts. We then keep the facts of 10 time-sensitive relations mentioned in the TEMPLAMA dataset. We process the knowledge triples and qualifiers into quintuplet format, (s, r, o, ts, te), where s is the subject, r is the relation, o is the object, ts and te are the start time and end time of this fact. We group all the temporal facts by s and r. In this way, facts in the same group are all relevant to the subject s. The group of facts can be denoted as S = {(s, r, oi, tsi , tei )|i ∈ 1*...N*} and they are sorted chronologically, where N is the number of facts within a group. Since we mainly want to focus on questions whose answers change with time, we only keep the groups that contain three or more temporal facts. In this way, we make sure that each group has at least three time-dependent answers. Moreover, since the Wikidata KB is highly classimbalanced, we only keep a maximum of 2,000 subjects for each relation type. We then create clozestyle questions based on time-dependent facts. For the time-event (L2) type of questions, we randomly select a time tr between ts and te, and we then create a question with the query (s, r, ?, tr) and a set of manually-defined question templates. The templates can be found in Table 13 in Appendx A. For the event-event (L3) type of questions, we first identify the 'before/after' relation pairs within group S (we only keep the 1-hop pairs). We then create the event-event question for each 'before/after' pair using similar templates of the L2 questions (Table 13). The statistics of our TEMPREASON dataset can be found in Table 3. We also compared our datasets with prior works in Appendix C ## 3.2 Problem Settings The time-sensitive question answering (TSQA) task is formally defined as follows: given an input question and its corresponding time (Figure 2), the model is asked to output the answer of this question, and the answers are evaluated by token-level F1 and exact match (EM) scores. Intuitively, the difficulty of the TSQA task is highly dependent on the context provided for each question. The challenges of the TSQA task can be broken down into three levels: (1) **Answer Retrieval**. The first challenge of TSQA is finding the possible answers, which is the same challenge as normal open-domain question answering. For questions in TEMPREASON, each question may have 5.3 to 5.5 possible answers (Ta- ![4_image_0.png](4_image_0.png) ble 3). (2) **Time Grounding**. The second challenge of TSQA is temporal grounding. That is, this subtask is to find the start time and end time of each possible answer. (3) **Temporal Reasoning**. The last challenge is finding the correct answer among the possible candidates based on the specified time constraints. To thoroughly examine the temporal reasoning capability of large language models in different aspects, we propose to tackle TSQA in three different context settings: (1) closed book QA, (2) open book QA, and (3) reasoning QA. We describe the three problem settings as follows. Closed Book Question Answering (CBQA). CBQA is a common task formulation in timesensitive QA research (Dhingra et al., 2022; Liska et al., 2022). In this setting, only the question is prompted to the language model, which is then asked to output the answer without access to any natural language text. In Figure 2, the example question is asking about the soccer athlete Lionel Messi. The most difficult part of this question is the memorization of *Lionel Messi*'s experiences, since people who are not sports fans may not be able to answer such questions easily. Open Book Question Answering (OBQA). The OBQA formalization is a more realistic problem setting, where external context in the form of natural language text is provided to help LMs to answer the questions. As shown in middle of Figure 2, we use the Wikipedia page of the subject entity as part of the prompt to the language model, together with the question. Reasoning QA. In this setting, all the relevant temporal facts within the group S = {(s, r, oi, tsi , tei )|i ∈ 1*...N*} are provided in structured form as part of the prompt (right of Figure 2). This is a simplified version of OBQA since all possible answers and their time ranges are provided in the context. To avoid the models learning shortcuts, the provided facts are re-ordered randomly. Essentially, this setting resembles human temporal reasoning. The language models are required to deduce answers based on the time ranges of all possible answers. Human is able to deduce the answer by locating the query time within the group. Intuitively, human-level performance in this setting can be regarded as 100%. ## 4 Improving Temporal Reasoning In order to improve the temporal reasoning capabilities, we propose a temporal training framework for sequence-to-sequence language models. Firstly, we pre-train the language model with a temporal span extraction task to encourage the model to pay more attention to the temporal and entity spans. We then fine-tune the model on task-specific data in TEMPREASON. Finally, we further fine-tune the language model by time-sensitive reinforcement learning with our novel reward function. Temporal Span Extraction Pre-Training (TSE) Conventional language model pre-training randomly masks texts and reconstructs the original sentence. However, the relative importance of tokens and spans differs. Guu et al. (2020) first introduced salient span masking, i.e, reconstructing masked named entities, as an intermediate pre-training technique for language models. This approach has shown positive effects on the QA task. In order for the language model to capture more knowledge on time-related spans, we first pre-train on 100K Wikipedia articles with a temporal and entity span extraction task. Specifically, we use the Spacy NER tagger to extract the temporal and entity spans in 100K Wikipedia articles. The NER tagger is trained on the Ontonotes 5.0 corpus (Weischedel et al., 2013). We randomly mask 50% of the entities and temporal spans for a given paragraph and treat this paragraph as the input of T5 models. In this way, the model pays more attention to the contexts that are relevant to temporal shifts. Then the pre-trained language model will be used for fine-tuning with TEMPREASON question-answer pairs in different settings. Supervised Fine-Tuning (SFT) The TSE pretrained language model with parameters θ will then be fine-tuned on the task data in each setting. The input prompt to the LM is the concatenation of question q and context c, and the objective of SFT is to maximize the probability of P(a|*q, c*), where a is the correct answer. ![5_image_0.png](5_image_0.png) Figure 3: An example of time-sensitive reinforcement learning (TSRL). The ground truth is highlighted in green color and the negative answers are highlighted in yellow color. Time-Sensitive Reinforcement Learning (TSRL) One of the key challenges of temporal reasoning is that there are multiple possible answers for one subject. For a given fact x = (s, r, oj , tsj , tej ), we have the facts in the same group SN = {(s, r, oi, tsi , tei )|i ∈ 1*...N, i* ̸= j}. These facts have the same subject and relation as the given fact, but are in other time periods. Therefore, for a question related to the fact x, we are able to collect the negative answers N = {oi|i ∈ 1*...N, i* ̸= j} within the same group as the negative sample set for TSQA. An example of such negative examples is shown in Figure 3. For a given question related to fact x, we want to maximize the probability of the correct answer oj while penalizing the model when it outputs temporally wrong answers. The correct answers and negative answers were used for our reward function. We first calculate the positive score p(x) of the model prediction θ(x) with respect to the ground truth: $$p(x)=F(\theta(x),o_{j})$$ p(x) = F(θ(x), oj ) (1) where F refers to the scoring function for reward computation. Specifically, we used the EM scoring function as F. We then calculate the negative score n(x) by: $$n(x)=m a x\{F(\theta(x),o_{i})|i\neq j\}\qquad(2)$$ The negative score will be 1 if the model prediction returns a temporally wrong answer. Finally, the reward function for TSRL is calculated as: $$R(x)={\left\{\begin{array}{l l}{p(x)}&{p(x)\geq n(x)}\\ {-n(x)}&{n(x)>p(x)}\end{array}\right.}\quad(3)$$ The reward function is designed to give positive rewards for predictions that match the ground truth and negative rewards for predictions that match the answers in the negative answer set N. We then optimize the fine-tuned language model by the Proximal Policy Optimization (Schulman et al., 2017) algorithm. We denote our final model as TempT5. ## 5 Experiments 5.1 Experimental Settings We conduct experiments in each proposed setting in Section 3.2. The compared baselines are: **FLANT5-Large** (Wei et al., 2022). This model is finetuned on data from over 60 NLP tasks and the authors showed that large-scale instruction tuning significantly improves the model's performance on few-shot reasoning. We evaluate the model's zeroshot performance on temporal reasoning. **ChatGPT** (Ouyang et al., 2022). This model is initialized by GPT-3 and further trained to follow human instructions. We used the *gpt-3.5-0301* version of ChatGPT for more consistent evaluation. Since this model is not open source and not free, we only examined its performance on 200 examples for each setting. **T5-SFT** (Raffel et al., 2020). This baseline is based on supervised fine-tuning of the conventional T5 models. We use the T5-base model in our experiments and we fine-tune this model on each setting of TEMPREASON (Section 3.2). ## 5.2 Experimental Results In Table 4, we show the experimental results on the test sets of TEMPREASON. We then analyze the performance by each level of temporal understanding. L1 Understanding. For L1 temporal understanding, the performance of FLAN-T5-L and ChatGPT $$(1)$$ | FLAN-T5-L | ChatGPT | T5-SFT | TempT5 | | | | | | | | |-----------------|-----------|----------|----------|------|------|------|------|------|------|------| | Question Type | Setting | EM | F1 | EM | F1 | EM | F1 | EM | F1 | ∆ F1 | | L1: Time-Time | CBQA | 0.0 | 2.9 | 30.5 | 56.7 | 100 | 100 | 100 | 100 | +0.0 | | CBQA | 0.5 | 9.2 | 6.5 | 11.5 | 1.4 | 23.2 | 1.5 | 23.4 | +0.2 | | | L2: Time-Event | ReasonQA | 57.3 | 66.3 | 47.5 | 51.0 | 82.6 | 87.1 | 84.8 | 88.9 | +1.8 | | OBQA | 9.4 | 22.5 | 8.5 | 16.1 | 14.8 | 35.2 | 15.4 | 36.3 | +1.1 | | | CBQA | 0.4 | 10.5 | 12.0 | 21.8 | 12.1 | 25.3 | 12.3 | 25.4 | +0.1 | | | L3: Event-Event | ReasonQA | 36.3 | 47.5 | 49.5 | 52.3 | 78.2 | 83.0 | 81.1 | 86.1 | +3.1 | | OBQA | 8.1 | 19.2 | 17.0 | 25.3 | 19.7 | 31.2 | 21.1 | 32.4 | +1.2 | | TempT5 Time Range EM F1 1000-2022 100 100 2022-2040 94.4 97.1 Table 6: Comparison of L2 and L3 performance of TempT5 in the CBQA setting. | Question Type | EM | F1 | | |-----------------|------|------|------| | L2: CBQA | P39 | 1.6 | 21.1 | | Others | 1.3 | 19.9 | | | L3: CBQA | P39 | 51.4 | 68.2 | | Others | 0.6 | 12.1 | | significantly deteriorates compared to year prediction (Table 2). ChatGPT is able to achieve 99.6 EM on year prediction, whereas it can only achieve 30.5 EM on month prediction. The fine-tuned models T5-SFT and TempT5 are able to achieve 100 EM/F1 performance on this task. This showed that even though the L1 logic rules were not explicitly encoded in the language models, we can teach the language model to learn such rules by creating examples of the rules on a large scale. We further evaluate the trained L1-TempT5 model on an outof-domain futuristic test set (Table 5). The questions of the futuristic test set have reference times from 2022 to 2040, which are disjoint from the time period of TEMPREASON. The TempT5 model performs decently on the future test set, achieving 97.1 F1 score. However, this performance is still below the in-domain performance. L2 Understanding. The time-event relation is the main question type of previous TSQA datasets. When we compare the performance of the three settings of L2 performance, we can see the problem setting plays a significant role. For all three models, the performance of CBQA is the lowest among the three settings. This shows that it is highly difficult for the LMs to answer temporal questions without any context. Meanwhile, ReasonQA has a significantly better performance compared to OBQA and CBQA. This shows that the language models are able to perform temporal reasoning when the relevant facts were provided. That is, once the possible answers and the related timestamps are retrieved, fine-tuned language models (TempT5 and T5-SFT) can perform temporal reasoning relatively well. It is worth noting that the ChatGPT model has the worst performance in the L2 ReasonQA setting while its performance is exceptionally high in the preliminary year prediction experiments. This phenomenon shows that temporal understanding at different levels may not be easily transferable. Last but not least, our proposed TempT5 model achieves significant performance gains over T5-SFT in OBQA and ReasonQA, which is the strongest baseline in our experiments. L3 Understanding. Similar to L2 understanding, all models perform the best in ReasonQA, followed by OBQA and have the worst performance in CBQA. Besides, compared to L2 questions, most models have significantly worse performance on the L3 questions in the ReasonQA setting (except for ChatGPT), showing that L3 temporal reasoning is more challenging than L2. For the FLAN-T5- L model, the performance deterioration from L2 to L3 is 18.8 F1 (L2: 66.3 vs L3: 47.5), whereas the performance gaps of T5-SFT and TempT5 are much lower. It is worth noting that for the T5- SFT model, the exact match scores of L3 questions are significantly higher than those of L2 in the CBQA (L2:1.4 vs L3:12.1) and OBQA (L2:14.8 vs L3:19.7) setting (same for TempT5). We found that this counter-intuitive result is due to a reasoning shortcut of a specific question type 'P39 position held' (Table 13). We further analyze the CBQA performance by question type in Table 6. For questions other than 'P39', L3 performance is | questions. | FLAN-T5-L | ChatGPT | TempT5 | | |--------------|-------------|-----------|----------|------| | Time Range | % Train | F1 | F1 | F1 | | before 1900 | 8.4 | 69.5 | 77.8 | 85.6 | | 1900-1920 | 4.1 | 67.9 | 78.7 | 87.5 | | 1920-1940 | 6.6 | 65.3 | 43.8 | 87.6 | | 1940-1960 | 7.5 | 71.9 | 47.9 | 88.7 | | 1960-1980 | 11.0 | 68.0 | 43.8 | 90.5 | | 1980-2000 | 18.3 | 65.6 | 43.9 | 89.6 | | 2000-2020 | 37.8 | 66.1 | 49.1 | 89.8 | | 2020-2040 | 6.3 | 68.5 | 72.7 | 82.6 | | Overall | 100 | 67.1 | 51.0 | 88.9 | | ReasonQA | OBQA | | | | |------------|--------|------|------|------| | Metric | EM | F1 | EM | F1 | | TempT5 | 84.8 | 88.9 | 15.4 | 36.3 | | –TSE | 84.0 | 88.0 | 14.8 | 35.5 | | –TSRL | 83.4 | 87.7 | 15.0 | 35.8 | significantly worse than L2 (L3: 12.1 F1 vs L2: 19.9 F1). However, the performance of L3 CBQA on 'P39' questions is much higher than the other questions. This is because there are reasoning shortcuts for 'P39 *position held*' questions from entity names. For example, for the question 'Which position did Nicholas Budgen hold before Member of the 46th Parliament of the United Kingdom?', the reasoning shortcut is to simply change the '46th' to '45th'. This shows that L3 temporal reasoning can be achieved via different reasoning paths. ## 5.3 Ablation Study In Table 7, we showed the ablation study of TempT5 based on the L2 questions in the OBQA and ReasonQA settings. We can see that TSE and TSRL have different effects in the two settings. Removing TSRL has a heavier impact on the ReasonQA setting, leading to a 1.2 F1 drop. On the other hand, TSE pre-training is more important in the OBQA setting and removing the TSE pretraining leads to a performance drop of 0.8 F1. ## 5.4 Further Analysis In this section, we examine the model biases in TEMPREASON. We first analyze the L2 reasoning performance across different years in a similar manner as Section 2. The performance breakdown can be found in Table 8. We can see that for the FLANT5-L model and ChatGPT model, the L2 reasoning Table 9: Performance of TempT5 in L2 ReasonQA by question type. The intra-year question type refers to questions that have multiple possible answers within one year. In contrast, the inter-year question type only has one possible answer in that specific year. | Question Type | EM | F1 | | |-----------------|------------|------|------| | L2: ReasonQA | Intra-year | 80.5 | 86.3 | | Inter-year | 86.9 | 90.3 | | Example 1 Error Type: Intra Year Error Error Cause: Lack of monthly-level understanding. Question: Which position did Hirofumi Yoshimura hold in Jul 2019? Context: Hirofumi Yoshimura holds the position of: Governor of Osaka Prefecture from Apr 2019 to Dec 2022. Member of the House of Representatives of Japan from Dec 2014 to Oct 2015. Mayor of Osaka from Dec 2015 to Mar 2019. Prediction: Mayor of Osaka Ground Truth: Governor of Osaka Prefecture Table 10: An example of intra-year error of TempT5 in L2 ReasonQA. performance fluctuates across different time periods. FLAN-T5-L not only has higher performance but also lower variability across the different time periods. On the other hand, from the performance breakdown of our proposed TempT5, we can see that the temporal biases shown in the year prediction experiments (Table 2) were alleviated. The F1 scores from 1940 to 2020 were similar. However, the F1 scores before 1900 and after 2020 are still significantly worse than the other time periods. This performance degradation is largely due to the lack of training data in those time periods. The other major source of errors comes from the intra-year question type. The intra-year question type refers to questions that have multiple possible answers within one year. Therefore, it requires reasoning at the month level. As shown in Table 9, the performance on intra-year questions is significantly worse than the performance on inter-year questions, especially for the difference in the EM score (6.4, 86.9 vs. 80.5). In Table 10, we show an example of an intra-year reasoning error. We can see that the model fails to capture the intra-year position change of the subject. ## 6 Related Work Temporal Information Extraction Early efforts on temporal NLP research primarily focus on event temporal information extraction. Pustejovsky et al. (2003) constructed the TimeBank corpus, which is a temporally annotated corpus that annotates events, times, and temporal relations (such as before/after). The TIE task asks models to extract the events within a piece of text and to identify the temporal relations between event pairs. The TempEval (Verhagen et al., 2010; Bethard et al., 2016) challenge is a popular TIE challenge with a similar annotation scheme as TimeBank. However, it is costly to exhaustively annotate the temporal relations among all events. Cassidy et al. (2014) proposed a dense annotation scheme and constructed the TimeBankDense dataset, which has more complete annotation compared to TimeBank. Han et al. (2019) proposed a joint framework to extract events and time in an end-to-end manner. Rajaby Faghihi and Kordjamshidi (2021) proposed the Time-stamped Language Model to understand the flow of events. However, prior works in this field focused on extracting events and temporal relations within one piece of document. The models trained on this task cannot perform global event-to-time grounding. Temporal Reasoning over KGs The Temporal Knowledge Graph Completion (TKGC) field studies temporal reasoning in knowledge graphs. This task aims to rank all entities in a knowledge graph given a temporal query. Many works in this field (TTransE, Jiang et al., 2016; TTransH, Dasgupta et al., 2018; TNTComplEx, Lacroix et al., 2020) were proposed as extensions to prior knowledge completion techniques, such as TransE (Bordes et al., 2013), TransH (Wang et al., 2014), and ComplEx (Kipf and Welling, 2017). With a similar concept as TKGC, several question answering datasets are proposed based on temporal knowledge graphs, such as TEQUILA (Jia et al., 2018b), TimeQuestions (Jia et al., 2021), and CronQuesions (Saxena et al., 2021). These datasets include more complex questions in a natural language format, and the task setting is also asking models to rank all the entities of a given knowledge graph. Mavromatis et al. (2022) proposed a joint model that unifies temporal KG embeddings and pre-trained language models for this task. Shang et al. (2022) proposed a contrastive approach to improve the QA performance for temporal KGs. Temporal reasoning in KGs is closely related to our problem of interest. However, the major difference is that KGQA presumes all the entities are known to the system and the task is to rank all the possible entities that satisfy the queries. In contrast, our task aims to answer temporal questions based on natural text input only. Temporal Reasoning for LMs Large language models (Devlin et al., 2019; Raffel et al., 2020; Liu et al., 2019) have demonstrated good performance on the question answering task (Rajpurkar et al., 2016; Kwiatkowski et al., 2019). In recent years, several contemporary time-sensitive QA datasets were proposed. Zhang and Choi (2021) proposed the SituatedQA dataset, which contains plenty of time-dependent question-answer pairs. The TEMPLAMA dataset (Dhingra et al., 2022) was proposed to evaluate the CBQA performance for time-dependent questions from 2010 to 2020. However, the QA performance of TEMPLAMA may be overestimated, since it only covers a short time period and the period is in favor of LM's temporal bias. Similarly, StreamingQA (Liska et al., 2022) has a similar disadvantage, since its time coverage is from 2007 to 2020. The Time-sensitive QA dataset (Chen et al., 2021) covers a relatively longer timespan (from 1367 to 2018), but it only contains questions of time-event relation. The common drawback of the previously proposed TSQA datasets is the lack of coverage of temporal reasoning levels other than the time-event type of questions. ## 7 Conclusions And Future Work In this paper, we tackled the under-explored temporal reasoning problem for large language models. We found that large language models are highly susceptible to biases of time, and their temporal reasoning capability varies depending on the specific time given in the question. Besides, we proposed a comprehensive time-sensitive QA dataset TEMPREASON to evaluate LMs' temporal reasoning capability in diverse settings. Lastly, we proposed a novel training paradigm to improve language models' reasoning capability by temporal span extraction pre-training and time-sensitive reinforcement learning. We conducted extensive experiments and demonstrated that our proposed model consistently outperformed strong baselines. ## 8 Limitations The focus of the TEMPREASON dataset is to examine language models' temporal reasoning capability. However, the temporal expressions of TEMPREASON are only in the form of month in textual form and year in numeric form. One limitation of the TEMPREASON benchmark is the lack of adversarial attacks in other temporal formats, such as all numeric dates and months. The robustness of temporal reasoning is also important in real-world applications. Since the scope of this paper only focuses on the reasoning aspect, the robustness of TEMPREASON will be left for future research. Besides, the knowledge triples of TEMPREASON are from the crowd-sourced Wikidata KB, and these triples are used to construct the question-answer pairs in this paper. Hence, it is possible that errors in the Wikidata KB propagate to the answers in TEMPREASON. However, such errors have minimal effect in the ReasonQA setting, for this task only asks the models to infer from factual knowledge in the Wikidata KB. ## 9 Ethics Statement In this paper, we created a probing dataset TEMPREASON for temporal reasoning evaluation. The dataset is constructed based on the matching of Wikidata KB and Wikipedia articles. This approach is commonly used for distantly supervised data construction. The Wikidata KB is under the public domain4and the Wikipedia articles are licensed under the Creative Commons AttributionShareAlike 3.0 Unported License5. Therefore, we are able to adapt these data to construct our dataset. We will also release our data under the same license as Wikidata. The scope of our dataset is purely for scientific research of language models' temporal reasoning capability. However, the contexts from the Wikipedia articles may contain improper content. The adoption of such content is not a decision of the authors, and all content in the dataset does not reflect the views or stances of the authors of this paper. ## 10 Acknowledgements We would like to thank all the reviewers for their insightful comments and constructive feedback. ## References Steven Bethard, Guergana Savova, Wei-Te Chen, Leon Derczynski, James Pustejovsky, and Marc Verhagen. 2016. SemEval-2016 task 12: Clinical TempEval. In Proceedings of SemEval. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 4https://www.wikidata.org/wiki/Wikidata: Licensing 5https://en.wikipedia.org/wiki/Wikipedia: Copyrights 2013. Translating embeddings for modeling multirelational data. In *Proceedings of NIPS*. Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An annotation framework for dense event ordering. In *Proceedings of ACL*. Wenhu Chen, Xinyi Wang, and William Yang Wang. 2021. A dataset for answering time-sensitive questions. In *Proceedings of NIPS*. Yew Ken Chia, Lidong Bing, Sharifah Mahani Aljunied, Luo Si, and Soujanya Poria. 2022. A dataset for hyper-relational extraction and a cube-filling approach. In *Proceedings of EMNLP*. Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. 2018. HyTE: Hyperplane-based temporally aware knowledge graph embedding. In Proceedings of EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL*. Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2022. Time-Aware Language Models as Temporal Knowledge Bases. *Transactions* of ACL. Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Boyang Li, Shafiq Joty, and Lidong Bing. 2023. Are large language models good data annotators? a study on gpt-3, chatgpt and gpt-4. In Proceedings of ACL. Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, and Tat-Seng Chua. 2023. Reasoning implicit sentiment with chain-of-thought prompting. In *Proceedings of* ACL. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In *Proceedings of ICML*. Rujun Han, Qiang Ning, and Nanyun Peng. 2019. Joint event and temporal relation extraction with shared representations and structured prediction. In *Proceedings of EMNLP*. Zhen Jia, Abdalghani Abujabal, Rishiraj Saha Roy, Jannik Strötgen, and Gerhard Weikum. 2018a. Tempquestions: A benchmark for temporal question answering. In *Proceedings of WWW*. Zhen Jia, Abdalghani Abujabal, Rishiraj Saha Roy, Jannik Strötgen, and Gerhard Weikum. 2018b. Tequila: Temporal question answering over knowledge bases. In *Proceedings of CIKM*. Zhen Jia, Soumajit Pramanik, Rishiraj Saha Roy, and Gerhard Weikum. 2021. Complex temporal question answering on knowledge graphs. In Proceedings of CIKM. Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, and Zhifang Sui. 2016. Towards time-aware knowledge graph completion. In *Proceedings of COLING*. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In *Proceedings* of EMNLP. Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui. 2022. Realtime qa: What's the answer right now? In Proceedings of EMNLP. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In *Proceedings of ICLR*. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. *Transactions of* ACL. Timothée Lacroix, Guillaume Obozinski, and Nicolas Usunier. 2020. Tensor decompositions for temporal knowledge base completion. In *Proceedings of ICLR*. Adam Liska, Tomas Kocisky, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, D'Autume Cyprien De Masson, Tim Scholtes, Manzil Zaheer, Susannah Young, et al. 2022. Streamingqa: A benchmark for adaptation to new knowledge over time in question answering models. In *Proceedings of ICML*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. In *arXiv preprint arXiv:1907.11692*. Costas Mavromatis, Prasanna Lakkur Subramanyam, Vassilis N Ioannidis, Adesoji Adeshina, Phillip R Howard, Tetiana Grinberg, Nagib Hakim, and George Karypis. 2022. Tempoqr: temporal question reasoning over knowledge graphs. In *Proceedings of AAAI*, volume 36, pages 5825–5833. Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, and Dan Roth. 2020. TORQUE: A reading comprehension dataset of temporal ordering questions. In *Proceedings of EMNLP*. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. In arXiv preprint arXiv:2203.02155. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The timebank corpus. In *Corpus linguistics*. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *JMLR*. Hossein Rajaby Faghihi and Parisa Kordjamshidi. 2021. Time-stamped language model: Teaching language models to understand the flow of events. In *Proceedings of NAACL*. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* EMNLP. Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In *Proceedings of EMNLP*. Apoorv Saxena, Soumen Chakrabarti, and Partha Talukdar. 2021. Question answering over temporal knowledge graphs. In *Proceedings of ACL*. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. In arXiv preprint arXiv:1707.06347. Chao Shang, Guangtao Wang, Peng Qi, and Jing Huang. 2022. Improving time sensitivity for question answering over temporal knowledge graphs. In *Proceedings* of ACL. Association for Computational Linguistics. Marc Verhagen, Roser Saurí, Tommaso Caselli, and James Pustejovsky. 2010. SemEval-2010 task 13: TempEval-2. In *Proceedings of SemEval*. Denny Vrandeciˇ c and Markus Krötzsch. 2014. ´ Wikidata: a free collaborative knowledgebase. *Proceedings of CACM*. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In *Proceedings of AAAI*. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In *Proceedings of* ICLR. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium. Hai Ye, Qizhe Xie, and Hwee Tou Ng. 2023. Multisource test-time adaptation as dueling bandits for extractive question answering. In Proceedings of ACL. Michael Zhang and Eunsol Choi. 2021. SituatedQA: Incorporating extra-linguistic contexts into QA. In Proceedings of EMNLP. Ruochen Zhao, Shafiq Joty Xingxuan Li, Chengwei Qin, and Lidong Bing. 2023. Verify-and-edit: A knowledge-enhanced chain-of-thought framework. In *Proceedings of ACL*. ## A Realtime Adaptation Of Lms Besides the experiments on our proposed TEMPREASON dataset. We also examined our model in the RealtimeQA (Kasai et al., 2022) leaderboard. This leaderboard releases time-sensitive questions every week based on weekly quizzes from news websites (such as CNN and CNBC). The RealtimeQA challenge has two tracks: (1) multiplechoice questions and (2) generation track. The generation track of this challenge is the same as OBQA in this paper. We examined our model along with the two retrievers provided in the challenge: (1) Google custom search (GCS), and (2) Dense Passage Retrieval (DPR, Karpukhin et al., 2020). We adapt our TempT5 model of L2 ReasonQA on the question-answer pairs of RealtimeQA before December 2022. We then evaluate the adapted model on the questions released on 16th December 20226. Experimental results (Table 12) show that our model performs competitively even when adapting to the most up-to-date TSQA challenge. ## B Implementation Details This section describes the implementation details of our models and baselines. For temporal span extraction pre-training, we use the T5-base model for initialization. We then train the model for 100K steps with a batch size of 8 and a learning rate of 2e-5. We use the maximum input length of 512 for TSE pre-training. For task-specific fine-tuning, we use the same batch size and learning rate, whereas the maximum input lengths are different for each setting. For the CBQA setting, the maximum input length is set as 128, since only the question is given to the model. For the ReasonQA setting, the maximum input length is set as 512. The maximum length of 1,024 is used for the OBQA setting, since the context in this setting is the longest on average. For each setting, we fine-tune the language model for 3 epochs, and evaluation is conducted using the final checkpoint. For time-sensitive reinforcement learning, we followed the proximal policy optimization (PPO, Schulman et al., 2017) algorithm. Instead of using a reward model, we use the reward function described in Section 4. For this stage, we set the initial KL penalty coefficient as 0.05 and the target KL coefficient as 6. The discount factor γ for PPO is set to 0.99. ## C Comparison Of Tempreason And Prior Datasets In Table 11, we show the detailed comparison of our TEMPREASON dataset and prior time-sensitive question answering datasets. Our dataset is the first to include all three temporal reasoning types and the ReasonQA setting. ## D Tempreason **Templates** The templates that we used to create TEMPREASON is shown in Table 13. Dataset QA format Knowledge Corpus Closed/Open/Reason Time Coverage Size L1 L2 L3 TEMPREASON Language Wikidata/Wikipedia Closed/Open/Reason 634-2023 52.8K ✓ ✓ ✓ TEMPLAMA (Dhingra et al., 2022) Language Wikidata Closed 2010-2020 50k ✗ ✓ ✗ Time-Sensitive QA (Chen et al., 2021) Language Wikidata/Wikipedia Open 1367-2018 41.2k ✗ ✓ ✗ StreamingQA (Liska et al., 2022) Language WMT Closed/Open 2007-2020 147k ✗ ✓ ✗ SituatedQA (Zhang and Choi, 2021) Language Wikipedia/Human Annotation Closed/Open 1270-2021 12.2k ✗ ✓ ✗ TempQuestions (Jia et al., 2018a) KG Wikipedia KG NA 1.2k ✗ ✓ ✗ TimeQuestions (Jia et al., 2021) KG Wikidata KG NA 16.1k ✗ ✓ ✗ CronQuestions (Saxena et al., 2021) KG Wikidata KG 34-2021 410k ✗ ✓ ✓ Table 11: Dataset comparison of TEMPREASON and prior datasets. Table 12: Experimental results on the generation track of RealtimeQA leaderboard based on December 16, 2022's questions. The task formulation of this track is the same as OBQA in this paper. Results with † are taken from the URL in the footnote. | EM | F1 | | |--------------|------|------| | GPT3+GCS† | 55.0 | 63.6 | | TempT5-L+GCS | 48.3 | 53.3 | | RAG+GCS† | 35.0 | 45.9 | | GPT3+DPR† | 17.2 | 23.0 | | TempT5-L+DPR | 10.3 | 18.4 | | RAG+DPR† | 0.0 | 3.1 | | WikiData ID | KB Relation | # Queries | Template | |----------------------------|-----------------------|-------------|-------------------------------------------------------------------| | L1 Question Templates: NA | NA | NA | What is the time x year(s) and y month(s) before/after t? | | NA | NA | NA | What is the time x year(s) before/after t? | | NA | NA | NA | What is the time y month(s) before/after t? | | L2 Question Templates: P54 | member of sports team | 4,087 | Which team did <subject> play for in t? | | P39 | position held | 3,133 | Which position did <subject> hold in t? | | P108 | employer | 2,368 | Which employer did <subject> work for in t?. | | P102 | political party | 500 | Which political party did <subject> belong to in t? | | P286 | head coach | 1,153 | Who was the head coach of <subject> in t? | | P69 | educated at | 750 | Which school was <subject> attending in t? | | P488 | chairperson | 1,904 | Who was the chair of <subject> in t? | | P6 | head of government | 1,627 | Who was the head of the government of <subject> in t? | | P35 | head of state | 250 | Who was the head of the state of <subject> in t? | | P127 | owned by | 245 | Who was the owner of <subject>in t? | | L3 Question Templates: P54 | member of sports team | 2,524 | Which team did <subject> play for before/after oj ? | | P39 | position held | 2,538 | Which position did <subject> hold before/after oj ? | | P108 | employer | 1,991 | Which employer did <subject> work for before/after oj ?. | | P102 | political party | 433 | Which political party did <subject> belong to before/after oj ? | | P286 | head coach | 1,051 | Who was the head coach of <subject> before/after oj ? | | P69 | educated at | 594 | Which school was <subject> attending before/after oj ? | | P488 | chairperson | 1,881 | Who was the chair of <subject> before/after oj ? | | P6 | head of government | 1,535 | Who was the head of the government of <subject> before/after oj ? | | P35 | head of state | 268 | Who was the head of the state of <subject> before/after oj ? | | P127 | owned by | 199 | Who was the owner of <subject> before/after oj ? | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 ✓ A2. Did you discuss any potential risks of your work? Section 9 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 ✓ B1. Did you cite the creators of artifacts you used? Section 3 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 9 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 9 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 3, Section 9, and Appendix C ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 and appendix C. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
rotem-etal-2023-finding
Finding the {SWEET} Spot: Analysis and Improvement of Adaptive Inference in Low Resource Settings
https://aclanthology.org/2023.acl-long.829
Adaptive inference is a simple method for reducing inference costs. The method works by maintaining multiple classifiers of different capacities, and allocating resources to each test instance according to its difficulty. In this work, we compare the two main approaches for adaptive inference, Early-Exit and Multi-Model, when training data is limited. First, we observe that for models with the same architecture and size, individual Multi-Model classifiers outperform their Early-Exit counterparts by an average of 2.3{\%}. We show that this gap is caused by Early-Exit classifiers sharing model parameters during training, resulting in conflicting gradient updates of model weights. We find that despite this gap, Early-Exit still provides a better speed-accuracy trade-off due to the overhead of the Multi-Model approach. To address these issues, we propose SWEET (Separating Weights for Early-Exit Transformers) an Early-Exit fine-tuning method that assigns each classifier its own set of unique model weights, not updated by other classifiers. We compare SWEET{'}s speed-accuracy curve to standard Early-Exit and Multi-Model baselines and find that it outperforms both methods at fast speeds while maintaining comparable scores to Early- Exit at slow speeds. Moreover, SWEET individual classifiers outperform Early-Exit ones by 1.1{\%} on average. SWEET enjoys the benefits of both methods, paving the way for further reduction of inference costs in NLP.
# Finding The Sweet Spot: Analysis And Improvement Of Adaptive Inference In Low Resource Settings Daniel Rotem♡ Michael Hassid♡ Jonathan Mamou♠ **Roy Schwartz**♡ ♡School of Computer Science & Engineering, Hebrew University of Jerusalem ♠ Intel Labs, Israel {daniel.rotem,michael.hassid,roy.schwartz1}@mail.huji.ac.il jonathan.mamou@intel.com ## Abstract Adaptive inference is a simple method for reducing inference costs. The method works by maintaining multiple classifiers of different capacities, and allocating resources to each test instance according to its difficulty. In this work, we compare the two main approaches for adaptive inference, Early-Exit and MultiModel, when training data is limited. First, we observe that for models with the same architecture and size, individual Multi-Model classifiers outperform their Early-Exit counterparts by an average of 2.3%. We show that this gap is caused by Early-Exit classifiers sharing model parameters during training, resulting in conflicting gradient updates of model weights. We find that despite this gap, Early-Exit still provides a better speed-accuracy trade-off due to the overhead of the Multi-Model approach. To address these issues, we propose SWEET,1an Early-Exit fine-tuning method that assigns each classifier its own set of unique model weights, not updated by other classifiers. We compare SWEET's speed-accuracy curve to standard Early-Exit and Multi-Model baselines and find that it outperforms both methods at fast speeds while maintaining comparable scores to EarlyExit at slow speeds. Moreover, SWEET individual classifiers outperform Early-Exit ones by 1.1% on average. SWEET enjoys the benefits of both methods, paving the way for further reduction of inference costs in NLP. We publicly release our code.2 ## 1 Introduction Pre-trained Transformer-based language models such as BERT (Devlin et al., 2019), DeBERTa (He et al., 2020), and GPT3 (Brown et al., 2020) have become the go-to tool in NLP . Although powerful, the growing size of these models has been a major drawback (Thompson et al., 2020; Schwartz et al., ![0_image_0.png](0_image_0.png) Figure 1: Illustration of the adaptive inference approaches compared in this work. In both methods, multiple classifiers of increasing sizes are run serially, until a confident prediction is made. In Early-Exit (left), a single model with multiple classifiers is used, such that early computations are reused by later classifiers. In Multi-Model (right), a sequence of independent models is used, allowing each classifier to decouple its parameters from other classifiers. 2020a), making them costly to run. Various attempts to reduce inference cost have been proposed, including distillation (Hinton et al., 2015), pruning (LeCun et al., 1989) and quantization (Courbariaux et al., 2014). This work focuses on adaptive inference (Graves, 2016; Liu et al., 2020), a recent approach in which the variability of sample difficulty is leveraged toward a smarter allocation of computational resources. An appealing property of adaptive inference is that it enables dynamic control of the speed-accuracy trade-off. There are two main approaches to adaptive inference, both using a set of classifiers of different sizes. In *Early-Exit* (Schwartz et al., 2020b; Xin et al., 2020), multiple classification heads are added to the same model at different layers, allowing for early exit during inference (Fig. 1, left). Another approach (henceforth *Multi-Model*) is to apply multiple *independent* classifiers of varying capacities serially until a prediction is made (Varshney and 14836 Baral, 2022a; Li et al., 2020, Fig. 1, right). These approaches have complementary benefits; MultiModel allows for easier batching at inference time, and potentially larger savings due to using very efficient models (Mamou et al., 2023). Early-Exit on the other hand is more memory efficient, faster to train, and enables re-use of early computation if an early exit was not taken. In this work, we compare the speed-accuracy behavior of the two approaches when training data is limited.3 We first observe that Early-Exit model weights are updated by multiple conflicting gradient signals throughout the training process (Fig. 2, left). We show that this leads to a decrease in performance of *individual* Early-Exit classifiers compared to Multi-Model ones (2.3% gap on average). We find that this gap is higher for the earliest classifiers (5.2% on average) than for later ones (1.4%). We also find that while each Multi-Model classifier outperforms its Early-Exit counterpart, it does not translate to an overall better speed-accuracy trade-off. Instead, we find that each method dominates performance in a different region: MultiModel outperforms Early-Exit at fast inference speeds, while Early-Exit is better at slow speeds. Multi-Model downgraded scores at slow speeds are likely caused by the overhead of running models sequentially to predict hard samples. Inspired by our findings, we present SWEET,4 an Early-Exit method for bridging the performance gap between standard Early-Exit and Multi-Model. In SWEET, each Early-Exit classifier only updates the parameters of layers preceding it up to the previous classifier. This way, each model parameter is updated by a single classifier, thus avoiding conflicting gradients during the training of Early-Exit models (Fig. 2, right). We experiment with two established pretrained models: BERT (Devlin et al., 2019) and DeBERTa (He et al., 2020). We fine-tune EarlyExit models using SWEET on seven text classification tasks from GLUE (Wang et al., 2018) and compare them to Early-Exit and Multi-Model baselines. The speed-accuracy curve of SWEET dominates both baselines at fast speeds for 21 out of 28 experiments conducted. As for individual clas-3Reducing inference costs is particularly helpful when computational resources are limited. Such conditions are often paired with restricted access to labeled data or a low budget for data annotation. To evaluate the effectiveness of adaptive inference methods in such scenarios, we limit training data size to a few thousand examples in all our experiments. 4Separating Weights for Early-Exit Transformers. ![1_image_0.png](1_image_0.png) sifiers, SWEET performs 1.1% better on average than Early-Exit, mostly improving earlier classifiers, where conflicting gradients are more dominant. We summarise our main contributions: (1) We propose a way of measuring conflicting gradients, and show that they exist in Early-Exit training process; (2) We empirically compare Early-Exit and Multi-Model classifiers and show that conflicting gradients lead to individual Early-Exit classifiers being less accurate; (3) We propose a novel finetuning method, SWEET, which alleviates the conflicting gradients problem in Early-Exit, and leads to improved results at fast inference speeds; (4) We publicly release our code.5 ## 2 Background: Adaptive Inference Adaptive inference aims to reduce inference costs of deep neural nets by matching model and sample complexities. Sample difficulty usually varies in real-world data, meaning not all instances require processing by the most powerful classifier. Therefore, we can allocate fewer resources to easier instances, and reduce the average inference cost, potentially at the cost of performance degradation. Several exit strategies have been developed to control the speed-accuracy trade-off of a model by deciding when to make an early prediction and halt computation (Xin et al., 2020; Zhou et al., 2020; Xin et al., 2021; Schuster et al., 2021; Zhang et al., 2022). In this work, we mainly experiment with confidence-based exiting (Schwartz et al., 2020b), in which computation halts if the softmax probability assigned to a given label exceeds a predetermined threshold. Dynamic control of model's inference speed is gained through setting different threshold values. There are two main adaptive inference approaches, Early-Exit and Multi-Model, which we describe below. Early-Exit Early-Exit models are deep neural nets with multiple output points, following intermediate layers. In this work, we focus on EarlyExit implementation as presented in Schwartz et al. (2020b). During fine-tuning,6instances are passed through the model, and a loss is calculated based on predictions made by all classifiers. This leads to some model weights being updated by gradient signals from multiple classifiers (Fig. 2, left). At inference time, an instance is passed through the model and its label is predicted sequentially until a decision to exit early is taken, and computation is halted. An appealing property of Early-Exit is that it allows for efficient re-use of previous computation from lower layers in higher classifiers. However, this means that some model parameters are updated by multiple classifiers, which may lead to sub-optimal performance of individual classifiers due to conflicting gradients. In this work, we study the effect of this property on Early-Exit models. Multi-Model In Multi-Model adaptive inference, a set of independent models of increasing capacity are fine-tuned separately for the same task. At inference time, the models are used sequentially from smallest to largest until a prediction meets some predetermined criterion or until the largest model has been used. This method is more robust than Early-Exit, being easier to extend and enabling the use of different architectures. Additionally, MultiModel potentially allows for further computational savings by using models smaller than the smallest Early-Exit model (a single layer of the backbone 6Some works have pre-trained Early-Exit Models from scratch (Liu et al., 2021b) but due to budgetary constraints, it is common practice to fine-tune pre-trained models with the added classifiers on the downstream task (Liu et al., 2020; Xin et al., 2020; Schwartz et al., 2020b). model, Mamou et al., 2023). However, it may add overhead to the prediction time of hard instances, as those pass through multiple models with early computations being discarded, bringing the total runtime to exceed that of using the largest model. ## 3 Early-Exit Vs. Multi-Model 3.1 Conflicting Gradients In Early-Exit Unlike Multi-Model, when fine-tuning Early-Exit models, model weights are updated by multiple gradient signals, originating in different classification layers (Fig. 2, left). We hypothesize that this leads to sub-optimal performance for all classifiers involved, as gradient signals might conflict with one another and derail the classifiers from their goal. To test this hypothesis, we compare the gradient similarity of different Early-Exit classifiers. We fine-tune a BERT*BASE* model with four exit points (following layers [1, 4, 6, 12]) for 400 training steps on the MNLI dataset (Williams et al., 2018), using a batch size of 16 with a learning rate of 2e-5. We pass a new training batch through the model,7and inspect the gradients with respect to the last feed-forward matrix in each layer preceding a classifier.8 To measure the degree of alignment between gradient updates of different classifiers, we average the cosine similarity between the rows of gradient matrices for every pair of classifiers. High similarity indicates that the classifiers are updating the weights in a similar direction, while low similarity (in absolute values) suggests that the updates are close to orthogonal and potentially detrimental to both classifiers. This section studies the strengths and weaknesses of both adaptive inference approaches. We start by describing a limitation of Early-Exit approaches: lower model layers are updated by conflicting gradient signals during training. We show that this leads to inferior performance of individual Early-Exit classifiers compared to corresponding Multi-Model classifiers. We then compare the effects of these performance drops on the speedaccuracy curve of Early-Exit models compared Multi-Model ones. Fig. 3 shows that the gradients of future classifiers are generally orthogonal to those of the current classifier for each examined Transformer block. This indicates that model weights indeed suffer 7We repeat this experiment with an additional batch. Results (Appendix B) show a very similar trend. 8Except for the last layer, updated by a single classifier. ![3_image_1.png](3_image_1.png) from conflicting gradient updates during the training process, which might affect the performance of each individual classifier. Interestingly, when multiple future classifiers are present, they tend to align with each other, as indicated by the relatively high similarity between layer 1's gradient updates originating in classifiers 2, 3, and 4, and between layer 4's gradient updates from classifiers 3 and 4. ## 3.2 Effects Of Conflicting Gradients To evaluate the effect of conflicting gradients on individual Early-Exit classifiers, we compare the different classifiers of an Early-Exit model and a Multi-Model model. For a clean comparison, we use the same backbone model, exit points, hyperparameters,9and random seeds. As an example, for a given 12-layer Transformer model, an EarlyExit model with exit points following layers [4, 12] would be compared to a Multi-Model one consisting of two classifiers: the first four layers fit with a classification head, and a full (12 layer) model. The models differ only in the fact that for the EarlyExit model, model weights are updated by multiple future classifiers during the fine-tuning process, while each Multi-Model classifier is an independent model. To isolate the effect of conflicting gradients ![3_image_0.png](3_image_0.png) | Size | Method | Exit Layer | | |-------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------|--------------|----| | 1 | 4 | 6 | 12 | | MM | 60.90.6 71.40.1 74.71.2 79.90.9 | | | | EE | 57.30.3 70.30.3 74.40.7 78.70.5 | | | | SWEET 60.90.5 71.60.5 74.00.4 77.40.6 1 6 12 24 MM 60.10.1 66.90.4 74.40.9 81.60.3 EE 56.60.3 65.60.9 74.00.6 79.91.9 SWEET 59.80.3 66.50.7 74.50.3 81.31.0 | | | | ![3_image_2.png](3_image_2.png) on the classifiers, we evaluate each one separately on the entire validation set. The performance gap between each individual Multi-Model classifier and its corresponding Early-Exit classifier, allows us to directly measure the latter's downgrade in performance caused by conflicting gradients. We experiment with BERT and DeBERTa {BASE (∼110M parameters), and LARGE (∼350M parameters)}. For BASE versions, we install classifiers following layers [1, 4, 6, 12]. For LARGE versions, we use layers [1, 6, 12, 24]. We fine-tune an Early-Exit model and a corresponding Multi-Model model on seven NLP tasks from the GLUE benchmark (Wang et al., 2019): SST2 (Socher et al., 2013), MRPC (Dolan and Brockett, 2005), RTE (Dagan et al., 2006; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), CoLA (Warstadt et al., 2019), MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), and QQP (Iyer et al., 2016). We report accuracy for all tasks except for CoLA (Matthews corr.). We finetune the models for two epochs on all tasks. As our goal is to test Adaptive inference in a low resource setting, we limit the training set size for each task to 6K. 10 We report the mean validation scores and standard deviation across three random seeds. Other hyper-parameters are listed in Appendix A. Table 1 shows results for BERT classifiers,11 averaged across all tasks. Our results show that multiple classifiers updating the same weights during fine-tuning diminishes the performance of Early Exit classifiers by 2.3% on average (1.7% for BERT, 3.0% for DeBERTa), with earliest classifiers affected the most (3.5% for BERT, 7.0% for DeBERTa). The increased effect on early classifiers supports our hypothesis regarding conflicting gradients: parameters used by early classifiers receive updates from the largest number of classifiers, thus increasing the chance for conflicting gradients and leading to a larger decrease in performance. ## 3.3 Speed-Accuracy Trad-Eoff So far we observed that Multi-Model classifiers outperform Early-Exit ones. On the other hand, we also note that there is considerable overhead when using a Multi-Model model. For Multi-Model, an instance that makes an exit on classifier Ci, runs all classifiers up to (including) Ci, each being an independent model, while Early-Exit early layer computations are reused by later classifiers. We turn to evaluate the trade-off between these two factors and compare the overall speed-accuracy curve of each adaptive inference method. We evaluate model scores across different inference speeds using 11 threshold values, evenly spaced in the range 1 \# of labels, 1 . We note that t =1 \# of labels corresponds to the earliest classifier predicting all labels, while for t = 1, the final classifier is used on the entire validation set. We compute the speedup ratio for each instance as the number of layers it passes through divided by the number of layers of the full backbone model (12 for BASE models, 24 for LARGE models), and average across all instances. As an example, for classifiers following layers [1, 4, 6, 12], an instance that exits on the third classifier (i.e., after layer 6), will have a speedup ratio of 6 12 for Early-Exit and 11 12 for Multi-Model ( 6 12 + 4 12 + 1 12 ).12 We evaluate models on the seven tasks listed in Section 3.2, and report the average scores across all tasks. BERT*BASE* results are presented in Fig. 4, while the results for all other models can be found in Fig. 5. The speed-accuracy curves reveal two important observations. First, as expected, MultiModel achieves better scores at fast speeds (when most instances are predicted by early classifiers). 11See Appendix C, Table 3 for DeBERTa results. 12Further experimental details can be found in Appendix A. ![4_image_0.png](4_image_0.png) Second, although each Multi-Model classifier outperforms its corresponding Early-Exit one, the Multi-Model overhead leads to this approach being outperformed by Early-Exit at slow speeds (when more instances are predicted by later classifiers). ## 4 Sweet: Separating Weights For Early-Exit Transformers Based on our findings, we aim to design an EarlyExit method that takes advantage of the benefits of both Multi-Model and Early-Exit approaches: making the lower Early-Exit classifiers as accurate as the Multi-Model ones, without the additional overhead of the Multi-Model approach. ## 4.1 Sweet We present SWEET—a method for fine-tuning Early-Exit models that avoids the harmful impact of conflicting gradients. SWEET grants each classifier exclusive control over part of the model's parameters, such that each model parameter is only updated by a single classifier. This is done by truncating the loss signal from classifier Ci when reaching the Transformer layer corresponding to classifier Ci−1 (Fig. 2, right). In SWEET, each classification head receives complete control over a portion of the Model's parameters and can alter them in a way that optimizes its own goals. Truncating future loss signals makes the very first classifier an independent model, as ![5_image_0.png](5_image_0.png) opposed to a model that shares its parameters with future classifiers. The following classifiers update only a subset of the model's parameters but are affected by early model parameters, allowing them to still make use of earlier computations when processing an instance. We clarify that all model weights are updated simultaneously, as opposed to training the classifiers (and their corresponding layers) sequentially, which would have led to significantly longer fine-tuning, matching that of a Multi-Model model. ## 4.2 A Better Speeds-Accuracy Curve We turn to evaluate the speed-accuracy curve of SWEET. We use the same experimental setup as in Section 3.2; we fine-tune two pre-trained LMs (BERT and DeBERTa) in two sizes (BASE, LARGE) over the same seven text classification tasks. We use the same exit points and evaluate over the entire validation set using confidence-based exiting. We compare SWEET to two baselines: a standard Early-Exit model and a Multi-Model model. We compute the speedup ratio using the same method as described in Section 3.3. For further implementation details, see Appendix A. Fig. 5 presents the speed-accuracy curves of all models, averaged across all tasks. The figure shows that at the fastest speeds, SWEET outperforms Early-Exit for all models and is comparable to, or outperforms, Multi-Model. However, for 3 out of the 4 models (all but BERT*LARGE*), Early-Exit surpasses the performance of SWEET at slow speeds. SWEET's reduced scores at slow inference speeds are likely due to the lower capacity of the later classifiers, stemming from their restricted influence on early model parameters during fine-tuning. Fig. 6 presents results on individual tasks for BERT*BASE*. 13 On five out of seven tasks, SWEET outperforms both baselines at fast speeds (up to a speedup ratio of 1/2), suggesting that SWEET improves the performance of lower Early-Exit classifiers by avoiding conflicting gradients during training. For two tasks (MRPC and CoLA), SWEET suffers a considerable decrease in performance at slow inference speeds. For the other five tasks SWEET maintains comparable results to Early-Exit. It is also interesting to examine how SWEET affects *individual* Early-Exit classifiers. Table 1 shows the results of BERT's individual classifiers trained with SWEET compared to Early-Exit and Multi-Model.14 SWEET classifiers close much of the gap between Early-Exit and Multi-Model: they outperform those of Early-Exit by 1.1% on average (1.2% for BERT, 1.0% for DeBERTa), with the margin being larger for the earliest classifier (3.4% for BERT, 6.2% for DeBERTa). The final two classifiers trained with SWEET achieve lower scores than those of Early-Exit (0.9% on average), probably due to the restricted influence those classifiers have on early model parameters. Our results hint that SWEET is able to effectively bridge the gap between Early-Exit and Multi-Model early classifiers, leading to a speed-accuracy curve favoring fast inference speeds. We note that during our experiments with EarlyExit and Multi-Model models, some models did not converge. In order to ensure the validity of our results, we repeated these experiments with different random seeds, which led to convergence. We emphasize that this did not occur during the training of models using SWEET. ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) ## 5 Further Analysis We turn to evaluate the robustness of our approach. We start with evaluating SWEET using a different exit strategy. We then test how our results generalize when using varying amounts of training data. A different exit strategy: Learning to Exit Our experiments so far have used a confidence-based exit strategy (Section 2). Here we consider a different strategy—learning to exit (Xin et al., 2021), in which a separate module is trained for each classifier, learning whether the classifier's prediction should be trusted, and an exit should be made. This method also allows for a natural extension to regression problems, unlike confidence-based methods that are limited to classification tasks. We use BERT {BASE, LARGE} as a backbone model and fine-tune in the same procedure described in Section 3.3. Our results (Fig. 7) reveal that the methods behave similarly to the confidencebased setup; SWEET outperforms both baselines at the early stage of the curve (fast speeds), while performing on par with Early-Exit at slow speeds. We also measure the performance of individual exit layers, as in Section 3.2, for models fine-tuned using learning-to-exit. Our results (Table 4 in Appendix E) reveal a similar behavior to that of models fine-tuned using confidence based early exiting: Multi-Model classifiers surpass the performance of Early-Exit classifiers. Moreover, SWEET's early classifiers outperform those of Early-Exit, while later ones show a slight decrease in performance. Varying data sizes Our experiments so far have focused on low-resource setups (a training set of several thousand examples). We now evaluate how SWEET performs as we vary the amount of training data. We fine-tune a BERT*BASE* model on different portions of the MNLI dataset using the same experimental setup as in Section 3.2. Our results, shown in Fig. 8, indicate that SWEET is mostly beneficial when training data is limited to a few thousand examples, but still somewhat useful even with as many as 60K training instances. Nonetheless, the effects of conflicting gradients on the earliest EarlyExit classifier tend to disappear when training on the full dataset (400K instances), making it as good as the smallest Multi-Model classifier. Moreover, in that setup, the use of SWEET seems to substan- ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) tially decrease the performance of later classifiers, suggesting that the harm caused by limiting the influence of classifiers on model parameters may grow with the amount of training data. ## 6 Related Work Several approaches have been proposed for reducing the inference time of large models (Treviso et al., 2022). These include knowledge distillation (Ba and Caruana, 2014; Hinton et al., 2015), in which the knowledge of a large teacher model is transferred to a smaller student model; pruning (LeCun et al., 1989; Frankle and Carbin, 2018), which removes unnecessary model parameters; and weight quantization (Jacob et al., 2018; Zafrir et al., 2019), which reduces the floating point precision of model weights. See (Treviso et al., 2023, Section 6), for a recent survey. Adaptive inference methods studied in this work are an orthogonal approach, which can be used in combination with these methods (Schwartz et al., 2020b). The use of Adaptive inference in deep neural networks has been extensively studied, with successful implementations in various types of networks including recurrent neural networks (Graves, 2016) and convolutional neural networks (Teerapittayanon et al., 2016; Huang and Chen, 2018). In the context of this work, it has also been applied to existing backbone pre-trained language models: Xin et al. (2020) implemented early exit strategies on top of classification tasks. Zhou et al. (2020) Introduced a patience-based exiting, requiring sequential classifiers to agree on a label for a prediction to be made. Xin et al. (2021) extended the use of Early-Exit in Transformers to regression tasks, as well as addressed the issue of reduced performance of the final classifier through the use of an alternating training algorithm. Schuster et al. (2021) proposed a confidence-based early exit model with guarantees on the agreement between early exit predictions and the final classifier. Liu et al. (2022) presented a strong baseline for efficient NLP by adding multiple classifiers to a BERT model during pre-training. Recently, Schuster et al. (2022) adjusted the Early-Exit method to language modeling for text generation, making dynamic computation at the single token level. Multi-Model approaches to adaptive inference have been proposed for vision tasks (Enomoro and Eda, 2021) as well as for NLP (Li et al., 2020; Varshney and Baral, 2022b). Mamou et al. (2023) introduced a two-tier design with a highly efficient model serving as the first model and a powerful model serving as the second, enabling the possibility of achieving extremely fast inference speed. Finally, the concept of conflicting gradients has been mainly studied in the context of multi-task learning, where a single model is faced with solving different tasks (Yu et al., 2020; Liu et al., 2021a). To the best of our knowledge, no previous work examined this in the context of Early-Exit. ## 7 Conclusion In this work, we analyzed the performance of two common adaptive inference methods–Early-Exit and Multi-Model. We found evidence that model weights are updated by conflicting gradients in the training process of Early-Exit models, causing classifiers to perform at a sub-optimal level. Despite this, we showed that regarding the entire speed-accuracy curve, Early-Exit is still favorable to Multi-Model due to the overhead of using independent model runs in a Multi-Model setup. To address these findings, we proposed SWEET, a novel Early-Exit method, which avoids conflicting gradients by allocating each Early-Exit classifier a subset of model weights which are updated solely by it. We found that for Early-Exit models trained with SWEET, early classifiers perform better than those of standard Early-Exit, but later classifiers of SWEET are not as good. These measures lead to SWEET outperforming both Early-Exit and Multi-Model in fast speeds, with slightly worse results than Early-Exit at slow speeds. Overall, our results demonstrate that Early-Exit models can benefit from fine-tuning algorithms that are tailored to their architecture, and that SWEET is a promising approach for improving the speedaccuracy trade-off of Early-Exit models in the context of adaptive inference. ## 8 Limitations This work focuses on the effects of adaptive inference in a low-resource setting, specifically when training data is limited. Our experiments (Section 5) suggest that the negative impact of conflicting gradients may be less prominent when larger amount of training data is available. Our experiments were conducted using relatively small pre-trained language models (≤ 350M parameters) due to computational constraints, and we defer the replication of our findings with larger, more powerful models to future work. Nonetheless, our results have important implications for the growing trend of increasingly large language models. We hope this work inspires further research on methods to reduce the computational cost of NLP. This work concentrates on evaluating the speedaccuracy trade-off of Multi-Model and Early-Exit at inference time. We recognize that there are additional factors, such as memory usage, batch processing, and training duration, that could be considered when comparing these methods. Finally, we experimented with seven text classification tasks in English. We recognize that results may vary for other tasks and languages. ## Acknowledgements We acknowledge Yarden Tal for her early contributions to this work, and Gabriel Stanovsky for his advice and meaningful feedback. This work was supported in part by the Israel Science Foundation (grant no. 2045/21), NSF-BSF grant 2020793, and by a grant from Intel Labs. ## References Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2654–2662. Curran Associates, Inc. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In *Proc. of TAC*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Matthieu Courbariaux, Yoshua Bengio, and JeanPierre David. 2014. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In *Machine learning challenges workshop*, pages 177–190. Springer. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In *Proc. of IWP*. Shohei Enomoro and Takeharu Eda. 2021. Learning to cascade: Confidence calibration for improving the accuracy and computational cost of cascade inference systems. In *Proceedings of the AAAI Conference on* Artificial Intelligence. Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In *Proceedings of the* ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9. Alex Graves. 2016. Adaptive computation time for recurrent neural networks. arXiv:1603.08983. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In *ICML*. R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second pascal recognising textual entailment challenge. In *Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual* Entailment, volume 7. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. Cite arxiv:1503.02531 Comment: NIPS 2014 Deep Learning Workshop. Gao Huang and Danlu Chen. 2018. Multi-scale dense networks for resource efficient image classification. ICLR 2018. Shankar Iyer, Nikhil Dandekar, and Kornél Csernai. 2016. First quora dataset release: Question pairs. https://data.quora.com/ First-Quora-Dataset-Release-Question-Pairs. Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. *2018 IEEE/CVF* Conference on Computer Vision and Pattern Recognition, pages 2704–2713. Yann LeCun, John Denker, and Sara Solla. 1989. Optimal brain damage. *Advances in neural information* processing systems, 2. Lei Li, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie Zhou, and Xu Sun. 2020. Cascadebert: Accelerating inference of pre-trained language models via calibrated complete models cascade. *arXiv preprint* arXiv:2012.14682. Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu. 2021a. Conflict-averse gradient descent for multi-task learning. *Advances in Neural Information Processing Systems*, 34:18878–18890. Weijie Liu, Peng Zhou, Zhiruo Wang, Zhe Zhao, Haotang Deng, and Qi Ju. 2020. FastBERT: a selfdistilling BERT with adaptive inference time. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6035– 6044, Online. Association for Computational Linguistics. Xiangyang Liu, Tianxiang Sun, Junliang He, Jiawen Wu, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, and Xipeng Qiu. 2022. Towards efficient NLP: A standard evaluation and a strong baseline. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3288–3303, Seattle, United States. Association for Computational Linguistics. Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, and Xipeng Qiu. 2021b. Towards efficient nlp: A standard evaluation and a strong baseline. arXiv preprint arXiv:2110.07038. Jonathan Mamou, Oren Pereg, Moshe Wasserblat, and Roy Schwartz. 2023. TangoBERT: Reducing inference cost by using cascaded architecture. In In Proc. EMC2. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In *EMNLP*. Tal Schuster, Adam Fisch, Jai Gupta, Mostafa Dehghani, Dara Bahri, Vinh Q Tran, Yi Tay, and Donald Metzler. 2022. Confident adaptive language modeling. *arXiv* preprint arXiv:2207.07061. Tal Schuster, Adam Fisch, Tommi Jaakkola, and Regina Barzilay. 2021. Consistent accelerated inference via confident adaptive transformers. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4962–4979. Roy Schwartz, Jesse Dodge, Noah A Smith, and Oren Etzioni. 2020a. Green AI. *Communications of the* ACM, 63(12):54–63. Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, and Noah A. Smith. 2020b. The right tool for the job: Matching model and instance complexities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6640–6651, Online. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proc. of EMNLP*. Surat Teerapittayanon, Bradley McDanel, and HsiangTsung Kung. 2016. Branchynet: Fast inference via early exiting from deep neural networks. In 2016 23rd International Conference on Pattern Recognition (ICPR), pages 2464–2469. IEEE. Neil C Thompson, Kristjan Greenewald, Keeheon Lee, and Gabriel F Manso. 2020. The computational limits of deep learning. *arXiv preprint* arXiv:2007.05558. Marcos Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro H. Martins, André F. T. Martins, Peter Milder, Colin Raffel, Jessica Forde, Edwin Simpson, Noam Slonim, Jesse Dodge, Emma Stubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, and Roy Schwartz. 2023. Efficient methods for natural language processing: A survey. *TACL*. Marcos Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro H Martins, et al. 2022. Efficient methods for natural language processing: A survey. *arXiv preprint* arXiv:2209.00099. Neeraj Varshney and Chitta Baral. 2022a. Model cascading: Towards jointly improving efficiency and accuracy of nlp systems. arXiv preprint arXiv:2210.05528. Neeraj Varshney and Chitta Baral. 2022b. Model cascading: Towards jointly improving efficiency and accuracy of nlp systems. In *In Proceedings of EMNLP*. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of the* 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *Proc. of* ICLR. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. TACL. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proc. of* NAACL. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. DeeBERT: Dynamic early exiting for accelerating BERT inference. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2246–2251, Online. Association for Computational Linguistics. Ji Xin, Raphael Tang, Yaoliang Yu, and Jimmy Lin. 2021. BERxiT: Early exiting for BERT with better fine-tuning and extension to regression. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 91–104, Online. Association for Computational Linguistics. Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. 2020. Gradient surgery for multi-task learning. *Advances* in Neural Information Processing Systems, 33:5824– 5836. Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8bert: Quantized 8bit bert. 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS), pages 36–39. Zhen Zhang, Wei Zhu, Jinfan Zhang, Peng Wang, Rize Jin, and Tae-Sun Chung. 2022. Pcee-bert: Accelerating bert inference via patient and confident early exiting. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 327–338. Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. 2020. Bert loses patience: fast and robust inference with early exit. In *Proceedings of the 34th International Conference* on Neural Information Processing Systems, pages 18330–18341. ## A Implementation Details Further implementation details For fine-tuning BERT models, we use a batch size of 16. For finetuning DeBERTa, due to GPU memory constraints, we use a batch size of 16 for BASE and 8 for LARGE. We fine tune the models for two epochs with a maximal sequence length of 256. We use β = 0.9, 0.999 for the *AdamW* optimizer with linear LR-decay. We optimize the initial learning rate for each method, model size & task by performing a search over five values {1e-5, 2e-5, 3e-5, 4e-5, 5e-5} and choose the value leading to the largest area under the speed-accuracy curve. Chosen LRs are presented in table Table 2 All experiments were run on a single NVIDIA A5000 GPU. The overall computational budget was ∼1000 GPU hours. We implement all methods using the HuggingFace Transformer library (Wolf et al., 2020). Speedup evaluation The speed-up ratio of the model when using exit threshold t is calculated using the formula: $${\mathrm{speedup}}_{t}={\frac{\sum_{i=1}^{M}S_{i}^{t}\cdot L_{i}}{L_{M}\cdot\sum_{i=1}^{M}S_{i}^{t}}}\qquad\qquad(1)$$ where M is the number of classifiers used for the model, Li denotes the number of the layer preceding the i-th classification head and S t i denotes the number of samples classified by the i-th classifier when using threshold t. The same exit threshold can lead to different speedup ratios amongst models trained with different random seeds. We use linear interpolation to approximate the accuracy score at set speedup ratios. We evaluate at (1/12, 1/4, 1/2, 3/4, 1)15 and report the average across three random seeds as well as a 95% confidence interval.16 We use temperature scaling (Guo et al., 2017) to make classifiers confidence, and therefore early-exiting decisions, more reliable. Note that this scaling is monotonic and therefore does not influence predictions. ## B Conflicting Gradients We replicate the experiment done in Section 3.1 with another batch of size 16 to examine if our findings generalize. results presented in Fig. 9 show a similar trend to Fig. 3: Gradient updates of current classifiers are roughly orthogonal to those of future classifiers, whereas future classifier updates are more aligned. ![11_image_0.png](11_image_0.png) ## C Deberta Individual Layer Comparison Table 3 Shows individual classifier results for DeBERTa models. As with the BERT results, Multi-Model classifiers outperform corresponding Early-Exit classifiers. SWEET early classifiers are better than Early-Exit ones, while later classifiers tend to downgrade in performance. ## D Individual Task Results Fig. 10 shows results on individual tasks for BERT*LARGE* and DeBERTa (BASE & LARGE). For BERT*LARGE*, SWEET outperforms both baselines throughout the entire speed-accuracy curve over all tasks examined. For DeBERTa models, results are similar to those of BERT*BASE*, where SWEET performs better at high speeds (small speedup ratio) and is dominated at low speeds (where later classifiers do most of the heavy lifting). | Model | Size | Task | | | | | | | |---------|--------|--------|-------|-------|-------|-------|-------|-------| | SST-2 | MRPC | CoLA | MNLI | QQP | QNLI | RTE | | | | BERT | BASE | 5\5\5 | 5\3\4 | 4\4\5 | 5\5\5 | 5\5\5 | 5\4\5 | 5\5\4 | | LARGE | 4\2\2 | 4\5\4 | 4\3\4 | 4\3\4 | 5\3\4 | 4\4\3 | 3\3\3 | | | DeBERTa | BASE | 3\3\3 | 4\5\5 | 4\5\4 | 2\4\3 | 4\5\5 | 3\4\3 | 2\1\4 | | LARGE | 2\4\3 | 3\2\1 | 3\3\2 | 2\2\2 | 2\2\3 | 3\2\1 | 1\2\2 | | Table 2: Chosen initial learning rate to optimize area under the speed-accuracy curve of each model, size, task. All numbers should be multiplied by 1e-5. x\y\z represent the initial learning rate of Early-Exit \ Multi-Model \ SWEET respectively. | Size | Method | Exit Layer | Size | Method | Exit Layer | | | |----------------------------------------|---------------------------------|---------------------------------|---------|----------|--------------|----|----| | 1 | 4 | 6 | 12 | 1 | 4 | 6 | 12 | | MM | 60.10.4 74.60.6 78.31.1 82.20.7 | | | | | | | | BASE | EE | 52.30.4 70.90.6 77.90.8 81.70.1 | | | | | | | SWEET 59.11.5 72.40.2 75.11.1 79.50.5 | MM | 53.70.5 66.10.4 71.10.3 | 75.42.7 | | | | | | BASE | EE | 49.51.1 63.00.2 70.00.6 | 75.30.4 | | | | | | SWEET 53.50.4 64.90.9 68.51.4 | 74.10.4 | | | | | | | | 1 | 6 | 12 | 24 | 1 | 6 | 12 | 24 | | MM | 61.90.1 76.20.4 80.20.9 86.80.2 | | | | | | | | LARGE | EE | 55.80.9 74.71.1 79.20.8 83.61.9 | | | | | | | SWEET 61.30.7 76.00.5 77.80.4 82.70.4 | MM | 51.60.3 59.90.4 67.62.6 | 77.90.8 | | | | | | LARGE | EE | 50.50.8 59.30.9 68.81.3 | 76.61.6 | | | | | | SWEET 53.90.5 60.60.2 69.50.4 75.811.0 | | | | | | | | ## E Learning-To-Exit Individual Layer Comparison Table 4 Shows individual classifier results for BERT models fine tuned with the learning to exit strategy. As with the confidence-based results, Multi-Model classifiers outperform corresponding Early-Exit classifiers. SWEET early classifiers are better than Early-Exit ones, while later classifiers tend to downgrade in performance. ![13_image_0.png](13_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 ✗ A2. Did you discuss any potential risks of your work? Our work does not present any risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✓ A4. Have you used AI writing assistants when working on this paper? We used ChatGPT for assistance, purely with the language of the paper ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 + 4 ✓ B1. Did you cite the creators of artifacts you used? Section 3 + 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We use publicly available data sets which are commonly used during research ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Such terms were not specified. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We use publicly available data sets which are commonly used during research ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We use publicly available data sets which are commonly used during research ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 + 4, Appendix A ## C ✓ **Did You Run Computational Experiments?** Section 3 + 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 1 + 3, Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3 + 4, Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3 + 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
ho-etal-2023-large
Large Language Models Are Reasoning Teachers
https://aclanthology.org/2023.acl-long.830
Recent works have shown that chain-of-thought (CoT) prompting can elicit language models to solve complex reasoning tasks, step-by-step. However, prompt-based CoT methods are dependent on very large models such as GPT-3 175B which are prohibitive to deploy at scale. In this paper, we use these large models as reasoning teachers to enable complex reasoning in smaller models and reduce model size requirements by several orders of magnitude. We propose Fine-tune-CoT, a method that generates reasoning samples from very large teacher models to fine-tune smaller models. We evaluate our method on a wide range of public models and complex tasks. We find that Fine-tune-CoT enables substantial reasoning capability in small models, far outperforming prompt-based baselines and even the teacher model in many tasks. Additionally, we extend our method by leveraging the teacher model{'}s ability to generate multiple distinct rationales for each original sample. Enriching the fine-tuning data with such diverse reasoning results in a substantial performance boost across datasets, even for very small models. We conduct ablations and sample studies to understand the emergence of reasoning capabilities of student models. Our code implementation and data are available at \url{https://github.com/itsnamgyu/reasoning-teacher}.
# Large Language Models Are Reasoning Teachers Namgyu Ho, Laura Schmid, Se-Young Yun Kaist {Itsnamgyu, Laura.Schmid, Yunseyoung}@Kaist.Ac.Kr ## Abstract Recent works have shown that chain-of-thought (CoT) prompting can elicit language models to solve complex reasoning tasks, step-by-step. However, prompt-based CoT methods are dependent on very large models such as GPT-3 175B which are prohibitive to deploy at scale. In this paper, we use these large models as *reasoning teachers* to enable complex reasoning in smaller models and reduce model size requirements by several orders of magnitude. We propose *Fine-tune-CoT*, a method that generates reasoning samples from very large teacher models to fine-tune smaller models. We evaluate our method on a wide range of public models and complex tasks. We find that Fine-tuneCoT enables substantial reasoning capability in small models, far outperforming prompt-based baselines and even the teacher model in many tasks. Additionally, we extend our method by leveraging the teacher model's ability to generate multiple distinct rationales for each original sample. Enriching the fine-tuning data with such *diverse reasoning* results in a substantial performance boost across datasets, even for very small models. We conduct ablations and sample studies to understand the emergence of reasoning capabilities of student models.1 ## 1 Introduction Language models (LMs) have demonstrated remarkable performance in a wide range of downstream tasks. Recently, large language models (LLMs) have demonstrated in-context generalization capabilities: performing downstream tasks simply by conditioning on few in-context exemplars or plain natural language task descriptions (Brown et al., 2020; Sun et al., 2021). Despite these advancements, even the largest LLMs have been found to struggle with complex tasks which require multiple reasoning steps (Rae et al., 2021). 1Our code implementation and data are available at https://github.com/itsnamgyu/reasoning-teacher. ![0_image_0.png](0_image_0.png) To solve complex tasks, recent works show that it is possible to elicit reasoning abilities by prompting LLMs to perform *chain-of-thought* (CoT) reasoning, i.e., generate a series of intermediate reasoning steps. This can be achieved by providing CoT demonstrations as exemplars in prompting (Wei et al., 2022b). More recently, Kojima et al. (2022) found that LLMs can be prompted to perform CoT reasoning simply by providing a natural language instruction to *think step-by-step*. A major drawback of prompt-based CoT reasoning methods, however, is their reliance on extremely large models that span *hundreds of billions* of parameters (Wei et al., 2022b; Kojima et al., 2022). These models are prohibitive to deploy at scale due to overwhelming computational requirements and inference costs (Wei et al., 2022b). 14852 ![1_image_0.png](1_image_0.png) Therefore, we strive to enable complex reasoning in small models which are more feasible for largescale deployment. In this light, we propose an approach named Fine-tune-CoT, which utilizes the reasoning capabilities of very large LMs to teach small models how to solve complex tasks. We apply existing zero-shot CoT prompting (Kojima et al., 2022) to generate rationales from very large *teacher* models, and use them to fine-tune smaller *student* models2. We illustrate this in Figure 2. We note that standard fine-tuning *without rationales* has been shown to be inadequate for solving reasoning tasks with small models (Talmor et al., 2018). While there have been attempts to fine-tune small models with hand-annotated reasoning steps (Nye et al., 2021; Cobbe et al., 2021), they often require task-specific training setups and high-quality rationales which are costly to annotate (Wei et al., 2022b). In contrast, our approach can be readily applied to novel downstream tasks without hand-crafted reasoning or task engineering. We also propose a novel extension to our method, termed *diverse reasoning*, to maximize the teaching effects of Fine-tune-CoT. Inspired by the intuition 2This can be interpreted as a variant of knowledge distillation (Hinton et al., 2015). that complex tasks can have multiple solutions with distinct reasoning paths (Evans, 2010), we generate multiple reasoning solutions from teacher models using stochastic sampling to augment the training data for student models3. We find that this is a simple yet highly effective approach to maximizing student performance, which has not been explicitly recognized in concurrent works on fine-tuning with CoT reasoning (Huang et al., 2022; Li et al., 2022b; Magister et al., 2022; Fu et al., 2023). We evaluate our method on 12 tasks using a wide range of publicly available models. We find that Fine-tune-CoT can elicit notable reasoning performance in small models while preserving much of the versatility of prompt-based CoT reasoning, which previously required >100B parameter models (Wei et al., 2022b). Diverse reasoning enables remarkable gains in performance at the minor cost of additional teacher inference at development time, by exploiting our unique learning setup. This enables models as small as 0.3B to outperform larger students, and even the 175B teacher model in some tasks. Our ablations show that performance is consistently scalable across all axes considered: diverse reasoning, dataset size, teacher performance, and student model size. This shows the potential of our method to enable reliable performance in small models that are feasible for use in real-world applications. Lastly, we conduct thorough sample studies and analyses which shed light on crucial details previous overlooked in fine-tuning for CoT and provide intuition on the emergence of reasoning abilities in small models. ## 2 Related Work Downstream transfer in language models Much previous work established a "pre-train and fine-tune" paradigm for enhancing LLM performance on downstream tasks (Radford et al., 2018; Dong et al., 2019; Vaswani et al., 2017; Devlin et al., 2018). However, fine-tuning is not always easily applicable (Hendrycks et al., 2020). More recent literature exhibits a paradigm shift towards "prompting" the model to predict the desired output (Liu et al., 2021; Raffel et al., 2020). Large LMs can exhibit strong performance in this setting (Brown et al., 2020). For smaller models to be able to perform similarly, additional engineering is usually required (Gao et al., 2021; Schick and Schütze, 2021b; Schick et al., 2020). For more complex tasks, the idea of using samples with explicit reasoning steps for fine-tuning a model (Nye et al., 2021; Cobbe et al., 2021) preceded the approach of chain-of-thought (CoT) prompting (Wei et al., 2022b), which enables very large LMs to perform well. Chain-of-thought reasoning In few-shot CoT prompting, the model learns to generate intermediate reasoning steps that lead to a problem solution, after being fed examples of step-by-step reasoning. This enables very good performance on a wide range of tasks. (Wang et al., 2022). Additionally, LLMs can perform well in an unsupervised task-agnostic setting, using Zero-shot-CoT (Kojima et al., 2022). This requires no fine-tuning or task specific conditioning, and substantially outperforms standard zero-shot learning and sometimes even few-shot learning on a wide number of tasks. Yet, prior work has shown that CoT requires extremely large models for optimal performance (Hoffmann et al., 2022; Chowdhery et al., 2022). In our work, we contrast this by showing how to utilize CoT reasoning methods for smaller models by fine-tuning them on rationales generated by a very large model. Using various LLMgenerated explanations for fine-tuning smaller models has been successfully used in prior work (Li et al., 2022a), with a focus on specific single tasks. Also, a similar approach to ours is mentioned in (Huang et al., 2022); however we note that this concurrent work focuses on using Few-shot-CoT to self-generate fine-tuning examples by and for very large proprietary models. There is a brief glimpse into fine-tuning on smaller distilled models, but the results are limited to one dataset and very large teacher models that are inaccessible to the general community. In contrast, we provide a rich set of results and qualitative/quantitative analysis on a wide range of datasets, using open-source models that are small and accessible to everyone. Knowledge distillation Typically, knowledge distillation (KD) refers to training small models derived from large models in order to reduce model size and latency, while still preserving accuracy and capacity to generalize (Hinton et al., 2015; Sanh et al., 2019). Essentially, KD is a form of model compression, making efficient deployment to capacity-limited devices possible (Bucilua et al., 2006). We note that our work could also be considered a distant variant of KD (Gou et al., 2021), similar to works on improving prompt-based methods such as Yoo et al. (2021); Schick and Schütze (2021b,a); Zelikman et al. (2022), or works on datafree distillation (Micaelli and Storkey, 2019; Nayak et al., 2019; Shen et al., 2021), where the transfer data is synthetically generated from a large teacher model. Similarly, sequence-level distillation, i.e. training a student model on sequence distributions of a larger teacher, can make neural machine translation more efficient (Kim and Rush, 2016). Despite being similar in spirit, our method still distinguishes itself from such previous work. The role of the teacher model in our method is to teach the notion of intermediate reasoning. It is not the specific output that is the main supervising signal for reasoning, but rather the generation's structure. Hence, we do not use a standard KD loss function that reflects trying to match the teacher output. Adding to this, we note that our diverse reasoning is also unusual in the context of KD, where it is e.g. sufficient in practice to only generate one teacher sequence for sequence level distillation. ## 3 Chain-Of-Thought Fine-Tuning We propose Fine-tune-CoT, a task-agnostic approach to enable chain-of-thought reasoning in small LMs. The core idea is to generate reasoning samples from very large teacher models using CoT prompting and subsequently fine-tune small student models using the generated samples. This approach preserves the versatility of prompt-based CoT methods while overcoming their reliance on prohibitively large models. To maximize versatility and minimize teacher inference costs, we use the task-agnostic Zero-shot-CoT prompting method (Kojima et al., 2022) on teacher models, as it does not require any reasoning examples or long inference context. We discuss our choice of teacher CoT prompting method in Section 7.3. In the following, we characterize Fine-tune-CoT in three distinct steps. We also provide a visual overview in Figure 2. Step 1. Reasoning generation First, we utilize a large teacher model to generate CoT reasoning explanations for a given task. Consider a standard sample Si consisting of a question qi and its true answer ai. Using Zero-shot-CoT 4. we prompt the teacher model to generate a reasoning explanation, or rationale, rˆito solve question qi and make a final answer prediction aˆi. The resulting text sequence, including the prompt and generations, takes the following form: "Q: <qi>. A: Let's think step by step. <rˆi> Therefore, the answer is <aˆi>". Step 2. Curation Next, we filter the generated samples and reformat them into prompt-completion pairs. For filtering, we simply compare the final prediction of the teacher model aˆi with the groundtruth answer ai, following previous works (Zelikman et al., 2022; Huang et al., 2022). Note that this filtering incurs some loss of training samples. For all instances i where aˆi = ai, we repackage (Si, rˆi, aˆi) into a *reasoning sample* S′ i = (pi, ci), a prompt-completion pair. To maximize inferencetime efficiency, we use special-character based delimiters to minimize token usage. Specifically, pi and ci each take the form of "<qi> \#\#\#" and "<rˆi> --> <ai> END". We note that answer-based filtering does not ensure the correctness of the rationales, especially for multi-choice questions. We provide an analysis in Appendix E.1 regarding this important detail which has not been addressed in concurrent work. Step 3. Fine-tune Finally, we fine-tune a small pre-trained student model on the assembled reasoning samples. We use the same training objective of that used during pre-training, i.e., autoregressive language modeling objective, or next-token prediction (Radford et al., 2018). Diverse reasoning To maximize the teaching effects of Fine-tune-CoT, we can generate multiple reasoning explanations for each training sample. This approach is motivated by the intuition that multiple reasoning paths can be used to solve complex tasks, i.e., *type-2* tasks (Evans, 2010). We posit that this unique feature of complex tasks, in tandem with the stochastic generation abilities of the teacher model, can enable diverse reasoning to significantly boost reasoning supervision simply through additional teacher inference. In detail, for a given sample Si, instead of applying Zero-shot-CoT using greedy decoding to obtain a single explanation-answer pair (ˆei, aˆi), we use a stochastic sampling strategy, i.e., temperature sampling with large T, to obtain D distinct generations {(ˆrij , aˆij )} D j . Subsequent reasoning sample curation and fine-tuning then proceed as before. We refer to D as the *degree of reasoning diversity*. A similar approach is used in Wang et al. (2022); Huang et al. (2022), where multiple CoT outputs are generated and marginalized to find the optimal answer. However, the effects of such diverse reasoning on teaching student models has not been acknowledged or thoroughly investigated in concurrent work (Huang et al., 2022; Li et al., 2022a; Magister et al., 2022; Fu et al., 2023). We note that diverse reasoning imposes an important tradeoff between the development cost and inference cost/quality of student models which we discuss in Section 5.3. ## 4 Experiments Tasks and datasets We evaluate our method on 12 datasets pertaining to four categories of complex reasoning, following Kojima et al. (2022). These include arithmetic (SingleEq, AddSub, MultiArith, GSM8K, SVAMP), other (Date Understanding, Tracking Shuffled Objects), symbolic (Last Letter Concatenation, Coin Flip), and common sense (CommonSenseQA, StrategyQA) reasoning. We provide details and references in Appendix B. 4Note that Zero-shot-CoT is itself a two-step prompting method. The reasoning (blue) is generated in the first step and answer prediction (red) is generated in the second step. | Method | Params | Single | Add | Multi | GSM8K | Aqua | SVAMP | Date | Shuffled | Last | Coin | Common | Strategy | |-----------------------------------------|----------|----------|---------------|---------|---------|--------|---------|--------|------------|--------|--------|----------|------------| | Eq | Sub | Arith | Understanding | Objects | Letter | Flip | SenseQA | QA | | | | | | | Random | 0.00 | 0.00 | 0.00 | 0.00 | 20.00 | 0.00 | 17.12 | 33.33 | 0.00 | 50.00 | 20.00 | 50.00 | | | Teacher: InstructGPT (text-davinci-002) | | | | | | | | | | | | | | | Zero-shot-CoT | 175B | 82.24 | 78.99 | 78.89 | 40.26 | 34.25 | 64.67 | 73.87 | 50.22 | 56.00 | 92.67 | 61.75 | 53.57 | | Student: GPT-3 (ada, babbage, curie) | | | | | | | | | | | | | | | Zero-shot | 6.7B | 0.66 | 0.84 | 3.33 | 1.74 | 16.54 | 2.67 | 9.91 | 32.89 | 0.00 | 56.67 | 20.23 | 52.98 | | Zero-shot-CoT | 6.7B | 1.32 | 2.52 | 5.00 | 2.35 | 21.26 | 1.33 | 15.32 | 31.11 | 0.00 | 46.67 | 19.98 | 51.09 | | Few-shot-CoT | 6.7B | 22.37 | 31.93 | 10.00 | 2.50 | 15.75 | 11.33 | 12.84 | - | 0.67 | 40.00 | 24.73 | 54.68 | | Fine-tune | 6.7B | 24.34 | 25.21 | 15.00 | 6.14 | 15.35 | 20.67 | 14.41 | 33.78 | 32.67 | 72.00 | 76.17 | 65.21 | | Fine-tune-CoT | 0.3B | 7.24 | 6.72 | 6.11 | 3.11 | 23.62 | 5.00 | 17.12 | 49.33 | 50.67 | 99.33 | 32.68 | 52.55 | | 1.3B | 11.18 | 11.76 | 13.33 | 4.70 | 19.69 | 8.00 | 38.74 | 52.44 | 50.67 | 100.00 | 43.08 | 52.69 | | | 6.7B | 20.39 | 21.01 | 33.33 | 6.75 | 24.02 | 12.67 | 60.36 | 64.44 | 52.67 | 98.67 | 56.76 | 55.02 | | | Fine-tune-CoT | 0.3B | 9.21 | 10.08 | 23.89 | - | - | 14.33 | 58.56 | 61.78 | 59.33 | 99.33 | - | 57.21 | | w/ diverse reasoning | 1.3B | 18.42 | 19.33 | 27.78 | - | - | 16.33 | 70.27 | 72.00 | 60.67 | 100.00 | - | 57.06 | | 6.7B | 24.34 | 31.09 | 53.33 | - | - | 30.33 | 83.78 | 73.33 | 62.00 | 100.00 | - | 58.22 | | Models For teacher models, we use four variants of GPT-3 175B (Brown et al., 2020), provided by the OpenAI API. Unless otherwise stated, we use text-davinci-002 based on InstructGPT 175B (Ouyang et al., 2022) as the teacher for Finetune-CoT. For student models, we consider four popular model families. For our main experiments, we use GPT-3 {ada, babbage, curie} as they are readily available for fine-tuning via the OpenAI API. Due to the blackbox nature of the API, we also consider various open-source models under controlled settings. We use GPT-2 {Small, Medium, Large} (Radford et al., 2019) and T5- {Small, Base, Large} (Raffel et al., 2020) as representative model families for decoder-only and encoder-decoder architectures, respectively. We also use the instruction-tuned version of T5, FlanT5-{Small, Base, Large} (Chung et al., 2022), to investigate the effects of instruction tuning on student models, prior to applying Fine-tune-CoT. These student models are 25–2500x smaller than the teacher model, thus considerably more feasible for realworld deployment. We provide details on models and API usage in Appendix C. Baseline methods We provide a comparison of Fine-tune-CoT (ours) with four baseline methods: standard zero-shot prompting, vanilla fine-tuning, Zero-shot-CoT (Kojima et al., 2022), and Few-shotCoT (Wei et al., 2022b). Given a training sample {(qi, ai)}i, we use a simple format "Q: <qi>" for | Method | Model | CoT | Sample | Teacher Reference | | |---------------|---------|-------------|----------|---------------------|------------------------| | Updates | Output | Utilization | Usage | | | | Zero-shot | ✗ | ✗ | ✗ | ✗ | (Radford et al., 2019) | | Zero-shot-CoT | ✗ | ✓ | ✗ | ✗ | (Kojima et al., 2022) | | Few-shot-CoT | ✗ | ✓ | △ | ✗ | (Wei et al., 2022b) | | Fine-tune | ✓ | ✗ | ✓ | ✗ | (Radford et al., 2018) | | Fine-tune-CoT | ✓ | ✓ | ✓ | ✓ | Ours | zero-shot prompting. For vanilla fine-tuning, we format the prompt and completion as "<qi> \#\#\#" and "<ai> END", respectively. We clarify the taxonomy of methods in Table 2. For text generation, we use greedy decoding following Wei et al. (2022b); Kojima et al. (2022) throughout our experiments, except for diverse reasoning. For diverse reasoning on the teacher, we use temperature sampling with T = 0.7, following Wang et al. (2022). We provide experimental details in Appendix A. ## 4.1 Results In this section, we present the reasoning performance of models using Fine-tune-CoT and diverse reasoning. We compare with various baselines and demonstrate the scalability of our method across four axes: degree of diverse reasoning (Figure 3), dataset size (Figure 4), performance of the teacher (Figure 5), and size of the student model (Figure 6). We present our findings on GPT-3 models in the main text and defer results on open-source models to Appendix G, with a brief summary at the end of this section. Fine-tune-CoT elicits complex reasoning in small models Table 1 summarizes the accuracy of student models using the proposed Fine-tuneCoT, compared to prompt-based CoT baselines as well as standard fine-tuning. While Zero-shot-CoT exhibits remarkable performance on the very large 175B model (Kojima et al., 2022), it fails to enable complex reasoning in all three smaller models, showing near-negligible performance across all tasks. We also find that small models are unable to approach these tasks under standard zero-shot prompting. On the other hand, Fine-tune-CoT elicits notable reasoning performance, demonstrating significant gains over Zero-shot-CoT when using smaller models and outperforming both fine-tuning and Few-shot-CoT in more than half of the tasks. For complex arithmetic, Fine-tune-CoT achieves a notable 33% accuracy on MultiArith while Zeroshot-CoT only reaches 5%. Few-shot-CoT and fine-tuning only achieve 10% and 15%, respectively. For two commonsense reasoning tasks, our method outperforms the near-random performance of Zero-shot-CoT by 37% and 5%, respectively. Furthermore, it surpasses Few-shot-CoT on CommonSenseQA by 32% and performs similarly on StrategyQA. We observe that Fine-tune-CoT performance is most notable for tasks that are not overly complex, which include other reasoning tasks (Date Understanding, Shuffled Objects) and symbolic reasoning (Last Letter, Coin Flip), significantly outperforming other baselines. See Appendix Table 9 for performance of all students. Small models can outperform very large teachers in reasoning Table 1 also shows that Finetune-CoT is highly effective on small models compared to the large 175B teacher model. For the tasks Shuffled Objects and Coin Flip, Fine-tuneCoT is shown to outperform the teacher model using either 1.3B or 6.7B parameters, i.e., reducing the number of required parameters by approx. 25–100x. We also find that Fine-tune-CoT with the very small 0.3B model consistently outperforms the 6.7B model under Zero-shot-CoT, demonstrating that our method is able to unlock a wider range of capabilities compared to the baseline, even when model size is vastly reduced. ![5_image_0.png](5_image_0.png) Diverse reasoning substantially improves Finetune-CoT performance. To examine the learning effects of diverse reasoning and compare it with two baselines given by fine-tuning and Few-shotCoT, we apply Fine-tune-CoT using 1–64 reasoning explanations per sample across three model scales on MultiArith and SVAMP5. Figure 3 shows that diverse reasoning can significantly improve the performance of student models using Fine-tuneCoT. For the 6.7B student model, we find a boost of around 26% on MultiArith, and around 17% on SVAMP. We also note that using diverse reasoning always leads to outperforming the baseline within the respective model size, and can even boost performance of our method beyond that of a larger model that does not use diverse reasoning. This even includes the teacher in two cases (Date Understanding, Last Letter). Moreover, we find that diverse reasoning can boost the performance of Finetune-CoT to surpass that of both Few-shot-CoT and vanilla fine-tuning across all model sizes. We posit that due to our focus on *complex tasks*, the diversity of reasoning paths and linguistic templates can substantially aid in teaching student models to reason. Fine-tune-CoT consistently benefits from more data. We perform an ablation on dataset size to study the performance scalability of our method with dataset size. We see that the performance of ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) the 6.7B model clearly scales with the size of the dataset, independent of the task. In comparison, vanilla fine-tuning does not always exhibit this behavior. In fact, for Date Understanding, we find that an increase in dataset size harms the performance of fine-tuning. Furthermore, Fine-tune-CoT sees additional benefits from diverse reasoning, which is not applicable in standard fine-tuning. Better reasoners are better teachers Next, we can ask the question of whether the performance of the teacher is correlated with that of their student when using Fine-tune-CoT. To test this, we use different versions of GPT-3 as teacher models, keeping the size of the student model constant at 6.7B parameters (Figure 5). We find that student performance indeed scales with teacher performance, particularly in the less complex tasks Date Understanding and Last Letter. There, the performance of the student matches the performance of the teacher very closely. This also fits with our observations in Appendix D, which show that the successes and failures of teachers are correlated with those of the students. We note that this scaling effect is in contrast not a given in knowledge distillation, where more accurate teachers do not always result in better students (Menon et al., 2021). Fine-tune-CoT performance scales with model size for small LMs Finally, we explore the effect of scaling up student model size on our method, and compare it with the effects of increasingly larger student models in Few-shot-CoT as well as vanilla fine-tuning. We can observe that the performance of Fine-tune-CoT is consistently scalable with student size (Figure 6). In contrast, the two baselines do not always exhibit the same behavior: in Date Understanding, neither Few-shot-CoT nor vanilla fine-tuning results in scalable performance. Results on open-source student models Overall, our findings on T5, Flan-T5, and GPT-2 show similar trends to those observed on GPT-3. Small models exhibit near-random performance under standard zero-shot or CoT prompting in nearly all cases. Notable, we find that encoder-decoder models, i.e., T5 and Flan-T5, show noteworthy performance under standard fine-tuning, suggesting that causal masking may be a bottleneck to reasoning in decoder-based language models in the absence of CoT output. Fine-tune-CoT consistently outperforms prompt-based baselines and is comparable or superior to vanilla fine-tuning. Diverse reasoning improves performance even further, often exhibiting significant gains. We report our full findings on open-source models in Appendix G. ![7_image_0.png](7_image_0.png) ## 4.2 Analysis Sample study To identify the strengths and weaknesses of our method, we perform a thorough sample study across all datasets and methods. Across all arithmetic tasks, we find that a large portion of errors arises from calculations. MultiArith and SVAMP also show many semantic errors, but these are significantly reduced with diverse reasoning. For difficult tasks such as GSM8K and AQUA, we found that all methods tend to struggle. We found that our method is highly effective in text-based tasks, excluding commonsense reasoning, as well as tasks that contain common linguistic patterns. On the other hand, we find that students under Zero-shot-CoT often repeat questions or produce incoherent repetitive statements. While Few-shotCoT elicits step-by-step sentences, the student models rarely seem to understand the semantics of the question, and generations often contain logical or commonsense errors. For details on our sample study, see Appendix D. ## Nuances Of Fine-Tuning On Cot Reasoning We shed light on nuances that have often been overlooked in previous or concurrent work (Wei et al., 2022b; Li et al., 2022a; Magister et al., 2022). First, we acknowledge the possibility that *correct* samples may contain incorrect reasoning. In fact, we find that 27.6% of *correct* teacher completions for Date Understanding contained reasoning errors. However, ablations on rationale filtering suggest that these incorrect rationales can aid in student supervision (Appendix E.1). Secondly, we find that common maximum sequence lengths used for CoT generations often lead to incomplete answers. We observe that reasoning length differs among datasets, and longer generations typically improve accuracy, but may not be beneficial for fine-tuning (Appendix E.2). Lastly, we find that many datasets are comprised of samples that share common templates, potentially compromising the validity of our random train-test splits. To address this, we evaluate our method on manual template-wise data splits, and confirm that students retain meaningful reasoning capabilities (Appendix E.3). ## 5 Discussion 5.1 Accessibility Of Fine-Tune-Cot Owing to the versatility of the teacher generation method, i.e., Zero-shot-CoT, our method can be readily applied to any complex task without taskspecific engineering. Rationales can be readily generated using publicly available APIs such as those provided by OpenAI or Anthropic. This makes it viable to obtain CoT training data in low-resource scenarios, which not only outperforms standard fine-tuning, but elicits the student to output interpretable explanations. Fine-tuning and inference on student models can also be performed on much more accessible hardware, in contrast to very large models. This can reduce long-term inference costs and minimize environmental impact while making our method fully accessible to a wide community. ## 5.2 Viability Of Fine-Tune-Cot While Fine-tune-CoT elicits notable complex reasoning capabilities in small models, performance on some difficult datasets would not be considered viable for real-world use, such as 30.33% on SVAMP. However, our findings in Section 4.1 indicates significant potential for improvement, as our method is shown to be uniquely scalable with (1) diverse reasoning, (2) dataset size, (3) teacher model performance, and (4) student model size. The use of diverse reasoning and better teacher models is especially promising, as these can benefit from improved teacher LLM performance and inference costs in the future. In addition, it is possible to incorporate recent CoT methods, which lead to significant performance improvements, in student models, which we discuss in Section 7.3. ## 5.3 Tradeoffs Of Fine-Tune-Cot The aforementioned opportunities to enhance Finetune-CoT also pose many important tradeoffs. We leave further analysis to future work. Degree of diverse reasoning The performance benefits of diverse reasoning come at the cost of additional teacher inference. Therefore, diverse reasoning poses a tradeoff between development cost vs inference cost/quality. In other words, performance gains from diverse reasoning may be utilized to enhance student performance or alleviate the need for larger student models. This must also be taken into account for fair evaluation of similar distillation methods in the future. Data acquisition Data annotation and diverse reasoning can both be used to enlarge fine-tuning data, but each have their associated costs. We note that the cost of diverse reasoning is linear to the number of generated rationale and the number of original samples. Despite this, it can still be a costeffective alternative to hand-annotating additional data. A preliminary cost analysis in Appendix F shows that the pareto front of data-acquisition-cost to performance always incorporates diverse reasoning. We expect that the cost benefits of diverse reasoning will continue to improve with improvements in teacher model performance and efficiency. ## 5.4 Emergence Of Cot Reasoning The emergence of abilities such as CoT reasoning has become a point of interest in recent works (Wei et al., 2022b,a; Schaeffer et al., 2023). We note that the efficacy of Fine-tune-CoT on small models does not disprove this emergence, as our method is based on fine-tuning. However, we believe our results can provide some insight into this phenomena. Why does Fine-tune-CoT work in small models? In a seminal work, Wei et al. (2022b) suggests that CoT reasoning is an emergent ability of scale—more specifically, a complicated phenomena involving a variety of emergent abilities, such as semantic understanding, symbol mapping, arithmetic ability. However, our sample studies suggest that Fine-tune-CoT elicits these *emergent* abilities even in relatively small models (see Appendix D). We explain this from two perspectives. First, Wei et al. (2022b) demonstrated the emergence of reasoning abilties by identifying a reduction in the frequency of reasoning errors with larger model scale. Similarly, we find that more potent forms of supervision also lead to a *gradual* reduction in reasoning errors. For example, we found a clear distinction between Zero-, Few-shot-CoT and Finetune-CoT (with diverse reasoning) in the frequency and severity of semantic errors, i.e., understanding complex questions, and calculation errors. This suggests that explicit supervision on reasoning can also lead to the emergence of reasoning abilities. Second, we qualitatively find that students show capabilities that are reminiscent of the larger teacher model. We found that students can recognize common semantics and reasoning cues of the given task, and is able to imitate the process of splitting large tasks into subtasks. This suggests that it is possible to learn reasoning abilities pertaining to a particular domain. We posit that this is possible in small models due to the limited domain of reasoning, and may not be applicable in reasoning tasks that require large domains of knowledge. Distillation of emergent abilities Chain-ofthought reasoning has been recognized as a prime example of emergent abilities in very large language models (Wei et al., 2022a). Our findings show that it is possible to distill this ability, under certain domains, to much smaller models simply through fine-tuning. The potential for distillation implies that future advancements in language models may lead to emergent abilities that are not only pertinent to those larger models, but could also have a broader impact, cascading benefits to smaller models. ## 6 Conclusion We have proposed Fine-tune-CoT, a method that uses LLMs as *reasoning teachers* to transfer the broad reasoning capabilities previously found in >100B models to student models as small as 0.3B. We propose diverse reasoning as a novel approach to maximize these teaching effects, exploiting the unique characteristics of this new learning setup to *vastly* improve performance. Our extensive experiments show that Fine-tune-CoT elicits significant reasoning performance in small models, thus demonstrating the distillation of CoT reasoning which has been considered an *emergent* ability of scale. By leveraging publicly available models with zero-shot prompting, we demonstrate a taskagnostic approach to elicit reasoning performance in small models, making complex reasoning feasible for real-world deployment and accessible to the broader community. ## 7 Limitations 7.1 Towards Concise Answers Sample studies show that rationales output from student models may occasionally be repetitive and digressive. This is undesirable in terms of inference-time efficiency as well as interpretability. As a minor optimization to inference computation, we construct our fine-tuning sample templates using special-character based delimiters instead of natural language used in concurrent work (Huang et al., 2022) to minimize sequence length. Preliminary findings showed this had no significant impact on reasoning performance. More importantly, it is desirable to train student models to generate concise answers in terms of substance. Appendix E.2 hints at the possibility for this, showing that finetuning on shorter reasoning samples causes the student model to also produce shorter rationales. ## 7.2 Exploring A Wider Array Of Models We note that the performance of our method is currently not state-of-the-art. However, it can benefit from advances in teacher models as well as other prompting methods. For example, future work should include a wider array of teachers, such as the highly versatile ChatGPT, which typically generates detailed long responses that may be able to impart more knowledge to the student. More recent models such as GPT-4 have demonstrated significant advances in complex reasoning abilities, which may improve the efficacy of Fine-tuneCoT on very difficult datasets, such as GSM8K. Conversely, our method could prove even more advantageous when applied to recent models with improved efficiency, such as those based on the recent LLaMA model (Touvron et al., 2023), which has sparked a proliferation of work focused on compact language models. Both of these avenues are promising for future work. ## 7.3 Better Cot Inference Methods The use of diverse reasoning and better teacher or student models is especially promising, as it is possible to leverage future improvements in model performance and decreased inference costs. However, we can also consider other ways to boost performance, such as using different prompting methods. For example, previous work shows that Few-shot-CoT (Wei et al., 2022b) can improve accuracy over Zero-shot-CoT by a wide margin, e.g., going from 78.7% to 93.0% on MultiArith (Kojima et al., 2022). However, our choice to use Zeroshot-CoT to generate reasoning samples from the teacher model is motivated by the fact that Fewshot-CoT requires a significantly larger inference context. With the current pricing models based on token usage, the typical setup of 8-shot CoT would cost approximately 8 times more compared to Zero-shot-CoT. Therefore, we see a tradeoff between using the inference budget for Few-shot-CoT and using it for diverse reasoning with Zero-shotCoT. On the other hand, we also note that recent works introduce various ways to improve CoT reasoning performance substantially (often to nearperfect levels), which can be applied to our student models. These include refinement over repeated inferences (Wang et al., 2022; Li et al., 2022b) and self-improvement (Zelikman et al., 2022; Huang et al., 2022). In particular, self-consistency (Wang et al., 2022) can be utilizied on unlabeled samples to maximize the teaching signal. In contrast, we aim to achieve CoT reasoning without the inference time cost incurred by very large LMs. Future work is needed to incorporate these methods into Fine-tune-CoT while minimizing development and inference costs. ## 7.4 Connection With Knowledge Distillation We assume that there is a lot of potential in strengthening the connections between knowledge distillation and our method. We have already seen in this work that our method shares some characteristics with KD, such as the fact that the knowledge of intermediate reasoning imparted by using also incorrect samples can have positive effects on student accuracy, akin to "dark knowledge" (Menon et al., 2021) that is transferred by training on teacher output logits and not one-hot labels. We have seen that this leads to a quantity-quality tradeoff when it comes to the ability of the student model to generalize: having fewer but perfectly curated reasoning samples is not necessarily as helpful as having a larger amount of reasoning samples that might not always be fully correct. On the other hand, we have also found that more accurate teachers do lead to more accurate students, which is not always the case in KD (Müller et al., 2019). It would therefore be of interest for future work to formalize the connection of Fine-tune-CoT with classic KD methods, and potentially test the use of a different distillation loss function that takes the teacher's actual output into account. ## 8 Ethics Statement Our work presents various challenges and opportunities in terms of bias and toxicity in language models. It is widely known that LLMs trained on large corpora have been shown to capture biases found in the training data (Brown et al., 2020; Chowdhery et al., 2022). Since our student models are trained on reasoning samples generated by these LLMs, it is possible that such characteristics of the teacher model can get passed along to the student. This is an important point to consider when selecting the teacher model for our method. Our training setup, however, does offer a unique opportunity to minimize bias and toxicity in student models, by influencing the samples used for fine-tuning. One approach would be to augment the curating step of Fine-tune-CoT to filter out biased or toxic samples. It is possible to automate this via neural network-based verifiers, previously used to filter correct output (Cobbe et al., 2021; Li et al., 2022b). Alternatively, one may consider optimizing the CoT prompts to minimize bias and toxicity in teacher-generated rationales. We note that bad actors can also potentially take advantage of our method to utilize complex reasoning for malicious purposes and deploy it at scale, using small models. This highlights the importance of safeguarding the potential capabilities of LLMs by major providers. To prevent the distillation of malicious reasoning abilities in small (or large) students, future work in identifying usage patterns involved in these distillation schemes may help providers apply more stringent precautions to these use cases. ## Acknowledgements This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by Korea government (MSIT) [No. 2021-0-00907, Development of Adaptive and Lightweight Edge-Collaborative Analysis Technology for Enabling Proactively Immediate Response and Rapid Learning, 90%], [No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST), 5%], and the Stochastic Analysis and Application Research Center (SAARC) under the National Research Foundation of Korea grant (NRF-2019R1A5A1028324, 5%). ## References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In *Proceedings of the 12th ACM SIGKDD International* Conference on Knowledge Discovery and Data Mining, KDD '06, page 535–541, New York, NY, USA. Association for Computing Machinery. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. *arXiv preprint* arXiv:2110.14168. Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, and Felix Hill. 2022. Language models show human-like content effects on reasoning. *arXiv preprint, arXiv:2207.07051*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc. Jonathan St BT Evans. 2010. Intuition and reasoning: A dual-process perspective. *Psychological Inquiry*, 21(4):313–326. Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. 2023. Specializing smaller language models towards multi-step reasoning. arXiv preprint arXiv:2301.12726. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346– 361. Jianping Gou, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. 2021. Knowledge distillation: A survey. *International Journal of Computer Vision*, 129(6):1789–1819. Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. *arXiv preprint arXiv:2004.06100*. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. *arXiv* preprint arXiv:2203.15556. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In *EMNLP*, pages 523–533. Citeseer. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022. Large language models can self-improve. arXiv preprint arXiv:2210.11610. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. arXiv preprint, arXiv: 1606.07947. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. *Transactions of the Association for Computational Linguistics*, 3:585–597. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models. arXiv preprint, arXiv: 2206.14858. Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, Wenhu Chen, and Xifeng Yan. 2022a. Explanations from large language models make small reasoners better. *arXiv preprint, arXiv:* 2210.06726. Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2022b. On the advance of making language models better reasoners. arXiv preprint arXiv:2206.02336. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. *arXiv preprint arXiv:1705.04146*. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586. Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2022. Teaching small language models to reason. arXiv preprint arXiv:2212.08410. Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. 2022. Generating training data with language models: Towards zero-shot language understanding. arXiv preprint, arXiv: 2202.04538. Aditya K Menon, Ankit Singh Rawat, Sashank Reddi, Seungyeon Kim, and Sanjiv Kumar. 2021. A statistical perspective on distillation. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning* Research, pages 7632–7642. PMLR. Paul Micaelli and Amos Storkey. 2019. *Zero-Shot* Knowledge Transfer via Adversarial Belief Matching, chapter -. Curran Associates Inc., Red Hook, NY, USA. Rafael Müller, Simon Kornblith, and Geoffrey Hinton. 2019. *When Does Label Smoothing Help?* Curran Associates Inc., Red Hook, NY, USA. Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, Venkatesh Babu Radhakrishnan, and Anirban Chakraborty. 2019. Zero-shot knowledge distillation in deep networks. In International Conference on Machine Learning, pages 4743–4751. PMLR. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. *arXiv preprint arXiv:2203.02155*. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are nlp models really able to solve simple math word problems? *arXiv preprint* arXiv:2103.07191. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. -. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Yasaman Razeghi, Robert L. Logan, Matt Gardner, and Sameer Singh. 2022. Impact of pretraining term frequencies on few-shot reasoning. arXiv preprint, arXiv:2202.07206. Subhro Roy and Dan Roth. 2016. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. 2023. Are emergent abilities of large language models a mirage? *arXiv preprint arXiv:2304.15004*. Timo Schick, Helmut Schmid, and Hinrich Schütze. 2020. Automatically identifying words that can serve as labels for few-shot text classification. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5569–5578, Barcelona, Spain (Online). International Committee on Computational Linguistics. Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also fewshot learners. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339–2352, Online. Association for Computational Linguistics. Chengchao Shen, Xinchao Wang, Youtan Yin, Jie Song, Sihui Luo, and Mingli Song. 2021. Progressive network grafting for few-shot knowledge distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 2541–2549. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *arXiv preprint* arXiv:2206.04615. Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, Dianhai Yu, Hao Tian, Hua Wu, and Haifeng Wang. 2021. Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. *arXiv preprint* arXiv:2107.02137. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. *arXiv preprint arXiv:1811.00937*. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. *arXiv preprint* arXiv:2302.13971. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *arXiv preprint arXiv:1706.03762*. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. *arXiv preprint arXiv:2203.11171*. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022a. Emergent abilities of large language models. Transactions on Machine Learning Research. Survey Certification. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Kang Min Yoo, Dongju Park, Jaewook Kang, SangWoo Lee, and Woomyeong Park. 2021. Gpt3mix: Leveraging large-scale language models for text augmentation. *arXiv preprint arXiv:2104.08826*. Eric Zelikman, Yuhuai Wu, and Noah D Goodman. 2022. Star: Bootstrapping reasoning with reasoning. *arXiv preprint arXiv:2203.14465*. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In *International Conference on Machine Learning*, pages 12697–12706. PMLR. ## A Experimental Details A.1 Generation Maximum sequence length For the maximum sequence length of teacher-generated rationales, rˆi, we use Lr = 128, following Kojima et al. (2022), unless stated otherwise. For the maximum sequence length of the student model predictions, we use Lp = 1024, unless stated otherwise. We retroactively applied Lp = 1024 as the default, after discovering that Lp = 128 is insufficient for many tasks, as discussed in Appendix E.2. Sampling temperature We apply greedy decoding for all generations, except diverse reasoning, to obtain deterministic results following (Wei et al., 2022b; Kojima et al., 2022). For diverse reasoning, we use temperature sampling with T = 0.7 to obtain diverse samples, following a similar approach from Wang et al. (2022). ## A.2 Answer Cleansing We follow the method used in Kojima et al. (2022) to cleanse answers generated by models to assess their correctness. ## A.3 Few-Shot-Cot Exemplars For Few-shot-CoT prompting, we use exemplars provided by Wei et al. (2022b), with some minor formatting adaptations for consistency with our other experiments. For Last Letter Concatenation and Coin Flip, for which Few-shot-CoT prompts are not provided, we use 8 training samples used in our 8-shot data experiments shown in Figure 4 and adapt them for Few-shot-CoT using the format of Wei et al. (2022b). This was not applicable to Tracking Shuffled Objects, therefore it was omitted from Few-shot-CoT experiments. ## A.4 Fine-Tuning Openai Models We use default hyperparameters set by the OpenAI API for both vanilla fine-tuning and Fine-tune-CoT. While the specifics of the fine-tuning API is not publicly known, some details on hyperparameters are documented in the API reference 6. According to the default settings, our models are trained for 4 epochs. The batch size and learning rate determined based on the number of examples used for training. The batch size is set to 0.2% of the number of training examples capped at 256. The 6https://platform.openai.com/docs/api-reference/finetunes/create learning rate is set to 0.05, 0.1, or 0.2 times that of the learning rate used to pre-train the base model, depending on the batch size. Training loss is also applied to the prompt portion of the training examples, i.e., the question, with a small weight of 0.01. Based on API pricing , we posit that OpenAI employs a form of parameter efficient fine-tuning such as LoRA (Hu et al., 2021) for their fine-tuning API instead of updating all model parameters. ## A.5 Fine-Tuning Open Source Models For vanilla fine-tuning and Fine-tune-CoT on open source models, we strictly control for hyperparameters. Across all experiments, we fine-tune the entire model with a fixed learning rate of 3e-4 and batch size of 8. Upon inspection of model performance under various learning rates and batch sizes, we found that optimal parameters varies among datasets, even between those with similar number of reasoning samples. We train all models for a maximum of 20 epochs, which we found to be sufficient for test accuracy to plateau. We report the best test accuracy from 20 epochs, but found that performance varies significantly between epochs. Overall, we found that performance by epoch is stable for larger models, and that instruction-tuned Flan-T5 is more stable compared to T5. Similar to learning rate and batch size, the optimal number of epochs also varies between datasets, even those with similar number of reasoning samples. Based on the above, we note that our reported performances of fine-tuned open-source models may be significantly under-estimated compared to those with optimal hyperparameters, and recommend practitioners to optimize hyperparameters using a separate validation set, per each training setting. ## B Datasets We provide a summary of datasets used in our experiments, including their original licenses, in Appendix Table 3. We consider the 12 datasets used in Kojima et al. (2022) to measure reasoning performance. For Last Letter Concatenation and Coin Flip, we use the publicly available data provided by Kojima et al. (2022). Train-test split Contrary to previous works on prompt-based CoT such as Wei et al. (2022b); Kojima et al. (2022), our fine-tuning approach requires distinct sets of samples for training and testing. If Dataset Choices Training Samples Test Samples Data Split License References SingleEq - 356 152 70:30 None Koncel-Kedziorski et al. (2015) AddSub - 276 119 70:30 Unspecified Hosseini et al. (2014) MultiArith - 420 180 70:30 Unspecified Roy and Roth (2016) GSM8K - 7473 1319 Original MIT Cobbe et al. (2021) AQUA-RAT 5 10000 254 Custom Apache-2.0 Ling et al. (2017) SVAMP - 700 300 70:30 MIT Patel et al. (2021) Date Understanding 5–6 258 111 70:30 Apache-2.0 Srivastava et al. (2022) Tracking Shuffled Objects 3 525 225 70:30 Apache-2.0 Srivastava et al. (2022) Last Letter Concatenation - 350 150 70:30 Unspecified Wei et al. (2022b); Kojima et al. (2022) Coin Flip 2 350 150 70:30 Unspecified Wei et al. (2022b); Kojima et al. (2022) CommonSenseQA 5 9741 1221 Original Unspecified Talmor et al. (2018) StrategyQA 2 1603 687 70:30 Apache2.0 Geva et al. (2021) separate subsets for training and testing (or development) are provided, we use those. Otherwise, we perform a samplewise random split with a train-test ratio of 70:30. For AQUA, due to the disproportionately large size of the original training set, we randomly sample 10,000 instances for training in our experiments. Note that due to the highly templated nature of many datasets, this naive data split may not be appropriate for evaluating reasoning capabilities. This is an important nuance of fine-tuning on CoT reasoning, which we address in Appendix E.3. ## C Models And Api Usage Appendix Table 4 describes all teacher and student models used in our study. We use InstructGPT (Ouyang et al., 2022) as the default teacher model in our experiments, due to its superior zero-shot reasoning performance, compared to GPT-3 (Brown et al., 2020) of the same size (Kojima et al., 2022). | Model Family | Params | Role | Variant / Name in API | |----------------|----------|---------|-------------------------| | InstructGPT | 175B | Teacher | text-davinci-001 | | InstructGPT | 175B | Teacher | text-davinci-002 | | InstructGPT | 175B | Teacher | text-davinci-003 | Specifically, we use text-davinci-002 at the default, as it was the best available model at the start of our experiments. We were unable to consider small InstructGPT models for fine-tuning, as it is not offered by the OpenAI API. We attach model size information based on (https://blog.eleuther.ai/gpt3-model-sizes/), following Kojima et al. (2022). Our total expenditure for API usage, including all preliminary experiments, was $1,981 USD. The majority of this expenditure occurred after September 1st, 2022, from which the pricing for inference on davinci models was $0.02/1K tokens, among others. Between teacher model inference, student model {fine-tuning, inference}, the majority of API usage in terms of cost was focused on teacher model inference. ## D Sample Study To understand where our method makes mistakes, where diverse reasoning can improve performance, and where our method always performs well, we observe randomly sampled instances and analyze the reasoning performance on them. To do so, we compare its generations for these samples with (1) the output of the large teacher model, (2) a student model using Zero-shot-CoT (3) a student model using Few-shot-CoT, and (4) a student model using fine-tuning without chain of thought reasoning. Our analysis reflects our overall findings, which we exemplify with representative examples in Tables 10–13. ## D.1 Error Analysis For our analysis of the most common types of errors, we take a look at datasets where we find particularly bad performance of our vanilla method, also in comparison to other students. We also discuss the benefits of using diverse reasoning in D.2. We summarize our observations below. Difficult datasets First, we observe that the sets GSM8K and AQUA appear to be too difficult for any small student model, in particular, given that the teacher model gets below 50% accuracy on both. In fact, even correct answers are usually correct only by chance, due to the high complexity of the tasks (Appendix Tables 10a,b). For AQUA in particular, we note that while we occasionally find meaningful reasoning in the 6.7B student model, students clearly cannot sufficiently learn to solve the tasks. We do note however that of all the student methods, Fine-tune-CoT still gets the best performance in these two datasets. A similar, if less salient, issue arises for StrategyQA. Here, the teacher also performs only 3% above the random guess accuracy of 50%. The smaller student models actually manage to improve on this performance as long as they do not use Zero-shot-CoT, in particular vanilla fine-tuning, but the errors arising in Fine-tune-CoT often look very similar to the ones in the large teacher model. We see that all models usually merely retrieve information related to the question, but cannot synthesize an answer from it (Appendix Tables 10c,11a). Arithmetic mistakes Next, we note that small models overall exhibit weak arithmetic skills. This has already been discussed in previous literature, where calculation capability has been found to scale with model size (Wei et al., 2022a). Especially in SingleEq (Appendix Table 11b) and AddSub (Appendix Table 11c), a majority of errors in the output of student models using Finetune-CoT simply arise from wrong calculations, less so bad reasoning. This is also a major factor in the bad performance our method exhibits on SVAMP as well as GSM8K; even correct multistep reasoning cannot compensate for the fact that the model's arithmetic tends to be wrong on intermediate steps (Appendix Tables 11d, 12a). Only the teacher model then does better on these tasks, given its much larger size, even though it does not get perfect accuracy either. However, we note here that very large language models, such as PaLM 540B, can be trained on arithmetic and scientific data to be able to reason correctly about a wide range of mathematical tasks in a step-by-step fashion (Lewkowycz et al., 2022). Problematic benchmarks, impact of commonsense reasoning errors Meanwhile, when looking at our method's performance in CommonsenseQA, we note that producing consistent multistep reasoning is not always the issue. We find that the student model utilizing Fine-tune-CoT can often generate logical reasoning paths for many of the samples that are marked as false (Appendix Table 13b). Rather, the exact answer is often subjective, making it difficult to guess the correct output from logical reasoning alone (Appendix Table 13c). CommonsenseQA thus is not always an ideal benchmark when judged on accuracy, but gives insight into how well the model can produce reasoning. We also note a difference compared to Few-shot-CoT in terms of the impact of reasoning errors: the latter only performs around 5% above random, lacks understanding of the question in many cases, and makes more severe logical and commonsense mistakes compared to our method. In fact, Fine-tune-CoT comes close to the teacher due to the relatively lower impact of errors that do arise (Appendix Table 13d). This suggests that Fine-tune-CoT enables stronger task-solving capabilities and avoids making serious commonsense mistakes that prevent it from arriving at a reasonable conclusion. Aligned failures Importantly, we note that for each dataset, there seems to be a difference between "easy" and "hard" instances. When we consider the accuracy of the teacher and other student models (using fine-tuning, Zero-shot- or Few-shot-CoT) on tasks where our method fails, we find that it is always lower than on tasks where our method is successful. That is, successes and failures tend to be aligned across the different methods. We can hypothesize that factors such as content bias may play a role here; language models have been found to fail depending on context and content of the task, in a way similar to human reasoners (Dasgupta et al., 2022). We can identify samples that hint at this issue when we look at questions that include phrasing that seems contradictory or counterintuitive to the context that the model expects (see Appendix Table 13d, where the number of movies watched is larger than the number of available movies). Additionally, previous work shows that GPT-3 exhibits a performance gap between instances including terms that are frequent in the pretraining corpus, and instances including less frequent terms (Razeghi et al., 2022). This can contribute to uneven performance on a multitude of (especially numerical) tasks across different methods and model sizes. We can then surmise the observed absolute differences in accuracy to stem from the various sources of errors for each method. For example, fine-tuning has much less room for error than Fine-tune-CoT, which can additionally make mistakes on intermediate reasonings such that errors compound. ## D.2 Improvements From Diverse Reasoning Semantic issues We find that models seem sensitive to how a question is formulated. This is noticeable in all datasets, in particular in SVAMP and to a certain degree in MultiArith. Besides arithmetic mistakes, we observe that such semantic issues are one of the main factors for uneven performance of vanilla Fine-tune-CoT on these two datasets. In particular, we observe this issue when there is redundant information present in the question (Appendix Table 12b). Such cases elicit wrong reasoning, or lead the model to become stuck on the question, similarly to what usually happens with Zero-shot-CoT in the student model (i.e. repeating the question, or coming up with information that only vaguely pertains to the question). Other common sources of errors are when hidden variables make up the first part of the task (i.e. those tasks that force the model to calculate a previously unknown value that is described in the first sentence (Appendix Table 12c), or when the model encounters overloaded words (e.g., "landing" in Appendix Table 12d). We also observe samples where the model gets stuck on an intermediate result (Appendix Table 13a). This observation agrees with previous findings that language models have recency bias (Zhao et al., 2021). However, this source of errors can be compensated for by using diverse reasoning. When comparing the generations from Few-shot-CoT, vanilla Fine-tune-CoT and Fine-tune-CoT with diverse reasoning on MultiArith, we find that diverse reasoning enables the model to understand the question better. While calculation errors are still relatively frequent, the generations show clear advantages in terms of semantic understanding and being able to reason logically as a consequence. This is especially clear when compared to Fewshot-CoT, which exhibits problems both in understanding the question and formulating coherent expressions, especially when three or more terms are involved in the calculation, as mentioned in Kojima et al. (2022). By contrast, Fine-tune-CoT with diverse reasoning makes for a significantly smoother reasoning performance than using Fewshot-CoT or even vanilla Fine-tune-CoT. This results in vastly improved accuracy on both MultiArith and SVAMP. ## D.3 Strengths Having analyzed the main sources of errors, we can now focus on the datasets that elicit good performance from our method, regardless of whether we use diverse reasoning. Text-based datasets As arithmetic errors are one of the main reasons for the decrease in performance of small student models, it comes as little surprise that our vanilla method without diverse reasoning performs well on datasets that are mainly text-based and do not require actual calculation skills. This includes Date Understanding (60.4%) (Appendix Table 14a), Last Letter Concatenation (52.67%) (Appendix Table 14b), Coin Flip (98.7%) (Appendix Table 14c), and Shuffled Objects (64.4%) (Appendix Table 14d). Our methods performs significantly above random choice on these sets, and additionally beats the teacher on Shuffled Objects and Coin Flip. We find that accuracy metrics for these sets are mostly faithful: while the elicited reasoning is not always very detailed, and occasionally misses some reasoning steps (Appendix Table 14e) , the model draws correct conclusions from mostly correct steps. We also note that similar to MultiArith and SVAMP, performance on these four datasets can be even further boosted with diverse reasoning, outperforming the teacher model across all four. Patterns These datasets also have very clear patterns in their tasks, which helps Fine-tune-CoT to perform well by providing cues on how to solve a specific task. We note that in contrast, classic fine-tuning does not have an advantage in these datasets, and it gets significantly lower accuracy than Fine-tune-CoT on all four. The same is also true for MultiArith, which we have used as a benchmark in the main text. While arithmetic errors cause the absolute accuracy of our method to be lower than the teacher, it significantly outperforms fine-tuning on MultiArith even without using diverse reasoning. Indeed, we find that also in the presence of arithmetic errors, our model reasons correctly in many cases. We can surmise that the patterned nature of the tasks in MultiArith helps the student model to understand what is asked of it, eliciting the correct reasoning. Additionally, we note that the presence of such patterns in successful datasets does not mean that our method overfits to existing templates. In our template-split analysis (Appendix E.3), we in fact show that while tasks look similar to one another in certain datasets such as Date Understanding, the student model's reasoning does not rely on simply matching templates or memorizing particular solutions. This implies that our method can generalize to previously unseen tasks; the patterns in the datasets do not produce overfitting, but can be surmised to act as cues for the model's understanding of its current task. Thus, we observe that the reasoning skills of a student using Fine-tune-CoT can overcome the smaller model capacity (which proves to be completely prohibitive, e.g., for Zero-shot-CoT to have any success on the various tasks). ## E Nuances Of Fine-Tune-Cot E.1 Rationale Filtering We investigate whether answer-based filtering is sufficient for selecting *good* teacher-generated reasoning samples. It is possible for the teacher model to answer correctly despite incorrect reasoning, especially in multi-choice questions where the random-guess probability is significant. To investigate the potential impact of a better filtering scheme (as opposed to our baseline answer-based filtering) we manually annotate the correctness of rationales from the teacher model and evaluate student performance when fine-tuning on *correctly reasoned* samples. We use the Date Understanding dataset for this ablation, as it is comprised of well-grounded multi-choice questions for which Fine-tune-CoT achieves adequate performance. Appendix Table 6 compares the Fine-tune-CoT performance of student models on Date Understanding when using correct samples filtered based on answer predictions vs *golden* samples, hand-picked based on the correctness of rationales. For golden samples, we exclude samples that contain incorrect reasoning steps or irrelevant steps which are misleading. We find that 28% of correct samples have incorrect rationales–significantly more than the randomguess performance of 17.12%, indicating the importance of filtering. Surprisingly, we however find that answer-based filtering outperforms the more stringent human filtering by 5-11%, given the same initial samples. When we match the number of samples post-filtering (via undersampling), we do find that fine-tuning on golden samples outperforms that on correct samples by 5-8%. These results suggest that there is a tradeoff between the quality and quantity of reasoning samples which must be addressed when considering sample-filtering methods. We also note that this must be considered in tandem with diverse reasoning, which can drastically increase the quantity of reasoning samples. ## E.2 Maximum Sequence Length Following the original setting for Zero-shot-CoT (Kojima et al., 2022), we limit the max sequence length, or max tokens, allowed for the teachergenerated rationale and student reasoning predictions, denoted Lr, Lp, to 128 initially. However, we find that this can be insufficient in many datasets. Allowing for longer inference, we observe that model performance improves significantly on AQUA and commonsense reasoning tasks (Appendix Table 5). Sample inspection shows that rationales with over ∼500 tokens are typically repetitive or too digressive. To investigate the effect of the max length Lr of the teacher rationale on fine-tuning, we compare student performance using Lr = {128, 512} (Appendix Table 7). The effect of Lr on student performance varies across datasets, and increased Lr does not necessarily improve student performance on tasks that require longer rationales, such as AQUA. Finally, we examine the length distribution of the generated rationales from the teacher model and student trained on short (Lr = 128) and long (Lr = 512) reasoning samples, respectively (Appendix Figure 7). We find that the distribution is different for each dataset. Notably, we find that while the distributions from the *long* students were similar to that of the teacher, the generated rationale from the *short* students were typically limited to less than ∼128 tokens. These findings are in line with the intuition that different tasks require different lengths of rationales, and suggest that careful consideration is needed in determining parameters related to sequence length. ## E.3 Templated Datasets Upon inspection, we found that many datasets contain groups of samples which share common templates. Therefore, naive samplewise data split has the potential to leak the same templates into the train and test sets, essentially demoting the learn- | Model | Max | Single | Add | Multi | GSM8K | Aqua | SVAMP | Date | Shuffled | Last | Coin | Common | Strategy | |-----------------------------------------|----------|----------|---------|----------|---------------|----------|----------|----------|------------|----------|---------|----------|------------| | Params | Tokens | Eq | Sub | Arith | Understanding | Objects | Letter | Flip | SenseQA | QA | | | | | Teacher: InstructGPT (text-davinci-002) | | | | | | | | | | | | | | | 128 | 81.18 | 75.72 | 76.90 | 42.42 | 29.63 | 64.00 | 65.89 | 54.10 | 57.43 | 89.71 | 59.86 | 53.40 | | | (84.83) | (90.22) | (95.24) | (69.85) | (44.04) | (86.57) | (98.06) | (97.14) | (99.71) | (97.14) | (82.55) | (71.55) | | | | 175B | 2048 | 81.18 | 75.72 | 76.48 | 47.73 | 34.77 | 66.00 | 63.28 | 54.10 | 57.43 | 89.71 | 59.40 | 53.03 | | (84.83) | (90.22) | (94.29) | (99.34) | (96.42) | (99.00) | (97.14) | (97.14) | (99.71) | (97.14) | (99.92) | (99.69) | | | | Student: GPT-3 (ada, babbage, curie) | | | | | | | | | | | | | | | 128 | 7.24 | 6.72 | 5.56 | 3.11 | 16.54 | 4.33 | 17.12 | 48.89 | 50.67 | 99.33 | 30.30 | 47.16 | | | (96.05) | (99.16) | (96.11) | (74.75) | (45.67) | (91.33) | (100.00) | (100.00) | (100.00) | (100.00) | (86.73) | (87.63) | | | | 0.3B | 1024 | 7.24 | 6.72 | 6.11 | 3.11 | 23.62 | 5.00 | 17.12 | 49.33 | 50.67 | 99.33 | 32.68 | 52.55 | | (98.68) | (99.16) | (97.22) | (99.77) | (100.00) | (97.33) | (100.00) | (100.00) | (100.00) | (100.00) | (100.00) | (99.71) | | | | 128 | 11.18 | 11.76 | 13.89 | 4.02 | 15.35 | 7.33 | 38.74 | 53.78 | 50.67 | 100.00 | 40.95 | 47.02 | | | (92.76) | (96.64) | (98.89) | (75.36) | (48.03) | (90.33) | (100.00) | (99.56) | (100.00) | (100.00) | (86.57) | (83.99) | | | | 1.3B | 1024 | 11.18 | 11.76 | 13.33 | 4.70 | 19.69 | 8.00 | 38.74 | 52.44 | 50.67 | 100.00 | 43.08 | 52.69 | | (98.68) | (98.32) | (99.44) | (99.92) | (99.61) | (99.00) | (100.00) | (100.00) | (100.00) | (100.00) | (99.92) | (98.98) | | | | 128 | 21.05 | 20.17 | 34.44 | 7.20 | 16.93 | 12.67 | 60.36 | 64.00 | 52.00 | 98.00 | 51.27 | 47.16 | | | (92.76) | (97.48) | (99.44) | (76.19) | (55.91) | (93.67) | (99.10) | (100.00) | (100.00) | (100.00) | (85.26) | (84.28) | | | | 6.7B | 1024 | 20.39 | 21.01 | 33.33 | 6.75 | 24.02 | 12.67 | 60.36 | 64.44 | 52.67 | 98.67 | 56.76 | 55.02 | | (98.68) | (100.00) | (100.00) | (99.92) | (100.00) | (99.00) | (100.00) | (100.00) | (100.00) | (100.00) | (100.00) | (99.71) | | | | Random | 0.00 | 0.00 | 0.00 | 0.00 | 20.00 | 0.00 | 17.12 | 33.33 | 0.00 | 50.00 | 20.00 | 50.00 | | ![19_image_0.png](19_image_0.png) Table 6: **Effects of rationale filtering.** Accuracy (%) of GPT-3 student models under Fine-tune-CoT when using samples filtered using answer predictions (Answer), or filtered by humans based on the correctness of the rationale (Golden). Answer†refers to using a randomly sampled subset of the correct samples to match the number of golden samples. Model Max GSM8K AQUA Common Strategy Params Tokens SenseQA QA 0.3B 128 3.11 23.62 32.68 52.55 512 3.41 15.35 32.10 52.98 | Method | Filter | Samples | Model Params | | | |---------------|----------|-----------|----------------|-------|-------| | 0.3B | 1.3B | 6.7B | | | | | Zero-shot-CoT | 0 | 10.81 | 14.41 | 15.32 | | | Fine-tune-CoT | Answer | 170 | 17.12 | 38.74 | 60.36 | | Fine-tune-CoT | Golden | 123 | 17.12 | 28.83 | 54.95 | | Fine-tune-CoT | Answer† | 123 | 17.12 | 18.92 | 50.45 | | Random | 16.09 | | | | | 1.3B 128 4.70 19.69 43.08 52.69 512 3.79 18.90 43.65 53.42 6.7B 128 6.75 24.02 56.76 55.02 512 7.96 18.90 58.15 54.15 Random 1.01 20.44 20.01 50.18 Table 7: **Effects of teacher reasoning length on student performance.** Accuracy (%) of GPT-3 students models under Fine-tune-CoT on four datasets which require longer rationales, when trained on reasoning samples with maximum rationale sequence lengths of Lr = 128, 512. ![20_image_1.png](20_image_1.png) ![20_image_0.png](20_image_0.png) ($282)28.3 ($86)13.3 ($21)12.2 ($5)2.2 | Params | Split | MultiArith | Date Understanding | |---------------|-------------|--------------|----------------------| | 0.3B | Sample-wise | 5.56 | 17.12 | | Template-wise | 5.35 | 22.22 | | | 1.3B | Sample-wise | 13.89 | 38.74 | | Template-wise | 7.49 | 35.19 | | | 6.7B | Sample-wise | 34.44 | 60.36 | | Template-wise | 21.39 | 49.07 | | ing problem into simple pattern matching, rather than complex reasoning. This brings into question the validity of naive samplewise data split, as it has the potential to leak the same templates into the train and test sets. To investigate whether the student models are truly learning to reason rather than matching simple patterns, we manually group samples by template and evaluate Fine-tune-CoT using a template-wise data split. We consider MultiArith and Date Understanding as they contain a moderate number of templates. Note that all datasets excluding GSM8K, CommonsenseQA, and StrategyQA contain templates to varying degrees. Appendix Table 8 shows the performance of Fine-tune-CoT when using sample-wise vs template-wise split, using the same train-test ratio of 70:30. While student performance is typically lower with a templatewise split, it still significantly outperforms random guess performance, as well as prompt-based baselines shown in Appendix Table 1. This reaffirms that Fine-tune-CoT is able to elicit complex reasoning capabilities in small language models. ## F Data Annotation Vs Diverse Reasoning In Appendix Figure 8, we analyze the cost of data annotation and diverse reasoning, based on current OpenAI API pricing and a low estimate of annotation cost at 30 annotations per hour at an hourly rate of $20, i.e., $0.67 per question-answer sample. When comparing the cost and student performance of models trained with D = 1 and D = 64, we can clearly see that using diverse reasoning can enhance the cost-effectiveness of data acquisition. However, as the cost of diverse reasoning correlates with the size of the dataset, it is important to consider the cost-performance tradeoffs. ## G Experiments On Open Source Models To validate the generality of our method, we apply our method to a wide range of student models beyond variants of GPT-3. While the OpenAI API for GPT-3 inference and fine-tuning is accessible and does not require high-end GPUs, the model weights and implementation are not publicly available and may involve black-box processing. We therefore conduct experiments from Section 4 on open-source models under a standard setting with fixed hyperparameters, as explained in Appendix A and report our results in the following. Tables and figures include results from Section 4 on GPT-3 for reference. Prompt-based baselines A comprehensive performance evaluation of student models across multiple tasks is encapsulated in Table 9, comparing Fine-tune-CoT against baseline methods. Performance of standard zero-shot prompting, predominantly insignificant, is omitted when negligible but does exhibit unexpected spikes on Flan-T5, such as 94.22% on Tracking Shuffled Objects on the smallest model. Few-shot-CoT likewise demonstrates inconsequential performance across most student models, yet the Flan-T5 models reveal significant performance on some tasks such as 7.51% on GSM8K and 83.87% on CommonSenseQA. This hints at the possibility that instruction tuning may empower models to comprehend and execute CoT prompts, unveiling a latent reasoning capacity within smaller language models. Fine-tune-CoT vs vanilla fine-tuning Further examining Table 9, we note that vanilla fine-tuning achieves notable performance in encoder-decoder architectures, namely T5 and Flan-T5, achieving more than 80% on Date Understanding and 100% on Coin Flip, significantly outperforming vanilla fine-tuning on GPT-2 and GPT-3 student models. This leads us to believe that the causal attention masking present in decoder-only models could impede complex inter-token reasoning. CoT reasoning, in this regard, may serve to mitigate this limitation by repeating key information within the decoding context. Other the other hand, Fine-tune-CoT either surpasses or matches the performance of vanilla fine-tuning across a variety of tasks. Our method also displays consistent scalability with model size, in contrast to the fluctuating performance between model sizes for baseline methods. The incorporation of diverse reasoning enhances this scalability. Particularly, we find that the FlanT5 models benefit more from Fine-tune-CoT compared to T5 models, implying a favorable role of instruction tuning. When enhanced with diverse reasoning, Fine-tune-CoT excels over vanilla finetuning across several complex reasoning tasks, notably observed in the performance of Flan-T5 on Tracking Shuffled Objects (44.00%→89.33%) and GPT-2 on MultiArith (11.67%→19.44%). Effects of diverse reasoning Figure 9 shows the performance of all student models on MultiArith and SVAMP under varying degrees of diverse reasoning. We observe that performance scales with diverse reasoning in all student models, with the exception of T5-Small. It is shown that diverse reasoning enables Fine-tune-CoT to outperform standard fine-tuning in all cases. Effects of student model scale Figure 10 shows the performance of all student model families according to model size. While we observe performance scaling for Fine-tune-CoT on GPT-3 models, this is not apparent in other open-source models. We posit that this may be due to under-tuned hyperparameters, as we used fixed hyperparameters for all open-source models, in contrast to default suggested settings for GPT-3. Method Params Single Add Multi GSM8K Aqua SVAMP Date Shuffled Last Coin Common Strategy Eq Sub Arith Understanding Objects Letter Flip SenseQA QA Random 0.00 0.00 0.00 0.00 20.00 0.00 17.12 33.33 0.00 50.00 20.00 50.00 Teacher: InstructGPT 175B (**text-davinci-002**) Zero-shot-CoT 175B 81.50 76.71 78.79 42.17 29.74 64.20 67.58 53.20 57.71 90.04 60.07 53.45 Student: GPT-3 (ada, babbage, **curie**) Few-shot-CoT 0.3B 0.66 0.84 3.33 1.74 15.75 2.00 19.27 - 0.00 44.67 18.43 42.98 1.3B 3.29 5.88 5.00 1.59 13.78 4.33 16.51 - 0.00 46.00 18.67 46.05 6.7B 22.37 31.93 10.00 2.50 15.75 11.33 12.84 - 0.67 40.00 24.73 54.68 Fine-tune 0.3B 9.87 8.40 8.89 5.08 24.41 7.67 23.42 32.44 28.67 100.00 51.68 60.41 1.3B 11.84 17.65 17.78 5.38 21.26 14.33 31.53 30.22 30.00 100.00 70.93 60.70 6.7B 24.34 25.21 15.00 6.14 15.35 20.67 14.41 33.78 32.67 72.00 76.17 65.21 Fine-tune-CoT 0.3B 7.24 6.72 6.11 3.11 23.62 5.00 17.12 49.33 50.67 99.33 32.68 52.55 1.3B 11.18 11.76 13.33 4.70 19.69 8.00 38.74 52.44 50.67 100.00 43.08 52.69 6.7B 20.39 21.01 33.33 6.75 24.02 12.67 60.36 64.44 52.67 98.67 56.76 55.02 Fine-tune-CoT 0.3B 9.21 10.08 23.89 - - 14.33 58.56 61.78 59.33 99.33 - 57.21 w/ diverse reasoning 1.3B 18.42 19.33 27.78 - - 16.33 70.27 72.00 60.67 100.00 - 57.06 6.7B 24.34 31.09 53.33 - - 30.33 83.78 73.33 62.00 100.00 - 58.22 Student: T5-{Small, Base, Large} Few-shot-CoT 60M 1.32 3.36 3.33 1.97 24.80 1.33 20.72 - 0.00 44.67 19.25 46.00 220M 1.97 2.52 1.11 1.74 23.23 0.33 9.91 - 0.00 55.33 13.35 52.55 700M 1.32 1.68 2.78 2.43 19.69 3.00 9.91 - 0.00 55.33 18.92 53.13 Fine-tune 60M 5.92 8.40 13.89 4.02 29.92 11.33 80.18 94.22 24.67 100.00 22.11 58.81 220M 5.92 11.76 15.00 5.00 24.80 8.67 78.38 37.78 44.00 100.00 51.60 59.24 700M 6.58 9.24 13.89 4.25 26.77 9.67 79.28 33.78 50.67 100.00 20.88 61.72 Fine-tune-CoT 60M 2.63 5.04 5.56 2.58 24.02 9.33 77.48 40.00 29.33 100.00 29.48 54.73 220M 4.61 7.56 10.56 3.18 26.77 7.00 80.18 42.67 47.33 98.67 45.37 55.90 700M 5.26 10.92 10.56 4.55 29.92 9.00 80.18 46.22 52.00 100.00 54.22 56.33 Fine-tune-CoT 60M 7.24 7.56 15.00 - - 7.67 81.08 59.11 46.67 100.00 - 56.04 w/ diverse reasoning 220M 5.26 10.08 16.11 - - 10.33 82.88 65.33 60.67 100.00 - 59.68 700M 7.89 11.76 17.78 - - 11.33 81.98 81.78 63.33 100.00 - 62.15 Student: Flan-T5-{Small, Base, Large} Zero-shot 60M 0.00 0.00 1.67 2.12 23.62 2.00 32.43 33.78 0.00 54.00 39.07 48.47 220M 1.32 0.00 5.00 2.50 27.95 2.00 30.63 31.11 0.00 7.33 72.24 53.42 700M 1.32 4.20 3.89 2.05 24.41 2.67 9.91 28.89 0.00 54.00 84.03 49.34 Few-shot-CoT 60M 1.32 0.84 1.67 2.81 20.87 1.67 27.93 - 0.00 44.67 11.79 51.97 220M 2.63 0.84 3.89 3.64 24.80 3.67 12.61 - 0.00 44.67 70.27 53.86 700M 12.50 10.08 10.00 7.51 23.23 8.33 20.72 - 0.00 44.67 83.87 65.21 Fine-tune 60M 7.24 9.24 16.67 4.93 28.74 10.33 81.08 33.78 39.33 100.00 45.95 58.95 220M 5.26 10.08 16.11 5.08 29.53 10.67 83.78 44.00 45.33 100.00 63.55 61.14 700M 7.24 12.61 18.89 5.53 24.80 11.00 82.88 33.78 53.33 100.00 66.75 63.90 Fine-tune-CoT 60M 6.58 5.88 8.33 2.96 23.23 5.67 80.18 36.00 35.33 100.00 42.01 54.15 220M 4.61 9.24 12.22 4.40 29.13 6.00 83.78 48.89 50.00 100.00 59.05 59.97 700M 11.84 10.92 14.44 5.38 28.35 10.67 84.68 55.11 64.00 100.00 66.83 59.83 Fine-tune-CoT 60M 7.24 10.92 17.22 - - 10.67 84.68 62.22 46.00 100.00 - 56.04 w/ diverse reasoning 220M 9.21 10.92 21.11 - - 12.33 84.68 67.11 56.67 100.00 - 60.84 700M 10.53 15.13 20.00 - - 13.67 87.39 89.33 65.33 100.00 - 61.72 GPT-2 {Small, Medium, Large} Few-shot-CoT 124M 1.32 0.00 0.00 0.45 17.32 0.33 13.51 - 0.00 44.67 20.15 0.00 355M 0.00 0.00 0.56 0.00 3.94 0.00 9.91 - 0.00 55.33 0.00 0.15 774M 0.00 0.00 0.00 0.00 0.39 0.00 13.51 - 0.00 55.33 0.16 35.08 Fine-tune 124M 2.63 3.36 11.67 2.88 25.59 7.67 7.21 33.78 0.67 60.00 20.80 54.00 355M 0.66 0.84 5.00 0.38 18.90 0.00 23.42 36.89 1.33 57.33 19.82 50.22 774M 1.32 5.04 8.33 2.58 24.80 7.67 13.51 32.44 0.67 1.33 20.88 53.57 Fine-tune-CoT 124M 4.61 4.20 10.00 3.03 24.02 5.67 17.12 38.67 4.67 88.00 22.19 53.57 355M 3.29 5.88 7.22 2.73 23.62 7.33 28.83 35.56 10.67 80.00 22.03 55.02 774M 3.95 5.88 10.56 2.58 22.05 6.33 15.32 39.11 4.00 89.33 25.80 53.13 Fine-tune-CoT 124M 7.24 9.24 19.44 - - 10.67 21.62 57.33 10.67 93.33 - 56.62 w/ diverse reasoning 355M 5.92 9.24 17.22 - - 9.67 20.72 56.00 20.00 95.33 - 55.60 774M 8.55 12.61 17.22 - - 8.67 18.02 52.44 7.33 84.67 - 57.06 Table 9: **Fine-tune-CoT performance on all models.** Accuracy (%) of all models on 12 tasks under Fine-tune-CoT (with diverse reasoning) and baseline methods. 'Random' refers to random-guess performance derived based on the number of choices in multi-choice tasks. For diverse reasoning, we report results for maximum degree D considered: D = 64 for MultiArith and SVAMP; D = 8 for other datasets. We omit diverse reasoning for large datasets due to resource constraints and Few-shot-CoT for Tracking Shuffled Objects due to absence of prompts. Zero-shot baseline performance is omitted due to negligible performance, except for Flan-T5 models. ![23_image_0.png](23_image_0.png) ![23_image_1.png](23_image_1.png) | (a) GSM8K | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Zaid spends 1/4 of his salary on rent, 1/3 on car fuel and donates half of the remaining amount to his favorite charity. He gives his daughter 200$ to use for her weekly expenses and 700$ to his wife to budget for groceries and other household goods. If Zaid earns 6000$ per month, how much money will he still have after all these expenses and donations? ### | | | Answer | 350 | | Question | Zaid spends 1/4 of his salary on rent, so that's 300$ per month. Zaid spends 1/3 of his salary on car fuel, so that's 150$ per month. That leaves him with 3000$ per month. He donates half of the remaining amount to his favorite charity, so that's 1500$ per month. That leaves him with 1500$ per month. He gives his daughter 200$ per month, so that's 400$ per month. He gives his wife 700$ per month, so that's 350$ per month. | | Prediction | 350 | | Completion | (b) AQUA | | 20 marbles were pulled out of a bag of only white marbles, painted black, and then put back in. Then, another 20 marbles were pulled out, of which 1 was black, after which they were all returned to the bag. If the percentage of black marbles pulled out the second time represents their percentage in the bag, how many marbles in total Q does the bag currently hold? Answer choices: (A) 40, (B) 200, (C) 380, (D) 400, (E) 3200. ### | | | Answer | D | | Question | There are 20 white marbles, 20 black marbles, and 1 black marble. The first time, 10 white marbles and 10 black marbles are pulled out. The second time, 5 white marbles and 5 black marbles are pulled out. Now, we can calculate the total number of marbles that are in the bag. There are 20 white marbles and 20 black marbles. The first time, 10 white marbles and 10 black marbles are pulled out. The second time, 5 white marbles and 5 black --> D | | Prediction | D | | Completion | (c) StrategyQA | | Question | Is it common to see frost during some college commencements? ### | | Answer | Yes First, what is a college commencement? A college commencement is the ceremony at which a student graduates from college. Second, what is frost? Frost is a type of weather that occurs when the temperature is below freezing and the ground is frozen. So, is it common to see frost during some college commencements? No, it is not common to see frost during some college commencements. --> No | | Prediction | No | | Completion | | | (a) StrategyQA | | | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------| | Question | Is sternum connected to any of the bones mentioned in James Weldon Johnson's Dem Bones? ### | | | Answer | Yes The sternum is a bone in the chest. The bones in the chest are the sternum, the clavicle, the two collarbones, and the two breastbones. The breastbones are the two right and left breastbones. So, no, the sternum is not connected to any of the bones mentioned in James Weldon Johnson's Dem Bones. --> No | | | Prediction | No | | | Completion | (b) SingleEq | | | Question | There are 33 walnut trees currently in the park. Park workers will plant 44 more walnut trees today. How many walnut trees will the park have when the workers are finished ? ### | | | Answer | 77.0 There are 33 walnut trees currently in the park. Park workers will plant 44 more walnut trees today. | | | Completion | This means that when the workers are finished, there will be a total of 83 walnut trees in the park. --> 83.0 | | | Prediction | 83.0 | (c) AddSub 25 | | Question | Jason has 43 blue and 16 red marbles . Tom has 24 blue marbles . How many blue marbles do they have in all ?### | | | Answer | 67.0 Jason has 43 blue marbles and 16 red marbles. Tom has 24 blue marbles. To find out how many | | | Completion | blue marbles they have in all, we need to add 43 and 24. 43 + 24 = 77 Therefore, they have 77 blue marbles in all. --> 77 | | | Prediction | 77 | (d) GSM8K | | Four dogs sat in a line within the veterinarian's waiting room. The first dog was the Affenpinscher, who weighed only 10 pounds. Next to him sat a Dachshund, who weighed twice as much as the Affenpinscher. Next to the Dachshund sat a Papillon, who weighed one-fourth as much as the Dachshund. And at the end of the line sat a Mastiff, who weighed 44 times the weight of the Papillon. How much did the Mastiff weigh, in pounds? Let's think step by step ### | | | | Answer | 220 | | | Question | The Affenpinscher weighs 10 pounds. The Dachshund weighs twice as much as the Affenpinscher, | | | Completion | so it weighs 20 pounds. The Papillon weighs one-fourth as much as the Dachshund, so it weighs 2 pounds. The Mastiff weighs 44 times as much as the Papillon, so it weighs 44 pounds. --> 44 | | | Prediction | 44 | | Table 11: Examples of incorrect output. | (a) SVAMP | | | |--------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------| | Question | If you had 33 cookies and each bag has 11 cookies How many bags of cookies do you have? ### | | | Answer | 3.0 There are 33 cookies. Each bag has 11 cookies. That means that if we divide 33 by 11, we will know how many bags of cookies there are. 33 divided by 11 equals 2. There are 2 bags of cookies. | | | Completion | --> 2.0 | | | Prediction | 2.0 | (b) CommonsenseQA | | Question | What is someone who isn't clever, bright, or competent called? Answer choices: (A) clumsy, (B) ineffectual, (C) dull, (D) clumsy, (E) stupid. ### | | | Answer | E If someone isn't clever, bright, or competent, then they're not effective. If someone's not effective, | | | Completion | then they're not useful. If someone's not useful, then they're not needed. If someone's not needed, then they're not important. Therefore, the answer is (B) ineffectual. --> B | | | Prediction | B | (c) CommonsenseQA | | Question | Billy set aside a block of time for having fun after work. Why might he do this? Answer choices: (A) happiness, (B) stress relief, (C) pleasure, (D) ocean, (E) may laugh. ### | | | Answer | B After work, Billy might be tired and want to relax. He could set aside a block of time and say, "OK, | | | Completion | I'm going to spend the next X minutes doing something I enjoy." This might help him relax and feel happier. --> A | | | Prediction | A | (d) CommonsenseQA | | Question | What island country is ferret popular? Answer choices: (A) own home, (B) north carolina, (C) great britain, (D) hutch, (E) outdoors.### | | | Answer | C A ferret is a small mammal, so (D) hutch is probably not the best answer. It's also not a very common animal, so (A) own home and (B) north carolina are probably not the best answers. That leaves us with (C) great britain and (E) outdoors. Great britain is a larger island than outdoors, so the ferret is probably more popular in that country. --> C | | | Prediction | C | | | Completion | (e) SVAMP | | | There are 17 different movies and 11 different books in the ' crazy silly school ' series. If you read | | | | Question | 13 of the books and watched 63 of the movies How many more movies than books are there in the ' crazy silly school ' series? ### | | | Answer | 6.0 There are 17 different movies and 11 different books in the 'crazy silly school' series. If you read 13 of the books, that means you have read 7 books. If you watched 63 of the movies, that means you have watched 63 movies. So there are 17 movies and 11 books, and you have read 7 books and watched 63 movies. That means there are 8 more movies than books in the 'crazy silly school' series. --> 8.0 | | | Prediction | 8.0 | | | Completion | | | | (a) SVAMP | | | |--------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------| | Danny collects bottle caps and wrappers. He found 65 wrappers and 5 bottle caps at the park. Now | | | | Question | he has 31 bottle caps and 9 wrappers in his collection. How many more wrappers than bottle caps did danny find at the park? ### | | | Answer | 60.0 First, Danny finds 65 wrappers and 5 bottle caps at the park. Then, he has 31 bottle caps and 9 wrappers in his collection. Now, we want to know how many more wrappers than bottle caps Danny found at the park. We can find this out by subtracting the number of bottle caps from the number of wrappers: 9 - 5 = 4 So, Danny found 4 more wrappers than bottle caps at the park. --> 4.0 | | | Prediction | 4.0 | | | Completion | (b) SVAMP | | | Next on his checklist is wax to stick the feathers together and colors to paint them. He needs a total | | | | Question | of 57 g of colors to paint them. He needs 22 g of wax more. If the feathers require a total of 353 g of wax How many grams of wax does he already have? ### | | | Answer | 331.0 First, he needs 57 grams of colors to paint the feathers. Second, he needs 22 grams of wax more. Third, the feathers require a total of 353 grams of wax. So, he already has 353 grams of wax. --> | | | Completion | 353.0 | | | Prediction | 353.0 | (c) CommonsenseQA | | Question | You'll find a landing at the top of what? Answer choices: (A) ocean, (B) apartment building, (C) stairwell, (D) airport, (E) room. ### | | | Answer | C A landing is a place where something can touch the ground. A landing at the top of an ocean would be a very strange place to find, since there would be no ground to touch. A landing at the top of an apartment building would be a very strange place to find, since there would be no ground to touch. A landing at the top of a stairwell would be a very strange place to find, since there would be no ground to touch. A landing at the top of an airport would be a very strange place to find, since there would be no ground to | | | Prediction | A | | | Completion | (d) SVAMP | | | Question | 18 red peaches, 14 yellow peaches and 17 green peaches are in the basket. How many red and yellow peaches are in the basket?### | | | Answer | 32.0 There are a total of 48 peaches in the basket. There are 18 red peaches, 14 yellow peaches, and 17 green peaches. This means that there are a total of 33 peaches in the basket. There are 33 peaches in the basket. This means that there are 17 red peaches and 14 yellow peaches in the basket. --> 17.0 | | | Prediction | 17.0 | | | Completion | | | | (a) Date Understanding | | | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------| | Jane was born on the last day of Feburary in 2000. Today is her 16-year-old birthday. What is the | | | | Question | date a month ago in MM/DD/YYYY? Which choice is true? Answer choices: (A) 01/30/2016, (B) 11/08/2015, (C) 01/29/2016, (D) 07/29/2015, (E) 05/04/2016, (F) 02/02/2016. ### | | | Answer | C | | | Completion | Jane's 16th birthday is on 02/28/2016. A month ago from that would be 01/28/2016. Therefore, the answer is (C). --> C | | | Prediction | C | (b) Last Letter Concatenation | | Question | Take the last letters of each words in "Lino Mariel Aditya Elisabeth" and concatenate them. ### | | | Answer | olah | | | Completion | The last letter of "Lino" is "o". The last letter of "Mariel" is "l". The last letter of "Aditya" is "a". The last letter of "Elisabeth" is "h". So the final answer is "olah". –> olah | | | Prediction | olah | (c) Coin Flip | | Question | A coin is heads up. Cinthia does not flip the coin. Lloyd does not flip the coin. Jacqueline flips the coin. Jc flips the coin. Is the coin still heads up? Note that "flip" here means "reverse". ### | | | Answer | Yes Cinthia does not flip the coin, so the coin remains heads up. Lloyd does not flip the coin, so the | | | Completion | coin remains heads up. Jacqueline flips the coin, so the coin is now tails up. Jc flips the coin, so the coin is now heads up again. –> yes | | | Prediction | Yes | (d) Tracking Shuffled Objects | | Alice, Bob, and Claire are playing a game. At the start of the game, they are each holding a ball: Alice has a blue ball, Bob has a yellow ball, and Claire has a green ball. As the game progresses, pairs of players trade balls. First, Bob and Claire swap balls. Then, Bob and Alice swap balls. Finally, Bob and Claire swap balls. At the end of the game, Bob has the Which choice is true? Answer choices: (A) blue ball, (B) yellow ball, (C) green ball. ### | | | | Answer | B | | | Question | First, Bob and Claire swap balls. This means that Bob now has the green ball and Claire has the yellow ball. Then, Bob and Alice swap balls. This means that Bob now has the blue ball and Alice has the green ball. Finally, Bob and Claire swap balls again. This means that Bob now has the yellow ball and Claire has the green ball. At the end of the game, Bob has the yellow ball. –> B | | | Prediction | B | | | Completion | (e) Date Understanding | | | Tomorrow is 11/12/2019. What is the date one year ago from today in MM/DD/YYYY? Which | | | | Question | choice is true? Answer choices: (A) 11/02/2018, (B) 11/04/2018, (C) 09/04/2018, (D) 08/25/2018, (E) 11/11/2018. ### | | | Answer | E | | | Completion | One year ago from today would be 11/11/2018. --> E | | | Prediction | E | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✓ A2. Did you discuss any potential risks of your work? Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✓ A4. Have you used AI writing assistants when working on this paper? We used ChatGPT sparingly for paraphrasing. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We create code for our experiments and provide a link to the anonymized code in a footnote within the abstract. We use 12 existing datasets which are listed in Section 4 and Appendix B. ✓ B1. Did you cite the creators of artifacts you used? Appendix B ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We include an MIT license notice in our code repository. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix B B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3, Appendix B ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4. We use default hyperparameters. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4, Appendix A C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhao-etal-2023-abductive
Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations
https://aclanthology.org/2023.acl-long.831
Abductive reasoning aims to find plausible explanations for an event. This style of reasoning is critical for commonsense tasks where there are often multiple plausible explanations. Existing approaches for abductive reasoning in natural language processing (NLP) often rely on manually generated annotations for supervision; however, such annotations can be subjective and biased. Instead of using direct supervision, this work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context. The method uses posterior regularization to enforce a mutual exclusion constraint, encouraging the model to learn the distinction between fluent explanations and plausible ones. We evaluate our approach on a diverse set of abductive reasoning datasets; experimental results show that our approach outperforms or is comparable to directly applying pretrained language models in a zero-shot manner and other knowledge-augmented zero-shot methods.
# Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations Wenting Zhao and **Justin T. Chiu** and **Claire Cardie** and **Alexander M. Rush** Department of Computer Science Cornell University {wz346,jtc257,ctc9,arush}@cornell.edu ## Abstract Abductive reasoning aims to find *plausible* explanations for an event. This style of reasoning is critical for commonsense tasks where there are often multiple plausible explanations. Existing approaches for abductive reasoning in natural language processing (NLP) often rely on manually generated annotations for supervision; however, such annotations can be subjective and biased. Instead of using direct supervision, this work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context. The method uses posterior regularization to enforce a mutual exclusion constraint, encouraging the model to learn the distinction between fluent explanations and plausible ones. We evaluate our approach on a diverse set of abductive reasoning datasets; experimental results show that our approach outperforms or is comparable to directly applying pretrained language models in a zeroshot manner and other knowledge-augmented zero-shot methods. ## 1 Introduction Abductive reasoning aims to find *plausible* explanations for an event (Paul, 1993). Unlike deduction, which draws a firm conclusion from a set of premises, abduction requires reasoning from an outcome to plausible explanations. Fig. 1 (top) demonstrates the distinction: given only the context x, both the blue and the red sentences describe possible subsequent events; however, upon seeing the outcome y only one of the two is a plausible explanation (although there may be others). Humans apply abduction in everyday situations (Andersen, 1973) such as reading-between-the-lines (Charniak and Shimony, 1990) and analyzing causes and effects (Thagard and Shelley, 1997; Pearl and Mackenzie, 2018). Learning to perform abduction is thus an important step towards building humanlike machines with commonsense knowledge. ![0_image_0.png](0_image_0.png) Figure 1: **Top:** An abductive reasoning example consisting of a context x, an outcome y, and two candidate explanations. The goal is to identify the plausible explanation given x and y. To predict an explanation, one can apply a pretrained language model (shown as LM) to score y given x and an explanation, and then compute the posterior probability for the explanation. Bottom: Using a LM without fine-tuning (Zero-shot) leads to poor performance, whereas a LM fine-tuned via max-marginal likelihood (Tuned) fails to distinguish the two explanations. LiPoR is trained to partition the explanations in a mutually exclusive manner. Abductive reasoning has been extensively studied in the setting where annotations are available (Storks et al., 2019). However, because determining whether an explanation is plausible is a subjective and noisy process, annotating plausibility of explanations can be problematic for commonsense reasoning problems. Zhang et al. (2020) show that, in a dataset verification step where five annotators are asked to determine whether a handwritten explanation is plausible, they disagree with each other on 62.34% of 1365 explanations. This subjectivity thus introduces annotator-specific bias as has been seen in related tasks (Elazar et al., 2021; 14883 Geva et al., 2019). The potential bias in plausibility annotation motivates the study of learning to perform abductive reasoning without plausibility annotations. Thus, we consider the setting where the context x and outcome y are observed, and models must learn to identify plausible explanations out of a given set of candidate explanations, without direct supervision over plausibility. Rule-based methods use formal logic to reason about explanations (Paul, 1993); however, their limited coverage prevents them from scaling to the full complexity of natural language. Recently, pretrained language models, which have achieved remarkable performance on a range of NLP tasks (Li et al., 2020; Wei et al., 2022a), hold the potential for zero-shot abductive reasoning. Specifically, Bhagavatula et al. (2019) directly estimate the probability of an explanation for an outcome through Bayes' Rule (*Zero-shot* in Fig. 1). In practice, however, this direct approach can often lead to performance that is only slightly better than random guessing (Zhang et al., 2020; Zhou et al., 2021b). To avoid these issues, we reduce abductive reasoning down to a single constraint - an explanation must be plausible or *implausible*. This restriction, argued by Gordon and Hobbs (2017), enforces that explanations are mutually exclusive; that is, one explanation being plausible automatically rules out some other explanations. We introduce Likelihood learning with Posterior Regularization (LiPoR), an approach to perform abductive reasoning that only leverages mutual exclusivity of explanations and does not rely on plausibility annotations. Specifically, we maximize the marginal likelihood of the outcome given the context and a set of explanations (*Tuned* in Fig 1), then use posterior regularization to enforce mutual exclusion between plausible and implausible explanations (*LiPoR* in Fig 1). We show how to impose this relation with a simple distributional constraint on the posterior of the model. We empirically evaluate LiPoR on a diverse set of abductive reasoning datasets. Specifically, we consider four datasets under the abductive reasoning framework: αNLI (Bhagavatula et al., 2019), Sen-Making (Wang et al., 2019), δ-NLI (Rudinger et al., 2020), and WinoWhy (Zhang et al., 2020). Results show that LiPoR consistently outperforms pretrained language models directly applied in a zero-shot manner and is comparable to different variants of a state-of-the-art knowledge-augmented zero-shot method (Ma et al., 2021). As humanwritten explanation candidates are not always available during fine-tuning, we further evaluate LiPoR on the explanation candidates generated via prompting (Brown et al., 2020). We show that, even though automatically generated explanations are noisy, LiPoR can still leverage them and outperform strong zero-shot models including GPT3. ## 2 Related Work Zero-shot commonsense reasoning. We categorize zero-shot approaches for commonsense reasoning into two groups. The first group uses pretrained language models as a source of world knowledge. Shwartz et al. (2020); Zhou et al. (2021a) query the language models with information seeking questions to identify background knowledge relevant to specific examples, and the answers returned by the models are later used as additional information for producing the final outputs. Dou and Peng (2022) convert multiple-choice QA to cloze-style sentences and have the language models score different answers. Qin et al. (2020) proposed a decoding algorithm that generates free-form explanations by considering the future contexts through backpropagation. Our approach also uses pretrained language models as a source of knowledge, but we perform additional maximum likelihood finetuning to fit the abductive task data. The second group leverages external knowledge bases (KBs). Bosselut et al. (2021) leverage COMET (Bosselut et al., 2019), a dynamic knowledge graph, to generate a chain of commonsense inferences based on contexts of QA examples, which can be treated as explanations. Banerjee and Baral (2020); Ma et al. (2021) pretrain language models on artificial question answering (QA) datasets, created from knowledge graphs; a system trained on such datasets can directly perform zero-shot QA. Huang et al. (2021) formulate multiple-choice QA as natural language inference (NLI) and leverage both existing NLI datasets and KBs to identify answer choices in a zero-shot manner. Relation to deductive reasoning. Both abduction and deduction have intermediate explanations. Abductive reasoning infers the most likely explanation from outcomes. In contrast, deductive reasoning infers a conclusion given a complete set of premises. However, outcomes are often not a direct result of premises but come from a chain of reasoning over intermediate explanations. Identifying and providing the correct chain of reasoning is crucial to building trustworthy systems. Within the realm of deduction there are several different approaches that utilize neural models. Bostrom et al. (2021) develop a pipeline to automatically construct training examples from Wikipedia, so that a system trained on such data is able to generate deductive inferences from natural language inputs without direct human supervision. Arabshahi et al. (2021) present a neuro-symbolic theorem prover that extracts intermediate reasoning steps for understanding conversations. Rajani et al. (2019); Tafjord et al. (2021); Nye et al. (2022); Wei et al. (2022b) collect human annotated explanations for training interpretable systems which first generate intermediate explanations and then produce the final task outputs. Explanations as latent variables. Modeling intermediate explanations as latent variables is a common approach, although training and inference details differ. Here we consider representative works in NLP. Zhou et al. (2020) apply a latent variable model to language understanding and train the model with variational expectation maximization. Their method can generate free-form explanations but requires a small set of labeled examples for supervision. Zhou et al. (2021b) apply such a model to probe dialogue generation in a zero-shot manner. Vig et al. (2020) apply a latent variable model to analyze gender bias in large pretrained language models by viewing the behaviors of neurons as unobserved explanations. Lei et al. (2016); Vafa et al. (2021) apply such a model to identify rationales for sequence classification/generation, where rationales are a minimal subset of inputs or previous words that can lead to the same predictions. LiPoR is a training scheme developed for learning such latent-variable models for abductive reasoning, which has a unique challenge of identifying multiple plausible explanations. ## 3 Abductive Reasoning We consider four datasets that test abductive reasoning skills. While abduction can be difficult to pinpoint, we select datasets that obey the following criteria: there is a need for differentiating plausible explanations from implausible explanations, there is an observed outcome, and the outcome depends on intermediate explanations. Based on these criteria, we use αNLI (Bhagavatula et al., 2019), SenMaking (Wang et al., 2019), δ-NLI (Rudinger et al., | x: it was a very hot summer day | | |-------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | αNLI | z: {he decided to run in the heat, he drank a glass of ice cold water} y: he felt much better | | Sen-Making | z: {a restaurant does not have doctors or medical treatment, a restaurant is usually too noisy for a patient, there are different types of restaurants in the city} y: it is not true that he was sent to a restaurant for treatment x: four people and a child walking in the street | | δ-NLI | z: {people from all over the world are gathered in the area, the people buy cotton candy from a booth, the family is the only humans in the area, the family is walking their dog} y: the family is enjoying the world's fair x: the fish ate the worm, it was hungry z: {hungry staff tend to eat, worm is | | WinoWhy | one being eaten, the worm is a common name for a variety of fish | | y: therefore, it refers to the fish | | 2020), and WinoWhy (Zhang et al., 2020) as our target datasets. To convert each to the abduction format, we first identify a context x, which sets a scope for candidate explanations Z, as well as an outcome y. The outcome could either be an event caused by z or a conclusion reached by z. Importantly, we differentiate explanation candidates Z as ones that are consistent with x, from plausible explanations that are consistent with both x and y. A central assumption is that training abductive reasoning systems with the candidate set introduces less noise and subjectivity than directly supervising the systems with plausibility annotations. Example conversions of each dataset are shown in Table 1. Because αNLI is designed as an abduction task, the conversion is straightforward. SenMaking is a benchmark that tests if a system can identify the reason why a statement is against common sense. In this case, a context is not required. We turn the nonsensical statement into a negative sentence, which becomes y. Then the original answer choices become z. δ-NLI is a defeasible inference task, which requires deciding whether new evidence has strengthened or weakened the original hypothesis. δ-NLI is made of extensions to three existing inference datasets: SNLI (Bowman et al., 2015), ATOMIC (Sap et al., 2019), and SOCIALCHEM-101 (Forbes et al., 2020); each of them will be referred to as δ-N for brevity, where N can be replaced by a dataset name. We map premises and hypotheses to contexts and outcomes, respectively. We then turn updates that strengthen a hypothesis into a plausible explanation and updates that weaken a hypothesis into an implausible explanation. WinoWhy is a follow-up task for Winograd Schema Challenge (WSC) (Levesque et al., 2012): Given the pronoun coreference resolution question and the answer from a WSC example, WinoWhy seeks to select all plausible reasons for why the pronoun is resolved to the answer. We thus turn the question of the WSC example into a context x and the answer into a declarative sentence y. Notably these datasets differ in the number of plausible explanations, which we denote by a value m ≥ 1. In αNLI and Sen-Making, m is fixed to 1 for all examples. However, in δ-NLI and WinoWhy, m is variable, and we assume that half of explanations are plausible. However these explanations are discrete; an explanation is either plausible or implausible. A successful unsupervised system should assign high probabilities to plausible explanations and low probabilities to implausible explanations. This discreteness is encoded into some of the tasks directly. For example, Bhagavatula et al. (2019); Zhang et al. (2020) instruct the annotators to make minimal possible changes to plausible explanations to produce implausible explanations, so that a system would fail if it predicts explanations based on superficial lexical features. ## 4 Lipor We now describe LiPoR, a method to adapt pretrained language models to incorporate mutual exclusivity between explanations. As we have seen, an abductive reasoning example consists of a context x, an observed outcome y, and an unobserved explanation z ∈ Z, which, together with x, has led to y. Importantly, the candidate set of explanations Z is given during training but the plausibility of each explanation is not.1 The goal of abductive reasoning is to produce a distribution over explanations z, defined by p(z|*x, y*). We are interested in modeling the joint distribution p(*y, z*|x), which is factored as follows: $$p(y,z|x)=p(y|x,z)p(z|x)$$ $$(1)$$ Given Eq 1, the posterior distribution can be obtained via the Bayes' rule, $$p(z|x,y)={\frac{p(y|z,x)p(z|x)}{p(y|x)}}.$$ $$\mathbf{(2)}$$ . (2) Because x itself does not provide further information for z, we set p(z|x) to be a uniform distribution. Therefore, we only parameterize p(y|*x, z*). ## 4.1 Baseline: Fine-Tuning Via Max-Marginal Likelihood We note that any off-the-shelf pretrained language model can be applied to evaluate p(z|*x, y*) for an abductive reasoning task in a zero-shot fashion. To adapt the pretrained model to a specific task distribution without plausibility annotations, we maximize the following marginal likelihood function L(·) with respect to parameters θ for all examples: $${\mathcal{L}}(\theta)=\log\sum_{z\in{\mathcal{Z}}}p_{\theta}(y|x,z)p(z|x).$$ $$\mathbf{\Pi}^{0}$$ Maximizing the marginal likelihood encourages the model to prefer explanations that assign the outcome high probability. Mechanically, the marginal likelihood requires computing the probability of the outcome given every explanation in the set Z. Training then gives credit (gradient) to explanations that assign high probability to the outcome, encouraging the model to prefer explanations that explain the outcome. We parameterize p(y|*x, z*) by θ, a language model, that takes "x [SEP] z" as input and returns a probability distribution over y. By optimizing this objective, we find θ under which p(y|x) has a high likelihood, thus shifting the pretrained model to the new task-specific distribution. Furthermore, this objective does not require plausibility annotations for explanations. ## 4.2 Incorporating Mutual Exclusivity The goal of abductive reasoning is to separate out plausible and implausible explanations. However, we note that L(θ) itself only maximizes p(y|x). In practice, this does not require the model to learn any distinctions between explanations, and we observe that in practice the approach learns ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) to treat them all as plausible. The blue line in Fig 2 shows the entropy of p(z|*x, y*) on the αNLI dataset when fine-tuning a model with L(θ). We note that a uniform distribution of two categories has approximately an entropy of 0.6931, the upper bound on the entropy of p(z|*x, y*) for the αNLI examples. Fine-tuning via max-marginal likelihood alone yields an entropy close to the upper bound, meaning the model believes that different z explain y equally well. To impose the mutual exclusivity among explanations, we apply posterior regularization (PR), which places soft constraints on posterior distributions (Ganchev et al., 2010). The posterior regularized likelihood shows as follows: $${\mathcal{L}}_{P R}(\theta)={\mathcal{L}}(\theta)-\lambda\Omega(p_{\theta}(z|x,y)).\quad\quad(4)$$ To enforce a model to prefer specific explanations over the others, we choose Ω : R|Z| → R to be the following function, proposed in Chen et al. (2020): Ω(p(z|x, y)) = max(H(pθ(z|x, y)), ln(m)) (5) H(·) is the entropy function. In Fig. 3, we plot Ω(·) when |Z| = 3 and m = 2, which shows that distributions with a non-zero probability for the third explanation have larger Ω values. Ω(·) thus penalizes a posterior distribution that has an entropy higher than ln(m), which sets an upper bound at the entropy of a distribution whose probability mass collapses to m categories. When m = 1, Ω(·) reduces to $$\Omega(p(z|x,y))=H(p_{\theta}(z|x,y)).$$ Ω(p(z|*x, y*)) = H(pθ(z|*x, y*)). (6) The orange line in Fig. 2 shows that incorporating Ω(·) enables the model to differentiate between different explanations. Notice that, except for m = 1, there is no guarantee that Ω(·) penalizes all distributions that have probability mass in more than m categories, but we will empirically justify that Ω(·) eliminates undesired posterior distributions. ## 5 Experimental Setup Metrics. Accuracy is used to evaluate a system's predictive power. For datasets with m = 1, accuracy is computed with regards to each example (i.e., whether the plausible explanation has been identified for each example). Otherwise, to stay consistent with evaluation in prior works, we compute accuracy with regards to each explanation (i.e., whether the plausibility of each explanation is correctly predicted). Therefore, more weight will be given to the instances that have larger |Z| (within a single dataset, the variance of |Z| for different examples is very small). Baselines. We consider three groups of baselines: (1) methods that do not rely on plausibility annotations (shown as w/o annotations), (2) pretrained language models fine-tuned with plausibility annotations (shown as w/ annotations), and (3) methods that incorporate external knowledge bases (shown as "w/ KBs"). For (1), we first consider previous best published results achieved by a RoBERTalarge model for αNLI (Ma et al., 2021), by a BERT model for Sen-Making (Wang et al., 2019), and a GPT-small model for WinoWhy (Zhang et al., 2020) (all abbreviated as Prev. Best). Additionally, we use GPT-Neo (Black et al., 2021), GPT3 (text-davinci-002) (Brown et al., 2020), and the | αNLI | Sen-Making | δ-ATOMIC | δ-SNLI | δ-SOCIAL | WinoWhy | | | |-------------------|---------------|------------|----------|------------|-----------|-------|-------| | Previous Best | 65.50 | 45.60 | - | - | - | 56.37 | | | ZS GPT-NEO | 57.47 | 29.80 | 47.53 | 45.38 | 51.69 | 59.13 | | | w/o annotations | ZS GPT3 | 67.54 | 43.00 | 50.73 | 49.69 | 49.22 | 50.99 | | ZS BART | 50.96 | 47.80 | 59.05 | 55.12 | 52.58 | 45.69 | | | Tuned BART | 57.40 | 63.50 | 67.49 | 64.76 | 53.88 | 55.32 | | | LiPoR | 71.56 | 65.50 | 76.82 | 65.26 | 57.19 | 69.88 | | | w/ annotations | RoBERTa | 85.60 | 93.10 | 78.30 | 81.60 | 86.20 | 75.04 | | KDDC-ATOMIC (N) | 70.80 | 51.00 | 75.90 | 69.83 | 64.49 | 42.44 | | | KDDC-CWWV (N) | 70.00 | 45.70 | 62.48 | 63.24 | 62.90 | 40.45 | | | w/ KB | KDDC-CSKG (N) | 70.50 | 49.60 | 72.20 | 69.93 | 63.80 | 44.05 | | QNLI-ATOMIC (N) | - | - | - | - | - | 73.47 | | | Previous Best (Y) | 87.30 | 95.00 | - | - | - | 87.55 | | BART-large model (Lewis et al., 2020) to directly score x [SEP] z [SEP] y for each z in a zero-shot (ZS) manner. We threshold the outputs of these models in the same way as done in our method to choose the plausible explanations. Finally, we consider BART fine-tuned with Eq. 3 (Tuned BART) as a baseline to better understand the role of posterior regularization. For (2), a RoBERTa-large model (Liu et al., 2019) is fine-tuned with plausibility annotations (abbreviated as RoBERTa). For this baseline, we refer to the best result in the literature: Ma et al. (2021) for αNLI, Wang et al. (2020) for Sen-Making, Rudinger et al. (2020) for δ-NLI, and Zhang et al. (2020) for WinoWhy. For (3), we run different variants of Knowledge-driven Data Construction (abbreviated as KDDC) (Ma et al., 2021), a method that leverages external knolwedge but not plausibility annotations. We note that KDDC is designed to predict a single correct answer with argmax. To handle the datasets that have more than one correct answers, we modify KDDC to choose the answers that have scores higher than the median. We also include Knowledge-Enabled Natural Language Inference (Huang et al., 2021) that is first supervised on QNLI (Wang et al., 2018) and then incorporate ATOMIC at inference time for WinoWhy (abbreviated as QNLI-ATOMIC). For models that use both external knowledge and plausibility annotations, we take RAINBOW (Raina et al., 2005) for αNLI, ECNU-SenseMaker (Zhao et al., 2020) for Sen-Making, and RoBERTa-Grande (Zhang et al., 2020) for WinoWhy. | Prompt for plausible explanations: Provide a brief explanation for why it is not sensible that y Prompt for implausible explanations: Provide a brief explanation for why y y: He poured orange juice on his cereal. In: Provide a brief explanation for why it is not sensible that he poured orange juice on his cereal. Out: It is not sensible because orange juice does not go well with cereal. In: Provide a brief explanation for why he poured orange juice on his cereal Out: He wanted to eat a healthy breakfast. | |---| Figure 4: Prompts for producing competing explanations, followed by an example generation. Implementation & Hyperparameters. We choose a BART-large model (Lewis et al., 2020) to be θ. We train the model with the Hugging Face Transformers framework (Wolf et al., 2020). We perform grid search with learning rates {1e-6, 3e-6, 5e-6, 1e-5}, batch sizes {2,4,8,16}, and λ {1e-2,1e-1,1,10}. We train 50 epochs for WinoWhy and 10 epochs for all other datasets. We perform evaluation on dev sets every 500 steps. We choose the checkpoint whose posterior distributions have the lowest average entropy on dev sets to run tests if the entropy starts to diverge during training. If the entropy converges, we choose the checkpoint at the end of training. Because there are not train/dev/test sets for WinoWhy, to perform a direct comparison with other methods, we do not split the dataset ourselves and simply train models on all of the data and choose the checkpoint based on loss values. Automatic Candidate Generation LiPoR assumes access to a candidate explanation set Z during training with human-written explanations. However, we may also want to use the model in domains without a candidate set. We consider a variant that uses a noisy automatic candidate generation process. In this setting, set Z˜ will contain a set of explanations with no guarantee that any are plausible. To generate Z˜ we utilize language model prompting with GPT3 (text-davinci-002) (Brown et al., 2020). Using prompt templates inspired by the instructions given to human annotators, we have the model generate explanations. We show example prompts for the Sen-Making dataset in Fig. 4. For datasets with fewer than 1000 unique contexts x (i.e., δ-NLI and Winowhy), we generate one plausible explanation and one implausible explanation for every x. For the other datasets, we randomly sample 1000 unique contexts and otherwise stay the same. We release the prompts as well as the generated explanations for every dataset in the supplementary materials. In this setting, LiPoR uses a lower PR penalty λ = 0.1. We additionally consider two more baselines. First, we score the most plausible explanation with the prompt as a prefix (denoted as Prompted GPT3). Secondly, we supervise RoBERTa-large with the generated explanations. ## 6 Results We summarize the results in Table 2. First of all, LiPoR produces the best results compared to all other methods without plausibility annotations, including GPT3 which has many more parameters and is pretrained on more data. We note that LiPoR consistently outperforms Tuned BART, suggesting that posterior regularization plays a positive role in selecting plausible explanations. Compared to knowledge-augmented methods without plausibility annotations, LiPoR is able to produce better results on αNLI, Sen-Making, and δ-ATOMIC. We note that δ-NLI is in part created from knowledge bases, and therefore KDDC-* is particularly good at δ-ATOMIC, δ-SNLI, and δ-SOCIAL, but fail on WinoWhy and Sen-Making. Additionally, QNLI-ATOMIC outperforms LiPoR by 4 points on Winowhy, but this improvement is expected given how much related task data it was pretrained on. Finally, LiPoR still cannot match the performance of RoBERTa trained with plausibility annotations. In Table 4, we show the confusion matrices for comparing among ZS BART, Tuned BART, and LiPoR on the αNLI test set. Tuned BART and LiPoR make the same predictions on a majority of examples, and on the instances they disagree, LiPoR is able to correctly identify plausible explanations on twice as many examples. We also observe a similar trend for ZS BART and Tuned BART. Fine-tuning with Generated Explanations Table 3 compares LiPoR fine-tuned with generated explanation candidates to the best performing methods without plausibility annotations. Even with noisy candidate sets, LiPoR is still able to leverage such data. It outperforms zero-shot GPT3 methods and improves over Prompted GPT3. Additionally, LiPoR is more robust than RoBERTa trained with plausibility annotations when such annotations are noisy. Therefore, even though the generated explanations by themselves correlate weakly with plausibility, they can be used in LiPoR. ## 7 Analysis Preserving Plausible Candidates Models trained to prefer single plausible explanations can become overconfident in their predictions. A major benefit of LiPoR is that it considers multiple plausible candidates. While LiPoR is fine-tuned to favor mutual exclusivity, we find that at test time it remains able to score multiple plausible explanations highly. Table 5 presents two examples in which both explanations are plausible. The RoBERTa model trained with plausibility annotations produces posterior distributions that collapse to one explanation. However, LiPoR can assign significant probability to both explanations. Qualitative Comparison Table 6 presents a number of examples accompanied with the predictions made by fine-tuning via max-marginal likelihood (-PR) and LiPoR (+PR) side by side. The two examples on the top are among the more difficult abduction examples: the first example requires a model to draw a connection between abstract concepts and concrete objects ("what you love" → "taking long warm showers"); the second example requires a model to figure out an inclusion relation (Nepal is a country in Asia). We italicize the words that co-occur across *x, z* and y, and we speculate that fine-tuning chooses the wrong explanations because of lexical overlap shortcuts. LiPoR, however, was able to correctly flip these predictions with | αNLI | Sen-Making | δ-ATOMIC | δ-SNLI | δ-SOCIAL | Winowhy | | |---------------|--------------|------------|----------|------------|-----------|-------| | ZS GPT3 | 67.54 | 43.00 | 50.73 | 49.69 | 49.22 | 50.99 | | Prompted GPT3 | 49.19 | 53.80 | 48.23 | 51.26 | 50.86 | 58.10 | | LiPoR | 57.50 | 61.50 | 67.60 | 64.40 | 55.40 | 58.67 | | RoBERTa (Y) | 53.71 | 61.30 | 62.74 | 57.81 | 51.78 | 42.13 | Table 3: Comparing LiPoR to several baselines on automatically generated explanation candidate sets. (Y) indicates that a method uses plausibility annotations. | Tuned ✓ | Tuned ✗ | LiPoR ✓ | LiPoR ✗ | | | |-----------|-----------|-----------|-----------|------|-----| | ZS ✓ | 1140 | 419 | | | | | ZS ✗ | 618 | 882 | Tuned ✓ | 1449 | 309 | | Tuned ✗ | 767 | 534 | | | | Table 4: **Left:** Comparison between ZS BART and Tuned BART on αNLI. **Right:** Comparison between Tuned BART and LiPoR. {*} ✓ and {*} ✗ denote the number of instances for which plausible explanations are correctly / incorrectly identified by {*}, respectively. Example Y N x: Sally went to Italy in the spring. Sally took a lot of pictures when she went sightseeing. 71.7 50.0 z:Sally took pictures at every place she visited. 28.3 50.0 y: When she got home, Sally showed her pictures to all her friends. x: Mike didn't study for a test. Mike was normally a good student. 100 50.0 z:Everyone in class failed the test except for Mike. 0 50.0 y: The teacher was very disappointed. ? LiPoR assigns close probabilities to the indistinguishably likely explanations, while the supervised model collapses to one of the explanations. Table 5: Comparison between posterior probabilities for each explanation produced by a RoBERTa model trained with plausibility annotations (Y) and LiPoR (N) on individual test examples, respectively. ## High Confidence. The two examples on the bottom are those for which Tuned BART fails to identify the plausible explanation because one explanation is short and the other is long. Again, LiPoR is able to correct these mistakes. Furthermore, the probability produced by LiPoR for each explanation also reflects the model's confidence to a certain degree. In the first example, "we met a golden retriever puppy and he played with us" is a much better explanation than "we were rained on," because one does not need to go to a park to experience rain. As a result, the difference between probabilities for the two explanations is 92.2%. For the second example, "we had an amazing time" could refer to Example -PR +PR x: I love taking long warm showers. Showers make me sleepy. 50.3 6.0 z: **Doing what you love is important.** 49.7 94.0 y: That's why I take two of them every day. x: Neil wanted to see the *mountains* of Asia. Neil booked a tripped online. 47.5 64.0 z: Neil took a trip to see the Rocky *mountains* instead.52.5 36.0 y: Neil loved being so close to the *mountains* in Nepal! - Fine-tuning (-PR) looks at superficial word cooccurrences, but LiPoR (+PR) tries to understand the true context. Example -PR +PR x: We went to the park today. We were rained on! 53.5 3.9 z: **We met a golden retriever puppy and** he played with us. 46.5 96.1 y: I love going to the park! x: Before my lunch time I got a phone call. My best friend wanted to go on a trip. 50.5 40.9 z: **My best friend wanted to try a new** restaurant for lunch. 49.5 59.1 y: We had an amazing time! ![7_image_0.png](7_image_0.png) - LiPoR (+PR) is able to correct the bias towards shorter explanations. Table 6: Comparison between posterior probabilities for each explanation produced by fine-tuning (-PR) and LiPoR (+PR) on individual test examples, respectively. The two tables consist of examples where LiPoR successfully corrects the mistakes made by fine-tuning. The plausible explanation labeled by human annotators are in boldface. both trying out a new restaurant and going on a trip. The phone call was received before lunch time makes the second explanation more likely, but the first explanation can still be what actually happened. As a result, LiPoR assigns 40.9% to the "trip" explanation and 59.1% to the "restaurant" explanation, leading to a smaller gap than that of the first example. ## 8 Conclusion We introduce LiPoR, which fine-tunes pretrained language models on abductive reasoning tasks without plausibility annotations. Results shows that LiPoR achieves comparable performance to that of knowledge-augmented zero-shot methods. ## Ethical Statement LiPoR shares similar concerns with other contemporary approaches for performing commonsense reasoning. Specifically, because LiPoR exploits the knowledge already present in pretrained language models, it can potentially reinforce existing harmful biases in such models. ## Acknowledgement AR and JC are supported by a Sloan Fellowship, NSF CAREER \#2037519, and NSF \#1901030. CC and WZ are supported by NSF \#1815455. ## References Henning Andersen. 1973. Abductive and deductive change. *Language*, pages 765–793. Forough Arabshahi, Jennifer Lee, Mikayla Gawarecki, Kathryn Mazaitis, Amos Azaria, and Tom Mitchell. 2021. Conversational neuro-symbolic commonsense reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 4902–4911. Pratyay Banerjee and Chitta Baral. 2020. Selfsupervised knowledge triplet learning for zero-shot question answering. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 151–162. Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. In International Conference on Learning Representations. Sid Black, Gao Leo, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with MeshTensorflow. If you use this software, please cite it using these metadata. Antoine Bosselut, Ronan Le Bras, and Yejin Choi. 2021. Dynamic neuro-symbolic knowledge graph construction for zero-shot commonsense question answering. In *Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI)*. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779. Kaj Bostrom, Xinyu Zhao, Swarat Chaudhuri, and Greg Durrett. 2021. Flexible generation of natural language deductions. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pages 6266–6278. Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632– 642. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Eugene Charniak and Solomon E Shimony. 1990. Probabilistic semantics for cost based abduction. In *Proceedings of the eighth National conference on Artificial intelligence-Volume 1*, pages 106–111. Di Chen, Yiwei Bai, Wenting Zhao, Sebastian Ament, John Gregoire, and Carla Gomes. 2020. Deep reasoning networks for unsupervised pattern de-mixing with constraint reasoning. In *International Conference on* Machine Learning, pages 1500–1509. PMLR. Zi-Yi Dou and Nanyun Peng. 2022. Zero-shot commonsense question answering with cloze translation and consistency optimization. In The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI). Yanai Elazar, Hongming Zhang, Yoav Goldberg, and Dan Roth. 2021. Back to square one: Artifact detection, training and commonsense disentanglement in the winograd schema. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10486–10500. Maxwell Forbes, Jena D Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 653–670. Kuzman Ganchev, Joao Graça, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. *The Journal of* Machine Learning Research, 11:2001–2049. Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 1161–1166. Andrew S. Gordon and Jerry R. Hobbs. 2017. *Explanation*, page 299–305. Cambridge University Press. Canming Huang, Weinan He, and Yongmei Liu. 2021. Improving unsupervised commonsense reasoning using knowledge-enabled natural language inference. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 4875–4885. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In *Thirteenth international conference on the principles of* knowledge representation and reasoning. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7871–7880. Jingjing Li, Zichao Li, Lili Mou, Xin Jiang, Michael Lyu, and Irwin King. 2020. Unsupervised text generation by learning from search. Advances in Neural Information Processing Systems, 33:10820–10831. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Kaixin Ma, Filip Ilievski, Jonathan Francis, Yonatan Bisk, Eric Nyberg, and Alessandro Oltramari. 2021. Knowledge-driven data construction for zero-shot evaluation in commonsense question answering. In 35th AAAI Conference on Artificial Intelligence. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2022. Show your work: Scratchpads for intermediate computation with language models. In *Deep* Learning for Code Workshop. Gabriele Paul. 1993. Approaches to abductive reasoning: an overview. *Artificial intelligence review*, 7(2):109–152. Judea Pearl and Dana Mackenzie. 2018. The book of why: the new science of cause and effect. Basic books. Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena D Hwang, Ronan Le Bras, Antoine Bosselut, and Yejin Choi. 2020. Back to the future: Unsupervised backprop-based decoding for counterfactual and abductive commonsense reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 794–805. Rajat Raina, Andrew Y Ng, and Christopher D Manning. 2005. Robust textual inference via learning and abductive reasoning. In *AAAI*, pages 1099–1105. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4932–4942. Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A. Smith, and Yejin Choi. 2020. Thinking like a skeptic: Defeasible inference in natural language. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4661–4675, Online. Association for Computational Linguistics. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for ifthen reasoning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pages 3027–3035. Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4615–4629. Shane Storks, Qiaozi Gao, and Joyce Y Chai. 2019. Recent advances in natural language inference: A survey of benchmarks, resources, and approaches. arXiv preprint arXiv:1904.01172. Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2021. Proofwriter: Generating implications, proofs, and abductive statements over natural language. In *Findings of the Association for Computational Linguistics:* ACL-IJCNLP 2021, pages 3621–3634. Paul Thagard and Cameron Shelley. 1997. Abductive reasoning: Logic, visual thinking, and coherence. In *Logic and scientific methods*, pages 413– 427. Springer. Keyon Vafa, Yuntian Deng, David Blei, and Alexander M Rush. 2021. Rationales for sequential predictions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10314–10332. Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. 2020. Investigating gender bias in language models using causal mediation analysis. In Advances in Neural Information Processing Systems, volume 33, pages 12388–12401. Curran Associates, Inc. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of* the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355. Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, and Yue Zhang. 2020. SemEval2020 task 4: Commonsense validation and explanation. In *Proceedings of The 14th International* Workshop on Semantic Evaluation. Association for Computational Linguistics. Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, and Tian Gao. 2019. Does it make sense? and why? a pilot study for sense making and explanation. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 4020–4026. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022a. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Hongming Zhang, Xinran Zhao, and Yangqiu Song. 2020. Winowhy: A deep diagnosis of essential commonsense knowledge for answering winograd schema challenge. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 5736–5745. Qian Zhao, Siyu Tao, Jie Zhou, Linlin Wang, Xin Lin, and Liang He. 2020. ECNU-SenseMaker at SemEval-2020 task 4: Leveraging heterogeneous knowledge resources for commonsense validation and explanation. In *Proceedings of the Fourteenth* Workshop on Semantic Evaluation, pages 401–410, Barcelona (online). International Committee for Computational Linguistics. Pei Zhou, Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, and Dilek Hakkani-Tur. 2021a. Think before you speak: Learning to generate implicit knowledge for response generation by self-talk. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, pages 251–253, Online. Association for Computational Linguistics. Pei Zhou, Pegah Jandaghi, Hyundong Cho, Bill Yuchen Lin, Jay Pujara, and Xiang Ren. 2021b. Probing commonsense explanation in dialogue response generation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 4132–4146, Punta Cana, Dominican Republic. Association for Computational Linguistics. Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiaodan Liang, Maosong Sun, Chenyan Xiong, and Jian Tang. 2020. Towards interpretable natural language understanding with explanations as latent variables. Advances in Neural Information Processing Systems, 33:6803–6814. ## A Additional Experiments How do models with different architectures and sizes perform at abductive reasoning? Table 7 summarizes the results on the αNLI dataset with different model architectures and model sizes, which are obtained from the same grid search described in Sec. 5. Within the same architecture, models with more parameters are better at abductive reasoning. When comparing between BART and T5, BART can produce consistent better results at each size. Does a learnable p(z|x) **model lead to better** performance? Here we test if a learnable p(z|x) model instead of a uniform p(z|x) model leads to better performance. We should note that a learnable p(z|x) model may result in reasoning shortcuts: because if the signal from p(z|x) is too strong, then this term will dominate Eq. 2; thus, p(z|*x, y*) computed in this way is no longer a result of thinking backwards. We parametrize the learnable p(z|x) model by a BART-large model, which takes x as an input and returns a probability distribution over all sequences. Table 8 shows the comparison between the two p(z|x) models on the αNLI dataset. Although the uniform p(z|x) model outperforms the learnable p(z|x) model, the difference between them is not significant. How do methods without plausibility annotations perform in presence of distractors? In order to test the robustness of different methods without plausibility annotations, we evaluate them on two types of distractors added to the αNLI test set. The first type of distractor randomly samples a third explanation from another example, and the second type of distractor constructs a third explanation with randomly sampled words from the vocabulary of the αNLI dataset with a length that falls in-between the lengths of the two original explanations. Table 8 compares the results with and without the distractors. Notice that after adding a third option, the chance of getting the plausible explanation with a random guess is 13 . LiPoR's accuracy drops significantly with the presence of distrators, while the relative decrease for GPT NEO is smaller. Furthermore, the zero-shot results (i.e., ZS and GPT NEO) suggest that it is more difficult to identify the first type of distractor than the second one. Our interpretation for a worse performing LiPoR's on distractors is that the distrators break our assumption: p(z|x) is no longer uniform, and | BART | T5 | | |--------|-------|-------| | small | - | 54.14 | | base | 60.08 | 57.31 | | large | 71.56 | 65.48 | Table 7: Comparison between different model architectures and model sizes on the αNLI dataset. | Original | +Rand. E's | +Rand. W's | | |-------------------------|--------------|--------------|-------| | GPT NEO | 57.47 | 51.12 | 57.37 | | ZS | 50.96 | 34.39 | 38.22 | | LL | 57.40 | 53.48 | 53.52 | | LiPoR w/ unif. p(z|x) | 71.56 | 58.58 | 57.40 | | LiPoR w/ learned p(z|x) | 69.92 | 59.14 | 59.24 | the probability of a distracting explanation is independent of the probability of x. Therefore, the original factorization in Eq. 1 no longer applies. To build an unsupervised system that is robust to distractors requires incorporating the new assumptions in the data generating process. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
wang-etal-2023-pesco
{PESCO}: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification
https://aclanthology.org/2023.acl-long.832
We present PESCO, a novel contrastive learning framework that substantially improves the performance of zero-shot text classification. We formulate text classification as a neural text retrieval problem where each document is treated as a query, and the system learns the mapping from each query to the relevant class labels by (1) adding prompts to enhance label retrieval, and (2) using retrieved labels to enrich the training set in a self-training loop of contrastive learning. PESCO achieves state-of-the-art performance on four benchmark text classification datasets. On DBpedia, we achieve 98.5{\%} accuracy without any labeled data, which is close to the fully-supervised result. Extensive experiments and analyses show all the components of PESCO are necessary for improving the performance of zero-shot text classification.
# Pesco: Prompt-Enhanced Self Contrastive Learning For Zero-Shot Text Classification Yau-Shian Wang Ta-Chung Chi Ruohong Zhang Yiming Yang Carnegie Mellon University king6101@gmail.com {tachungc,ruohongz}@andrew.cmu.edu yiming@cs.cmu.edu ## Abstract We present PESCO, a novel contrastive learning framework that substantially improves the performance of zero-shot text classification. We formulate text classification as a neural text matching problem where each document is treated as a query, and the system learns the mapping from each query to the relevant class labels by (1) adding prompts to enhance label matching, and (2) using retrieved labels to enrich the training set in a self-training loop of contrastive learning. PESCO achieves state-of-the-art performance on four benchmark text classification datasets. On DBpedia, we achieve 98.5% accuracy without any labeled data, which is close to the fully-supervised result. Extensive experiments and analyses show all the components of PESCO are necessary for improving the performance of zero-shot text classification. ## 1 Introduction Text classification is the task of assigning relevant category labels to each input document. It is an important problem in machine learning research with a wide spectrum of applications, including sentiment analysis (Pang et al., 2002; Maas et al., 2011; Socher et al., 2013; Tang et al., 2014), question answering (Rajpurkar et al., 2016, 2018), and intent classification (Tur et al., 2010), etc. Recently, deep neural networks have obtained remarkable improvements in text classification, including CNNs (Kim, 2014; Zhang et al., 2015), RNNs (Tang et al., 2015; Yang et al., 2016), Transformers (Vaswani et al., 2017), and more, thanks to the successful modeling of contextualized representations. Despite the remarkable progress, training wellperforming neural classifiers still requires a large amount of human-labeled documents, which is costly and time-consuming, especially for new application domains. This stimulates the recent trend of exploring self-supervised pre-training neural models on text classification tasks. In particular, pre-trained language models (PTLMs) (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019) clearly stand out from other methods owing to the pre-training on large-scale unlabeled data. Nevertheless, how to adapt PTLMs to downstream tasks with less supervision remains an open question for the research community, inviting new ideas to explore. Prompt-based learning (Brown et al., 2020; Shin et al., 2020; Liu et al., 2021; Li and Liang, 2021; Gao et al., 2021a) has been actively studied to better adapt PTLMs to downstream tasks with the goal of reducing human annotation effort. For example, PET (Schick and Schütze, 2020) is a prompt-based method for few-shot text classification. It formulates the task as a *Cloze Test*, where a PTLM is used to predict the output label(s) by completing a prompt concatenated right after an input document. For example, the sentiment of a product review is highly likely to be positive if a PTLM fills the word "good" into the following input: ## [Review] | It Is A _ Product. This example shows that prompt-based learning could unleash the potential power of a PTLM by constructing the input format of a downstream task in a way that closely resembles the PTLM pretraining objective, which is masked language modeling (MLM) in this case. Motivated by the recent success of prompt-based learning, we propose PESCO, a novel self-training framework for zero-shot classification that uses prompts to enhance performance. The self-training consists of two iterative steps, pseudo-label prediction and model update. To make label descriptions more informative, we first put label descriptions into some predefined prompts and call the enhanced descriptions label-prompts. As depicted in Figure 1, to predict the pseudo-label of a document, PESCO formulates text classification as a neural matching task. A pre-trained text encoder maps both docu14897 ments and label-prompts into a shared embedding space. A label whose embedding is closest to the document is predicted as the pseudo-label. To effectively update the text encoder with pseudo-labels, we propose the Prompt-enhanced Label-aware Cloze Test (PLCT), a contrastive learning framework for self-training. The text encoder is trained to match a document and the text relevant to its pseudo-label. The relevant texts include pseudo-label prompts and the key sentences from the documents assigned to the same pseudolabel. The key sentence of each document is the sentence most related to its pseudo-label. In our experiments, we show that the iterative self-training consistently improves the classification performance compared to the same model without self-training and that our proposed approach substantially outperforms other strong zero-shot classification baselines. On some datasets, the zeroshot results are even on par with a fully supervised baseline. On the Dbpedia dataset, in particular, PESCO achieves 98.5% accuracy without any labeled data. In summary, the contributions of this paper are twofold: 1. We explore text classification in a neural matching formulation enhanced by prompts. We demonstrate that even without any finetuning on the text encoder, this straightforward formulation is an effective method for zeroshot text classification. 2. The potential of contrastive learning for selftraining has not been explored. We show that this is a promising direction for self-training and can achieve state-of-the-art performance on zero-shot text classification. ## 2 Related Work 2.1 Contrastive Learning Contrastive learning (CL) (Chopra et al., 2005; Hadsell et al., 2006) is a metric learning method that aims to pull closer similar inputs in the embedding space. Recently, the most popular and efficient methods for CL involve batch contrastive learning (He et al., 2019; Chen et al., 2020), which put similar inputs (positive pairs) and dissimilar inputs (negative pairs) in the same batch, simultaneously minimizing the distance of representations from positive pairs, while maximizing the distance of negative pairs. ![1_image_0.png](1_image_0.png) The key to CL is how to construct positive samples. Based on downstream applications, there are various ways to formulate the positive pairs. In self-supervised pre-training, the positive pairs are usually formulated by data augmentation. That is, different versions of a distorted sample are treated as a positive pair. In supervised contrastive learning (Khosla et al., 2020), the examples belonging to the same class are viewed as a positive pair. In NLP, CL is usually used as an additional selfsupervised pre-training to PTLMs because the sentence embeddings from PTLMs without fine-tuning are not ready to be used in downstream tasks (Li et al., 2020). SimCSE (Gao et al., 2021b) employs dropout as minimal data augmentation and obtains state-of-the-art unsupervised sentence representations. In supervised SimCSE, the sentences with entailment relation are viewed as a positive pair. Other approaches for data augmentation include sentence reformulation (Wu et al., 2020), back translation (Fang et al., 2020), dual encoder (Carlsson et al., 2021), language model corruption (Meng et al., 2021), and translation pairs (Wang et al., 2022). In addition, CL is a commonly used training algorithm for neural text retrieval (Xiong et al., 2021). Inverse cloze test (ICT) (Lee et al., 2019) is the most commonly used contrastive pre-training task for retrieval that predicts a randomly selected sentence from the rest of the texts. It is also possible to construct positive pairs by leveraging the document structures (Chang et al., 2020). ## 2.2 Self-Training And Zero-Shot Text Classifcation Self-training Self-training (Yarowsky, 1995; Nigam and Ghani, 2000; Lee, 2013; Xie et al., 2020) is a widely used approach for semisupervised learning and can have additive improvement to pre-training in both computer vision (Zoph et al., 2020) and NLP (Du et al., 2021). The paradigm of self-training is first using a pre-trained base model as "teacher" to generate pseudo-labels on unlabeled data. The pseudo-label is then used to train a "student" model. The teacher-student training is performed iteratively until convergence. Zero-shot Text Classification Zero-shot classification aims to classify text using only label names without human annotation. Self-training has demonstrated impressive performance on fewshot (Mukherjee and Hassan Awadallah, 2020) and zero-shot text classification. Unlike a few-shot setting which can use supervised information to obtain a base model, in zero-shot text classification, obtaining a base model is non-trivial. LOTClass (Meng et al., 2020) leverages PTLMs to augment label descriptions with semantically related words and then find category-indicative words among these related words to label documents. They generalize the performance to the documents without category-indicative words via self-training. iPET (Schick and Schütze, 2020) formulates text classification as a cloze test to help PTLMs understand the task. They design several types of prompts for each dataset, and each type of prompt trains an individual teacher model to annotate documents using self-training. A student model aggregates the knowledge from the teachers via knowledge distillation. In this work, we propose a novel self-training method for zero-shot text classification that integrates self-supervised pre-training into self-training in a contrastive learning framework. ## 3 Zero-Shot Classification As Matching In our zero-shot setting, there are N unlabeled documents X = {x1, x2, · · · , xN } and a set of label descriptions C = {c1, c2, · · · , cL}, where L denotes the number of classes. We aim to learn a scoring function g(*x, c*) so that relevant document and label description pairs can have higher scores. A label whose label description has the highest score is selected as model prediction: $${\hat{y}}=\arg\operatorname*{max}_{j}\ g(x,c_{j}),$$ Inspired by the recent success of pre-trained sentence encoder (Gao et al., 2021b; Chuang et al., 2022) which has shown impressive performance on matching relevant texts, we explore using pretrained encoders as g(*x, c*j ). Specifically, as illustrated in Figure 1, we formulate zero-shot text classification as a neural text matching problem. Both document and label descriptions are encoded into dense vectors by a shared encoder. The matching score can be obtained by measuring cosine similarity between dense vectors. However, label descriptions are usually a few words rather than a sentence with full semantics, which makes PTLMs unable to fully understand the meaning of the labels. To tackle this, query reformulation (Nogueira and Cho, 2017; Petroni et al., 2020) is a commonly used technique in retrieval to enhance the semantics of a query. This technique can be further incorporated with promptbased learning (Schick and Schütze, 2020), which has shown that adding prompts to a text helps PTLMs understand classification tasks. We use a prompt function p(·) to convert a label description c into a prompt by placing label descriptions into pre-defined templates. We design T templates for each dataset, and the scoring function is: $$g(x,c)=\frac{1}{T}\sum_{i=1}^{T}s i m(f_{\theta}(x),f_{\theta}(p^{i}(c))),\ \ \ \ (2)$$ where fθ(·) is a text encoder with parameters θ that maps an input text to a dense embedding, and sim(·) is a similarity function. For the rest of our paper, we use cosine similarity as sim(·). For simplicity, in the rest of the article, we use pj to refer p i(cj ), which is the "label-prompt" of label j with i randomly sampled from {1, · · · , T}. ## 4 Pesco PESCO is a simple but effective self-training framework for zero-shot text classification. Algorithm 1 gives an overview of PESCO. In our iterative selftraining loop, we first use a pre-trained sentence encoder fθ to generate pseudo-labels (i.e. predicted labels) by the matching process described in Section 3. We then use the pseudo-labels to update fθ by Prompt-enhanced Label-aware Cloze Test (PLCT), which leverages pseudo-labels to construct positive training pairs. We continue the selftraining process by iteratively generating pseudolabels and updating the model using the PLCT objective function. $$(1)$$ ![3_image_0.png](3_image_0.png) ## 4.1 **Prompt-Enhanced Label-Aware Cloze Test** We propose Prompt-enhanced Label-aware Cloze Test (PLCT) to update our model using pseudolabels. As shown in Figure 2, PLCT consists of two losses, Label-aware Cloze Test (LCT) loss and Prompt Contrastive Loss (PCL). To compute LCT, for each document, we first select a key sentence from the document that is most relevant to its pseudo label. In LCT, given a document, the positive texts are the key sentences from the documents belonging to the same pseudo-label. For PCL, the positive texts for a document are its pseudo-label prompt (i.e. the label-prompt of a pseudo-label). We combine these two losses by putting the positive texts of LCT and PCL into the same batch of a contrastive loss. ## 4.1.1 Label-Aware Cloze Test LCT is inspired by Inverse Cloze Test (Lee et al., 2019) which is a widely used self-supervised pretraining task for neural text retrieval. It uses a randomly selected sentence from a document to match the remaining texts. In a document, as some sentences don't contain useful information, using a randomly selected sentence for training is not an optimal choice. Instead, we use pseudo-label to select the key sentences. Note that we use "Cloze Test" without "Inverse" because we use the remaining long texts to match its relevant short sentences, which can be viewed as label descriptions. As illustrated in Figure 2-(A), given an input document xi = {s 1 i , s2 i , · · · , sn i} consists of n sentences and its predicted pseudo label yˆi, its key sentence kiis sj , where: $$j=\arg\operatorname*{max}_{n}\ g(s_{i}^{n},p_{\bar{y}_{i}}).\qquad\qquad(3)$$ Here, g(·) is the scoring function in Eq.(1). As key sentence kiis more relevant to the pseudolabel than any other sentences in xi, optimizing this objective is similar to minimize the distance between a document and its pseudo-label in embedding space, so ki can be viewed as an augmented version of the pseudo-label prompt. Predicting the augmented version can have additional training signal than simply predicting pseudo-label prompt. We provide a real example of xˆ and k in Table. 1 and more examples can be found in the Appendix Table 8. Since key sentences are highly correlated to corresponding pseudo-label prompts, given a document, it should not only match its key sentence but also key sentences in documents assigned to the same pseudo-label as shown in Figure 2 (C)-1. We use the supervised contrastive loss (Khosla et al., 2020) to optimize LCT, which extends the SimCLR (Chen et al., 2020) to allow multiple positive keys for a query in a supervised setting. Specifically, let I = {1, · · · , B} be the set of the indices of the texts in a batch, where B denotes the batch size. The LCT loss LLCT is written as: $$\sum_{i\in I}\frac{-1}{|K(i)|}\sum_{\hat{k}\in K(i)}\log\frac{e^{sim(f_{\theta}(\hat{x}_{i}),f_{\theta}(\hat{k}))/\gamma}}{\sum_{j\in I}e^{sim(f_{\theta}(\hat{x}_{i}),f_{\theta}(k_{j}))/\gamma}}.\tag{4}$$ Here, $K(i)\equiv\{k_{j},\forall j\in I:\hat{y}_{j}=\hat{y}_{i}\}$ denotes the Here, K(i) ≡ {kj , ∀j ∈ I : ˆyj = ˆyi} denotes the keys belonging to the same pseudo class yˆi, and γ 14900 denotes a temperature commonly-used in CL. To prevent trivial training signal, the input document is xˆi = xi \ {ki} rather than xi, where the key sentence kiis removed. ## 4.1.2 Prompt Contrastive Loss As the update target of self-training is to maximize the similarity between xi and its pseudo-labelprompt pyˆi in embedding space, we use the prompt contrastive loss (PCL) L*P CL* to directly maximize the similarity: $$\mathcal{L}_{PCL}=-\sum_{i\in I}\log\frac{e^{sim(f_{\theta}(\hat{x}_{i}),f_{\theta}(p_{\hat{y}_{i}}))/\gamma}}{\sum_{c\in C}e^{sim(f_{\theta}(\hat{x}_{i}),f_{\theta}(p(c)))/\gamma}}.\tag{5}$$ **Remark 1**: _The $\mathcal{L}_{PCL}$ is the $\mathcal{L}_{PCL}$-norm of $\mathcal{L}_{PCL}$._ Depicted in Figure 2 (C)-2, this loss predicts yˆi from xˆi. ## 4.2 Combining Lct And Pcl Naturally, to combine LCT and PCL, the simplest way is to use LP CL + LLCT as the final training loss. However, we found that minimizing this loss has limited improvement over minimizing LLCT or L*P CL* alone. As depicted in Figure 2 (B), we come up with a more effective approach that puts the positive texts from these two losses into the same batch. By doing so, pseudo keys k and pseudo prompt p can serve as mutually challenging negative samples, thus enhancing the representative power through more difficult contrastive tasks. In our experiment, this simple solution significantly improves the performance. Specifically, we use xˆi as a query to retrieve (1) the key ki from the same text xi, (2) K(i), the keys belonging to the same pseudo class yˆi, and (3) the positive pseudo-label-prompt pyˆi . The PLCT loss L*P LCT* is written as: $$\sum_{i\in I}\frac{-1}{|A(i)|}\sum_{a\in A(i)}\log\frac{e^{sim(f_{\theta}(\hat{x}_{i}),f_{\theta}(a))/\gamma}}{\sum_{m\in M}e^{sim(f_{\theta}(\hat{x}_{i}),f_{\theta}(m))/\gamma}}\tag{6}$$ $\sum_{i\in I}\frac{-1}{|A(i)|}\sum_{a\in A(i)}\log\frac{e^{sim(f_{\theta}(\hat{x}_{i}),f_{\theta}(a))/\gamma}}{\sum_{m\in M}e^{sim(f_{\theta}(\hat{x}_{i}),f_{\theta}(m))/\gamma}}$ Here, A(i) ≡ K(i) ∪ {pyˆi} is the set of positive texts in the mini-batch for xi, M ≡ {kj , ∀j ∈ I }∪ {pc, ∀c ∈ C} denotes the set of all the candidate keys. Interestingly, xˆi can be viewed as a challenging data augmentation of xi for predicting pseudolabel prompt because it removes the most salient sentence from xi. A model can make a prediction simply based on one salient sentence, neglecting the information of remainder. This data augmentation method forces the model to capture additional information. ## Algorithm 1 Pesco Require: Unlabeled texts X, label descriptions C. Initialization: A pre-trained sentence encoder fθ(·). ## Repeat Until Convergence: 1. Use fθ(·) to generate hard pseudo-labels yˆ with Eq.(1) for all unlabeled texts without data augmentation. 2. Sample Tttraining pairs (x, yˆ) from step 1 based on the pseudo-label predicted probability. Use these pairs to update the θ of fθ(·) that minimizes the L*P LCT* in eq 6. 3. With a more powerful fθ(·), go back to step 1. ## Output: Fθ(·) 4.3 Self-Training Algorithm 1 describes PECOS self-training loop. Our self-training algorithm is a simplified version of noisy student training (Xie et al., 2020) that a single model alternately serves as a student and a teacher. The key idea of noisy student training is that the teacher uses clean data without data augmentation to generate pseudo-labels, while the student learns to predict the pseudo-label on augmented data. We first use pre-trained sentence encoder to initialize fθ(·). Then, in step 1, fθ(·) serves as a teacher to generate pseudo-labels from clean data x as described in Section 3. In step 2, fθ(·) serves as a student that learns to increase the probability of predicting pseudo-labels by minimizing L*P LCT* . Step 2 is a noisy student training because the model takes xˆ as input rather than clean x. The selftraining repeats step 1 and step 2 until convergence. We use fθ(·) from the last iteration as our final model. In the algorithm, we set Tt = d · Tt−1 that gradually increases T until a threshold T′. The probability of sampling a pseudo training pair is proportional to the normalized scores outputed by the score function, so a more confident pseudo training pair is more likely to be sampled. When sampling pseudo training pairs, we found that it is important | Label Description | x | k | |-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------| | Family and Relationship | how do you know if you're in love? is it possible to know for sure? in my experience you just know. it's a long term feeling of always wanting to share each new experience with the other person in order to make them happy, to laugh or to know what they think about it. it's jonesing to call even though you just got off an hour long phone call with them. it's knowing that being with them makes you a better person. it's all of the above and much more. | how do you know if you're in love? | Table 1: An example of the document xˆ and the selected pseudo positive keys k in Yahoo Answers. In this example, k is very related to label description. | Dataset | Class Number | Test Examples | |---------------|----------------|-----------------| | AG News | 4 | 7,600 | | DBPedia | 14 | 70,000 | | Yahoo Answers | 10 | 60,000 | | Amazon | 2 | 400,000 | Table 2: Dataset statistics. to keep the ratio of all the labels balanced. If a class doesn't have enough instances to be sampled, then we upsample the class to keep it balanced. ## 5 Experiments 5.1 Experimental Setting Implementation Details Inspired by Yin et al. (2019) who formulate zero-shot text classification as entailment prediction, we choose the version of SimCSE (Gao et al., 2021b) pre-trained on natural language inference (NLI) task 1as our text encoder for all datasets. Our experiments have shown that sentence encoder fine-tuned on NLI performs better on zero-shot classification tasks. We use the representation outputted by the last layer as our sentence representation. Following supervised contrastive learning (Khosla et al., 2020), the value of γ in all equations is set to be 0.07. For the value of d in the self-training section, we set it to be 2 because we want the model to annotate unlabeled data slowly. The details of other hyperparameters in the Appendix B. Datasets We conduct experiments on various text classification datasets: (1)**AG News**: topic classification on news article. (2)**DBpedia**: Ontology classification on selected classes from DBpedia. (3)**Yahoo Answers**: question type classification. (4)**Amazon**: binary sentiment classification on Amazon product review. The statistics of these 1We choose the model named "sup-simcse-bert-baseuncased" at https://github.com/princeton-nlp/ SimCSE. ## Dataset Are Listed In Table 2. We provide the label descriptions in Table 3. The label descriptions of Yahoo Answers and AG news are mainly from the original dataset, and the label description of DBpedia is mainly from LOTClass (Meng et al., 2020). ## 5.2 Effect Of Using Prompts We investigate whether supplementing the label description with the prompt can help the model better understand the meaning of the label, and thus improve the performance. In Table 3, we provide the label descriptions and the prompts we use. For each dataset, we manually design two prompts, where the '[desc]' in the templates is the label description. For example, given a label description "Health", the prompting function converts it into either "It is about Health" or "Category: Health". Our experiments showed that the choice of prompts doesn't affect performance much as long as reasonable prompts are given. For example, in AG news, without self-training, the accuracy of using "Category: <label> news", "This is about <label> news", and "<label> news" are 76.4, 76.0, and 78.0 respectively. Furthermore, our scoring function, as described in Eq.(2), combines the scores of different prompts, which further reduces the gap. The performance gap among different prompts is less than 2% without self-training and less than 1% after self-training. In Table 4, we analyze the effect of using prompts on SimCSE without self-training. By comparing [1] with [2], we find that using prompts for retrieval improves the performance on most of the datasets, especially on AG News. We find that without the word "news", the model can not understand the meaning of the class only with the description "world". Using the prompt-enhanced SimCSE [2] as the initial base model provides a better start for self-training. However, comparing with the performance gap of [1] and [2] in Table 4, we observed that the gap between [6] and [7] becomes smaller, | Datasets | Label Descriptions | Prompts | |--------------------------------------|----------------------------------------------|---------------------------------------------| | (1)Category: [desc] news. | | | | AG news | (1)World (2)Sports (3)Business (4)Technology | (2)[desc] news. | | and Science | (1)Category: [desc]. (2)It is about [desc]. | | | DBpedia | (1)company (2)school and university (3) artist (4)athlete (5)politics (6)means of transportation (7)building (8)river and mountain and lake (9)village (10)animal species (11)plant and tree (12)album (13)film (14)novel and publication and book | (1)Category: [desc]. (2)It is about [desc]. | | Yahoo Answers | (1)Society and Culture (2)Science and Mathematics (3)Health (4)Education and Reference (5)Computers and Internet (6)Sports (7) Business and Finance (8)Entertainment and Music (9)Family and Relationships (10)Politics and Government | | | Amazon-review-P | bad, good | (1)It is a [desc] product. | | (2)In summary, the product is [desc] | | | Table 3: The label descriptions and their prompts. [desc] in the templates denotes the label descriptions. | Id | Self-train | Methods | AG News | DBpedia | Yahoo Answers | Amazon | |------|--------------|-------------------|-----------|-----------|-----------------|----------| | [1] | No | SimCSE w/o prompt | 69.7 | 73.8 | 55.2 | 88.3 | | [2] | No | SimCSE w/ prompt | 76.3 | 76.0 | 56.5 | 88.3 | | [3] | No | PET | 79.4 | 75.2 | 56.4 | 87.1 | | [4] | Yes | iPET | 86.0 | 85.2 | 68.2 | 95.2 | | [5] | Yes | LOTClass | 86.4 | 91.1 | - | 91.6 | | [6] | Yes | PESCO w/o prompt | 87.1 | 96.0 | 69.9 | 95.1 | | [7] | Yes | PESCO | 89.6 | 98.5 | 71.1 | 95.2 | | [8] | - | Supervised | 94.2 | 99.3 | 77.3 | 97.1 | which indicates that the effect of using prompts decreases after self-training. ## 5.3 Zero-Shot Text Classification In Table 4, we compare our results against two stateof-the-art zero-shot text classification baselines, LOTClass (Meng et al., 2020) and iPET (Schick and Schütze, 2020). We select these two methods as our baselines because they both employ selftraining for zero-shot classification. In [1], [2], and [3], they do not employ self-training on unlabeled data, so the Self-train column is "No". In [7], we report the best results over 5 runs on PESCO single model performance without an ensemble. We also report the average, maximum, and minimum accuracy over 5 runs in Appendix Table 6. In [8], to see the gap between zero-shot and fully-supervised settings, we train a typical BERT (Devlin et al., 2019) classifier on a labeled training set. We jointly finetune BERT and a linear classifier on top of BERT [CLS] output layer. Effect of Self-training First, by comparing [7] against [2] in Table 4, we find that the proposed self-training framework significantly improves the performance by more than 10% on average. On DBpedia, self-training improves performance substantially by 20%, and it even achieves 98.5% accuracy. This demonstrates that self-training is an effective method to enhance performance after general pretraining, closing the gap between fully supervised training. Comparison against LOTClass Comparing [7] PESCO against [5] LOTClass, PESCO significantly improves the zero-shot text classification performance on all datasets. LOTClass leverages PTLMs to find the category-indicative words which are semantically related to label descriptions. The documents containing category-indicative words are classified as the corresponding category. Our method uses a pre-trained sentence encoder to define the relevance between document and category, which is more effective and requires less human heuristics. Comparison against iPET Our main baseline is [4] iPET, which uses [3] PET as a base model to generate initial pseudo-labels followed by a se- | Id | Methods | AG News | DBpedia | Yahoo Answers | Amazon | |------|---------------|-----------|-----------|-----------------|----------| | [1] | PESCO | 89.6 | 98.5 | 71.1 | 95.2 | | [2] | PESCO - R | 87.0 | 97.1 | 69.1 | 95.0 | | [3] | LCT | 88.0 | 89.1 | 69.6 | 94.3 | | [4] | LCT - R | 80.7 | 86.9 | 68.6 | 93.3 | | [5] | PCL | 87.8 | 89.4 | 68.7 | 95.1 | | [6] | LCT+PCL | 88.2 | 97.0 | 69.8 | 95.2 | | [7] | PESCO w/o aug | 87.8 | 96.7 | 68.6 | 93.5 | ries of self-training steps. We find that our base model [2] achieves similar performance with [3] on all datasets except Ag News, on which ours lags behind by 3%. The lesson here is that using text retrieval as a means of text classification gives a similar performance to that using cloze tests. Next, our full model [7] is also better than [4] iPET on three datasets while achieving similar performance on the Amazon dataset, demonstrating the effectiveness of our method. Also, we notice that PET requires a massive model ensemble (e.g. 15 models) to achieve the reported accuracy. We run their code with a PvP ensemble without using various random seeds for ensembling. Even with this simplified setting, iPET still needs far more disk space (1.4 GB vs 26 GB) and more training time than us in that we do not need to train various models for model ensembling in each self-training step. Note that It is not feasible to test our method using Roberta-base/large because language models without SimCSE finetuning poorly capture the semantic meaning of texts in cosine similarity space and cannot be used for retrieval. On the other hand, simCSE is finetuned for sentence embeddings, making language models lose text generation ability. Because iPET and LOTClass require language models to generate tokens, using SimCSERoberta for iPET or LOTClass is also not feasible. ## 5.4 Ablation Study And Analysis Comparison Of Different Contrastive Losses The results of different contrastive learning losses are shown in Table 5. In the table, LCT means we only use LLCT in Eq.( 4) to train our model, PCL means we use L*P CL*, and LCT+PCL means we sum the LLCT and L*P CL* as our loss function rather than using PLCT loss which puts keys and label-prompts in the same batch. The methods end with "-R" means the pseudo positive sentences k are randomly selected from the documents instead of picking the most salient sentences. In LCT, although it doesn't explicitly minimize the distance between an input document and its predicted pseudo-label-prompt, optimizing this loss still obtains performance similar to PLC. This implies the selected key sentences can serve as augmented version of label-prompts. Furthermore, we analyze the difference in the performance between using randomly selected sentences and the most salient sentences. By comparing [1] and [2], and [3] and [4], we can see that the model has a significant performance drop in predicting randomly selected sentences. This demonstrates the importance of choosing a salient sentence as the training target. Finally, to demonstrate the effectiveness of putting pseudo-label-prompts and key sentences in the same batch, we compare [1] against [6]. [1] yields better performance than [6], which implies using this more challenging contrastive task allows the model to learn more general representations. Effect of Data Augmentation In Table 5, [7] PESCO w/o aug means we use xi as a query to retrieve its positive examples A(i) instead of using xˆi as a query. Comparing [1] and [7], removing the most salient sentence from a document is an effective data augmentation method that can greatly improve performance. This is consistent with previous literature (Xie et al., 2020) that updating student models with noisy data is important in selftraining. ## 6 Conclusion This paper presents a novel approach to zero-shot text classification, which significantly improves the SOTA results on four benchmark datasets by formulating the classification task as a prompt-enhanced retrieval problem and by combining the strengths of pre-trained language models and contrastive learning over pseudo-labeled data in a self-training loop. Our experiments in comparison with representative baselines and ablation analysis show evidence for the effectiveness of the proposed approach. Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 539–546 vol. 1. ## 7 Limitations The main limitation of our method is that it heavily depends on the quality of the label description. If a label description does not precisely describe the meaning of the label, our method cannot work. For some classification tasks such as microaggression detection, their labels have abstract meaning that is difficult to be understood by pre-trained language models. Similarly, our method cannot work on the domain that is not covered by the pre-training corpora of language models, such as the medical domain. Another limitation of our method is that PLCT loss cannot handle short texts. If a text consists of only one sentence, PLCT loss will no longer work because LCT requires a document to be more than one sentence. In this case, PCL loss can still be used for self-training. ## References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Fredrik Carlsson, Amaru Cuba Gyllensten, Evangelia Gogoulou, Erik Ylipää Hellqvist, and Magnus Sahlgren. 2021. Semantic re-tuning with contrastive tension. In International Conference on Learning Representations. Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In *International Conference on Learning Representations*. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709. S. Chopra, R. Hadsell, and Y. LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In *2005 IEEE Computer Society* Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljacic, ShangWen Li, Scott Yih, Yoon Kim, and James Glass. 2022. DiffCSE: Difference-based contrastive learning for sentence embeddings. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4207–4218, Seattle, United States. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jingfei Du, Edouard Grave, Beliz Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Veselin Stoyanov, and Alexis Conneau. 2021. Self-training improves pre-training for natural language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5408–5418, Online. Association for Computational Linguistics. Hongchao Fang, Sicheng Wang, Meng Zhou, Jiayuan Ding, and Pengtao Xie. 2020. Cert: Contrastive selfsupervised learning for language understanding. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021a. Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021b. SimCSE: Simple contrastive learning of sentence embeddings. In *Empirical Methods in Natural Language Processing (EMNLP)*. Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In *Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and* Pattern Recognition - Volume 2, CVPR '06, page 1735–1742, USA. IEEE Computer Society. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2019. Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. *arXiv preprint* arXiv:2004.11362. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Dong-Hyun Lee. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In *ICML Workshops*. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096. Association for Computational Linguistics. Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119–9130, Online. Association for Computational Linguistics. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582– 4597, Online. Association for Computational Linguistics. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul Bennett, Jiawei Han, and Xia Song. 2021. Cocolm: Correcting and contrasting text sequences for language model pretraining. In Advances in Neural Information Processing Systems, volume 34. Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, and Jiawei Han. 2020. Text classification using label names only: A language model self-training approach. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9006–9017, Online. Association for Computational Linguistics. Subhabrata Mukherjee and Ahmed Hassan Awadallah. 2020. Uncertainty-aware self-training for few-shot text classification. In *Advances in Neural Information Processing Systems (NeurIPS 2020)*, Online. Kamal Nigam and Rayid Ghani. 2000. Analyzing the effectiveness and applicability of co-training. In Proceedings of the Ninth International Conference on Information and Knowledge Management, CIKM '00, page 86–93, New York, NY, USA. Association for Computing Machinery. Rodrigo Nogueira and Kyunghyun Cho. 2017. Taskoriented query reformulation with reinforcement learning. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing, pages 574–583, Copenhagen, Denmark. Association for Computational Linguistics. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 79–86. Association for Computational Linguistics. Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2020. How context affects language models' factual predictions. In Automated Knowledge Base Construction. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few-shot text classification and natural language inference. Computing Research Repository, arXiv:2001.07676. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222–4235, Online. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In *Proceedings of the 2015* Conference on Empirical Methods in Natural Language Processing, pages 1422–1432, Lisbon, Portugal. Association for Computational Linguistics. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for Twitter sentiment classification. In *Proceedings of the 52nd Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1555–1565, Baltimore, Maryland. Association for Computational Linguistics. Gokhan Tur, Dilek Hakkani-Tür, and Larry Heck. 2010. What is left to be understood in atis? In 2010 IEEE Spoken Language Technology Workshop, pages 19– 24. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Yau-Shian Wang, Ashley Wu, and Graham Neubig. 2022. English contrastive learning can learn universal cross-lingualsentence embeddings. In Empirical Methods in Natural Language Processing (EMNLP). Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. Clear: Contrastive learning for sentence representation. *ArXiv*, abs/2012.15466. Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. 2020. Self-training with noisy student improves imagenet classification. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684–10695. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In International Conference on Learning Representations. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. Association for Computational Linguistics. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 189–196, Cambridge, Massachusetts, USA. Association for Computational Linguistics. Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914–3923. Association for Computational Linguistics. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc. Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, and Quoc Le. 2020. Rethinking pre-training and self-training. In *Advances in Neural Information Processing Systems*, pages 3833–3845. | AG News | DBpedia | Yahoo | Amazon | | |-----------|-----------|---------|----------|------| | avg | 88.7 | 96.9 | 70.5 | 94.3 | | max | 89.6 | 98.5 | 71.1 | 95.2 | | min | 87.7 | 96.1 | 70.0 | 93.9 | ## A Discussion Text Classification as neural text retrieval Formulating text classification as neural retrieval is straightforward but not widely explored by previous work. In this work, we show that this formulation can also obtain good performance with a well-pre-trained sentence encoder. The benefit of this formulation over cloze test is that we don't need to restrict the label description to only one word. PET requires a carefully selected word (verbalizer) to represent each class. If a classification task has hundreds or even more than thousands of categories, it is not feasible to manually select a word to represent each class. Furthermore, if the meaning of a category in a classification task is too abstract or complex, we cannot simply represent it with a single word. Our formulation allows the model to describe categories using sentences or even short texts and maybe a better choice for more challenging classification tasks. Contrastive Learning for Self-training The effect of contrastive learning for self-training is not well-studied by previous work. Contrastive learning obtains impressive results on unsupervised representation learning. In a supervised setting, it is also robust to noisy labels and noisy data, and it also shows impressive performance on a few-shot classification. Considering these good properties of contrastive learning, we believe contrastive learning is a promising direction for self-training and propose PESCO to explore its potential on zeroshot text classification. ## B Hyperparameters As indicated by previous work (Chen et al., 2020), using a larger batch size generally yields better performance because it includes more negative samples. We analyze how different batch size influences the performance of PESCO in Figure 3. We found that PESCO is not very sensitive to batch size. Using a smaller batch size only reduces the accuracy by less than 2%. Also, we observe that our ![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) ![11_image_2.png](11_image_2.png) algorithm converges after 1000 steps (1 epoch) of training, and additional training steps only slightly increase the performance. In other datasets, our algorithm also converges after 1 training epoch. We list the hyperparameters of our model in Table 7. We use AdamW as our optimizer. The T′ is the threshold mentioned in Section 4.3, we set it proportional to N, where N is the total number of unlabeled data in the corresponding dataset. We find that the number of training epoch only slightly influence the final performance that usually influences the accuracy by less than 1%. In Figure 4, we plot the training epoch versus validation set accuracy. Although we train 5 epochs on the AG news to obtain the best result, the model actually converges in the early training stage. Similar training curves can be observed in all the datasets. | AG News | DBpedia | Yahoo Answers | Amazon | | |---------------------------------------------------------------------------------------------------------------|-----------|---------------------------------------------------------------------|----------|------| | Learning rate | 1e-5 | 1e-5 | 5e-6 | 5e-6 | | Document length | 156 | 128 | 192 | 128 | | Batch size | 32 | 32 | 32 | 32 | | Epsilon | 1e-6 | 1e-8 | 1e-8 | 1e-8 | | T ′ | 0.2N | 0.5N | 0.1N | 0.1N | | Epoch | 5 | 5 | 2 | 1 | | Table 7: Hyperparameters. | | | | | | Label Description | x | k | | | | where is the best place to look for love? | | | | | | Family and Relationship | where is the best place to look for love? it might be easy to use the internetthere are many good matching web sites that can help | what is the best place to get guitar lessons in the south bay area? | | | | Entertainment and Music | what is the best place to get guitar lessons in the south bay area? looking for a great instructor and relatively affordable price. i have no experience but have a desire to learn. it's really according to what you are looking for. certain teachers specialize in acoustic vs. electric (for example). your best bet is to place a request on a service such as click for lessons that will show you several teacher bios and let you decide for yourself. | does anyone know a good | | | | apartment rental agency around washington dc? | | | | | | Business and Finance | does anyone know a good apartment rental agency around washington dc? i've had personal experience with archstone apartments and summit (just bought by camden) apartments in the past two years. while neither one is stellar, both were acceptable. both of these were in the northern virginia area - bedroom communities for d.c. best of luck apartment hunting! the housing market around here is absolutely insane. | why are there 5 rings | | | | in the olympics symbol? | | | | | | Sports | why are there 5 rings in the olympics symbol? what does it represent? i heard few theories about it but not sure what is the correct one the 5 rings were introduced at the the 1920 games in antwerp games. the rings included at least one color from the flag of every participating country. | | | | | Table 8: More examples of the distorted document xˆ and the selected pseudo positive keys k in Yahoo Answers. | | | | | Table 8: More examples of the distorted document xˆ and the selected pseudo positive keys k in Yahoo Answers. It happens that k seems to be the most important sentence of the texts, so their semantics are closest to label descriptions. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 0,1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3,4,5 ✓ B1. Did you cite the creators of artifacts you used? 2 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 5 ## C ✓ **Did You Run Computational Experiments?** 5 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
guo-etal-2023-visually
Visually-augmented pretrained language models for {NLP} tasks without images
https://aclanthology.org/2023.acl-long.833
Although pre-trained language models (PLMs) have shown impressive performance by text-only self-supervised training, they are found lack of visual semantics or commonsense. Existing solutions often rely on explicit images for visual knowledge augmentation (requiring time-consuming retrieval or generation), and they also conduct the augmentation for the whole input text, without considering whether it is actually needed in specific inputs or tasks. To address these issues, we propose a novel **V**isually-**A**ugmented fine-tuning approach that can be generally applied to various PLMs or NLP tasks, **W**ithout using any retrieved or generated **I**mages, namely **VAWI**. Experimental results show that our approach can consistently improve the performance of BERT, RoBERTa, BART, and T5 at different scales, and outperform several competitive baselines on ten tasks. Our codes and data are publicly available at \url{https://github.com/RUCAIBox/VAWI}.
## Visually-Augmented Pretrained Language Models For Nlp Tasks Without Images Hangyu Guo1∗ , Kun Zhou3,4∗ , Wayne Xin Zhao2,4† , Qinyu Zhang1and Ji-Rong Wen**2,3,4** 1School of Electronics and Information Engineering, Harbin Institute of Technology (Shenzhen). 2Gaoling School of Artifcial Intelligence, Renmin University of China. 3School of Information, Renmin University of China. 4Beijing Key Laboratory of Big Data Management and Analysis Methods. hyguo0220@gmail.com, francis_kun_zhou@163.com batmanfly@gmail.com, zqy@hit.edu.cn, jrwen@ruc.edu.cn ## Abstract Although pre-trained language models (PLMs) have shown impressive performance by textonly self-supervised training, they are found lack of visual semantics or commonsense. Existing solutions often rely on explicit images for visual knowledge augmentation (requiring time-consuming retrieval or generation), and they also conduct the augmentation for the whole input text, without considering whether it is actually needed in specifc inputs or tasks. To address these issues, we propose a novel Visually-Augmented fne-tuning approach that can be generally applied to various PLMs or NLP tasks, Without using any retrieved or generated Images, namely **VAWI**. Experimental results show that our approach can consistently improve the performance of BERT, RoBERTa, BART, and T5 at different scales, and outperform several competitive baselines on ten tasks. Our codes and data are publicly available at https://github.com/RUCAIBox/VAWI. ## 1 Introduction Recent years have witnessed the success of pretrained language models (PLMs) (Qiu et al., 2020; Zhao et al., 2023), such as GPT-3 (Brown et al., 2020) and T5 (Raffel et al., 2020), in a variety of natural language process (NLP) tasks. Since these PLMs are mostly trained on text-only corpus via self-supervised pre-training, they have been shown lack of visual commonsense (Liu et al., 2022) and real-world knowledge (Zhang et al., 2022). As a result, PLMs can't well solve visually related language tasks 1, *e.g.,* answering the color and size of common things, especially those requiring complex commonsense knowledge. To alleviate this problem, existing works mainly enhance PLMs by infusing visual information. Typ- ∗ Equal contributions. † Corresponding authors. 1In this work, we mainly focus on text-only NLP tasks that may beneft from external visual information, rather than visual-language tasks involving images. ically, given a text input, these studies frstly augment the visual information from retrieved or generated images about the input and then leverage their visual representations to improve PLMs on NLP tasks. Such an approach leads to *visuallyaugmented pre-trained language models (VaLMs)*, where they adopt either visually-augmented pretraining (Tan and Bansal, 2020; Wang et al., 2022) or visually-augmented fne-tuning (Lu et al., 2022). Despite the effectiveness, there are two major shortcomings in these methods. First, these methods often rely on pre-learned complementary retrievers or generators, and also require time-consuming inference to retrieve or generate proper images that are paired with the input. The above costly conditions largely limit the applicability of these approaches. Second, the retrieved or generated images are inevitable to involve irrelevant or redundant visual information. If simply integrating them, the original text representations might be affected.Increasing evidence shows that the visual information is not always useful for NLP tasks (Dai et al., 2022), and sometimes leads to performance degradation. Considering these issues, we aim to develop a more effcient and effective way to visually augment the PLMs and the solution is twofold: - Firstly, we don't explicitly produce (retrieve or generate) the images but instead generate visually-aligned representations of the text onthe-fy. Recent studies (Radford et al., 2021; Jia et al., 2021) have shown that the vision-language pre-trained models (VL-PTMs) can well learn the alignment between the representations of texts and images from large-scale text-image pairs. Thus, our idea is to employ the output representations of a text from VL-PTMs' text encoders as a surrogate for the visual representations of related images. Such a way is simple and effcient: we can only keep the text encoder of a VL-PTM to produce the visually-aligned representations of texts, getting rid of the complicated image retrieval or generation process. It is widely recognized that there is a large semantic gap between different modalities (Liang et al., 2022). Our method can alleviate this issue to some extent since the visual augmentations are derived from the text representation itself. - Secondly, instead of directly feeding visual augmentations into the PLM, we propose to use the augmented visual information only when it is actually required. In fact, for a text input of a NLP task, PLMs are not always hungry for the visual background knowledge to effectively understand it, especially for visually-irrelevant expressions. Unlike previous works which inject visual information into a text (Tan and Bansal, 2020; Wang et al., 2022) from the whole, we consider identifying *visuallyhungry words* (those that require visual knowledge to derive complete semantics) from the text input, and only infuse the visual augmentations through these trigger words. We conduct visual augmentations at the word level, because it is more fexible and controllable, considering the augmented information is often irrelevant or noisy. To this end, in this paper, we propose a general Visually-Augmented fne-tuning approach to improving PLMs for NLP tasks Without Images, namely **VAWI**. Our approach consists of three ingredients, namely visually-hungry words extraction, visual knowledge augmentation, and visuallyenhanced fne-tuning. Given the text input from a NLP task, we frst extract the visually-hungry words (VH-words) from the input sentence. As the annotations of VH-words are generally unavailable, we propose three strategies to automatically extract the VH-words, relying on the syntax trees, attention distributions of VL-PTMs, and an adaptive learnable module, respectively. Then, based on the extracted VH-words, we leverage the text encoder of CLIP (Radford et al., 2021) (being fxed in our approach), a VL-PTM that has been pre-trained on millions of text-image pairs, to encode the VHwords for obtaining their visually-aligned representations. Finally, we infuse the visually-aligned representations into PLMs, and consider the general and parameter-effcient fne-tuning strategies for small and large PLMs, respectively. To verify the effectiveness of our framework VAWI, we test it on four PLMs (*i.e.,* BERT, BART, RoBERTa, and T5) at different scales (*i.e.,* 110M, 340M, 3B), and conduct extensive experiments in natural language understanding, commonsense reasoning, and text generation tasks. Experimental results show that our **VAWI** can boost the performance of these PLMs signifcantly, *i.e.,* 3.11%, 2.54%, and 2.16% absolute improvements on the commonsenseQA task using RoBERTa-base, RoBERTa-large, and T5-3b, respectively. Besides, VAWI can outperform (or be on par with) several competitive baselines that adopt complicated visually-augmented methods. ## 2 Related Work Pre-trained Language Models. Recent years have witnessed the success of pre-trained language models (PLMs) (Devlin et al., 2019; Radford et al., 2019). After pre-trained on the large-scale corpus, PLMs can be fne-tuned on multiple NLP tasks and achieve remarkable performance. However, since PLMs are just pre-trained with text-only data, they may suffer from the reporting bias problem (Gordon and Van Durme, 2013; Paik et al., 2021; Zhang et al., 2022), where the frequency distribution of visual commonsense in the text may not fully refect the real-world distribution of the commonsense. Existing works have also found that such a problem can not be well addressed by enlarging the model or pre-training corpus (Paik et al., 2021; Zhang et al., 2022). In this work, we aim to alleviate this problem by adding visual knowledge on PLMs during fne-tuning. Vision-Language Pre-Trained Models. To better accomplish the vision-language tasks, visionlanguage pre-trained models (VL-PTMs) (Su et al., 2019; Lu et al., 2019) become a hot point in recent years, which require large-scale image-text pairs for pre-training. Existing VL-PTMs fall into two categories based on the way of modeling visionlanguage interaction. The frst category of models (Lu et al., 2019; Li et al., 2021) adopts an explicit vision-language interaction layer to fuse the text embeddings and image features. These models are more suitable to capture fne-grained semantic interactions between vision and language.The second category of models (Radford et al., 2021; Jia et al., 2021) incorporates separate encoders to model the vision and language information, and relies on pre-training tasks (*e.g.,* cross-modal contrastive learning) to align their representations into the same latent space. Such a way is capable of producing enriched single-modal representations. Visually-Augmented Language Model. To introduce visual information into PLMs, visuallyaugmented language model (VaLM) (Wang et al., 2022) has become an emerging research topic. Existing VaLMs can be categorized into visuallyaugmented pre-training and fne-tuning. Visuallyaugmented pre-training approaches (Tan and Bansal, 2020; Zhu et al., 2022) continually pretrain PLMs with the retrieved visual information related to input tokens or sentences and also revise the masked language model task for better capturing the visual semantics. Visually-augmented fne-tuning method (Lu et al., 2022) introduces the visual information into PLMs during fne-tuning. These methods also leverage the image retrieval or generation models to augment the visual information and design a special fusion module to inject it into PLMs. However, existing VaLM approaches mostly need to retrieve or generate visual information for utilization. Such a way is time-consuming, and may involve unrelated or noisy information into PLMs, leading to performance degradation. In this work, we aim to frst detect the visually-hungry words from the text, and then utilize a VL-PTM to generate their visually-aligned representations without the usage of external images or generation models. As a comparison, our approach is more fexible and effcient to leverage visual information for enhancing text-based PLMs. ## 3 Method In this section, we frstly introduce the task setting, and then describe our proposed visual augmentation approach for infusing visual knowledge into PLMs during fne-tuning. ## 3.1 Task Setting And Solution Overview This work aims to improve the fne-tuning performance of pre-trained language models (PLMs) on NLP tasks by leveraging the related visual information without images. For a NLP task, a set of n labeled texts {⟨xi, yi⟩} are available, where xiis the i-th text data consisting of a sequence of words, denoted as xi = {w1, w2*, ..., w*m}, and yiis the ground-truth output, which can be a discrete label (classifcation), a continuous value (regression) or a text sequence (generation). To solve the target task, we assume that a textbased PLM is given (either for understanding or generation). Let f denote a PLM parameterized by θPLM that has already been pre-trained on generalpurpose large-scale text data. Given the labeled training data, we can train the PLM using a specifc loss function (*e.g.,* cross-entropy loss) and further solve the target task. However, existing works (Tan and Bansal, 2020; Zhang et al., 2022) have revealed that PLMs may be unaware of visual knowledge that is not explicitly mentioned in the pre-trained text-only data (*e.g.,* the shape of coins and the color of the sky), leading to the lack of world commonsense and generating wrong statements. In this work, we focus on devising an effcient and effective way to infuse such visual knowledge into PLMs during fne-tuning. Our approach is based on *visually-hungry words* (abbreviated as VH-words), which require visual information to derive complete semantic representations. The overall illustration of our approach is shown in Figure 1. Given the input text xi and its label yi, we frst detect and extract a set of VH-words. Then, we adopt a visual knowledge augmentation module to enhance the visual background knowledge of their tokens and generate their visually-aligned representations. Finally, we infuse the visuallyaligned text representations into the PLM to improve its fne-tuning performance, where we consider both the general fne-tuning of small PLMs and the parameter-effcient fne-tuning of largescale PLMs. ## 3.2 Visually-Hungry Words Extraction In our approach, visually-hungry words (VHwords) are the trigger units for visual augmentations, requiring visual knowledge for deriving complete semantic representations (*e.g.,* color, shape, and object). Therefore, we propose to frst detect the VH-words from the input text, and then inject the proper visual knowledge that they are hungry for into the PLM. However, the annotations about VH-words are generally not available in NLP datasets. To address this problem, we devise three different strategies to extract the VH-words from the input text, including two feature-based strategies based on syntax tree and attention distribution of PLMs, and a learnable model-based strategy. Syntax-based Strategy. In natural language, entity words and descriptive words usually convey more visual semantics than others. For example, for the sentence "He is eating a *green apple*", where underlined words are more related to visual semantics. Such words are mostly nouns or adjectives in the input text, which can be detected by syntactic analysis. Therefore, we design a rule-based strategy that leverages the syntactic information for ![3_image_0.png](3_image_0.png) VH-words extraction. Concretely, we frst delete all stop words in a text and then adopt an openresource toolkit SPACY 2to convert the input text into a syntax dependency tree. Based on the syntax tree, we extract the words that have a particular part of speech (POS), *e.g.,* nouns or adjectives, as the VH-words denoted by W(V H). In this way, we can effciently extract the VH-words from input text by using a fast parser toolkit. Visually-enhanced Attention Based Strategy. The attention-based strategy utilizes the attention distribution of a VL-PTM to detect the VH-words. Since VL-PTMs (Radford et al., 2021) are pretrained on large-scale image-text pairs, their text encoders can focus more on the words corresponding to some specifc visual concepts in an image, which are likely to be VH-words. Inspired by it, we use the attention scores calculated by the text encoder of VL-PLMs to select the VH-words. Specifically, we adopt the text encoder of CLIP (Radford et al., 2021), a VL-PTM that has been pre-trained on millions of image-text pairs, to help extract the VH-words. As CLIP adopts an autoregressive GPT2 model as the text encoder, we calculate the average attention scores between each token and the "[EOS]" token on the self-attention layer, denoted as swi . Then, we select the top-K ranked words according to {swi} as the VH-words W(V H). Learning-based Strategy. Considering that diverse PLMs and NLP tasks may be hungry for different complementary visual information, we 2https://spacy.io/ devise a learning-based strategy that can adaptively extract VH-words according to task requirements. Concretely, we add a parameterized VH-words extractor layer for the PLM, which can be updated by gradient-based optimization algorithms to ft the need for some specifc task. Given the input text xi, we frst leverage the PLM and a text encoder of a VL-PTM (*i.e.,* CLIP (Radford et al., 2021)) to produce the contextualized representations of the contained words in xi. Then, we concatenate the representations of each word from the two models and utilize a MLP layer to obtain the score swi : $$s_{w_{i}}=\mathrm{MLP}([\mathbf{h}_{w_{i}}^{(P)};\mathbf{h}_{w_{i}}^{(V)}])$$ ]) (1) where h (P) wiand h (V ) wiare the output word representations from the PLM and VL-PTM, respectively, and scores swi are calculated by the learned model based on the supervision information from downstream tasks. Based on the scores of all words, we incorporate the gumbel-softmax function (Jang et al., 2016) to extract the top-k words as the VHwords in a differentiable way. In this way, the gradients of the fne-tuned tasks can be back-propagated to the extractor layer, which learns to adaptively select the more suitable VH-words. ## 3.3 Visual Knowledge Augmentation Existing works (Lu et al., 2022; Wang et al., 2022) mainly utilize image retrieval or generation module to augment related visual knowledge. Such a way is time-consuming and may also involve noisy images.Inspired by recent works that show the effective visual-language alignment in VL-PTMs (Radford et al., 2021; Li et al., 2021), we utilize the visually-aligned text encoders to generate the visual augmentation representations of VH-words. As the text encoders have been aligned to the image encoders during pre-training, their output textual representations can be used as surrogates of visual augmentations based on real images related to the input text. As will be shown in experiments (Section 4), this approach is not only effcient but very effective for downstream NLP tasks. Based on the extracted VH-words, we frst add a prefx text in the image caption style before the VH-words, e.g., "*a photo of:* ", to compose the input text x′. Then, we utilize the text encoder of CLIP (Radford et al., 2021) to encode x′and obtain the contextualized word representations as the visually-aligned representations Hx ∈ R k×d, where k is the sequence length of x′and d is the embedding size. Next, we incorporate a reformulation layer to aggregate and strengthen the visually-aligned representation Hx into the visually-augmented representations of these VHwords. As the positions of the VH-words vary from sentence to sentence, we design a positionaware attention mechanism in the reformulation layer to inject position information into Hx for obtaining the visual representation of each VH-word. Specifcally, we frst leverage a soft position embedding matrix E ∈ R l×dto reserve the position information of VH-words, where l is the number of VH-words. Then, we perform the cross-attention between it and the visual representations as: $$\mathbf{Q}=\mathbf{E},\ \ \mathbf{K}=\mathbf{H}_{x}\mathbf{W}^{K}+\boldsymbol{b}^{K},\tag{2}$$ $$\mathbf{V}=\mathbf{H}_{x}\mathbf{W}^{V}+\boldsymbol{b}^{V},$$ (3) $$\mathbf{H}_{v}=\text{softmax}(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d}})\mathbf{V},$$ (4) $$\mathbf{H}_{v}^{\top}=[\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{l}],\tag{5}$$ where $\mathbf{h}_{i}\in\mathbb{R}^{d},\ \mathbf{K},\mathbf{V}\in\mathbb{R}^{k\times d}.\ \mathbf{H}_{v}\in\mathbb{R}^{l\times d}$ is the obtained visually-augmented representations of VH-words, which is leveraged for augmenting the visual knowledge of the PLM. hiis the visual representation of the i-th VH-word in W(V H). Note that in Eq. 2 and 3, we adopt an effcient way that only uses the position information to set the *query* matrix Q, and the visual semantics are mainly captured and injected through the key and *value* matrices. ## 3.4 Visually-Enhanced Fine-Tuning After obtaining the visually-augmented representations of VH-words (*i.e.,* Hv in Eq. 5), we propose a visually-enhanced fne-tuning strategy to inject the captured visual knowledge. Here, we consider two cases: (1) full-parameter fne-tuning for small PLMs, and (2) parameter-effcient prompt-tuning for large-scale PLMs. Before introducing the learning method, we simply review the parameters of our approach, consisting of the parameters in the underlying PLM (Θplm), the VL-PTM (Θvlp) and the parameters of the reformulation layer (Θref ). Note that we will always fx Θvlp in our approach. Fine-tuning for Small PLMs. For small PLMs, we can perform full-parameter fne-tuning, which updates both Θplm and Θref . Specifcally, given the visually-augmented representations Hv of VHwords, we directly incorporate them into the embedding layer of the PLM. For each VH-word, we insert its visually-augmented representation after the original word embedding, to leverage the visual semantics to enrich the word representations. Prompt-tuning for Large-Scale PLMs. For large-scale PLMs, we fx the parameters in it, *i.e.,* Θplm, and employ a parameter-effcient prompttuning way to optimize it on downstream NLP tasks. Concretely, given the visually-augmented representations Hv of VH-words, we directly insert them before the input representations of every layer of PLMs. Then, following the typical prompttuning paradigm (Li and Liang, 2021), we only tune the parameters of the reformulation layer (*i.e.,* Θref ) as the soft prompts to adapt all the model into the fne-tuning task. Our approach can be generally applied to various PLMs (*e.g.,* BERT (Devlin et al., 2019), BART (Lewis et al., 2020), T5 (Raffel et al., 2020)) and NLP tasks (natural language understanding and text generation). Unlike other complicated visuallyaugmented methods (Tan and Bansal, 2020; Wang et al., 2022), it is more effcient, without the explicit need of external images or generation model; and meanwhile, it only introduces a small number of parameters (Eq. 3), which are easier to learn. ## 4 Experiments 4.1 Experimental Setup Datesets. We conduct experiments on four types of tasks. (1) Natural Language Understanding (NLU): we extract 6 datasets from the GLUE benchmark (Wang et al., 2018); (2) Commonsense reasoning: we select CommonsenseQA (Talmor et al., Base Model Method **SST-2 QNLI QQP MNLI MRPC STS-B Avg.** CLIP +None 73.3 74.5 72.8 68.4 74.3 73.8 72.85 BLIP +None 76.3 77.4 78.8 72.5 77.8 76.4 76.53 ALBEF14M +None 78.9 78.2 79.4 73.4 76.5 77.5 77.31 +None 89.3 87.9 87.2 79.4 81.7 84.4 84.98 +VOKEN 92.2 88.6 88.6 82.6 83.5 86.0 86.83 +iACE 91.7 88.6 89.1 82.8 85.8 86.6 87.43 +VAWI-SBS **92.9** 88.4 89.6 82.2 85.5 86.9 87.58 +VAWI-VABS 92.7 88.9 89.5 82.7 **85.8 87.2 87.80** +VAWI-LBS 92.4 **89.1 89.7 83.0** 85.6 86.9 87.78 | BERTbase | |-------------| | RoBERTabase | +None 89.2 87.5 86.2 79.0 81.4 85.4 84.78 +VOKEN 90.5 89.2 87.8 81.0 87.0 86.9 87.06 +iACE 91.6 89.1 **87.9** 82.6 87.7 86.9 87.63 +VAWI-SBS 91.4 89.4 87.7 82.2 88.2 87.7 87.76 +VAWI-VABS **91.7** 89.1 **87.9 82.6** 88.3 88.1 87.95 +VAWI-LBS 91.6 **90.6 87.9** 82.4 **88.5 88.3 88.21** 2019), a 5-way multiple choice QA dataset that requires commonsense knowledge; (3) Text generation: we select CommonGen (Lin et al., 2019b), a constrained text generation task about generative commonsense reasoning. (4) Cross-modal reasoning: we select SNLI-VE (Xie et al., 2019), to evaluate the capacity of predicting whether the image semantically entails the text. Baseline Models. We compare our approach with the following baselines, including pre-trained language models (PLMs), visual-language pre-trained models (VL-PTMs), and visually-augmented pretrained language modes (VaLMs). (1) **PLMs**: We choose BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), BART (Lewis et al., 2020), T5 (Raffel et al., 2020) as the PLM backbones, and directly fne-tune them as baselines. (2) **VL-PTMs**: We select ALBEF (Li et al., 2021), BLIP (Li et al., 2022), and CLIP (Radford et al., 2021), which have been pre-trained on large-scale image-text pairs. (3) **VaLMs**: we select VOKEN (Tan and Bansal, 2020) and iACE (Lu et al., 2022), which introduce the visual information into PLMs by pre-training on retrieved images and fne-tuning on generated images, respectively. Implementation Details. We implement all methods based on Huggingface Transformers (Wolf et al., 2020). For all baselines, we set their hyperparameters according to their papers. In our approach, we leverage the text encoder of CLIP (ViTB/32) to implement the learnable model-based VHwords extractor and generate the visual representations of VH-words in the visual knowledge augmentation module. The hidden size of visual representations is set to 512. For different NLP tasks, we tune the number of visually hungry words in {2, 3, 4, 5}. During fne-tuning, we perform parametereffcient tuning on T5-3b and BART-Large, and full-parameter tuning on other PLMs. For all tasks and all backbones, we utilize Adam as the optimizer, set the learning rate to 2e-5, weight decay to 0.01, and a linear warmup for the frst 6% steps. For GLUE, GommonGen, and SNLI-VE datasets, we fne-tune our model for 3 epochs with a batch size of 32. For CommonsenseQA, we tune our model for 10 epochs with a batch size of 32. We use the cross-entropy loss for classifcation and the mean squared error loss for regression. ## 4.2 Main Experimental Results In this part, we conduct a series of experiments on NLU, commonsense reasoning, text generation, and cross-modal commonsense reasoning tasks. Evaluation on NLU Tasks. We present the experimental results of different methods on 6 NLU tasks in Table 1. First, we observe that VL-PTMs perform worse than PLMs, a possible reason is that | Base Model | Method | CommonsenseQA-3k | CommonsenseQA | | | | | | | |--------------|----------|--------------------|-----------------|-------|-------|-------|-------|-------|-------| | 5% | 10% | 20% | 100% | 5% | 10% | 20% | 100% | | | | +None | 41.88 | 46.04 | 50.58 | 61.88 | 44.88 | 50.04 | 57.08 | 67.90 | | | RoBERTabase | +Images | 42.37 | 48.09 | 52.81 | 64.22 | 45.72 | 51.17 | 58.96 | 69.64 | | +VAWI-SBS | 42.94 | 49.27 | 53.97 | 65.10 | 46.51 | 52.44 | 59.87 | 71.01 | | | +None | 48.39 | 56.30 | 59.06 | 74.19 | 51.24 | 59.95 | 65.52 | 76.65 | | | RoBERTalarge | +Images | 49.55 | 57.78 | 61.29 | 75.61 | 52.18 | 60.93 | 66.08 | 78.39 | | +VAWI-SBS | 50.27 | 58.17 | 62.22 | 76.54 | 52.98 | 61.97 | 67.40 | 79.19 | | | +None | 70.16 | 73.02 | 75.04 | 81.81 | 71.99 | 75.27 | 77.72 | 82.40 | | | T5-3B | +Images | 70.96 | 73.60 | 75.91 | 82.40 | 72.87 | 76.17 | 78.71 | 83.64 | | VAWI-SBS+PET | 71.52 | 74.19 | 76.49 | 83.61 | 73.58 | 73.58 | 79.66 | 84.56 | | Table 2: Performance comparison on CommonsenseQA-3k and CommonsenseQA with different amounts of training data. We report the average performance on the dev set over three runs, and the **BEST** results are highlighted in bold. *+Images* denotes that we add retrieved images about the VH-words using web search engines, and encode them via CLIP-ViT. | Method | Base Model | BLUE-3 | BLUE-4 | METOR | Rouge-L | CIDER | SPICE | |---------------|--------------|----------|----------|---------|-----------|---------|---------| | +None | 42.80 | 32.42 | 31.36 | 57.57 | 16.56 | 32.94 | | | +Images | 42.67 | 32.67 | 32.12 | 57.46 | 16.78 | 32.81 | | | BART-large | +VAWI-SBS | 44.56 | 34.17 | 32.47 | 58.46 | 17.23 | 33.67 | | +VAWI-SBS+PET | 43.12 | 33.76 | 32.20 | 58.12 | 16.91 | 33.17 | | | +None | 45.92 | 35.92 | 33.02 | 58.57 | 17.71 | 33.51 | | | +Images | 45.69 | 35.50 | 33.55 | 58.94 | 17.51 | 32.91 | | | T5-3b | +VAWI-SBS | 47.67 | 37.54 | 33.41 | 59.94 | 18.34 | 34.67 | | +VAWI-SBS+PET | 47.40 | 37.36 | 33.71 | 59.78 | 18.18 | 34.17 | | Table 3: Performance comparison on CommonGen. We also show the performance of parameter-effcient tuning of our approach, denoted as *+PET*. The **BEST** results are highlighted in bold. they have been continually pre-trained on largescale image-text pairs, which may cause the catastrophic forgetting problem. Second, VaLMs (*i.e.,* VOKEN, iACE, and VAWI) achieve better performance over PLMs. As VaLMs infuse external visual knowledge into the PLMs, they can help the PLMs better understand the background knowledge of some words (*e.g.,* color, shape, and size of objects). Between the two VaLM baselines, iACE is slightly better. This is because iACE is enhanced based on VOKEN and incorporates an image generation model, so it produces more visual information to utilize. However, the generated images inevitably contain noise and redundant information, which limits the performance gain of iACE. Finally, by comparing our approach with all baselines, it is obvious that VAWI performs consistently better than them on the six datasets. In our approach, we adopt an effcient and effective way that augments the visually-augmented representations using the text encoder of CLIP to encode the VH-words from the input text. Benefting from pretraining on large-scale image-text pairs, the text encoder of CLIP has been well aligned with the semantic space of images, so that it can generate high-quality visually-augmented representations of the VH-words to enrich them. Such a way not only saves the costs of time and computation but also reduces the infuence of inevitable noise from retrieved or generated images. Additionally, among three VH-words extraction strategies, LBS slightly outperforms others in most NLU tasks. The reason is that LBS incorporates a learnable model-based strategy to select the VH-words. Such a way can adaptively extract proper VH-words with the consideration of the intrinsic knowledge of the PLMs. However, LBS will increase the computation cost due to its involved learnable VH-words extractor layer. Therefore, for effciency, in the following experiments, we utilize the SBS strategy in our approach for comparison. Evaluation on Commonsense Reasoning Tasks. Following existing works (Lin et al., 2019a), we | Method | SNLI-VE | | | | |-----------------|-----------|-------|-------|-------| | 10% | 20% | 50% | 100% | | | ALBEF | 65.46 | 67.52 | 75.47 | 80.91 | | ALBEF+VAWI +SBS | 65.94 | 68.23 | 76.14 | 81.64 | also rely on a rule-based strategy to extract the examples containing visible objects, to construct a new dataset called CommonsenseQA-3K. It consists of 2,903 and 341 examples in the training set and dev set, respectively. Based on the CommonsenseQA and CommonsenseQA-3k, we also report the results with different amounts of training data, to further evaluate the performance of different methods in the few-shot setting. As shown in Table 2, we can also see that with the help of the visual information from either retrieved images or our VAWI-SBS, the performance of PLMs can be improved signifcantly. It indicates that visual information is indeed helpful to improve PLMs for understanding commonsense knowledge. Besides, our approach outperforms the method using retrieved images from search engines. Our approach omits the image retrieval process due to its inevitably involved noise, and relies on the text encoder of CLIP to augment the visual representations. Such a way can guarantee the relevance between the augmented visual knowledge and the text input, reducing the infuence of retrieved noisy images and redundant information. Furthermore, we also perform parameter-effcient tuning on T53B-encoder with our approach and boost its performance. It shows that our approach is able to be applied to large-scale PLMs to meet their thirst for visual information. Evaluation on the Text Generation Task. As shown in previous experiments, it is useful to improve the performance of VAWI on commonsense reasoning and nature language understanding tasks. Here, we would like to study the effectiveness of our approach on the text generation task (*i.e.,* CommonGen) using large PLMs. As shown in Table 3, our model VAWI also consistently boosts the performance of BART-Large and T5-3b among all metrics. It further shows that our approach can also improve PLMs on the text generation task. As a comparison, we can see that the retrieved images are not very helpful and even cause performance degradation. The reason may be that the text generation task is more sensitive to the inevitable noise from the retrieved images. Finally, the parameter-effcient tuning strategy of our approach also achieves comparable performance with the full-parameter tuning. It indicates that our parameter-effcient strategy is able to effciently optimize the parameters of large-scale PLMs, and shows a promising future to apply our approach to much larger PLMs, *e.g.,* GPT-3. Evaluation on the Cross-modal Commonsense Reasoning Task. To verify the generality of our method, we further implement our VAWI on a VLPTM (*i.e.,* ALBEF (Li et al., 2021)), and conduct experiments on a cross-modal reasoning dataset, SNLI-VE. Concretely we implement our approach on ALBEF by inserting the visually-augmented representations after the VH-words embeddings of the text encoder before the multimodal encoder, and keeping others unchanged. As shown in Table 4, our VAWI can also improve the performance of ALBEF using different amounts of training data. It further shows the generality of our approach in VL-PTMs, as it can also provide rich information to enhance the text encoder of VL-PTM, helping it better perform cross-modal reasoning. ## 4.3 Ablation Study In this part, we conduct a series of experiments to verify whether the improvement of our approach derives from the augmented visual knowledge about the VH-words. More ablation studies are shown in Appendix A. The Effect of the Source of Visual Representations. We frst propose three variants that incorporate powerful PLMs, *i.e.,* RoBERTa-base, T5- Large, and T5-3b respectively, to replace the text encoder of CLIP in our framework. We also replace the generated visual representations from the text encoder of CLIP with random noise, to investigate the importance of the visual representations. As shown in Table 5, we can see that our approach is better than all the variants, even T5-3b with billionscale parameters. It indicates that CLIP-base is more effective to augment visual knowledge to improve the performance of PLMs. Besides, our approach also outperforms the variant using random noise as the visual representation, showing the worse performance among all the variants. It also shows the importance of visual representations, as they indeed contain the visual knowledge that the | Source of visual representation (Params) | CSQA-3k | CSQA | SST-2 | QQP | STS-B | QNLI | |--------------------------------------------|-----------|--------|---------|-------|---------|--------| | Random Noise (0M) | 61.59 | 66.78 | 89.13 | 86.27 | 85.13 | 87.22 | | RoBERTa-large (355M) | 61.18 | 67.17 | 89.43 | 86.53 | 85.60 | 87.77 | | T5-large-encoder (375M) | 62.21 | 67.87 | 89.71 | 86.67 | 86.40 | 87.94 | | T5-3b-encoder (1500M) | 63.10 | 68.42 | 90.24 | 86.96 | 86.93 | 88.21 | | CLIP-base (52M) | 65.10 | 71.07 | 91.41 | 87.72 | 87.67 | 89.40 | Table 5: Performance comparison of different sources of visual representation in our approach. The base model is RoBERTa-base. Table 6: Performance comparison of visual representations from different VL-PTMs in our approach. The base model is RoBERTa-base. ## Plm Is Hungry For. | The text encoder of different VL-PTMs (Params) | CSQA-3k | SST-2 | QQP | |--------------------------------------------------|-----------|---------|-------| | Random Noise (0M) | 61.59 | 89.23 | 86.21 | | ALBEF (110M) | 63.34 | 90.72 | 87.17 | | CLIP-base (52M) | 65.10 | 91.41 | 87.72 | | UniCL-base (52M) | 65.98 | 91.75 | 88.07 | | CLIP-large (123M) | 66.27 | 92.10 | 88.31 | The Effect of the Stronger VL-PTMs. In our work, we choose CLIP-base to enhance PLMs, as it has been pre-trained on a large-scale imagetext dataset. Generally, a stronger VL-PTM would be more promising to further improve the performance. Here, we replace our CLIP-base model with some stronger VL-PTMs, *e.g.,* ALBEF (Li et al., 2021), UniCL-base (Yang et al., 2022), and CLIP-large. Concretely, ALBEF leverages more pre-training tasks (*e.g.,* MLM, ITM, and ITC), UniCL utilizes more high-quality pre-training data, and CLIP-large increases the scale of model parameters. We evaluate the above variations on CSQA3k, QQP, and SST-2, and the results are shown in Table 6. We can see that UniCL and CLIP-large outperform CLIP-base. It indicates that the VLPTMs with the larger scale of model parameters or more high-quality pre-training data are more capable of augmenting useful visual knowledge for PLMs. Considering the effciency, CLIP-base is also a good choice in our approach, and we will investigate more proper VL-PTMs in the future. ## 5 Conclusion In this paper, we proposed a general visuallyaugmented fne-tuning approach that can be applied to a variety of PLMs and NLP tasks, without using any retrieved or generated images, namely **VAWI**. Specifcally, we frst identifed and extracted the visually-hungry words (VH-words) from input text via a token selector, where three different methods have been proposed, including syntax-, attention- and learning-based strategies. Then, we adopted a fxed VL-PTM text encoder to generate the visually-augmented representations of these VH-words. As it has been pre-trained by visuallanguage alignment tasks on the large-scale corpus, it is capable of injecting visual semantics into the aligned text representations. Finally, we transformed the visually-aligned features into visuallyaugmented features by reformulation layer based on VH-words, and inserted them into PLMs to enrich the visual semantics of word representations in PLMs. Experimental results on 10 NLP tasks show that our approach can consistently improve the performance of BERT, RoBERTa, BART, and T5 at different scales, and outperform several competitive baselines signifcantly. Besides, the visual prompts of our framework can also be used for parameter-effcient tuning, which can boost the performance of large language models, such as T5-3b. ## Limitations An important limitation of our approach VAWI is the need for extracting visually-hungry words (VHwords) as the trigger to inject visual knowledge into PLMs. In real-world applications, it is hard to obtain the annotations of VH-words. Therefore, we propose three VH-words extraction strategies. However, the three strategies may be not always proper for all NLP tasks, and we rely on the experimental results to select the best one among them. Besides, we adopt the text encoder of CLIP as the VL-PTM for generating the visually-aligned representation. As a pre-trained model, CLIP also may contain biases learned from the pre-training corpus, which may result in improper biased prediction on some NLP tasks. ## Acknowledgement This work was partially supported by National Natural Science Foundation of China under Grant No. 62222215, Beijing Natural Science Foundation under Grant No. 4222027, and Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098. Xin Zhao is the corresponding author. ## References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, and Pascale Fung. 2022. Enabling multimodal generation on clip via vision-language knowledge distillation. *arXiv preprint arXiv:2203.06386*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*. Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 25–30. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. *arXiv* preprint arXiv:1611.01144. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unifed vision-language understanding and generation. *arXiv preprint arXiv:2201.12086*. Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34:9694–9705. Xiang Lisa Li and Percy Liang. 2021. Prefx-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190. Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, and James Zou. 2022. Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. *arXiv preprint* arXiv:2203.02053. Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019a. Kagnet: Knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151. Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2019b. Commongen: A constrained text generation challenge for generative commonsense reasoning. *arXiv preprint arXiv:1911.03705*. Xiao Liu, Da Yin, Yansong Feng, and Dongyan Zhao. 2022. Things not written in text: Exploring spatial commonsense from visual signals. *arXiv preprint* arXiv:2203.08075. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. *Advances in neural information processing systems*, 32. Yujie Lu, Wanrong Zhu, Xin Eric Wang, Miguel Eckstein, and William Yang Wang. 2022. Imaginationaugmented natural language understanding. *arXiv* preprint arXiv:2204.08535. Cory Paik, Stéphane Aroca-Ouellette, Alessandro Roncone, and Katharina Kann. 2021. The world of an octopus: How reporting bias infuences a language model's perception of color. *arXiv preprint* arXiv:2110.08182. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872– 1897. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763. PMLR. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unifed text-to-text transformer. *ArXiv*, abs/1910.10683. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. Vl-bert: Pre-training of generic visual-linguistic representations. *arXiv* preprint arXiv:1908.08530. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4149–4158. Hao Tan and Mohit Bansal. 2020. Vokenization: Improving language understanding with contextualized, visual-grounded supervision. *arXiv preprint* arXiv:2010.06775. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. *arXiv preprint* arXiv:1804.07461. Weizhi Wang, Li Dong, Hao Cheng, Haoyu Song, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. 2022. Visually-augmented language modeling. arXiv preprint arXiv:2205.10178. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations. Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fne-grained image understanding. arXiv preprint arXiv:1901.06706. Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Bin Xiao, Ce Liu, Lu Yuan, and Jianfeng Gao. 2022. Unifed contrastive learning in image-text-label space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19163– 19173. Chenyu Zhang, Benjamin Van Durme, Zhuowan Li, and Elias Stengel-Eskin. 2022. Visual commonsense in pretrained unimodal and multimodal models. arXiv preprint arXiv:2205.01850. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223. Wanrong Zhu, An Yan, Yujie Lu, Wenda Xu, Xin Eric Wang, Miguel Eckstein, and William Yang Wang. 2022. Visualize before you write: Imaginationguided open-ended text generation. *arXiv preprint* arXiv:2210.03765. ## A Ablation Study A.1 Ablation Study On Visual Knowledge Augmentation The Effect of the Pre-trained Dataset of VLPTMs. We notice that the pre-training dataset of VL-PTMs is different from PLMs. Here, we investigate whether the captions or images from the large-scale image-text pairs contribute more to the performance gain of our approach. To verify it, we pre-train a new PLM only using the captions data. Following the setting of ALBEF, we utilize the pretrained parameters of BERT to initialize this model and only extract the captions from the pre-training data of ALBEF (14.5M sentences in total). After pre-training on these captions until convergence, we utilize this model to replace CLIP-base in our approach and keep other settings unchanged. We conduct experiments on commonsense reasoning and NLU tasks to evaluate its effectiveness for augmenting visual knowledge. As shown in Table 7, we can see that such a variation underperforms ALBEF and our approach, and even leads to performance degradation on the CSQA task. It indicates that during pre-training the image data is an important resource for learning visual knowledge in VL-PTMs. Only text data (*i.e.,* captions) can not provide suffcient visual knowledge that PLMs are hungry for. Therefore, after pre-learned on largescale text-image pairs, CLIP can absorb the useful visual knowledge from the images and inject them into PLMs in our approach. It further indicates that the improvement of our method is due to the involvement of the visual information about the VH-words. ## A.2 Ablation Study On Visually-Enhanced Fine-Tuning Different Insertion Positions of Visual Representations. In our visually-enhanced fne-tuning framework, we insert the visual representation of the VH-word after its original word embedding. To verify its effectiveness, we propose three variants of it that do not insert, insert all visual representations of VH-words before and after the input text, respectively. As shown in Table 8, we can observe that all these variants would lead to a performance decrease. It demonstrates that a proper position to insert the visual representation is important for the utilization of augmented visual representations. By inserting them after the word embeddings of corresponding VH-words, PLMs can effectively aggregate the visual representations to enrich the word representations, leading to better performance on downstream NLP tasks. ## B Further Analysis The Frozen CLIP's Text Encoder. In the experiment presented in Table 1, we directly fnetuned CLIP and the results indicate that the performance of VL-PTMs' text encoder is unsatisfactory when directly fne-tuned on NLP tasks. In our VAWI, we fx the model parameters of CLIPbase's text encoder to preserve the visual knowledge. Hence we also conduct experiments on four NLU tasks from GLUE using frozen CLIP. Specially, we fx CLIP-base's text encoder and only fne-tuned added 4 transformer layers above it. As shown in Table 9, we can see that CLIP's performance under this setting is better than that of directly full-parameter fne-tuning CLIP and also underperforms RoBERTa and BERT. It indicates that fxing CLIP is more suitable for NLP tasks, and shows the rationality of VAWI settings that always fx the CLIP's text encoder in VAWI to preserve CLIP's knowledge. The Computation Latency of the Proposed Methods. In our VAWI, we fx the model parameters of CLIP-base to preserve the visual knowledge. Such a way can also decrease the computation costs during training and inference. To verify it, we report the mean training and inference latency per batch on the CSQA-3k dataset of our method and baselines on RTX3090 GPU, where all these methods utilize RoBERTa-base as the backbone. As shown in Table 10, we can see that our proposed VAWI-SBS and VAWI-VABS would not increase the latency too much. For VAWI-LBS, as it requires a PLM and a VL-PTM to adaptively select the VH-words, it will relatively increase the latency. As shown in Table 1, we can see that all the three variants achieve comparable performance in 6 NLU datasets. Therefore, it is more effcient and effective to select the SBS and VABS variations in our approach. Despite it, we can see that all our variants own less latency than iACE, since our approach does not require a time-consuming image generation process. And as shown in Table 1, our approach can also achieve better performance. The Effect of the Improper Visually-hungry Words. To analyze how the quality of the VH- | The text encoder of different VL-PTMs (Params) | CSQA-3k | CSQA | SST-2 | STS-B | MNLI | |--------------------------------------------------|-----------|--------|---------|---------|--------| | None | 61.59 | 67.90 | 89.23 | 85.46 | 79.06 | | BERT pre-trained on captions (110M) | 62.17 | 67.56 | 89.58 | 85.73 | 79.24 | | ALBEF (110M) | 63.64 | 68.47 | 90.72 | 87.17 | 80.86 | | CLIP-base (52M) | 65.10 | 71.07 | 91.41 | 87.73 | 82.27 | | Insert Positions | CSQA-3k | | | | |--------------------|-----------|-------|-------|-------| | 5% | 10% | 20% | 100% | | | Not insert | 41.88 | 46.04 | 50.58 | 61.88 | | Before input text | - | 39.77 | 44.86 | 57.47 | | After input text | - | 40.23 | 45.67 | 58.08 | | After the VH-words | 42.94 | 49.27 | 53.97 | 65.10 | Table 7: Performance comparison of visual representations pre-trained using different pre-training data in our approach. The base model is RoBERTa-base. Table 8: Performance comparison w.r.t. different insertion positions of visual representations. The base model is RoBERTa-base. | SST-2 | QNLI | QQP | STS-B | |---------|--------|-------|---------| | Method | Training Time (s) | Inference Time (s) | |--------------|---------------------|----------------------| | RoBERTa-base | 0.506 | 0.182 | | +Voken | 0.506 | 0.182 | | +iACE | 1.138 | 0.512 | | +VAWI-SBS | 0.587 | 0.241 | | +VAWI-VABS | 0.680 | 0.308 | | +VAWI-LBS | 0.893 | 0.486 | CLIP-base 73.3 74.5 72.8 73.8 Fixed CLIP-base 75.1 76.9 73.7 75.2 Table 9: The effect of fxed CLIP's text encoder. words affects the performance of our approach, we further conduct the experiments on CSQA-3K and two NLU tasks SST-2 and QQP from GLUE, to show the effect of insuffcient VH-words on our model performance. After extracting the VHwords, we remove part of them and only randomly sample 0%, 20%, and 50% VH-words for augmentation. As shown in Table 11, we can see that with the decreasing of the sampling probability, the performance of our approach degrades gradually. It indicates that not enough VH-words would degrade the performance of our approach. The Number of VH-words. Our approach has an important hyper-parameter required to tune, such as the number of VH-words. VH-words can supply visual knowledge that PLMs may be hungry for. Here, we would like to study whether more VH-words are better to improve performance. We conduct experiments on the QQP and CSQA-3K datasets using RoBERTa-base as the backbone, and present the results in Figure 2. We can see that with the increase of the number of VH-words, the performance gain of our approach frst increases and then decreases. A possible reason is that too many VH-words may also introduce noisy or redundant information (*e.g.,* not very relevant words), Table 10: The computation latency during training and inference. Table 11: The effect of the improper visually-hungry words. The base model is RoBERTa-base. | Correct VH-words proportions | CSQA-3k | SST-2 | QQP | |--------------------------------|-----------|---------|-------| | 0 % | 61.60 | 89.57 | 87.63 | | 20 % | 62.17 | 89.44 | 87.40 | | 50 % | 64.22 | 91.73 | 89.20 | | 100 % | 65.10 | 92.93 | 89.74 | | None | 61.88 | 89.23 | 86.21 | which would also infuence the fne-tuning performance. Instead, it is also more effcient to select a few VH-words (*e.g.,* two words for CSQA-3k) for deploying our approach in large-scale PLMs. ## Case Study Of Extracted Visually-Hungry Words. In this part, we show the VH-words extracted by syntax-, attention- and learning-based strategies in Table 12, Table 13, Table 14 and Table 15. We can see that the three strategies would extract slightly different VH-words. The reason is that the three strategies are based on different techniques to identify the VH-words. As we can see, the cases show that most of the extracted VH-words by our strategies are generally related to some visual semantics, e.g., spider, two eyes. Although such VH-words can not perfectly cover all the visual semantics, they actually contain most of the important words that the PLMs may be hungry for, *e.g.,* red and yellow. Besides, we can also see that the VH-words extracted by our three strategies may not perfectly align with human judgment. In fact, it is also hard for humans to determine proper rules to identify VH-words, *e.g.,* people, human, and water. In addition, as the learned knowledge of PLM is a black ![13_image_0.png](13_image_0.png) box, it is also diffcult for humans to judge the usefulness of our extracted VH-words for PLMs. The Interpretability of Augmented Embeddings. In this part, we show how our augmented embeddings infuse visual knowledge into the PLM. Concretely, we show the attention distributions of a PLM (*i.e.,* RoBERTa-base) in the last few layers before and after infusing visually-augmented representations on CSQA. As shown in Table 16, we can see that the [CLS] tokens pay more attention to the VH-words and their visually-augmented representations, and the VH-words also pay more attention to their visually-augmented representations. It shows that the injected visually-augmented representations provide useful knowledge, which guides the PLM to focus on more important tokens and also improves the representations of the VH-words and the [CLS] token. Input Input sentence: Unlike a spider and his many sight seers, people only have what? two eyes. Syntax-based Strategy Unlike a spider and his many sight seers, people only have what? two eyes Visually-enhanced Attention Based Strategy Unlike a spider and his many sight seers, people only have what? two eyes. Learning-based Strategy Unlike a spider and his many sight seers, people only have what? two eyes. Table 12: The frst instance from the CommonsenseQA dataset. The extracted visually-hungry words are highlighted in green. Input Input sentence: Where on a river can a human hold a cup upright to catch water on a sunny, clear day? waterfall. Syntax-based Strategy Where on a river can a human hold a cup upright to catch water on a sunny, clear day? waterfall. Visually-enhanced Attention Based Strategy Where on a river can a human hold a cup upright to catch water on a sunny, clear day? waterfall. Learning-based Strategy Where on a river can a human hold a cup upright to catch water on a sunny, clear day? waterfall. Table 13: The second instance from the CommonsenseQA dataset. The extracted visually-hungry words are highlighted in green. Input Input sentence: the mesmerizing performances of the leads keep the flm grounded and keep the audience riveted. Syntax-based Strategy the mesmerizing performances of the leads keep the flm grounded and keep the audience riveted. Visually-enhanced Attention Based Strategy the mesmerizing performances of the leads keep the flm grounded and keep the audience riveted. Learning-based Strategy the mesmerizing performances of the leads keep the flm grounded and keep the audience riveted. Table 14: The instance from the SST-2 dataset. The extracted visually-hungry words are highlighted in green. Input Input sentence: How do I sell dry Moringa leaves powder in Indian market? Can I use the moringa leaves that are already starting to turn yellow or yellowish? Syntax-based Strategy How do I sell dry Moringa leaves powder in Indian market? Can I use the moringa leaves that are already starting to turn yellow or yellowish? Visually-enhanced Attention Based Strategy How do I sell dry Moringa leaves powder in Indian market? Can I use the moringa leaves that are already starting to turn yellow or yellowish? Learning-based Strategy How do I sell dry Moringa leaves powder in Indian market? Can I use the moringa leaves that are already starting to turn yellow or yellowish? Table 15: The instance from the QQP dataset. The extracted visually-hungry words are highlighted in green. ![15_image_0.png](15_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6. ✓ A2. Did you discuss any potential risks of your work? The potential risks can be found in Section 6. ✓ A3. Do the abstract and introduction summarize the paper's main claims? The main claims can be found in Abstract and Section 5. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We introduce the dataset and pre-trained models used and the baselines in Section 4. ✓ B1. Did you cite the creators of artifacts you used? We cite the creators of artifacts we used in Section 4.1. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We use the default license of all artifacts in Section 4. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We use all artifacts with their default intended use in Section 4. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Except for the dataset we created ourselves in Section 4, the relevant statistics is not reported in any other datasets we use. The dataset we use exactly follows the amount and train/test/dev splits of data in the original dataset paper. ## C ✓ **Did You Run Computational Experiments?** Our computational experiments for evaluating our method can be found in Section 4. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We report the number of parameters in the models used in a few experiments, which can be found in Section 4 and Appendix. In addition, we report the total computation budget in Section C of the Appendix. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? We talk about them in Section 4.1. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We report descriptive statistics about our results, which can be found in Section 4. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We report these important settings in Section 4. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
nourbakhsh-etal-2023-using
Using counterfactual contrast to improve compositional generalization for multi-step quantitative reasoning
https://aclanthology.org/2023.acl-long.834
In quantitative question answering, compositional generalization is one of the main challenges of state of the art models, especially when longer sequences of reasoning steps are required. In this paper we propose CounterComp, a method that uses counterfactual scenarios to generate samples with compositional contrast. Instead of a data augmentation approach, CounterComp is based on metric learning, which allows for direct sampling from the training set and circumvents the need for additional human labels. Our proposed auxiliary metric learning loss improves the performance of three state of the art models on four recently released datasets. We also show how the approach can improve OOD performance on unseen domains, as well as unseen compositions. Lastly, we demonstrate how the method can lead to better compositional attention patterns during training.
# Using Counterfactual Contrast To Improve Compositional Generalization For Multi-Step Quantitative Reasoning ## Armineh Nourbakhsh Language Technologies Institute, Carnegie Mellon University J.P. Morgan AI Research anourbak@cs.cmu.edu ## Sameena Shah J.P. Morgan AI Research sameena.shah@jpmorgan.com ## Carolyn Rosé Language Technologies Institute, Carnegie Mellon University cprose@cs.cmu.edu ## Abstract In quantitative question answering, compositional generalization is one of the main challenges of state of the art models, especially when longer sequences of reasoning steps are required. In this paper we propose CounterComp, a method that uses counterfactual scenarios to generate samples with compositional contrast. Instead of a data augmentation approach, CounterComp is based on metric learning, which allows for direct sampling from the training set and circumvents the need for additional human labels. Our proposed auxiliary metric learning loss improves the performance of three state of the art models on four recently released datasets. We also show how the approach can improve OOD performance on unseen domains, as well as unseen compositions. Lastly, we demonstrate how the method can lead to better compositional attention patterns during training. ## 1 Introduction Enterprise documents such as reports, forms, and analytical articles often include quantitative data in tabular form. The data in these tables can be self-contained, but more commonly the surrounding text provides more context that is necessary to understand the content. Answering questions over these hybrid tabular/text contexts requires reasoning that combines verbal and quantitative semantics. Question answering over quantitative tabular/text data has gained recent traction with the release of datasets such as FinQA (Chen et al., 2021b), TAT-QA (Zhu et al., 2021), and HiTab (Cheng et al., 2022). Table 2 shows an example of a question that requires quantitative reasoning to derive the answer. Given the question and the tabular context, the output is a single-step program that leads to the final answer of -20. A major challenge that state of the art models face is compositional generalization (Montague, | # steps in | % wrong | % wrong | % wrong order | |--------------|-------------|-----------|-----------------| | output | operator(s) | operands | of operands | | 1 step | 39.07 | 53.64 | 7.28 | | 2 steps | 46.75 | 46.75 | 6.50 | | 3 steps | 56.47 | 29.41 | 14.12 | | >= 4 steps | 52.00 | 40.00 | 8.00 | Table 1: Share of FinQANet errors due to the selection of wrong operators or operands when applied to the FinQA dataset (Chen et al., 2021b), broken down by the number of steps in the output. Note that the numbers are based on program accuracy, which accounts for each type of error separately, resulting in the each row summing up to 1. 1973), especially when the number of reasoning steps grows (Chen et al., 2021b). In the context of quantitative QA, compositional generalization refers to the model's ability to generalize to new compositions of previously seen elements. As an example, if the model has encountered training examples that demonstrate calculations for "growth rate" and "percent change", we would like it to be able to come up with a reasonable hypothesis as to how to calculate "percent growth" or "rate of change". Table 1 demonstrates how this challenge becomes more difficult as the number of reasoning steps grows. For questions that require longer chains of reasoning, the model learns spurious patterns and unsuccessfully tries to leverage these memorized patterns to solve new problems. The Table also shows that as the number of steps grows, generating the wrong operator becomes a more dominant mistake than selecting the wrong operand. Not only is this error more dominant, but it can also have a more destructive impact on the chain of reasoning, as it can derail the model's hidden representations from that point onward. As an example, our analysis of the FinQANet model (Chen et al., 2021b) output showed that if the model generates an incorrect operator, it is about 30% more likely to commit other errors in the following 14930 ![1_image_0.png](1_image_0.png) steps compared to when the model generates an incorrect operand. In this paper, we propose CounterComp, an approach that can enhance compositional learning in multi-step quantitative QA. We take inspiration from the symbolic composition of arithmetic operations, and their correspondence to natural language phrases. Building on the work on attention alignments from previous studies, we propose an auxiliary metric learning loss that is focused on specific components of the input and output. Our sampling strategy is based on counterfactual scenarios. This means that the model learns proper representations for each component based on what-if scenarios. To the best of our knowledge, this is the first study that successfully applies component-wise counterfactual sampling as a metric learning strategy. We show how, when state of the art models are augmented with our auxiliary metric learning loss, they exhibit better performance in cases where multistep reasoning is required. CounterComp outperforms current baselines on four recently released datasets, and show stronger performance on OOD samples. ## 2 Related Work The typical architecture of a quantitative QA model is composed of a retriever and a generator (Jurafsy and Martin, 2021). The retriever identifies the particular context where the answer might be found. Since the context can be a mix of table cells and | Question | What was the net change in revenue from 2019 to 2020? Metric ($M) 2018 2019 | 2020 | | | |----------------|-------------------------------------------------------------------------------|--------|----|----| | Tabular | Operating | | | | | context | expenses | 35 | 29 | 30 | | Revenue | 70 | 80 | 60 | | | Verbalized | 2019 revenue was $80M. | | | | | facts | 2020 revenue was $60M. | | | | | Output program | subtract(80, 60) | | | | | Answer | -20 | | | | sentences, often a tabular encoder (Herzig et al., 2020) or verbalizer (Chen et al., 2021b) is used to convert the cells into a natural language sequence. The retrieved context is referred to as retrieved facts. Next, the generator uses the question along with the facts to generate the output in a step by step fashion. In multi-step QA, the generator often combines a recurrent module with an attention mechanism (Chen et al., 2021b), as illustrated in top half of Figure 1. The output can be assessed in terms of program accuracy as well as execution accuracy. Our study is focused on improving program accuracy by encouraging compositional generalization in the generator. There are two common approaches to improving compositional generalization. Attention alignment models encourage explicit alignments between natural language utterances (e.g. "rate of change") and corresponding symbolic math operations (e.g. subtraction followed by division). Methods informed by counterfactuals use what-if scenarios to generalize to a wider variety of compositions and reduce the effect of memorization. ## 2.1 Attention Alignments Yin et al. (2021) showed that additional supervision can be used to promote explicit alignments between components in the input and in the output. They added a regularization loss that encourages the cross-attention module to adjust its attention weights according to gold alignments. Using as few as 16 examples, their model was able to improve generalization in a semantic parsing task. CompAQT (Nourbakhsh et al., 2022) extended this idea to multi-step quantitative QA. Instead of using additional supervision, it used natural language heuristics to create noisy alignment labels between input tokens and output symbols. The additional alignment loss improved the performance of three baseline models on multi-step reasoning tasks for four datasets. ## 2.2 Methods Informed By Counterfactuals The success of alignment-based methods is limited by the fact that by heavily discouraging memorization, they underperform in settings where memorization can be helpful (Oren et al., 2020). To strike a balance between memorization and generalization, one approach is to generate new training examples that cover important semantic gaps in the training data. This is reminiscent of how adversarial training can help better define the semantic contours of compositional representations (Zhang et al., 2022). Contrastive or metric learning methods pursue a similar goal, but instead of generating new samples, they leverage existing samples within the training set (Jain et al., 2021). Counterfactual data augmentation (CAD) methods strive to achieve this by generating new samples using what-if scenarios (Zmigrod et al., 2019; Liu et al., 2021; Chen et al., 2021a). This can be done by altering a minimally sufficient set of tokens in the input such that the output class changes (Kaushik et al., 2020). There are two main challenges to creating these samples. First, it is difficult to identify the minimal set of tokens necessary to alter the output. Second, there is no guarantee that a counterfactual sample exists in the training set. To address these challenges, some studies employ human labelers (Kaushik et al., 2020) or a third party model (Huang et al., 2020). In domains like semantic parsing and quantitative QA where the output is symbolic, an alternative approach leverages the structure of the output to avoid the need for human labelers. Li et al. (2022) achieve this by intervening on the operands. Suppose that a question states "What was the net change in revenue from 2019 to 2020?" and the retriever produces two (verbalized) table cells: "2019 revenue was $80M" and "2020 revenue was $60M". The output program for this question would be: subtract(80, 60). Given the numeric nature of the operands, it's possible to generate new scenarios such as "What if 2019 revenue was $90?" with the updated output subtract(90, 60). Employing this method, Li et al. (2022) augment the TAT-QA dataset (Zhu et al., 2021) into a new dataset named TAT-HQA. They also enhance the verbal reasoning capacity of their model by offering the counterfactual scenario as a natural language prompt. Their model, named Learning to Imagine (L2I), outperforms state of the art models. As mentioned in the previous section, models that struggle with compositional generalization suffer from errors in operator selection, whereas L2I is focused on the selection of operands. In this paper, we propose CounterComp, a method that focuses on counterfactual sampling for components that indicate operators1. Using natural language constraints from previous studies, we first find components that correspond to operators versus those that correspond to operands. Next, we use an auxiliary metric learning loss with positive and negative samples chosen based on those components. This helps us avoid the complexities associated with a data augmentation approach, such as the need for creation of additional human labels. The next section lays out our problem definition in more detail. ## 3 Problem Formulation Let us consider the example provided by Table 3. Suppose Q is the question, represented as a sequence of tokens q1, · · · , qN (i.e. "what", "was", "the", *· · ·* , "2020", "?"). F is the evidence obtained by the retriever, made up of a sequence of tokens f1, · · · , fM (i.e. "2019", 1Please refer to Appendix C for a study on the use of CounterComp for operators versus operands. "revenue", "was", *· · ·* , "$60M"). The concatenation of these two sequences, i.e. Q||F, forms the input to the generator. The generator encodes Q||F using a neural language model such as RoBERTa (Liu et al., 2019), resulting in an embedding matrix U ∈ R denc×(N+M). Consistent with Chen et al. (2021b), we represent the output S as a sequence of steps s1, · · · , sL. Each step sl can be an operator (such as add or divide), or an operand. Similar to Chen et al. (2021b), our programs are modeled as rightexpanding binary trees with each operator having exactly two operands. If necessary, one or more operands are set to NONE, where NONE is a special constant. L is pre-defined as the maximum number of steps allowed. In the example from Table 3, S is: subtract, 80, 60, NONE, NONE, NONE. To generate the lth output step sl, the generator applies a cross-attention module to U , resulting in the attention weight matrix Al ∈ R 1×(N+M)and the attention output X l ∈ R K. A recurrent module then generates the hidden vector hl, which is used to produce the output step sl. $$\begin{array}{l}{\mathbf{h}_{l}=\mathrm{RNN}(\mathbf{h}_{l-1},\mathbf{X}_{l})}\\ {\mathbf{s}_{l}=\mathrm{NN}(\mathbf{h}_{l})}\end{array}$$ $$(1)$$ where NN can be any neural module that projects hl onto the simplex sl ∈ R K, from which sl can be sampled: sl = arg maxksl,k. Our goal is to encourage hlto be sensitive to the composition of the input Q||F with regards to the current output step sl. This means that hl needs to capture proper alignments between important terms in the input and the relevant operator/operand in the output. To achieve this, we pursue a metric learning approach where positive and negative samples are generated according to counterfactual scenarios. ## 3.1 Counterfactual Samples Given a training example ([Q||F] (i), S(i)), we define an *intervention target* Q(i)as a subsequence of the question tokens, i.e. Q(i) = {q (i) n ; n ∈ N (i)} where N (i) ⊆ {1, 2, · · · , N}. Suppose that changing the intervention target affects a single step in the output program S (i) = s (i) l, which we name the *intervention outcome*. Note that due to our focus on the generation of operators, we limit the intervention outcome to an operator. Since the output is composed of one operator followed by two operands followed by another operator and so on, l is selected from a limited index set: l ∈ {1, 4, 7, · · · , L−3}. In the example from Table 3, the possible indices will be 1 and 4, representing the operators subtract and NONE. Given this definition, it's possible to mine positive and negative examples for the ith training instance. A positive example ([Q||F] (i) pos, S(i) pos) is an instance for which, despite a possible intervention in the target, the outcome remains the same, i.e. Q (i) pos ̸= Q(i)and S (i) pos = S (i). A negative example ([Q||F] (i) neg, S(i) neg) is an instance for which an intervention in the target leads to a change in the outcome, i.e. Q (i) neg ̸= Q(i)and S (i) neg ̸= S (i). This allows us to define a triplet loss that encourages h (i) lto remain close to h (i) l,pos and far from h (i) l,neg with a margin of α (i): $$\mathcal{L}_{\text{triplet}}^{(i)}=$$ $$\max\{||\mathbf{h}_{l}^{(i)}-\mathbf{h}_{l,\text{pos}}^{(i)}||_{2}^{2}-||\mathbf{h}_{l}^{(i)}-\mathbf{h}_{l,\text{neg}}^{(i)}||_{2}^{2}+\alpha^{(i)},0\}\tag{2}$$ Figure 1 illustrates the sampling process for one training example. Note that this metric learning approach will only be valid if causal assumptions with regards to the intervention target are valid, i.e. the change in S (i) neg is in fact the result of the intervention in Q (i) neg and not a change in any other part of the input. In a data augmentation setting, this can be achieved by keeping the input fixed and perturbing a small segment that functions as the intervention target similar to (Kaushik et al., 2020). However, as discussed in Section 2.2. this requires additional manual labor to annotate the perturbed examples. In the next section, we describe how we impose certain constraints on the intervention target to achieve this in a self-supervised setting2. ## 4 Methodology Our goal is to identify potential positive and negative samples for the anchor ([Q||F] (i), S(i)). Suppose the anchor is the one shown in the top three rows of Table 3. *Bold italicized* tokens are redundant between the question and the fact, e.g. "revenue", "2019", and "2020". Those terms are often used by the retriever to find the correct facts. They are also used by the generator to find the correct order of operands. 2Note that the term "self-supervised" is used in this context to refer to the sampling strategy, i.e. no additional labeling is needed to generate the positive and negative samples. There are also terms that are unique to the question, i.e. "What", "the net change to", "from", and "to" (highlighted in blue). In CompAQT, the authors showed that these can be used as indicators for the operators. Lastly, there are terms that are unique to the facts, i.e. "was $80M" and "was $60M" (red italicized tokens). These can be used as indicators for the operands. We use these heuristics to guide our sampling strategy. We flag all spans in the question that do not overlap with the facts, i.e. underlined blue segments. Those spans serve as candidate intervention spans. In the example from Table 3, this results in four candidates: "What", "the net change in", "from", and "to". Next, we seek a positive and a negative example within the training set. A positive example is a sample in which, despite possible changes in the question, the operators in the output remain consistent with the operators in the anchor. Table 3 shows one such example. Several terms have been altered in the question. However, we would only focus on the changes in the candidate spans. Here, "was" has changed to "is", "net change" has changed to "difference", "from" to "between" and "to" to "and". This results in a token-level Levenshtein distance of 5 (four edits and one insertion) (Yujian and Bo, 2007). We ignore the change from "revenue" to "operating expenses" and from "2019" to "2018", because those changes have occurred outside of our candidate spans and only correspond to operands. A negative example is a sample in which exactly one output operator is altered, deleted, or added. Table 3 shows one such example. Here, the output includes a new operator divide. The question has also been altered with a token-level Levenshtein distance of 4. The given positive and negative example can now be plugged into Equation 2. Instead of a fixed margin, we use the edit distances mentioned before to dynamically adjust the margin. Let NLD(i) pos and NLD(i) neg be the *normalized, token-level Levenshtein edit distance* between the anchor and the positive example, and the negative example, respectively. We set the margin to: α (i) = 1−|NLD(i) neg − NLD(i) pos| This encourages a larger margin for cases where the anchor is equally similar to the positive and the negative examples, and the model might have a harder time picking up on the nuances of each component. ![4_image_0.png](4_image_0.png) ## 4.1 Runtime Optimization There are two runtime challenges to this proposed approach: 1) Sampling can be costly if the entire training set has to be scanned for each batch. This means an online sampling strategy cannot be used. On the other hand, an offline strategy introduces a large overhead. A hybrid approach is needed. 2) Calculating the edit distance metric is a costly operation with O(n 2) steps. To solve the first problem, we build two indices prior to training. One index groups the samples by their sequence of output operators. This index can be used to sample positive examples. The other index includes all training examples, and for each example, it includes the full list of one-step perturbations applied to its output operators. By generating all possible perturbations, we are able to find other samples whose outputs match the perturbed sequence (i.e. negative samples). For a sequence with n operators, all possible perturbations can be generated in O(n × K) time, where K is the number of possible operators3. Given the pre-generated positive and negative pools, we can also calculate and cache edit distances ahead of time. However, in practice, we realized that we could do so during training with little additional cost. This is because the edit distance is applied at the token-level4, and is limited to candidate spans, rendering it relatively fast. The decision as to whether distances should be cached or calculated on the fly depends on the average size of each pool versus the number of training steps. The algorithm outlined in Appendix B summarizes our approach. ## 5 Experiments 5.1 Datasets We use the hybrid CompAQT dataset, which is composed of four previously released datasets, namely **FinQA** (Chen et al., 2021b), **TAT-QA** (Zhu et al., 2021), **HiTab** (Cheng et al., 2022), and MULTIH**IERTT** (Zhao et al., 2022). The authors filtered these four datasets down to QA pairs that require single or multi-step quantitative reasoning. They also processed the tables and outputs in all four datasets to match the FinQA format. ## 5.2 Baselines We apply our proposed auxiliary loss to three baselines: 1) **FinQANet**, originally developed for the FinQA dataset (Chen et al., 2021b). 2) TAGOP, originally developed for the TAT-QA dataset (Zhu et al., 2021). 3) **Pointer-Verbalizer Network** (PVN), originally proposed by Nourbakhsh et al. (2022). We also apply the **CompAQT** loss to each model as a secondary baseline in order to determine how CounterComp compares to an attentionalignment strategy. ## 5.3 Sampling Success Rate Another possible concern is that our sampling strategy might be limited, in that positive and negative samples might not always be available in the training set, or that limited availability of samples might bias the training process. To remediate the problem of unavailable samples, when a positive sample is 3Since we follow Chen et al. (2021b), in all of our experiments K = 10. 4Since we're using a language model that uses word-piece tokenization, in effect the runtime is at subword level. missing, we use the anchor as the positive sample, and when a negative sample cannot be found, we use a uniformly sampled instance from the batch. Table 4 shows some statistics about the success rate of the sampling algorithm. "% Failure" identifies the share of training examples for which either a positive or a negative example was missing. Unsurprisingly this never happens for single-step programs, is very rare for two-step programs, and with the exception of HiTab, happens in less than 10% of the cases for longer programs. The Table also shows the average number of positive and negative examples found for each anchor. Again, HiTab has the lowest number of available samples, making it the most challenging dataset. In Section 6.4, we demonstrate how, even in cases with few possible samples, the model is able to generalize to unseen examples. ## 5.4 Settings Since we are focused on the generator, in the experiments discussed in this section we will use gold facts and encode the input using RoBERTalarge (Liu et al., 2019) 5. We run the baselines with and without the additional Ltriplet for 50 epochs with a learning rate of 5e−5, the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9 and β2 = 0.999. At each step, we sample (with replacement) 5 positive and negative pairs per anchor, and add the average auxiliary triplet loss to the main model loss with a weight of λ. After a grid search with a step-size of 0.1, we set λ to 0.4 for all experiments. All experiments were conducted on 8 NVIDIA T4 GPUS with 16 GBs of memory. ## 6 Results And Analysis Table 5 shows the program accuracy of baselines (top row of each cell) compared the addition of CounterComp loss (bottom row of eachcell). Among the baseline models, TAGOP is not designed to generate multi-step programs. Therefore we only apply it to the TAT-QA dataset, which has a set of pre-determined operations (e.g. change ratio). We also apply the PVN model to the combined dataset, but since FinQANet outperforms it on all benchmarks, we will continue to use FinQANet as the reference baseline model for the remaining experiments in this section. As Table 5 shows, CounterComp consistently 5Please refer to Appendix A for results using retrieved facts. | Dataset | 1 step | 2 steps | 3+ steps | | | | | | | |-------------|------------|--------------------|------------|------------|--------------------|-----------|------------|------------|-----| | % Failure | Avg. # pos | Avg. # neg samples | % Failure | Avg. # pos | Avg. # neg samples | % Failure | Avg. # pos | Avg. # neg | | | samples | samples | samples | samples | | | | | | | | FinQA | 0 | 1457 | 3254 | 0.2 | 913 | 630 | 8.2 | 41 | 190 | | TAT-QA | 0 | 1808 | 638 | 0 | 295 | 958 | 1.3 | 1055 | 66 | | HiTab | 0 | 221 | 554 | 2.5 | 30 | 29 | 29.8 | 4 | 24 | | MULTIHIERTT | 0 | 189 | 533 | 0.2 | 326 | 335 | 7.8 | 92 | 190 | outperforms the baselines and the margin is often higher for longer programs. One notable exception is the TAT-QA dataset. As mentioned before, the dataset is not designed for open-ended multistep reasoning and includes a limited set of possible operations. Therefore methods that encourage memorization might achieve higher performance on TAT-QA. HiTab is another challenging dataset, but despite low performance on longer sequences, CounterComp offers an improvement over the baseline. | Model | Dataset | Program accuracy | | | |----------------|-----------|--------------------|---------|--------| | 1 step | 2 steps | 3+ steps | Overall | | | TAGOP | 45.01 | 39.56 | 42.73 | 43.25 | | +CompAQT | 46.07 | 40.28 | 43.73 | 43.88 | | TAT-QA | | | | | | +CounterComp | 46.12 | 41.51 | 45.67* | 45.38 | | PVN | 68.14 | 61.33 | 13.54 | 56.64 | | +CompAQT | 70.78 | 63.45 | 16.63 | 59.21 | | Combined | | | | | | +CounterComp | 71.58* | 64.31* | 18.44* | 61.20* | | FinQANet | 75.63 | 65.87 | 30.36 | 68.44 | | +CompAQT | 78.68 | 75.12 | 35.85 | 73.74 | | FinQA | | | | | | +CounterComp | 79.13* | 75.45* | 36.86* | 74.49* | | FinQANet | 73.33 | 63.76 | 64.88 | 70.71 | | +CompAQT | 70.00 | 63.76 | 66.26 | 69.97 | | TAT-QA | | | | | | +CounterComp | 70.56 | 63.80 | 66.90 | 70.01 | | FinQANet | 34.70 | 25.14 | 15.91 | 30.12 | | +CompAQT | 34.73 | 29.94 | 17.35 | 32.23 | | HiTab | | | | | | +CounterComp | 34.94 | 30.00* | 17.39 | 32.61* | | FinQANet | 38.99 | 40.07 | 15.01 | 38.94 | | +CompAQT | 38.87 | 42.35 | 16.77 | 40.82 | | MULTIHIERTT | | | | | | +CounterComp | 39.25* | 42.51* | 16.86 | 40.85* | | FinQANet | 65.11 | 62.00 | 30.76 | 58.60 | | +CompAQT | 66.60 | 65.14 | 34.16 | 60.88 | | Combined | | | | | | +CounterComp | 67.91* | 66.00* | 36.91* | 61.82* | | (Fixed margin) | 66.19 | 64.89 | 34.06 | 59.58 | ## 6.1 Auxiliary Triplet Loss Versus Auxiliary Attention Alignment Loss The middle row of each cell in Table 5 shows the program accuracy when CompAQT loss is added instead of CounterComp loss. As previously described, CompAQT imposes an auxiliary attention alignment loss such that tokens related to operators receive more attention during the generation of operators. Even though this leads to improvements over the baselines, CounterComp outperforms CompAQT in all experiments. This might be due to the fact that the regularizing effect of CompAQT loss is not as strong as the representation learning impact of CounterComp. Despite the fact that CounterComp was not designed as an attention alignment model, it does have an impact on how attention patterns evolve during training. Table 6 shows the top-attended input tokens during the generation of a divide operator in various contexts. For a singular division operation, FinQANet attends to tokens such as "year" whereas CounterComp encourages the model to attend to more relevant tokens such as "net" and "change". A subtraction followed by a division often indicates a percentage calculation, as captured by both models. An addition followed by a division often indicates an average calculation. Again, CounterComp is able to capture relevant tokens but the FinQANet baseline seems to attend to some memorized tokens such as "annual". | Top attended tokens during the generation of divide | | | | |-------------------------------------------------------|-------------|----------------|--------------| | Model | divide | subtract | add | | divide | divide | | | | FinQANet | share, year | ratio, percent | annual, per | | +CounterComp | net, change | share, percent | average, per | Table 6: Top attended tokens during the generation of the division operator in various sequences. The dataset used for this experiment is FinQA. ## 6.2 Fixed Versus Adaptive Margin The last row of Table 5 shows the performance of the FinQANet model on the combined dataset using the CounterComp loss with a fixed margin of 1. The performance suffers, especially as the number of steps grows. This further demonstrates the importance of the adaptive margin α (i)that takes the edit distance into account. | Question | Evidence | Gold program | FinQANet | FinQANet + CounterComp | |-----------------------------------------------------------------------------------------------------|-------------------------------------------------|------------------------------------|-----------------|--------------------------| | 2) the gross margin pct of 2003 is 27.5% subtract(27.5, 27.3) | subtract(27.5, 27.3), divide(#0, 27.5) | subtract(27.5, 27.3) | | | | 1) amounts expensed for 2009 was $35.1 divide(3.8, 35.1), multiply(#0, const_100) divide(3.8, 35.1) | divide(3.8, 35.1), | | | | | 2) expense includes a discretionary | multiply(#0, const_100) | | | | | company contribution of $3.8 2) the aaa/aaa share of 2008 is 19% | greater(14, 19) | subtract(14, 19), subtract(14, #0) | greater(14, 19) | | | 1) We have other committed and uncommitted credit lines of $746 million subtract(746, 554) | multiply(554, const_1000000) multiply(746, 554) | | | | | 2) $554 million of these credit lines were available for use as of year-end 2016 | | | | | | Model | Program accuracy on test dataset | | | | |--------------------------|------------------------------------|-------|-------|-------| | TAT-QA HiTab MULTIHIERTT | FinQA | | | | | (unseen programs) | | | | | | FinQANet | 41.64 | 22.80 | 35.33 | 65.74 | | +CompAQT | 39.88 | 22.71 | 35.28 | 70.32 | | +CounterComp | 42.00 | 22.97 | 36.94 | 73.53 | ## 6.3 Qualitative Examples Table 7 shows four qualitative examples from the FinQA dataset. The first two rows show how CounterComp enables the FinQANet model to represent concepts such as "decline" and "percentage" more accurately. The third example shows how CounterComp is able to determine the difference between a calculation question and a yes/no question. The last row shows a failure example that was also reported in Chen et al. (2021b). Here, CounterComp does not improve the performance of FinQANet. This particular example requires domain expertise to address, which goes beyond a direct mapping between components in the question and the evidence. This highlights the need for methods that allow domain expertise to be represented more effectively (Chen et al., 2021b). ## 6.4 Compositional V.S. Ood Generalization In a recent study Joshi and He (2022) showed that current approaches to counterfactual data augmentation do not necessarily lead to better generalization to out-of-distribution (OOD) samples. To test whether this holds for CounterComp, we conduct two studies. First, we train FinQNet with and without CounterComp loss on the FinQA dataset, then test it on the other three datasets. Note that the four datasets are based on different domains. FinQA and TAT-QA are both based on financial reports, but while FinQA was derived from US filings, TAT-QA is based on international filings and therefore covers a wider variety of metrics. HiTab and MULTIHIERTT are both based on other types of corporate reports with highly complex tabular structures. The first three columns of Table 8 show a slight improvement when CounterComp is used in this setting. In contrast, using CompAQT loss slightly hurts the performance, demonstrating CounterComp's higher OOD generalization potential. Next, we select a subset of samples from the FinQA dev set that have unseen compositions compared to the training set. This means that the particular combination of operations were never seen during training. As the last column of the table shows, CounterComp outperforms the baseline by more than 7 points. This further demonstrates how improving representation learning at the component level can enhance generalization to unseen contexts. ## 7 Conclusion In this paper, we presented CounterComp, a method that leverages counterfactual contrast to enable metric learning for quantitative QA. We show how using the auxiliary CounterComp loss can improve compositional generalization in multistep reasoning tasks, especially as the number of steps grows. Due to runtime challenges, we proposed a hybrid offline/online sampling strategy that uses predefined indices for easier lookup operations. This allows us to capture samples that have a contrast of one operator with the anchor. In future studies, we hope to capture contrastive samples with longer perturbation chains. We also hope to examine the effectiveness of counterfactual compositional contrast in other domains such as semantic parsing and question answering over multimodal input. Lastly, we hope to extend the use of CounterComp to enhance the performance of the retriever, using the heuristics introduced in Section 4 (i.e. by focusing on components in the question that overlap with the facts). This can result in a quantitative QA pipeline that is powered by compositional contrast in an end-to-end fashion. ## 8 Limitations As previously mentioned, our study is focused on the generator component of a QA pipeline and ignores the retrieval task. In the experiments presented in the paper, we have used gold facts to report the results. For certain datasets such as HiTab and MULTIHIERTT which were designed for complex tabular structures, this might simplify the end-to-end challenge. In future studies, we hope to explore whether CounterComp can enhance the performance of retrievers. The datasets used in our experiments were curated using enterprise documents such as financial reports or other corporate disclosures. Quantitative QA over these reports often involves multi-step reasoning that is limited to linear arithmetic operations such as addition, division, averaging, etc. A completely open-domain QA engine might need to cover more complex operators. Lastly, we designed CounterComp to leverage existing data by sampling from the training set. Nevertheless, combining CounterComp with augmentation-focused methods such as CAD might lead to more robust models. ## References Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In *Proceedings of the 2012 Joint* Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995–1005, Jeju Island, Korea. Association for Computational Linguistics. Hao Chen, Rui Xia, and Jianfei Yu. 2021a. Reinforced counterfactual data augmentation for dual sentiment classification. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 269–278, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, and William Yang Wang. 2021b. Finqa: A dataset of numerical reasoning over financial data. *Proceedings* of EMNLP 2021. Zhoujun Cheng, Haoyu Dong, Zhiruo Wang, Ran Jia, Jiaqi Guo, Yan Gao, Shi Han, Jian-Guang Lou, and Dongmei Zhang. 2022. HiTab: A hierarchical table dataset for question answering and natural language generation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 1094–1110, Dublin, Ireland. Association for Computational Linguistics. Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4320–4333, Online. Association for Computational Linguistics. Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. 2020. Reducing sentiment bias in language models via counterfactual evaluation. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 65–83, Online. Association for Computational Linguistics. Paras Jain, Ajay Jain, Tianjun Zhang, Pieter Abbeel, Joseph Gonzalez, and Ion Stoica. 2021. Contrastive code representation learning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5954–5971, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Nitish Joshi and He He. 2022. An investigation of the (in)effectiveness of counterfactually augmented data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3668–3681, Dublin, Ireland. Association for Computational Linguistics. Dan Jurafsy and James H. Martin. 2021. *Speech and* language processing, 3 edition, chapter 23. Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In *International Conference on Learning Representations*. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Moxin Li, Fuli Feng, Hanwang Zhang, Xiangnan He, Fengbin Zhu, and Tat-Seng Chua. 2022. Learning to imagine: Integrating counterfactual thinking in neural discrete reasoning. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 57–69, Dublin, Ireland. Association for Computational Linguistics. Qi Liu, Matt Kusner, and Phil Blunsom. 2021. Counterfactual data augmentation for neural machine translation. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 187–197, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692. Richard Montague. 1973. *The Proper Treatment of* Quantification in Ordinary English, pages 221–242. Springer Netherlands, Dordrecht. Armineh Nourbakhsh, Cathy Jiao, Sameena Shah, and Carolyn Rosé. 2022. Improving compositional generalization for multi-step quantitative reasoning in question answering. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 1916–1932. Association for Computational Linguistics. Inbar Oren, Jonathan Herzig, Nitish Gupta, Matt Gardner, and Jonathan Berant. 2020. Improving compositional generalization in semantic parsing. In *Findings of the Association for Computational Linguistics:* EMNLP 2020, pages 2482–2495, Online. Association for Computational Linguistics. Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, and Jacob Andreas. 2021. Compositional generalization for neural semantic parsing via spanlevel supervised attention. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2810–2823, Online. Association for Computational Linguistics. Li Yujian and Liu Bo. 2007. A normalized levenshtein distance metric. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 29(6):1091–1095. Le Zhang, Zichao Yang, and Diyi Yang. 2022. TreeMix: Compositional constituency-based data augmentation for natural language understanding. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5243–5258, Seattle, United States. Association for Computational Linguistics. Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang. 2022. MultiHiertt: Numerical reasoning over multi hierarchical tabular and textual data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6588–6600, Dublin, Ireland. Association for Computational Linguistics. Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and TatSeng Chua. 2021. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3277–3287, Online. Association for Computational Linguistics. Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 1651–1661, Florence, Italy. Association for Computational Linguistics. ## A Results On Retrieved Facts Table 9 shows the performance of FinQANet versus FinQANet+CounterComp on retrieved facts from the FinQA dataset. Similar to gold facts, CounterComp improves program accuracy, especially on multi-step output. | Model | Program accuracy | | | | |--------------|--------------------|----------|---------|-------| | 1 step | 2 steps | 3+ steps | Overall | | | FinQANet | 64.13 | 57.03 | 20.56 | 58.30 | | +CounterComp | 67.52 | 58.87 | 22.79 | 61.18 | Table 9: Ablation results on the FinQANet model, applied to the FinQA dataset with retrieved facts. ## B Training Algorithm Algorithm 1 details our pre-indexing, sampling, and training processes. Note that the algorithm is a simplified version of our implementation, e.g. it follows a basic SGD instead of a batch SGD process, and shows the process for only one epoch. ## C Countercomp For Operators Versus Operands CounterComp intervenes on operators, whereas operands provide another possible intervention target. As mentioned in Section 2, Learning to Image (L2I) (Li et al., 2022), which focuses on counterfactual scenarios for operands, was able to outperform TAGOP by a large margin. L2I was evaluated on TAT-QA, a dataset with a limited set of possible multi-step operations, resulting in the challenge of compositional generalization being mainly focused on operands. Since we were not able to recreate the results reported in the original L2I paper6, instead we evaluate an operand-focused approach via a metric learning method similar to CounterComp. Given an anchor, we generate new samples using the operations laid out in the L2I paper (i.e. SWAP, ADD, MINUS, etc.), where one or more operands are perturbed in random. We apply the same perturbation in the facts. This effectively eliminates the "imagination" component but provides a baseline that is more comparable to CounterComp. These samples are used as positive examples, whereas negative examples are randomly sampled from the batch. 6This could be because we failed to generate a TAT-HQA dataset that was comparable to the one used in the original paper. ## Algorithm 1 Training Algorithm 1: Training data: {([Q||F] (i), S(i))} I i=1 2: Parameters: λ 3: Model: model // Create the indices for pos and neg samples 4: pos_index *← {}* 5: neg_index *← {}* 6: for i ∈ {1*, . . . , I*} do 7: O (i) ← s (i) 1 , s (i) 4 , · · · , s (i) L−3 8: add_to_index(pos_index, O(i), i) //p is the perturbed output and l *is the location of the* perturbation 9: for *p, l* ∈ possible_perturbations(O (i)) do 10: j ← find_matching_sample(p) 11: add_to_index(neg_index, O(i)*,(j, l*)) 12: **end for** 13: **end for** // Train (single epoch, non-batch version) 14: for i ∈ {1*, . . . , I*} do 15: for j ∈ {1, 2, *· · ·* , 5} do // Basic model loss 16: L (i) ← loss(*model.forward*([Q||F] (i)), S(i)) // Pos/neg sampling 17: O (i) ← s (i) 1 , s (i) 4 , · · · , s (i) L−3 18: pos_sample ← sample(pos_index[O (i)] \ i) 19: neg_sample, l ← sample(neg_index[O (i)]) // Find candidate intervention spans 20: Q (i) ← find_intrvntn_span(i) 21: Q (i) pos ← find_intrvntn_span(pos_sample) 22: Q (i) neg ← find_intrvntn_span(neg_sample) // Calculate edit distances and loss 23: NLD(i) pos ← norm_edit_dist(Q (i), Q (i) pos) 24: NLD(i) neg ← norm_edit_dist(Q (i), Q (i) neg) 25: α (i) = 1 − |NLD(i) neg − NLD(i) pos| 26: pos_dist(i) j = ||h (i) l − h (i) l,pos||22 27: neg_dist(i) j = ||h (i) l − h (i) l,neg||22 28: L (i) tripletj = max{pos_dist(i) j −neg_dist(i) j +α (i), 0} 29: **end for** 30: L (i) = (1 − λ)L (i) + λ 5 P5 j=1 L (i) tripletj 31: **end for** 32: L = 1 I PI i=1 L (i) 33: *model.backward*(L) Table 10 shows the program accuracy of CounterComp versus the new method when applied to each dataset. As expected, TAT-QA is the only dataset responsive to the perturbation of operands. All other datasets suffer from an exclusive focus on operands. For HiTab and MULTIHIERTT, the operand strategy also underperforms compared to the baseline FinQANet performance (see Table 5). | Model | FinQA | TAT-QA | HiTab | MULTIHIERTT | |-------------------------|---------|----------|---------|---------------| | CounterComp (operators) | 74.49 | 70.01 | 32.61 | 40.85 | | CounterComp (operands) | 68.98 | 70.80 | 28.88 | 37.67 | Table 10: Program accuracy of CounterComp versus a sampling strategy focused on operands. FinQANet was used for all experiments. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 ✓ A2. Did you discuss any potential risks of your work? 6.3, 6.4, 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 5.1, 5.2 ✓ B1. Did you cite the creators of artifacts you used? 5.1, 5.2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 5.1, 5.2 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 5.1, 5.2 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The data is based on publicly available corporate reports. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 5.1, 5.2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 1 ## C ✓ **Did You Run Computational Experiments?** 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 6 ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No packages used. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhang-etal-2023-needle
A Needle in a Haystack: An Analysis of High-Agreement Workers on {MT}urk for Summarization
https://aclanthology.org/2023.acl-long.835
To prevent the costly and inefficient use of resources on low-quality annotations, we want a method for creating a pool of dependable annotators who can effectively complete difficult tasks, such as evaluating automatic summarization. Thus, we investigate the recruitment of high-quality Amazon Mechanical Turk workers via a two-step pipeline. We show that we can successfully filter out subpar workers before they carry out the evaluations and obtain high-agreement annotations with similar constraints on resources. Although our workers demonstrate a strong consensus among themselves and CloudResearch workers, their alignment with expert judgments on a subset of the data is not as expected and needs further training in correctness. This paper still serves as a best practice for the recruitment of qualified annotators in other challenging annotation tasks.
# A Needle In A Haystack: An Analysis Of High-Agreement Workers On Mturk For Summarization Lining Zhang,1* Simon Mille,2 Yufang Hou,3 Daniel Deutsch,4 Elizabeth Clark,5 **Yixin Liu,**6 Saad Mahamood,7 Sebastian Gehrmann,5 Miruna Clinciu,8 Khyathi Chandu,9 **João Sedoc**1 1New York University, 2ADAPT Centre, DCU, 3IBM Research, 4Google, 5Google Research, 6Yale University, 7trivago N.V., 8University of Edinburgh, 9Allen Institute for AI ## Abstract ![0_Image_0.Png](0_Image_0.Png) To prevent the costly and inefficient use of resources on low-quality annotations, we want a method for creating a pool of dependable annotators who can effectively complete difficult tasks, such as evaluating automatic summarization. Thus, we investigate the recruitment of high-quality Amazon Mechanical Turk workers via a two-step pipeline. We show that we can successfully filter out subpar workers before they carry out the evaluations and obtain highagreement annotations with similar constraints on resources. Although our workers demonstrate a strong consensus among themselves and CloudResearch workers, their alignment with expert judgments on a subset of the data is not as expected and needs further training in correctness. This paper still serves as a best practice for the recruitment of qualified annotators in other challenging annotation tasks. ## 1 Introduction Natural language generation (NLG) tasks like text summarization are challenging to evaluate both in terms of automatic metrics and human evaluations (Gehrmann et al., 2022). Although automatic metrics are inexpensive proxies for human annotations for tasks like dialog evaluation (Mehri et al., 2022), they may have problems dealing with paraphrases, capturing distant dependencies, or identifying nuances in human languages (Banerjee and Lavie, 2005; Isozaki et al., 2010; Manning et al., 2020). Thus, it is still crucial to obtain high-quality human annotations as gold labels for evaluation. Amazon Mechanical Turk (MTurk)1is a commonly used crowdsourcing platform for collecting human annotations on designed tasks, known as Human Intelligence Tasks (HITs). However, finding qualified workers for high-quality annotations with a better inter-annotator agreement (IAA) is challenging, * Correspondence to lz2332@nyu.edu 1https://www.mturk.com/ especially for difficult tasks such as text summarization. Best practices for recruiting high-quality workers are also poorly understood, and the relationship between high quality and high agreement needs further investigation. To tackle the above issues, we design a recruitment pipeline to identify workers who are able to produce high-agreement annotations for the evaluation of text summarization on MTurk. It comprises a qualification task and an endurance task, followed by a reference-based task (see Figure 1). In the qualification task, workers who meet predefined qualification settings receive instructions and qualification questions, including an attention check (Oppenheimer et al., 2009). The qualification questions are designed to assess the annotator's ability to evaluate multiple dimensions of a 14944 summary correctly. Performance on this task determines whether they are categorized into GOLD, SILVER, BRONZE, or BLOCK. Only the best workers (GOLD and SILVER) move on to the endurance task, which consists of 10 HITs with 4 summaries in each to evaluate. This task only tests the summary's saliency, which is the most subjective dimension (Howcroft et al., 2020), but it challenges the annotator's capacity for handling a heavy annotation workload. GOLD and SILVER workers who complete all HITs are added to a maintained worker list as high-agreement annotators for future tasks. To ensure their general performance for the true annotation task, a reference-based task to evaluate information coverage between summaries is conducted with these workers later. While serving as a best practice beyond its scope, our study has the following contributions: - establish a cost-effective recruitment pipeline on MTurk to consistently build a pool of annotators for high-agreement annotations. - successfully recruit 12 out of 200 (6%) superior annotators for text summarization evaluation, while reducing costs and guaranteeing high agreement. - rigorously demonstrate that the annotators identified through our pipeline can match or surpass the IAA of expert annotators and standard statistical techniques, though further calibration may be required for correctness. ## 2 Related Work Challenges of Human Evaluation Compared to automatic evaluation metrics for NLG tasks like BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), human annotations from non-expert annotators on MTurk can reach an agreement with gold standards or expert judgments (Callison-Burch, 2009). Although recent works leverage language models like BERT (Devlin et al., 2019) to get better automatic evaluations (Zhang et al., 2020), human judgments are still indispensable in identifying nuances in specific language tasks (Manning et al., 2020). Finding qualified workers to carry out the evaluations is crucial. This is especially true for tasks like text summarization, which lacks consensus on evaluation protocols (Fabbri et al., 2021) and is often inconsistent with previous human evaluations (Hardy et al., 2019). However, human evaluation from non-expert crowdsourcing platforms have low quality (Gillick and Liu, 2010) and a simple qualification filter is not sufficient to identify qualified workers (Berinsky et al., 2012; Robinson et al., 2019). Some studies applied quality control mechanisms to filter out poor quality annotations, resulting in a relatively low pass rate for a variety of tasks (Graham et al., 2017, 2018; Mille et al., 2019). The fact that up to 70% of the HITs are eventually discarded indicates a huge resource waste. Even with qualified workers, human annotations might still be adversely affected by factors like incomplete instructions or unfair wages paid to annotators (Huynh et al., 2021), and workers need clear references, schemes, or standards to follow (Howcroft et al., 2020; Karpinska et al., 2021). Thus, our study serves as a detailed reference for finding qualified MTurk workers for a summarization evaluation task and further identifying those who can assist in a large number of annotations. Inter-Annotator Agreement For annotations without true labels or those evaluated with a qualitative scale such as Likert scale (Likert, 1932), the inter-annotator agreement (IAA) among MTurk workers measures the reliability of the annotations. For example, Cohen's Kappa (Cohen, 1960) measures IAA between a pair of results of the same length from two annotators, while Krippendorff's Alpha (Hayes and Krippendorff, 2007) measures the agreement of a set of results from any number of annotators, even with unequal sample sizes. Both range from −1 to 1, with 1 indicating complete agreement. Further studies also continue to mitigate annotator bias through complementary methods to IAA (Amidei et al., 2020), aimed at high-quality annotations. In our study, we utilize both Cohen's Kappa and Krippendorff's Alpha as the measurement of annotation reliability. ## 3 Methods In this section, we detail how the workers were recruited and which tasks were carried out.2 ## 3.1 Mturk Qualification Settings To narrow down the pool of our target workers, we set a few pre-defined qualifications for workers on MTurk before publishing the qualification task: (i) the **Location** is set to "UNITED STATES (US)"; (ii) the **Number of HITs Approved** is set to be "greater than 1000" to target workers who are already experienced on MTurk; (iii) the HIT Approval Rate (%) is set to be "greater than or 2Appendix A.9 shows instructions given during the tasks. equal to 99" to target workers who are able to finish tasks with high quality and have stable performance. We also set the task visibility as "Private", which means our tasks are visible to any worker, but only workers who meet all qualification requirements can preview and accept. Paolacci et al. (2010) show that the annotations collected with the "Location" setting on MTurk are representative of the population of our target country in terms of demographic data. This helps mitigate biases introduced by samples from traditional recruitment methods like college undergraduate samples (Buhrmester et al., 2011). We set qualification settings (ii) and (iii) based on previous work (Whiting et al., 2019; Oppenlaender et al., 2020; Kummerfeld, 2021) and our own experience on MTurk. Workers who meet all qualification requirements are eligible to participate in the qualification task. ## 3.2 Qualification Task Summarization task In summarization, the input is the text of a document and the output is a short summary. We evaluate a summary S according to 6 dimensions based on the criteria taxonomy presented in Howcroft et al. (2020), and workers are asked for a binary answer as to whether a dimension is satisfied in a summary or not: - **Understandability**: can the worker understand S and is S worth being annotated. - **Compactness**: S does not contain duplicated information. - **Grammaticality**: S is free from grammatical & spelling errors. - **Coherence**: S is presented in a clear, wellstructured, logical, and meaningful way. - **Faithfulness**: all of the information in S can be found in the article; S accurately reflects the contents of the article. - **Saliency**: S captures the most important information of the article and does not include parts of the article that are less important. Training and qualification There are two main parts of the qualification task. The *training part* guides the workers through the above evaluation dimensions and instructs them on how to annotate. The definition of each dimension is illustrated with positive and negative examples, and full annotation examples are shown (summary and binary rating for each dimension). Then, workers are required to write an instruction summary in their own words to make sure they have understood the task and are ready to annotate. The *qualification part* tests the worker's understanding of the task. Three documents are provided, each with one summary. The worker reads the document and annotates the corresponding summary according to each dimension. The ratings are then compared to expert ratings provided by the authors of this paper. The last document comes with an attention check to test whether a worker is just randomly assigning scores without reading: a highlighted instruction asks the worker to ignore the task and select specific answers. Finally, an optional field is provided to collect feedback. Worker categorization Upon finishing their task, workers are categorized into four types: - **GOLD**. The GOLD workers pass the attention check and annotate every dimension of every document in the qualification part correctly. - **SILVER**. The SILVER workers pass the attention check and make only one mistake when annotating each dimension of the documents in the qualification part. - **BRONZE**. The BRONZE workers pass the attention check and make more than one mistake when annotating each dimension of the documents in the qualification part. - **BLOCK**. The BLOCK workers fail to pass the attention check. The GOLD and SILVER workers are assigned a qualification score and proceed with the endurance task. Besides, we conducted multiple rounds of the qualification task to avoid influence from the time or day when the task was conducted and randomly sampled workers (Arechar et al., 2017; Berinsky et al., 2012). ## 3.3 Endurance Task The endurance task is designed to test whether a worker can reliably perform a large number of annotations. The workers who finish all HITs of this task are assigned the highest qualification score and are added to a maintained worker list. The endurance task comprises 10 HITs. For each HIT, a document and 4 corresponding summaries generated by different models are provided; each HIT takes around 5 minutes to finish (approximately an hour for all HITs). To keep the task simple we only evaluate each summary on one dimension, but to ensure that the task is challenging enough we (i) use the most subjective of the 6 di- Round Number 1 2 3 4 Total Total participants at the beginning 50 50 50 50 200 # GOLD workers passed qualification task 1 3 2 2 8 # SILVER workers passed qualification task 4 5 3 6 18 # workers entered endurance task 5 8 5 8 26 # GOLD workers passed endurance task 1 1 1 1 4 # SILVER workers passed endurance task 0 3 2 3 8 # workers passed both tasks 1 4 3 4 12 mensions, Saliency, and (ii) use a more fine-grained 10-point Likert scale (from 1 to 10). Rationale for choosing 10 HITs Our motivation is two-fold: to find workers who were able to complete many tasks and whose annotations are better than random. As the number of HITs increases, the number of remaining workers drops from 26 to 12. The survival rate defined by the Kaplan–Meier estimator (Kaplan and Meier, 1958) is 38.59% when the number of HITs is set to 10 which is an estimate of a worker's capacity to be able to complete many tasks. We empirically found that we need a minimum of 8 HITs completed by a worker in order to validate that their annotations are statistically significantly different from random noise (see Table 2). ## 3.4 Reference-Based Task Finally, to test whether the selected MTurk workers actually perform better at annotating summaries in general, we conduct a reference-based task that comprises 30 HITs. In each HIT, a reference summary and 4 candidate summaries are provided. The worker is asked to assign each candidate summary two scores ("can2ref" score and "ref2can" score) | Confidence interval of Cohen's Kappa Lower Upper bound bound | | | | | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|-------|-------|------| | - | 26[1] | 100 | - | | | 1 | 19 | 63.16 | - | | | 2 | 18 | 59.65 | - | | | 3 | 17 | 56.14 | - | | | 4 | 16 | 52.63 | - | | | 5 | 15 | 49.12 | -0.18 | 0.44 | | 6 | 15 | 49.12 | -0.18 | 0.44 | | 7 | 15 | 49.12 | -0.18 | 0.44 | | 8 | 14 | 45.61 | 0.06 | 0.44 | | 9 | 13 | 42.10 | 0.08 | 0.42 | | 10 | 12 | 38.59 | 0.09 | 0.42 | | [1] This (26) is the number of workers who entered the endurance task (GOLD and SILVER workers passed the qualification task). Num. of HITs finished Num. of workers remaining Survival rate % (Kaplan–Meier estimator) | | | | | on a scale from 1 to 5. The "can2ref" score indicates whether all of the information in the candidate summary can also be found in the reference summary, while the "ref2can" score checks the converse coverage direction. A score of 1 means that almost no information in one summary can be found in the other, while a score of 5 indicates complete information coverage. The worker is provided with instructions and examples of the rating at the beginning of the task. ## 4 Results 4.1 Annotation Data And Cost The collected experimental data not only contained annotation results but also metadata reflecting annotator behaviors.3 The cost of annotation on MTurk included both the wages paid to MTurk Workers and the fees paid to MTurk (which may vary according to the task). A worker who participated in the qualification and the endurance tasks earned $8.5 ($1 for the qualification task plus $7.5 for the endurance task) on average, while a worker who participated only in the qualification task (i.e. who did not qualify) earned $1 on average. Given the total cost of $514 for the entire pipeline which yielded 12 workers, the cost of identifying a qualified worker is $42.8. For details, the breakdown of the cost is shown in Table 3. ## 4.2 Qualification Task Results We conducted four rounds of the qualification task, each round included 50 MTurk workers (see Table 1). This choice of multiple rounds aimed to guarantee the stability of the annotation results (Berinsky et al., 2012; Arechar et al., 2017). The overall pass rate of the attention check was 0.69; thus, 62 workers in total did not pass the attention check and | Num. of Assignment | Total | Fees | Total | Hourly | | | |----------------------|----------|--------|----------|----------|------|------| | per Assignment | per Task | Reward | to MTurk | Cost | Wage | | | (Each of 4 rounds) | $1.00 | 50 | $50 | $20 | $70 | $2 | | Round 1 | $0.75 | 5 | $37.5 | $7.5 | $45 | $7.5 | | Round 2 | $0.75 | 8 | $60 | $12 | $72 | $7.5 | | Round 3 | $0.75 | 5 | $37.5 | $7.5 | $45 | $7.5 | | Round 4 | $0.75 | 8 | $60 | $12 | $72 | $7.5 | were categorized as BLOCK. Out of 200 MTurk workers, there were only 8 GOLD workers and 18 SILVER after the qualification task. Thus, only 26 MTurk workers (13% of all participants) qualified for the endurance task. For each round, we calculated Krippendorff's Alpha4to measure the agreement among annotators. The highest Krippendorff's Alpha was 0.33 reached by the first round, and the average Krippendorff's Alpha of all four rounds was 0.25. In addition, the exclusion of BLOCK workers led to an increase in Krippendorff's Alpha, compared to the value calculated on all workers. The highest Krippendorff's Alpha without BLOCK workers was 0.44 (second round), and the average Krippendorff's Alpha of all four rounds increased to 0.41. These results showed that, as expected, BLOCK workers seemed to lack good-faith effort in the task and likely yielded low quality annotations. ## 4.3 Endurance Task Results We published the same endurance task for GOLD and SILVER workers separately, and reported IAA using Cohen's Kappa and Krippendorff's Alpha among each type of worker; we also reported similar IAA results from combined GOLD and SILVER workers. We additionally collected endurance task results from volunteer researchers unrelated to this paper for a comparison between MTurk workers and NLG "experts". SILVER Workers There were 18 SILVER workers after the qualification task, 13 of whom accepted the endurance task. However, only 8 SILVER workers finished all 10 HITs–a yield rate of around 44% given the number of SILVER workers entering this task. To calculate the IAA, we considered the annotation scores of all summaries (40 ratings) for each of the 8 workers and calculated Cohen's Kappa for each worker pair; the highest Cohen's 4https://pypi.org/project/krippendorff/ Kappa was 0.451 between workers S22 and S43. To avoid influence from a possible unstable performance at the beginning of the task, we also tried to omit the first two HITs, that is, we only used 32 ratings when calculating Cohen's Kappa; the resulting improvement for Cohen's Kappa was very low. In addition, we calculated Krippendorff's Alpha on the entire annotation results for all summaries and workers, and it reached 0.358. GOLD Workers There were 8 GOLD workers after the qualification task and 6 of them accepted the endurance task. However, only 4 GOLD workers finished all 10 HITs, for a yield rate of around 67% given the number of GOLD workers entering this task. This rate was higher than that of SILVER workers. We calculated pairwise Cohen's Kappa using all the scores, and the highest IAA score increased to 0.48, compared to 0.45 for SILVER workers. There was no significant improvement after omitting the first two HITs. Krippendorff's Alpha for the GOLD workers reached 0.443, which is higher than with SILVER workers (0.358). GOLD and SILVER Workers To investigate IAA of worker pairs across GOLD and SILVER workers, we combined the results of these two categories of workers and calculated pairwise Cohen's Kappa. The highest pairwise Cohen's Kappa on the 40 ratings per worker was 0.55; see the matrix in Figure 2. Again, omitting the first two HITs also did not change the scores much. For Krippendorff's Alpha, the value was 0.396, which fell in the range between the SILVER worker's (0.358) and GOLD worker's (0.443) values.5 In Appendix A.2, we show a breakdown of the results per text position in each HIT (correlations for all first texts, for all second texts, etc.) for each of the three subgroups (SILVER, GOLD, GOLD AND SILVER); the possibly sightly darker heat maps | Qualification Task | |----------------------| | Endurance Task | could indicate higher correlations for the second text of each HIT. Comparison to Expert Ratings To get an idea of the quality of qualified MTurk workers according to our approach, we compared their IAA with the IAA obtained by conducting the same endurance task with three researchers as NLG "experts". The pairwise Cohen's Kappa for all 40 ratings only reached 0.268 (see Table 10 in Appendix A.3). The IAA among the experts was comparatively lower than the GOLD and SILVER workers, indicating that qualified workers identified by our tasks reached a better agreement at least for the endurance task. Thus, it seems possible to recruit high-quality workers using our pipeline. Detection of Abnormal Workers From Cohen's Kappa scores shown in Figure 2, the worker S42 6 had much lower agreement scores (heatmap in the yellow colors on the row and column corresponding to the worker). Recent studies have uncovered the presence of bots on MTurk (Webb and Tangney, 2022). To understand the reason for this worker's lower agreement with other workers, we analyzed their online behavior using the metadata extracted from their annotation results. Figure 3 shows the timeline of each of the 10 HITs as a horizontal gray line. The timelines are plotted from top to bottom, corresponding to the first to the last HIT in the endurance task. The X-axis represents the duration between the time of acceptance and submission, which is normalized by the duration for each HIT (ranging from 0 to 1). Different marks present each annotator behavior, as shown in the legend. Among these behaviors, blue points represent the time when the MTurk worker 6S42 stands for the second SILVER worker from Round 4 ![5_image_0.png](5_image_0.png) assigned a score for one of the four summaries, and the corresponding number on top represents the summary index (valued from 0 to 3). Orange crosses denote the suggested reading time of the article in each HIT, given the average human reading speed of 130 words per minute.7If the suggested reading time after normalization was longer than the duration, we marked the orange cross as 1 at the time of submission which is at the end of the gray line. Most of the orange crosses were marked at the end of the timelines in Figure 3 (right), indicating this worker assigned scores and submitted the HIT in less time than it usually takes for a human to even finish reading the article. This result demonstrates that this worker may not have put in good faith in the endurance task, which possibly explains the low IAA with other workers. By removing this worker and calculating Krippendorff's Alpha again within GOLD and SILVER workers, the IAA increased to 0.454 (compared to 0.396 when including the worker). ## 4.4 Reference-Based Task Results To test the reliability of our qualified workers and compare them to workers who do not undergo our selection process, we launched the reference-based task (see Section 3.4), which is open to our qualified workers as well as to any other workers satisfying basic qualification settings. Qualified Workers after Pipeline We published the reference-based task to the 12 MTurk workers from four rounds who have passed both the qualification and the endurance task. All 12 workers accepted this task but only 8 workers finished 30 HITs within a week. There are two scores to evaluate the information coverage between each candidate summary and the reference summary. We use the "can2ref" score to represent whether all information in the candidate summary can be found in the reference summary, and the "ref2can" score to represent the converse coverage. For both types of scores, we calculated Cohen's Kappa for every worker pair (given 4 candidate summaries per HIT, 30 HITS per worker). Cohen's Kappa for "can2ref" score ranges from 0.15 to 0.71, with a relatively high IAA between the first GOLD workers from the first two rounds (G11 and G21). Similarly, Cohen's Kappa for "ref2can" score ranges from 0.14 to 0.66. 7https://wordstotime.com/ ![6_image_0.png](6_image_0.png) Finally, Cohen's Kappa for the combined scores ![6_image_1.png](6_image_1.png) ranges from 0.15 to 0.68 (see Figure 4), demonstrating that the agreement numbers are stable across multiple measures. Krippendorff's Alpha for the above scenarios ("can2ref" score, "ref2can" score, and combined) are 0.558, 0.508, and 0.534. Baseline MTurk Workers For comparison, we published the same reference-based task to MTurk workers who did not participate in our previous experiments. 276 MTurk workers participated and each worker finished on average 2 HITs (In total 30 HITs × 20 Assignments/HIT). Krippendorff's Alpha for "can2ref", "ref2can", and the two combined were extremely low, at 0.087, 0.077, and 0.080 respectively, demonstrating the necessity of a high-quality recruitment pipeline. We experimented with the following approaches to investigate whether we could increase the agreement between random MTurk workers to a level comparable to qualified workers from our pipeline. IAA with Median Among the 20 assignments of each HIT, we randomly divided the workers into 4 groups of 5 workers and took the median of each group representing a "new worker" (Lau et al., 2014). Then, we concatenated the results of 20 HITs for the 4 "new workers" to calculate IAA. Krippendorff's Alpha scores increased to 0.191, 0.185, and 0.188 respectively. Filter on Timing and Number of Finished HITs To exclude unqualified workers whose annotations may decrease IAA, only workers who (i) spent more than the suggested reading time8and (ii) finished 3 or more HITs were selected for calculation of IAA. This resulted in 25 workers remaining, but Krippendorff's Alpha remained almost the same as calculated without the filter. Statistical Filter (MACE) We applied the MultiAnnotator Competence Estimation (MACE) (Hovy et al., 2013; Paun et al., 2018) to identify reliable workers based on competence scores calculated on annotations. The workers with competence scores above a threshold were kept. We additionally calculated Spearman's coefficient (Spearman, 1904) within the groups of our pipeline and MACE (see Table 4). We report the results of additional failed attempts to improve Spearman's coefficient across these two groups, in Table 12 in the Appendix. In summary, the most effective methods to improve agreement numbers among random workers were median grouping and MACE. IAA on median scores can raise Krippendorff's Alpha to almost 0.2. MACE increases Krippendorff's Alpha as the threshold increases, but at the cost of an incomplete HIT coverage (27/30 and 18/30 respectively for the threshold of 0.6 and 0.7 in Table 4) and fewer workers per HIT (1.9 and 1.2, respectively, for the threshold of 0.6 and 0.7 in Table 4). Similarly, Spearman's coefficient of MACE workers 8We performed the same timing analysis as in Section 4.3. | Threshold | 0.5 | 0.6 | 0.7 | |-------------------------------------------|-------|-------|-------| | % of workers kept | 19.2% | 15.9% | 7.6% | | HIT coverage | 30/30 | 27/30 | 18/30 | | Avg. num. workers per HIT | 2.4 | 1.9 | 1.2 | | Krippendorff's Alpha (all scores) | 0.380 | 0.472 | 0.754 | | Spearman's coefficient (MACE workers) | 0.351 | 0.414 | 0.770 | | Spearman's coefficient (pipeline workers) | 0.558 | 0.565 | 0.577 | | Worker | Metric | can2ref | ref2can | combined | |----------------|-----------|-----------|-----------|------------| | IAA | | | | | | Source | score | | | | | Pipeline | CK | 0.15-0.71 | 0.14-0.66 | 0.15-0.68 | | KA | 0.558 | 0.508 | 0.534 | | | CK | 0.18-0.60 | 0.19-0.61 | 0.18-0.60 | | | Cloud Research | KA | 0.527 | 0.498 | 0.513 | can be increased above our pipeline workers' only at the same expense as above. CloudResearch MTurk Workers To further test our pipeline, we conducted the same reference-based task on the CloudResearch platform (cloudresearch.com), which helps researchers recruit high-quality annotators. We recruited the same number (eight) of CloudResearch workers as our pipeline. The Krippendorff's Alpha and Cohen's Kappa9for CloudResearch workers is slightly lower than our pipeline workers (see Table 5 and Figure 9). Additionally, we found that our pipeline workers have a higher task acceptance rate. This results in a shorter experimental period compared to the task conducted on CloudResearch. Table 5: The range of Cohen's Kappa (CK) and Krippendorff's Alpha (KA) of pipeline and CloudResearch workers for reference-based task. ## Analysis Of Correctness Across Annotation Sources We randomly sampled 50 annotation questions from the reference-based task to test correctness, which is defined as the alignment with expert judgments.10 In addition, we also compared the expert judgment with scores generated by GPT models: GPT-3.5 ("text-davinci-003") and ChatGPT which are built on InstructGPT (Ouyang et al., 2022), and GPT-4 (OpenAI, 2023). Scores are aggregated by taking the median within groups of pipeline, MACE, and CloudResearch workers, as 9The range of Cohen's Kappa is slightly smaller for CloudResearch workers. 10Fifty random samples were chosen in order to differentiate between MACE and pipeline assuming 20% superiority in terms of correctness. well as experts.11 For ChatGPT we ran inference 5 times with default parameters (temperature=1, top_p=1) and took the median. To obtain GPT-3.5 and GPT-4 scores temperature was set to 0 with a single run. We did not find that pipeline workers were superior to MACE workers in terms of correctness. Pipeline and CloudResearch workers had a significant Spearman's correlation with each other (see Figure 5), which indicates a reproduction of the recruitment procedure on CloudResearch at a lower cost. However, the confidence intervals are too wide to draw any conclusion about the correlation between crowd annotators and expert judgments (see Table 6). This indicates that the pipeline may not guarantee the training of the correctness of annotations. However, we found that GPT models correlated well with expert judgments. Further details can be found in Appendix A.7 and A.8. | Class | Group Type | Spearman's | 95% Confidence | |-------------|---------------|---------------|------------------| | Coefficient | Interval | | | | Pipeline | 0.03 | (-0.61, 0.65) | | | Crowd | MACE | 0.10 | (-0.56, 0.69) | | Annotators | CloudResearch | 0.08 | (-0.58, 0.67) | | GPT-3.5 | 0.73 | (0.18, 0.93) | | | GPT | ChatGPT | 0.73 | (0.20, 0.93) | | models | GPT-4 | 0.83 | (0.41, 0.96) | ![7_image_0.png](7_image_0.png) ## 4.5 Discussion In Section 4.4, we published the same referencebased task as a test to different crowd annotators 11We use the median of a group of experts as the expert judgment, which has Krippendorff's Alpha of 0.52. (pipeline, MACE, and CloudResearch). It showed that filtering workers *before* the actual evaluation task (pipeline) can avoid the waste of time and resources and achieve high agreement at a lower cost and a full coverage of HITs, compared to discarding annotations *after* the task (MACE) (see Table 7). Our pipeline also recruited workers of similar quality to CloudResearch at a lower cost; however, based on further analysis, the correctness of annotations was not guaranteed (see Section 7 for details). Besides, details about the estimated cost of GPT models for the reference-based task can be found in Table 15 in Appendix A.8.2. | Pipeline | MACE (0.5) | CloudResearch | | |---------------------------------|--------------|-----------------|-------| | Num. of initial workers | 200 | 276 | 45 | | % of workers kept | 4% | 19.2% | 17.8% | | HIT coverage | 30/30 | 30/30 | 30/30 | | Avg. num. workers per HIT | 8 | 2.4 | 8 | | Krippendorff's Alpha | 0.534 | 0.380 | 0.513 | | Cost per worker | | | | | (for Avg. num. workers per HIT) | $27 | $175 | $31 | ## 5 Statistical Test For Stability Of Pipeline We next examined whether there was a difference in the probability of passing the qualification and endurance task among MTurk workers. Thus, we started by assuming the probability of passing each task for each round came from the same distribution, and we performed a statistical test as follows. Let X denote the random variable representing the MTurk worker. For the qualification task, let qx∈X (x) denote the binary random variable which has the value of 1 if the worker can pass the task, and 0 otherwise. Similarly, let ex∈X (x) denote the binary random variable indicating whether the worker can pass the endurance task. Given 50 MTurk workers in each round, we use Q to denote the binary random variables in a round as (1). It can also be regarded as examples sampled from qx∈X (x). Among the samples, the probability of a | Annotation Task | Qual. Task | End. Task | |-----------------------|--------------|-------------| | Pass Rate | 0.13 | 0.06 | | Mean of | | | | Pass Rate (Bootstrap) | 0.1302 | 0.0602 | | Standard Dev. of | | | | Pass Rate (Bootstrap) | 0.0236 | 0.0168 | Table 8: Statistical test results for stability of pipeline. worker who can pass the qualification task is equal to the expectation of qx∈X (x) = 1 as (2). Since only workers who passed the qualification task are eligible for the endurance task, the probability of a worker passing the endurance task is equal to the expectation of ex∈X,q(x)=1(x) = 1 as (3), which is a joint distribution of qx∈X (x) and ex∈X (x). $$Q=\{q_{x_{1}\in\mathcal{X}}(x_{1}),...,q_{x_{50}\in\mathcal{X}}(x_{50})\}\tag{1}$$ $$P(q_{x\in\mathcal{X}}(x)=1)=\mathbb{E}(q_{x\in\mathcal{X}}(x)=1)$$ (2) $$P(e_{x\in\mathcal{X},q(x)=1}(x)=1)$$ $$=\mathbb{E}(e_{x\in\mathcal{X},q(x)=1}(x)=1)$$ (3) $$=P(e_{x\in\mathcal{X}}(x)=1|q(x)=1)P(q(x)=1)$$ ## Thus, We Used The Bootstrap Method (Efron, 1992) With 10,000 Iterations To Estimate The Mean And Standard Deviation Of The Probability Of Passing The Qualification And Endurance Task. Table 8 Shows The Results Of All Rounds With Breakdowns Of Each Round. We Can See Some Variance That Might Come From Mturk Workers Given Each Round. To Test Whether There Is A Difference In The Probability Of Passing Each Task Among Different Rounds, We Conducted The Permutation Test (Fisher, 1936; Pitman, 1937) For Every Two Rounds. The Results Show That We Cannot Reject The Null Hypothesis That The Underlying Distributions Of Every Two Rounds Are The Same (See Appendix A.4). 6 Conclusion In this paper, we present a two-step recruitment pipeline that yields 12 qualified workers (4 GOLD and 8 SILVER workers) out of 200 MTurk workers with basic qualification settings in our experiments. We show that workers identified by our pipeline can (i) achieve a higher inter-annotator agreement than expert annotators in the endurance task, (ii) outperform the statistical filter (MACE) that discards annotation *after* the reference-based task, and (iii) replicate a proxy of CloudResearch annotations in the correctness analysis. Though the 6% yield rate is not as expected, our pipeline serves as the **best practice** to deliver high-agreement annotations and addresses the widespread waste of resources on low-quality annotations through filtering out subpar workers *before* they embark on large-scale tasks. In the future, we plan to build up a pool of reliable annotators who can deliver high-quality (both high agreement and correctness) evaluations on a large scale and in multiple tasks, languages, and platforms. ## 7 Limitations This research creates a relatively complete pipeline to identify qualified MTurk workers for highquality human evaluations based on existing techniques, and thoroughly tests the effectiveness of this pipeline both qualitatively and quantitatively compared to other methods. However, there are several limitations of this work: - **The experiments are only conducted for** summarization tasks in English on MTurk platform. Thus, this pipeline can also be tested on other NLG tasks, in other languages, and on other platforms to see whether our three-step concept generalizes broadly to all human evaluations. - **The specific questions designed for each** task are not "panacea" solutions. A better HIT design may exist for different experimental purposes, as long as it follows the ideas behind each task. For example, the endurance task aims to ensure the worker's reliable performance on a large number of annotations, so modifications based on this idea might work better in case-by-case scenarios12. - **There is no guarantee for the training of** correctness in the pipeline though a high agreement is achieved. An additional correctness check might need to be included along with the endurance task to achieve both high agreement and correctness through the filtering of the pipeline. ## 8 Ethical Considerations Considering that crowd workers are often underpaid, experiments in this work all followed fair working wage standards13 when using MTurk for recruitment purposes (details for each task are in Table 3). In addition, we have not rejected the work from any unqualified workers so far, though we reserve the right to do so when conducting the experiments. In our experiments, personal data (any information relating to an identifiable natural person) was collected, processed, and stored based on certain data protection regulations,14 given relevant privacy concerns. Special category information (i.e. 12We encourage starting the design from the referencebased task (which performs as the test of true annotation task) and thinking about what specific training the annotators are expected to have through the qualification and endurance task. 13https://livingwage.mit.edu/counties/27053 14https://gdpr.eu/article-4-definitions/ personal data revealing racial or ethnic origin, etc.) was not included in this work. More information about the details of human evaluation experiments in this work can be found in the Human Evaluation Datasheet (HEDS) (Shimorina and Belz, 2022) in the Appendix. ## Acknowledgements We would like to thank the anonymous reviewers for their helpful feedback on our paper. We would like to thank Claire Daniele for her editorial support. Mille's contribution was funded by the European Union under the Marie Skłodowska-Curie grant agreement No 101062572 (M-FleNS). ## References Jacopo Amidei, Paul Piwek, and Alistair Willis. 2020. Identifying annotator bias: A new IRT-based method for bias identification. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4787–4797, Barcelona, Spain (Online). International Committee on Computational Linguistics. Antonio A Arechar, Gordon T Kraft-Todd, and David G Rand. 2017. Turking overtime: How participant characteristics and behavior vary over time and day on amazon mechanical turk. *Journal of the Economic* Science Association, 3(1):1–11. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Adam J. Berinsky, Gregory A. Huber, and Gabriel S. Lenz. 2012. Evaluating online labor markets for experimental research: Amazon.com's mechanical turk. *Political Analysis*, 20(3):351–368. Michael Buhrmester, Tracy Kwang, and Samuel D Gosling. 2011. Amazon's mechanical turk: A new source of inexpensive, yet high-quality data? *Perspectives on Psychological Science*, 6(1):3–5. Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using Amazon's Mechanical Turk. In *Proceedings of the 2009 Conference on Empirical Methods in Natural Language* Processing, pages 286–295, Singapore. Association for Computational Linguistics. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. *Educational and Psychological Measurement*, 20:37 - 46. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Bradley Efron. 1992. Bootstrap methods: another look at the jackknife. In *Breakthroughs in statistics*, pages 569–593. Springer. Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. *Transactions of the Association for* Computational Linguistics, 9:391–409. Ronald Aylmer Fisher. 1936. Design of experiments. British Medical Journal, 1(3923):554. Sebastian Gehrmann, Elizabeth Clark, and Thibault Sellam. 2022. Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text. *arXiv preprint arXiv:2202.06935*. Dan Gillick and Yang Liu. 2010. Non-expert evaluation of summarization systems is risky. In *Proceedings of* the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 148–151, Los Angeles. Association for Computational Linguistics. Yvette Graham, George Awad, and Alan Smeaton. 2018. Evaluation of automatic video captioning using direct assessment. *PLOS ONE*, 13(9):1–20. Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation systems be evaluated by the crowd alone. *Natural Language* Engineering, 23(1):3–30. Hardy Hardy, Shashi Narayan, and Andreas Vlachos. 2019. HighRES: Highlight-based reference-less evaluation of summarization. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 3381–3392, Florence, Italy. Association for Computational Linguistics. Andrew F Hayes and Klaus Krippendorff. 2007. Answering the call for a standard reliability measure for coding data. *Communication methods and measures*, 1(1):77–89. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In *Proceedings of the 2013 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120–1130, Atlanta, Georgia. Association for Computational Linguistics. David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions. In *Proceedings of the 13th International Conference* on Natural Language Generation, pages 169–182, Dublin, Ireland. Association for Computational Linguistics. Jessica Huynh, Jeffrey Bigham, and Maxine Eskenazi. 2021. A survey of nlp-related crowdsourcing hits: what works and what does not. *arXiv preprint* arXiv:2111.05241. Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944– 952, Cambridge, MA. Association for Computational Linguistics. E. L. Kaplan and Paul Meier. 1958. Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53(282):457– 481. Marzena Karpinska, Nader Akoury, and Mohit Iyyer. 2021. The perils of using Mechanical Turk to evaluate open-ended text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1265–1285, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jonathan K. Kummerfeld. 2021. Quantifying and avoiding unfair qualification labour in crowdsourcing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 343–349, Online. Association for Computational Linguistics. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In *Proceedings of the 14th Conference of the European Chapter of the Association for Computational* Linguistics, pages 530–539, Gothenburg, Sweden. Association for Computational Linguistics. Rensis Likert. 1932. A technique for the measurement of attitudes. *Archives of psychology*. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Emma Manning, Shira Wein, and Nathan Schneider. 2020. A human evaluation of AMR-to-English generation systems. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 4773–4786, Barcelona, Spain (Online). International Committee on Computational Linguistics. Shikib Mehri, Jinho Choi, L. F. D'Haro, Jan Deriu, Maxine Eskénazi, Milica Gasic, Kallirroi Georgila, Dilek Z. Hakkani-Tür, Zekang Li, Verena Rieser, Samira Shaikh, David R. Traum, Yi-Ting Yeh, Zhou Yu, Yizhe Zhang, and Chen Zhang. 2022. Report from the nsf future directions workshop on automatic evaluation of dialog: Research directions and challenges. *ArXiv*, abs/2203.10012. Simon Mille, Anja Belz, Bernd Bohnet, Yvette Graham, and Leo Wanner. 2019. The second multilingual surface realisation shared task (SR'19): Overview and evaluation results. In *Proceedings of the 2nd* Workshop on Multilingual Surface Realisation (MSR 2019), pages 1–17, Hong Kong, China. Association for Computational Linguistics. OpenAI. 2023. Gpt-4 technical report. *ArXiv*, abs/2303.08774. Daniel M. Oppenheimer, Tom Meyvis, and Nicolas Davidenko. 2009. Instructional manipulation checks: Detecting satisficing to increase statistical power. *Journal of Experimental Social Psychology*, 45(4):867–872. Jonas Oppenlaender, Kristy Milland, Aku Visuri, Panos Ipeirotis, and Simo Hosio. 2020. Creativity on paid crowdsourcing platforms. In *Proceedings of the 2020* CHI Conference on Human Factors in Computing Systems, CHI '20, page 1–14, New York, NY, USA. Association for Computing Machinery. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Gabriele Paolacci, Jesse Chandler, and Panagiotis G Ipeirotis. 2010. Running experiments on amazon mechanical turk. *Judgment and Decision making*, 5(5):411–419. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 311–318, USA. Association for Computational Linguistics. Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and Massimo Poesio. 2018. Comparing Bayesian models of annotation. *Transactions of the Association for Computational Linguistics*, 6:571–585. E. J. G. Pitman. 1937. Significance tests which may be applied to samples from any populations. *Supplement to the Journal of the Royal Statistical Society*, 4(1):119–130. Jonathan Robinson, Cheskie Rosenzweig, Aaron J. Moss, and Leib Litman. 2019. Tapped out or barely tapped? recommendations for how to harness the vast and largely unused potential of the mechanical turk participant pool. *PLOS ONE*, 14(12):1–29. Anastasia Shimorina and Anya Belz. 2022. The human evaluation datasheet: A template for recording details of human evaluation experiments in NLP. In *Proceedings of the 2nd Workshop on Human Evaluation* of NLP Systems (HumEval), pages 54–75, Dublin, Ireland. Association for Computational Linguistics. C. Spearman. 1904. The proof and measurement of association between two things. *The American Journal* of Psychology, 15(1):72–101. Margaret A Webb and June P Tangney. 2022. Too good to be true: Bots and bad data from mechanical turk. *Perspectives on Psychological Science*, page 17456916221120027. Mark E. Whiting, Grant Hugh, and Michael S. Bernstein. 2019. Fair work: Crowd work minimum wage with one line of code. *Proceedings of the AAAI Conference on Human Computation and Crowdsourcing*, 7(1):197–206. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. ## A Appendix A.1 Proportion Of Worker Categories In Qualification Task For Each Round | Annotation Task | Total Number | GOLD | SILVER | BROZE | | |--------------------|----------------|---------|----------|----------|----------| | of Workers | Workers | Workers | Workers | | | | Round 1 | 50 | 1 (2%) | 4 (8%) | 32 (64%) | 13 (26%) | | Round 2 | 50 | 3 (6%) | 5 (10%) | 29 (58%) | 13 (26%) | | Round 3 | 50 | 2 (4%) | 3 (6%) | 24 (48%) | 21 (42%) | | Round 4 | 50 | 2 (4%) | 6 (12%) | 27 (54%) | 15 (30%) | | Qualification Task | | | | | | Table 9: Proportion of worker categories for each round. ## A.2 Cohen'S Kappa For Each Summary In Endurance Task For the figures below, "Answer.score_0" to "Answer.score_3" correspond to the scores aggregated from the 1st to 4th summary separately for each HIT. The dark color indicates a high IAA in terms of Cohen's Kappa score. S42 stands for the second SILVER worker from Round 4. ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png) ![12_image_2.png](12_image_2.png) ## A.3 Endurance Task Result Of Lab Members | Worker Combination | A and B | B and C | C and A | |-----------------------------------|-----------|-----------|-----------| | Answer.score_0 | -0.261 | -0.083 | 0.246 | | Answer.score_1 | 0.285 | 0.13 | 0.285 | | Answer.score_2 | 0.206 | -0.006 | -0.049 | | Answer.score_3 | 0.066 | 0.006 | 0.387 | | Cohen's Kappa (Concatenation) | 0.1 | 0.055 | 0.268 | | Cohen's Kappa (Omit first 2 HITs) | 0.2 | 0.091 | 0.196 | | Krippendorff's Alpha | 0.201 | | | | Cohen's Kappa (Each Summary) | | | | Table 10: Endurance task result of lab members. ## A.4 Statistical Test Results Of Qualification And Endurance Tasks For Each Round | Annotation Task | Pass Rate | Mean of | Standard Dev. of | | |-----------------------|-----------------------|-----------|--------------------|--------| | Pass Rate (Bootstrap) | Pass Rate (Bootstrap) | | | | | Round 1 | Qua. Task | 0.1 | 0.0997 | 0.0424 | | End. Task | 0.02 | 0.0199 | 0.0198 | | | Round 2 | Qua. Task | 0.16 | 0.1611 | 0.0521 | | End. Task | 0.08 | 0.0805 | 0.0384 | | | Round 3 | Qua. Task | 0.1 | 0.1000 | 0.0482 | | End. Task | 0.06 | 0.0599 | 0.0339 | | | Round 4 | Qua. Task | 0.16 | 0.1595 | 0.0511 | | End. Task | 0.08 | 0.0800 | 0.0380 | | | All Rounds | Qua. Task | 0.13 | 0.1302 | 0.0236 | | End. Task | 0.06 | 0.0602 | 0.0168 | | Table 11: Statistical test results of qualification and endurance task. ![13_image_0.png](13_image_0.png) ## A.6 Spearman'S Coefficient For Inter-Groups (Pipeline & Mace) In Reference-Based Task For the reference-based task, we used 4 methods to calculate Spearman's coefficient: - **Method 1**: Given the different numbers of remaining MACE workers for each HIT, we calculate Spearman's coefficient between our pipeline and MACE workers in each HIT. Then we take the average of these coefficients as the inter-group Spearman's coefficient shown in Table 12 15. - **Method 2**: The only difference between this method and Method 1 is that we take the absolute value when calculating Spearman's coefficient for each HIT. - **Method 3**: We take the average of each annotation question in each HIT within the group of our pipeline and MACE workers separately, then concatenate these average scores of all HITs together for each group and calculate Spearman's coefficient. - **Method 4**: The only difference between this method and Method 3 is that we calculate Spearman's coefficient for each HIT and then take the average of all coefficients instead of concatenating first and then calculating the coefficient. | Threshold | 0.5 | 0.6 | 0.7 | | |---------------------------------------|-------------------------------------------|--------|--------|--------| | % of workers kept | 19.2% | 15.9% | 7.6% | | | HIT coverage | 30/30 | 27/30 | 18/30 | | | Avg. num. workers per HIT | 2.4 | 1.9 | 1.2 | | | Krippendorff's Alpha (all scores) | 0.380 | 0.472 | 0.754 | | | Spearman's coefficient (MACE workers) | 0.351 | 0.414 | 0.770 | | | Method 1 | Spearman's coefficient (pipeline workers) | 0.558 | 0.565 | 0.577 | | Spearman's coefficient (inter-group) | -0.081 | -0.063 | -0.234 | | | Spearman's coefficient (MACE workers) | 0.396 | 0.418 | 0.770 | | | Method 2 | Spearman's coefficient (pipeline workers) | 0.575 | 0.580 | 0.591 | | Spearman's coefficient (inter-group) | 0.307 | 0.299 | 0.308 | | | Method 3 | Spearman's coefficient (inter-group) | -0.107 | -0.067 | -0.355 | | Method 4 | Spearman's coefficient (inter-group) | -0.102 | -0.113 | -0.194 | Table 12: Methods for calculation of Spearman's coefficient within and across groups of pipeline and MACE workers in reference-based task. ## Qualitative Analysis Of Correctness Across Annotation Sources In Reference-Based Task A.7 For the reference-based task, we first randomly select 50 HITs out of 30 HITs (HIT index ranges from 0 to 29), and then 1 annotation question out of 8 questions (annotation index ranges from 0 to 7) for each of these HITs selected in the above step. For each randomly selected annotation question, we calculate the median within the groups of our pipeline, MACE, and CloudResearch workers separately, as well as the scores generated by GPT models (GPT-3.5 ("text-davinci-003"), ChatGPT, and GPT-4 16 ). The expert judgment (aggregated by the median) and details for 50 randomly selected annotation questions can be found in Table 13 and Table 14 . Figure 10 shows Spearman's coefficient among different groups aggregated by the median before (left) and after (right) the removal of controversial HITs (HIT with index 15, 16, and 28). We also perform a similar analysis aggregated by the mean shown in Figure 11. ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) | Index | Two Types of Summaries | Inclusion Direction | Human Annotators (Median) | GPT series scores | Expert | | | | | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------|-----------------------------|---------------------|----------|-----|-----|-----|-----| | Sample | Pipeline MACE CloudResearch GPT-3.5 ChatGPT GPT-4 Judgment | | | | | | | | | | 1 | Reference The government has given regulators more time to investigate the proposed takeover of broadcaster Sky by 21st Century Fox. | can2ref | 5.0 | 4.0 | 4.0 | 4.0 | 5.0 | 5.0 | 5.0 | | Candidate The government has extended the deadline for an inquiry into the takeover of Sky by 21st Century Fox. | | | | | | | | | | | 2 | Reference A Chinese woman has been found guilty of trespassing at President Donald Trump's Mar-a-Lago club in Florida and of lying to a federal agent. | can2ref | 3.0 | 5.0 | 3.5 | 4 | 5 | 4.5 | 4.0 | | Candidate A Chinese woman who sparked alarm when she walked into US President Donald Trump's Mar-a-Lago resort has been found guilty of trespassing. | | | | | | | | | | | 3 | Reference A unique garden is helping Canadians to break a taboo that exists in many societies. It is allowing parents to talk openly about miscarriage. | ref2can | 4.0 | 4.0 | 4.0 | 4.0 | 5.0 | 4.0 | 3.0 | | Candidate A Canadian cemetery has created a garden dedicated to the memory of babies lost during pregnancy. It's a place that's especially for those who have had multiple miscarriages. | | | | | | | | | | | 4 | Reference Gadgets that track your steps, sleeping and heart rate could help us live longer and cut national healthcare costs by billions - or so we are told. | can2ref | 3.0 | 4.0 | 2.5 | 1.0 | 1.0 | 1.0 | 1.0 | | Candidate It is a huge amount of us have a smartphone, a smartphone and a gadget that feeds data from a smartphone. | | | | | | | | | | | 5 | Reference A unique garden is helping Canadians to break a taboo that exists in many societies. It is allowing parents to talk openly about miscarriage. | can2ref | 2.0 | 4.0 | 2.0 | 4.0 | 5.0 | 4.0 | 3.0 | | Candidate A Canadian garden dedicated to the memory of children lost during pregnancy is helping to heal the pain of grief. | | | | | | | | | | | 6 | Reference The 2017 Oscar nominations are out, with La La Land the frontrunner. Here's a round-up of the surprises and talking points from this year's list. | can2ref | 4.0 | 3.0 | 3.5 | 3.0 | 4.0 | 4.0 | 4.0 | | Candidate The full list of Oscar nominations has been announced. Here are 10 talking points from the shortlists. | | | | | | | | | | | 7 | Reference Welsh victims of the contaminated blood scandal have said it is not fair they get less financial help than people affected in England and Scotland. | ref2can | 2.0 | 4.0 | 1.5 | 4.0 | 4.0 | 4.0 | 2.0 | | Candidate A man who contracted hepatitis C from the contaminated blood scandal has said Welsh support payments are not fair. | | | | | | | | | | | 8 | Reference An anonymous letter sent to a council outlining an alleged plan to oust head teachers is "defamatory", the leader of Birmingham City Council has said. | can2ref | 4.0 | 4.0 | 4.0 | 3.0 | 1.0 | 3.0 | 2.0 | | Candidate A letter written by a council officer calling for schools to be taken over by a council has been defamatory. | | | | | | | | | | | 9 | Reference Graduates from ethnic minorities in Britain are less likely to be in work than their white peers, a study says. | ref2can | 4.0 | 4.0 | 3.5 | 2.0 | 2.0 | 1.0 | 2.0 | | Candidate The number of ethnic minority graduates in the UK has fallen by almost 5% in the last year, according to a think tank. | | | | | | | | | | | 10 | Reference Two endangered red panda cubs have been born at a wildlife park on the Isle of Man. | ref2can | 2.0 | 4.0 | 4.0 | 5.0 | 5.0 | 5.0 | 5.0 | | Candidate Two endangered red panda cubs have been born at a wildlife park in the Isle of Man. | | | | | | | | | | | 11 | Reference Two endangered red panda cubs have been born at a wildlife park on the Isle of Man. | ref2can | 5.0 | 3.0 | 5.0 | 4.0 | 4.0 | 5.0 | 5.0 | | Candidate Two endangered red panda cubs have been born at a wildlife park on the Isle of Man, a year after a giant themed elephant calf escaped from his enclosure. | | | | | | | | | | | 12 | Reference Welsh Water has announced pre-tax profits of £7m for the last financial year. | can2ref | 5.0 | 4.0 | 5.0 | 5.0 | 5.0 | 5.0 | 4.0 | | Candidate Welsh Water has announced pre-tax profits of £7m for the year to April. | | | | | | | | | | | 13 | Reference A "poo-powered" VW Beetle has taken to the streets of Bristol in an attempt to encourage sustainable motoring. | ref2can | 4.0 | 4.0 | 2.5 | 4.0 | 4.0 | 4.0 | 3.0 | | Candidate A car powered by biogas has been seen on the streets of Bristol. | | | | | | | | | | | 14 | Reference An anonymous letter sent to a council outlining an alleged plan to oust head teachers is "defamatory", the leader of Birmingham City Council has said. | can2ref | 5.0 | 3.0 | 4.5 | 4.0 | 5.0 | 4.0 | 3.0 | | Candidate A letter sent to Birmingham City Council by a whistle-blower has been described as "defamatory" by the city council's chief inspector of schools. | | | | | | | | | | | 15 | Reference In our media-saturated age, it's rare to have a chief executive who doesn't speak to the press or, indeed, very often publicly. | can2ref | 5.0 | 4.0 | 5.0 | 1.0 | 1.0 | 1.0 | 1.0 | | Candidate Chinese entrepreneurs are a familiar sight. Parliament has been dissolved and the official election campaign has begun. BBC Reality Check listened in to Prime Minister Boris Johnson's campaign speeches in Downing Street | | | | | | | | | | | 16 | Reference and in Birmingham to check the facts and figures. | ref2can | 5.0 | 4.0 | 5.0 | 3.0 | 3.0 | 3.0 | 1.0 | | Candidate Boris Johnson made a series of claims about his government's plans for the next few years. Here are six of the key pledges he made. | | | | | | | | | | | 17 | Reference Naturalist Sir David Attenborough and the Queen are the greatest living British man and woman, according to readers of Best of British magazine. | can2ref | 3.0 | 4.0 | 3.5 | 4.0 | 5.0 | 4.0 | 4.0 | | Candidate David Attenborough has been voted the best of British by the magazine. | | | | | | | | | | | 18 | Reference An Edinburgh adventurer has become the youngest woman to ski solo to the South Pole. | can2ref | 4.0 | 4.0 | 4.0 | 5.0 | 5.0 | 4.5 | 4.0 | | Candidate A woman from Edinburgh has become the youngest person to reach the South Pole solo. | | | | | | | | | | | 19 | Reference Resurfacing work on a newly-repaired canal towpath that washed away after vandals left a lock gate open has begun. | can2ref | 4.0 | 3.0 | 3.5 | 4.0 | 4.0 | 4.0 | 5.0 | | Candidate Work has begun to resurface a canal towpath which was damaged by flooding. | | | | | | | | | | | 20 | Reference The Brexit vote is already having a negative impact on business, a survey of bosses from some of the UK's biggest companies has suggested. | ref2can | 4.0 | 5.0 | 4.0 | 4.0 | 5.0 | 4.0 | 5.0 | | Candidate The majority of business leaders believe the Brexit vote has already had a negative impact on their company, a survey suggests. | | | | | | | | | | | 21 | Reference A campaign has begun to stop the spread of norovirus in Cornwall. | can2ref | 5.0 | 5.0 | 3.5 | 4.0 | 5.0 | 5.0 | 5.0 | | Candidate A campaign has been launched to prevent the spread of norovirus in Cornwall. | | | | | | | | | | | 22 | Reference Welsh victims of the contaminated blood scandal have said it is not fair they get less financial help than people affected in England and Scotland. | can2ref | 3.0 | 3.5 | 2.5 | 3.0 | 2.0 | 2.0 | 3.0 | | Candidate The Welsh Government has said it is not fair to pay for patients who have contaminated blood in the 1970s and 1980s. | | | | | | | | | | | 23 | Reference People on Jersey's Ecrehous islands are concerned travellers are arriving from France by boat and not being tested for coronavirus. | ref2can | 1.0 | 4.0 | 1.0 | 3.0 | 3.0 | 3.0 | 2.0 | | Candidate People living on Jersey's Ecrehous islands have said they are worried about the number of people arriving ashore. | | | | | | | | | | | 24 | Reference The government has given regulators more time to investigate the proposed takeover of broadcaster Sky by 21st Century Fox. | can2ref | 2.5 | 3.0 | 3.0 | 4.0 | 4.0 | 4.0 | 3.0 | | Candidate The government has extended its takeover inquiry into Sky's takeover deal with regulator Ofcom. | | | | | | | | | | | 25 | Reference Graduates from ethnic minorities in Britain are less likely to be in work than their white peers, a study says. | can2ref | 3.0 | 3.5 | 3.0 | 2.0 | 2.0 | 1.0 | 2.0 | | Candidate The number of ethnic minority graduates in the UK has fallen by almost 5% in the last year, according to a think tank. Table 13: Qualitative analysis of correctness with 50 random samples (Part 1). | | | | | | | | | | | Index | Two Types of Summaries | Inclusion Direction | Human Annotators (Median) | GPT series scores | Expert | | | | | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------|-----------------------------|---------------------|----------|-----|-----|-----|-----| | Sample | Pipeline MACE CloudResearch GPT-3.5 ChatGPT GPT-4 Judgment | | | | | | | | | | 26 | Reference Joan Miro's 1927 work Peinture (Etoile Bleue) has sold for more than £23.5 million in London, setting a new auction record for the Spanish painter. | can2ref | 4.0 | 5.0 | 4.0 | 3.0 | 1.0 | 2.0 | 3.0 | | Candidate Joan Miro's painting, which inspired the famous Joan Miro, has smashed its auction record for £15m. | | | | | | | | | | | 27 | Reference One of Oxford's main routes remains closed because of flooding for the second time in a month. | ref2can | 2.5 | 5.0 | 2.0 | 5.0 | 5.0 | 5.0 | 5.0 | | Candidate A major route through Oxford has been closed for the second time in a month due to flooding. | | | | | | | | | | | 28 | Reference Holidaymakers say they have been left thousands of pounds out of pocket after a letting company ceased trading without notice. | ref2can | 5.0 | 4.0 | 5.0 | 4.0 | 4.0 | 4.0 | 2.0 | | Candidate Brighton Holiday Homes has gone bust with bookings cancelled after a third of its customers claimed their money was lost. | | | | | | | | | | | 29 | Reference A £4.4m revamped Denbighshire leisure centre will open on Saturday. | cand2ref | 4.0 | 5.0 | 3.0 | 5.0 | 4.0 | 4.0 | 4.0 | | Candidate A Denbighshire leisure centre is reopening on Thursday after a £4.4m revamp. | | | | | | | | | | | 30 | Reference Gadgets that track your steps, sleeping and heart rate could help us live longer and cut national healthcare costs by billions - or so we are told. | ref2cand | 1.0 | 3.5 | 3.0 | 4.0 | 1.0 | 1.0 | 1.0 | | Candidate Every step we take is going to be tracked by a device that cannot simply put our fingers on our wrists. | | | | | | | | | | | 31 | Reference Gadgets that track your steps, sleeping and heart rate could help us live longer and cut national healthcare costs by billions - or so we are told. | cand2ref | 2.0 | 3.0 | 4.0 | 4.0 | 1.0 | 1.0 | 1.0 | | Candidate Every step we take is going to be tracked by a device that cannot simply put our fingers on our wrists. | | | | | | | | | | | 32 | Reference Joan Miro's 1927 work Peinture (Etoile Bleue) has sold for more than £23.5 million in London, setting a new auction record for the Spanish painter. | ref2cand | 2.0 | 5.0 | 4.0 | 4.5 | 4.0 | 4.0 | 4.0 | | Candidate A painting by Joan Miro has sold for £18.8m at auction, breaking the previous record for a work by the artist. | | | | | | | | | | | 33 | Reference A unique garden is helping Canadians to break a taboo that exists in many societies. It is allowing parents to talk openly about miscarriage. | cand2ref | 3.0 | 3.0 | 4.0 | 3.0 | 4.0 | 5.0 | 5.0 | | Candidate A Canadian memorial garden is helping parents come to terms with the pain of losing a child during pregnancy. | | | | | | | | | | | 34 | Reference Holidaymakers say they have been left thousands of pounds out of pocket after a letting company ceased trading without notice. | cand2ref | 3.0 | 4.0 | 4.0 | 4.0 | 4.0 | 3.0 | 4.0 | | Candidate A holiday home firm has gone bust after customers were told they had been left "heartbroken" after bookings were cancelled. | | | | | | | | | | | 35 | Reference A woman rescued after falling from a North Sea ferry has told how she thought she was going to die. | ref2cand | 4.0 | 4.5 | 5.0 | 1.5 | 5.0 | 5.0 | 5.0 | | Candidate A woman who fell from a ferry into the North Sea has described how she thought she was going to die. | | | | | | | | | | | 36 | Reference The Brexit vote is already having a negative impact on business, a survey of bosses from some of the UK's biggest companies has suggested. | ref2cand | 4.0 | 3.0 | 3.0 | 2.0 | 4.0 | 5.0 | 5.0 | | Candidate The UK's vote to leave the European Union is already having a negative impact on businesses, a survey suggests. | | | | | | | | | | | 37 | Reference Welsh Water has announced pre-tax profits of £7m for the last financial year. | ref2cand | 4.0 | 4.5 | 4.0 | 4.0 | 5.0 | 5.0 | 5.0 | | Candidate Welsh Water has announced pre-tax profits of £7m for the year to April. | | | | | | | | | | | 38 | Reference One of Oxford's main routes remains closed because of flooding for the second time in a month. | ref2cand | 5.0 | 3.0 | 5.0 | 3.5 | 5.0 | 5.0 | 5.0 | | Candidate A major route through Oxford has been closed for the second time in a month because of flooding. | | | | | | | | | | | 39 | Reference A 10-year-old boy died after he hit his head on a wall while playing football at school, an inquest heard. | ref2cand | 4.0 | 4.0 | 3.0 | 3.5 | 4.0 | 5.0 | 5.0 | | Candidate A 10-year-old boy who hit his head while playing football at school died from traumatic brain injury, an inquest heard. | | | | | | | | | | | 40 | Reference A video artist who uses YouTube clips, a print-maker and an artist who pairs spoken word with photography are among this year's Turner Prize nominees. | ref2cand | 3.0 | 4.0 | 3.5 | 3.5 | 4.0 | 4.0 | 4.0 | | Candidate A YouTube artist who splices together clips of horror films and a print-maker who works with women's groups are among the nominees for this year's Turner Prize. Parliament has been dissolved and the official election campaign has begun. BBC Reality Check listened in to Prime Minister Boris Johnson's campaign speeches in Downing Street and in | | | | | | | | | | | 41 | Reference Birmingham to check the facts and figures. | ref2cand | 1.0 | 1.0 | 3.0 | 1.0 | 3.0 | 4.0 | 2.0 | | Candidate Boris Johnson has been making his pitch to Conservative voters in the final week of the election campaign. What did he get right and wrong? | | | | | | | | | | | 42 | Reference Film director Roman Polanski has been released after being questioned by prosecutors in Poland over sex offences in the US. | cand2ref | 3.0 | 4.0 | 3.5 | 4.0 | 4.0 | 4.0 | 4.0 | | Candidate Polish film director Roman Polanski has been freed after prosecutors said they had not made an extradition bid for him. | | | | | | | | | | | 43 | Reference A video artist who uses YouTube clips, a print-maker and an artist who pairs spoken word with photography are among this year's Turner Prize nominees. | cand2ref | 3.0 | 3.5 | 4.0 | 4.0 | 3.0 | 4.0 | 3.0 | | Candidate A video artist who uses YouTube and a storyteller who uses storytelling techniques are among the nominees for the 2014 Turner Prize. | | | | | | | | | | | 44 | Reference DJ Dave Lee Travis has told a court he does not have a "predatory nature". | ref2cand | 4.0 | 3.5 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | | Candidate Former radio DJ Dave Lee Travis has told a court he is "cuddly" not "predatory". | | | | | | | | | | | 45 | Reference Naturalist Sir David Attenborough and the Queen are the greatest living British man and woman, according to readers of Best of British magazine. | cand2ref | 3.0 | 3.0 | 3.0 | 3.0 | 4.0 | 4.0 | 3.0 | | Candidate Sir David Attenborough has been named the best living British celebrity in a poll by the Magazine of British History. | | | | | | | | | | | 46 | Reference A Chinese woman has been found guilty of trespassing at President Donald Trump's Mar-a-Lago club in Florida and of lying to a federal agent. | ref2cand | 2.0 | 5.0 | 3.0 | 4.5 | 1.0 | 1.0 | 1.0 | | Candidate A woman who sparked alarm at Mar-a-Lago has been found guilty of killing herself. | | | | | | | | | | | 47 | Reference Graduates from ethnic minorities in Britain are less likely to be in work than their white peers, a study says. | ref2cand | 5.0 | 3.0 | 3.0 | 3.0 | 4.0 | 5.0 | 5.0 | | Candidate Black and ethnic minority graduates are less likely to be employed than white British counterparts, a report suggests. | | | | | | | | | | | 48 | Reference An anonymous letter sent to a council outlining an alleged plan to oust head teachers is "defamatory", the leader of Birmingham City Council has said. | ref2cand | 2.0 | 4.5 | 4.0 | 3.5 | 3.0 | 2.0 | 3.0 | | Candidate A letter written by a council officer calling for schools to be taken over by a council has been defamatory. | | | | | | | | | | | 49 | Reference The 2017 Oscar nominations are out, with La La Land the frontrunner . Here's a round-up of the surprises and talking points from this year's list. | ref2cand | 3.0 | 4.0 | 3.0 | 4.0 | 3.0 | 4.0 | 4.0 | | Candidate The full list of Oscar nominations has been announced. Here are 10 talking points from the shortlists. | | | | | | | | | | | 50 | Reference People on Jersey's Ecrehous islands are concerned travellers are arriving from France by boat and not being tested for coronavirus. | ref2cand | 3.0 | 5.0 | 4.0 | 5.0 | 4.0 | 3.0 | 4.0 | | Candidate People living on Jersey's Ecrehous islands have said they fear they are "playing Russian roulette" with coronavirus restrictions after a rise in arrivals. Table 14: Qualitative analysis of correctness with 50 random samples (Part 2). | | | | | | | | | | ## A.8 Interaction With Gpt Models In Reference-Based Task A.8.1 Prompt Design ![18_Image_0.Png](18_Image_0.Png) In Figure 12, we show an example of the interaction with ChatGPT and the exact prompt design we use to acquire scores generated by GPT models through API17 for the analysis of correctness in the reference-based task. This prompt design follows the instructions we provide to the crowd annotators in the reference-based task (see Figure 16 for details) with minor modifications for the score generation from GPT models. Details about running experiments through API can be found in Section 4.4. ![18_image_1.png](18_image_1.png) ![18_image_2.png](18_image_2.png) ## A.8.2 Estimated Cost Of Gpt Models We estimate the cost of running GPT models for the score generation in the reference-based task (240 annotation questions in total) based on the cost of 50 random annotation questions. Details of pricing can be found on OpenAI's website18. We assume the GPT model only returns the score without explanations. | GPT Models | Cost per 1K Token | Estimated Cost | |--------------------|---------------------|------------------| | GPT-3.5 | $0.02 | $0.21 | | ChatGPT | $0.002 | $0.02 | | GPT-4 | $0.03 (prompt) | | | $0.06 (completion) | $0.32 | | Table 15: Estimated cost of GPT models for the reference-based task. ## A.9 Instruction And Annotation Question Examples Of Hit Here we provide some examples of instructions and annotation questions for all three tasks as screenshots. A.9.1 Qualification Task - Figure 13 shows the definition of an evaluation dimension illustrated with examples in the training part. - Figure 14 shows the example of the qualification question in the qualification part. Definitions and Training Examples ![19_image_0.png](19_image_0.png) Figure 13: Example from training part of qualification task. ![19_image_1.png](19_image_1.png) Figure 14: Example from qualification part of qualification task. ## Endurance Task A.9.2 Figure 15 shows the example of the annotation question on a Likert scale of 1 to 10 in the endurance task. Task Instructions In this seek, you will evaluate for solence at offerent survivalen of an article. First, mad fixt article. Even assign each summary a saferios score from 1 to 18. A satiest survives is crited oughter the most inportant information of the article and these not include parts of the article that are less inportant. ![20_image_0.png](20_image_0.png) Figure 15: Example of the annotation question in endurance task. ## A.9.3 Reference-Based Task - Figure 16 shows the instructions for the reference-based task. - Figure 17 shows the example of the annotation question in the reference-based task. Instructions I this task, you will be shown a reference summaries and asked to assign each cansidate summary two soons from 1 to 5 based on how much pos agree with the following statement - All of the information in the candidate surstrary can olso be found in the reference summary - All of the information in the reference summary can also be found in the candidate summary. Yhat I important to f the candidate summary and reference summary consey the same information not if they use exactly the same words. Usually the referred summary and conditi If the accord is ", It means that almost to information in one summary can be found in the other. If the accord in the other. If the accord is 5, it means that almost all of Figure 16: Instructions for the reference-based task. | Annotation Task | |------------------------------------------------------------------| | Reference Summary | | dent biden on Friday micros | | Mc fider also seld be neceded sign an executive select selection | ![20_image_1.png](20_image_1.png) Figure 17: Example of the annotation question in the reference-based task. # The Human Evaluation Datasheet: A Template For Recording Details Of Human Evaluation Experiments In Nlp (Described In Shimorina And Belz **(2022))** ## 1 Questions About Paper And Supplementary Resources (Questions 1.1–1.3) Questions 1.1–1.3 record bibliographic and related information. These are straightforward and don't warrant much in-depth explanation. Question 1.1: Link to paper reporting the evaluation experiment. If the paper reports more than one experiment, state which experiment you're completing this sheet for. Or, if applicable, enter 'for preregistration.' Paper Link: https://arxiv.org/abs/ 2212.10397 This sheet is completed for three experiments in the paper: Qualification Task, Endurance Task, and Referencebased Task. What to enter in the text box: a link to an online copy of the main reference for the human evaluation experiment, identifying which of the experiments the form is being completed for if there are several. If the experiment hasn't been run yet, and the form is being completed for the purpose of submitting it for preregistration, simply enter 'for preregistration'. Question 1.2: Link to website providing resources used in the evaluation experiment (e.g. system outputs, evaluation tools, etc.). If there isn't one, enter 'N/A'. The data and code are available at https://github.com/GEM-benchmark/ MTurkRequirementPipeline. What to enter in the text box: link(s) to any resources used in the evaluation experiment, such as system outputs, evaluation tools, etc. If there aren't any publicly shared resources (yet), enter 'N/A'. Question 1.3: Name, affiliation and email address of person completing this sheet, and of contact author if different. Lining Zhang, New York University lz2332@nyu.edu What to enter in the text box: names, affiliations and email addresses as appropriate. ## 2 System Questions 2.1–2.5 Questions 2.1–2.5 record information about the system(s) (or human-authored stand-ins) whose outputs are evaluated in the Evaluation experiment that this sheet is being completed for. The input, output, and task questions in this section are closely interrelated: the value for one partially determines the others, as indicated for some combinations in Question 2.3. Question 2.1: What type of input do the evaluated system(s) take? Select all that apply. If none match, select 'Other' and describe. Describe the type of input, where input refers to the representations and/or data structures shared by all evaluated systems. This question is about input type, regardless of number. E.g. if the input is a set of documents, you would still select *text: document* below. Check-box options (select all that apply): □ *raw/structured data*: numerical, symbolic, and other data, possibly structured into trees, graphs, graphical models, etc. May be the input e.g. to Referring Expression Generation (REG), endto-end text generation, etc. NB: excludes linguistic structures. □ *deep linguistic representation (DLR)*: any of a variety of deep, underspecified, semantic representations, such as abstract meaning representations (AMRs; Banarescu et al., 2013) or discourse representation structures (DRSs; Kamp and Reyle, 2013). □ *shallow linguistic representation (SLR)*: any of a variety of shallow, syntactic representations, e.g. Universal Dependency (UD) structures; typically the input to surface realisation. □ *text: subsentential unit of text*: a unit of text shorter than a sentence, e.g. Referring Expressions (REs), verb phrase, text fragment of any length; includes titles/headlines. ✓ *text: sentence*: a single sentence (or set of sentences). ✓ *text: multiple sentences*: a sequence of multiple sentences, without any document structure (or a set of such sequences). ✓ *text: document*: a text with document structure, such as a title, paragraph breaks or sections, e.g. a set of news reports for summarisation. □ *text: dialogue*: a dialogue of any length, excluding a single turn which would come under one of the other text types. □ *text: other*: input is text but doesn't match any of the above *text:** categories. □ *speech*: a recording of speech. □ *visual*: an image or video. □ *multi-modal*: catch-all value for any combination of data and/or linguistic representation and/or visual data etc. □ *control feature*: a feature or parameter specifically present to control a property of the output text, e.g. positive stance, formality, author style. □ *no input (human generation)*: human generation1, therefore no system inputs. □ *other (please specify)*: if input is none of the above, choose this option and describe it. 1We use the term 'human generation' where the items being evaluated have been created manually, rather than generated by an automatic system. Question 2.2: What type of output do the evaluated system(s) generate? Select all that apply. If none match, select 'Other' and describe. Describe the type of output, where output refers to the representations and/or data structures shared by all evaluated systems. This question is about output type, regardless of number. E.g. if the output is a set of documents, you would still select *text: document* below. Note that the options for outputs are the same as for inputs except that the *no input (human generation) option* is replaced with human-generated 'outputs', and the *control feature* option is removed. Check-box options (select all that apply): □ *raw/structured data*: numerical, symbolic, and other data, possibly structured into trees, graphs, graphical models, etc. May be the input e.g. to Referring Expression Generation (REG), endto-end text generation, etc. NB: excludes linguistic structures. □ *deep linguistic representation (DLR)*: any of a variety of deep, underspecified, semantic representations, such as abstract meaning representations (AMRs; Banarescu et al., 2013) or discourse representation structures (DRSs; Kamp and Reyle, 2013). □ *shallow linguistic representation (SLR)*: any of a variety of shallow, syntactic representations, e.g. Universal Dependency (UD) structures; typically the input to surface realisation. □ *text: subsentential unit of text*: a unit of text shorter than a sentence, e.g. Referring Expressions (REs), verb phrase, text fragment of any length; includes titles/headlines. □ *text: sentence*: a single sentence (or set of sentences). □ *text: multiple sentences*: a sequence of multiple sentences, without any document structure (or a set of such sequences). □ *text: document*: a text with document structure, such as a title, paragraph breaks or sections, e.g. a set of news reports for summarisation. □ *text: dialogue*: a dialogue of any length, excluding a single turn which would come under one of the other text types. □ *text: other*: select if output is text but doesn't match any of the above *text:** categories. □ *speech*: a recording of speech. □ *visual*: an image or video. □ *multi-modal*: catch-all value for any combination of data and/or linguistic representation and/or visual data etc. ✓ *human-generated 'outputs'*: manually created stand-ins exemplifying outputs.1 □ *other (please specify)*: if output is none of the above, choose this option and describe it. Question 2.3: How would you describe the task that the evaluated system(s) perform in mapping the inputs in Q2.1 to the outputs in Q2.2? Occasionally, more than one of the options below may apply. If none match, select 'Other' and describe. This field records the task performed by the system(s) being evaluated. This is independent of the application domain (financial reporting, weather forecasting, etc.), or the specific method (rulebased, neural, etc.) implemented in the system. We indicate mutual constraints between inputs, outputs and task for some of the options below. Check-box options (select all that apply): □ *content selection/determination*: selecting the specific content that will be expressed in the generated text from a representation of possible content. This could be attribute selection for REG (without the surface realisation step). Note that the output here is not text. □ *content ordering/structuring*: assigning an order and/or structure to content to be included in generated text. Note that the output here is not text. □ *aggregation*: converting inputs (typically deep linguistic representations or *shallow linguistic* representations) in some way in order to reduce redundancy (e.g. representations for 'they like swimming', 'they like running' → representation for 'they like swimming and running'). □ *referring expression generation*: generating text to refer to a given referent, typically represented in the input as a set of attributes or a linguistic representation. □ *lexicalisation*: associating (parts of) an input representation with specific lexical items to be used in their realisation. □ *deep generation*: one-step text generation from raw/structured data or *deep linguistic representations*. One-step means that no intermediate representations are passed from one independently run module to another. □ *surface realisation (SLR to text)*: one-step text generation from *shallow linguistic representations*. One-step means that no intermediate representations are passed from one independently run module to another. □ *feature-controlled text generation*: generation of text that varies along specific dimensions where the variation is controlled via *control* features specified as part of the input. Input is a non-textual representation (for featurecontrolled text-to-text generation select the matching text-to-text task). □ *data-to-text generation*: generation from raw/structured data which may or may not include some amount of content selection as part of the generation process. Output is likely to be text:* or *multi-modal*. □ *dialogue turn generation*: generating a dialogue turn (can be a greeting or closing) from a representation of dialogue state and/or last turn(s), etc. □ *question generation*: generation of questions from given input text and/or knowledge base such that the question can be answered from the input. □ *question answering*: input is a question plus optionally a set of reference texts and/or knowledge base, and the output is the answer to the question. □ *paraphrasing/lossless simplification*: text-totext generation where the aim is to preserve the meaning of the input while changing its wording. This can include the aim of changing the text on a given dimension, e.g. making it simpler, changing its stance or sentiment, etc., which may be controllable via input features. Note that this task type includes meaningpreserving text simplification (non-meaning preserving simplification comes under *compression/lossy simplification* below). □ *compression/lossy simplification*: text-to-text generation that has the aim to generate a shorter, or shorter and simpler, version of the input text. This will normally affect meaning to some extent, but as a side effect, rather than the primary aim, as is the case in *summarisation*. □ *machine translation*: translating text in a source language to text in a target language while maximally preserving the meaning. ✓ *summarisation (text-to-text)*: output is an extractive or abstractive summary of the important/relevant/salient content of the input document(s). □ *end-to-end text generation*: use this option if the single system task corresponds to more than one of tasks above, implemented either as separate modules pipelined together, or as one-step generation, other than *deep generation* and *surface realisation*. □ *image/video description*: input includes *visual*, and the output describes it in some way. □ *post-editing/correction*: system edits and/or corrects the input text (typically itself the textual output from another system) to yield an improved version of the text. □ *other (please specify)*: if task is none of the above, choose this option and describe it. ## Question 2.4: Input Language(S), Or 'N/A'. This field records the language(s) of the inputs accepted by the system(s) being evaluated. ## English. What to enter in the text box: any language name(s) that apply, mapped to standardised full language names in ISO 639-12. E.g. English, Herero, Hindi. If no language is accepted as (part of) the input, enter 'N/A'. Question 2.5: Output Language(s), or 'N/A'. This field records the language(s) of the outputs generated by the system(s) being evaluated. ## English. What to enter in the text box: any language name(s) that apply, mapped to standardised full language names in ISO 639-1 (2019)2. E.g. English, Herero, Hindi. If no language is generated, enter 'N/A'. ## 3 Questions About Output Sample, Evaluators, Experimental Design 3.1 Sample of system outputs (or ## Human-Authored Stand-Ins) Evaluated (Questions 3.1.1–3.1.3) Questions 3.1.1–3.1.3 record information about the size of the sample of outputs (or human-authored stand-ins) evaluated per system, how the sample was selected, and what its statistical power is. Question 3.1.1: How many system outputs (or other evaluation items) are evaluated per system in the evaluation experiment? Answer should be an integer. Qualification Task: 3*6=18 Endurance Task: 10*4=40 Reference-based Task: 30*8=240 What to enter in the text box: The number of system outputs (or other evaluation items) that are evaluated per system by at least one evaluator in the experiment, as an integer. Question 3.1.2: How are system outputs (or other evaluation items) selected for inclusion in the evaluation experiment? If none match, select 'Other' and describe. Multiple-choice options (select one): ◦ *by an automatic random process from a larger* set: outputs were selected for inclusion in the experiment by a script using a pseudo-random number generator; don't use this option if the script selects every nth output (which is not random). ✓ **by an automatic random process but using** stratified sampling over given properties: use this option if selection was by a random script as above, but with added constraints ensuring that the sample is representative of the set of outputs it was selected from, in terms of given properties, such as sentence length, positive/negative stance, etc. ◦ *by manual, arbitrary selection*: output sample was selected by hand, or automatically from a manually compiled list, without a specific selection criterion. ◦ *by manual selection aimed at achieving balance or variety relative to given properties*: selection by hand as above, but with specific selection criteria, e.g. same number of outputs from each time period. ◦ *Other (please specify)*: if selection method is none of the above, choose this option and describe it. Question 3.1.3: What is the statistical power of the sample size? See Section 5 and Appendix A.4 for details. What to enter in the text box: The results of a statistical power calculation on the output sample: provide numerical results and a link to the script used (or another way of identifying the script). See, e.g., Card et al. (2020); Howcroft and Rieser (2021). ## 3.2 Evaluators (Questions 3.2.1–3.2.5) Questions 3.2.1–3.2.5 record information about the evaluators participating in the experiment. Question 3.2.1: How many evaluators are there in this experiment? Answer should be an integer. Qualification Task: 200 Endurance Task: 26 Reference-based Task: 12 What to enter in the text box: the total number of evaluators participating in the experiment, as an integer. Question 3.2.2: What kind of evaluators are in this experiment? Select all that apply. If none match, select 'Other' and describe. In all cases, provide details in the text box under 'Other'. Check-box options (select all that apply): □ *experts*: participants are considered domain experts, e.g. meteorologists evaluating a weather forecast generator, or nurses evaluating an ICU report generator. ✓ *non-experts*: participants are not domain experts. ✓ **paid (including non-monetary compensation** such as course credits): participants were given some form of compensation for their participation, including vouchers, course credits, and reimbursement for travel unless based on receipts. □ *not paid*: participants were not given compensation of any kind. □ *previously known to authors*: (one of the) researchers running the experiment knew some or all of the participants before recruiting them for the experiment. ✓ *not previously known to authors*: none of the researchers running the experiment knew any of the participants before recruiting them for the experiment. □ *evaluators include one or more of the authors*: one or more researchers running the experiment was among the participants. ✓ *evaluators do not include any of the authors*: none of the researchers running the experiment were among the participants. □ *Other* (fewer than 4 of the above apply): we believe you should be able to tick 4 options of the above. If that's not the case, use this box to explain. Question 3.2.3: How are evaluators recruited? Qualification Task: On Amazon Mechanical Turk (MTurk) with pre-defined qualification settings (i.e. Location, etc.). Endurance Task: On MTurk with evaluators who passed Qualification Task. Reference-based Task: On MTurk with evaluators who passed Endurance Task. What to enter in the text box: Please explain how your evaluators are recruited. Do you send emails to a given list? Do you post invitations on social media? Posters on university walls? Were there any gatekeepers involved? What are the exclusion/inclusion criteria? Question 3.2.4: What training and/or practice are evaluators given before starting on the evaluation itself? Qualification Task: We include a training part to illustrate evaluation dimensions along with examples, and require evaluators to write an instruction summary in their own words. Endurance Task: Evaluators are provided with task instructions. Reference-based Task: Evaluators are provided with instructions and examples of the rating at the beginning of the task. What to enter in the text box: Use this space to describe any training evaluators were given as part of the experiment to prepare them for the evaluation task, including any practice evaluations they did. This includes any introductory explanations they're given, e.g. on the start page of an online evaluation tool. Question 3.2.5: What other characteristics do the evaluators have, known either because these were qualifying criteria, or from information gathered as part of the evaluation? Qualification Task: Evaluators are satisfied with: (i) the Location as "UNITED STATES (US)"; (ii) the Number of HITs Approved is "greater than 1000"; (iii) the HIT Approval Rate (%) is "greater than or equal to 99". Endurance Task: Evaluators pass the attention check and make no (GOLD) or only one mistake (SILVER) when annotating each dimension of the documents in the qualification part. Reference-based Task: Evaluators (GOLD and SILVER) finish all 10 HITs in Endurance Task. What to enter in the text box: Use this space to list any characteristics not covered in previous questions that the evaluators are known to have, either because evaluators were selected on the basis of a characteristic, or because information about a characteristic was collected as part of the evaluation. This might include geographic location of IP address, educational level, or demographic information such as gender, age, etc. Where characteristics differ among evaluators (e.g. gender, age, location etc.), also give numbers for each subgroup. ## 3.3 Experimental Design Questions 3.3.1–3.3.8 Questions 3.3.1–3.3.8 record information about the experimental design of the evaluation experiment. Question 3.3.1: Has the experimental design been preregistered? If yes, on which registry? No. What to enter in the text box: State 'Yes' or 'No'; if 'Yes' also give the name of the registry and a link to the registration page for the experiment. Question 3.3.2: How are responses collected? E.g. paper forms, online survey tool, etc. Amazon Mechanical Turk (MTurk). What to enter in the text box: Use this space to describe how you collected responses, e.g. paper forms, Google forms, SurveyMonkey, Mechanical Turk, CrowdFlower, audio/video recording, etc. Question 3.3.3: What quality assurance methods are used? Select all that apply. If none match, select 'Other' and describe. In all cases, provide details in the text box under 'Other'. Check-box options (select all that apply): □ *evaluators are required to be native speakers* of the language they evaluate: mechanisms are in place to ensure all participants are native speakers of the language they evaluate. ✓ **automatic quality checking methods are** used during/post evaluation: evaluations are checked for quality by automatic scripts during or after evaluations, e.g. evaluators are given known bad/good outputs to check they're given bad/good scores on MTurk. ✓ *manual quality checking methods are used* during/post evaluation: evaluations are checked for quality by a manual process during or after evaluations, e.g. scores assigned by evaluators are monitored by researchers conducting the experiment. ✓ **evaluators are excluded if they fail quality** checks (often or badly enough): there are conditions under which evaluations produced by participants are not included in the final results due to quality issues. ✓ *some evaluations are excluded because of* failed quality checks: there are conditions under which some (but not all) of the evaluations produced by some participants are not included in the final results due to quality issues. □ *none of the above*: tick this box if none of the above apply. □ *Other (please specify)*: use this box to describe any other quality assurance methods used during or after evaluations, and to provide additional details for any of the options selected above. Question 3.3.4: What do evaluators see when carrying out evaluations? Link to screenshot(s) and/or describe the evaluation interface(s). See details in Appendix A.9. What to enter in the text box: Use this space to describe the interface, paper form, etc. that evaluators see when they carry out the evaluation. Link to a screenshot/copy if possible. If there is a separate introductory interface/page, include it under Question 3.2.4. Question 3.3.5: How free are evaluators regarding when and how quickly to carry out evaluations? Select all that apply. In all cases, provide details in the text box under 'Other'. Check-box options (select all that apply): ✓ **evaluators have to complete each individual** assessment within a set time: evaluators are timed while carrying out each assessment and cannot complete the assessment once time has run out. □ *evaluators have to complete the whole evaluation in one sitting*: partial progress cannot be saved and the evaluation returned to on a later occasion. □ *neither of the above*: Choose this option if neither of the above are the case in the experiment. □ *Other (please specify)*: Use this space to describe any other way in which time taken or number of sessions used by evaluators is controlled in the experiment, and to provide additional details for any of the options selected above. Question 3.3.6: Are evaluators told they can ask questions about the evaluation and/or provide feedback? Select all that apply. In all cases, provide details in the text box under 'Other'. Check-box options (select all that apply): □ *evaluators are told they can ask any questions during/after receiving initial training/instructions, and before the start of the* evaluation: evaluators are told explicitly that they can ask questions about the evaluation experiment *before* starting on their assessments, either during or after training. □ *evaluators are told they can ask any questions* during the evaluation: evaluators are told explicitly that they can ask questions about the evaluation experiment *during* their assessments. ✓ *evaluators are asked for feedback and/or comments after the evaluation, e.g. via an exit* questionnaire or a comment box: evaluators are explicitly asked to provide feedback and/or comments about the experiment *after* their assessments, either verbally or in written form. □ *None of the above*: Choose this option if none of the above are the case in the experiment. □ *Other (please specify)*: use this space to describe any other ways you provide for evaluators to ask questions or provide feedback. Question 3.3.7: What are the experimental conditions in which evaluators carry out the evaluations? If none match, select 'Other' and describe. Multiple-choice options (select one): ✓ *evaluation carried out by evaluators at a place* of their own choosing, e.g. online, using a paper form, etc.: evaluators are given access to the tool or form specified in Question 3.3.2, and subsequently choose where to carry out their evaluations. ◦ *evaluation carried out in a lab, and conditions* are the same for each evaluator: evaluations are carried out in a lab, and conditions in which evaluations are carried out are controlled to be the same, i.e. the different evaluators all carry out the evaluations in identical conditions of quietness, same type of computer, same room, etc. Note we're not after very fine-grained differences here, such as time of day or temperature, but the line is difficult to draw, so some judgment is involved here. ◦ *evaluation carried out in a lab, and conditions* vary for different evaluators: choose this option if evaluations are carried out in a lab, but the preceding option does not apply, i.e. conditions in which evaluations are carried out are not controlled to be the same. ◦ *evaluation carried out in a real-life situation,* and conditions are the same for each evaluator: evaluations are carried out in a real-life situation, i.e. one that would occur whether or not the evaluation was carried out (e.g. evaluating a dialogue system deployed in a live chat function on a website), and conditions in which evaluations are carried out are controlled to be the same. ◦ *evaluation carried out in a real-life situation,* and conditions vary for different evaluators: choose this option if evaluations are carried out in a real-life situation, but the preceding option does not apply, i.e. conditions in which evaluations are carried out are not controlled to be the same. ◦ *evaluation carried out outside of the lab, in a* situation designed to resemble a real-life situation, and conditions are the same for each evaluator: evaluations are carried out outside of the lab, in a situation intentionally similar to a real-life situation (but not actually a real-life situation), e.g. user-testing a navigation system where the destination is part of the evaluation design, rather than chosen by the user. Conditions in which evaluations are carried out are controlled to be the same. ◦ *evaluation carried out outside of the lab, in a* situation designed to resemble a real-life situation, and conditions vary for different evaluators: choose this option if evaluations are carried out outside of the lab, in a situation intentionally similar to a real-life situation, but the preceding option does not apply, i.e. conditions in which evaluations are carried out are not controlled to be the same. ◦ *Other (please specify)*: Use this space to provide additional, or alternative, information about the conditions in which evaluators carry out assessments, not covered by the options above. Question 3.3.8: Unless the evaluation is carried out at a place of the evaluators' own choosing, briefly describe the (range of different) conditions in which evaluators carry out the evaluations. The evaluation is carried out at a place of the evaluators' own choosing. What to enter in the text box: use this space to describe the variations in the conditions in which evaluators carry out the evaluation, for both situations where those variations are controlled, and situations where they are not controlled. ## 4 Quality Criterion N **– Definition And** Operationalisation Questions in this section collect information about the nth quality criterion assessed in the single human evaluation experiment that this sheet is being completed for. ## 4.1 Quality Criterion Properties (Questions 4.1.1–4.1.3) Questions 4.1.1–4.1.3 capture the aspect of quality that is assessed by a given quality criterion in terms of three orthogonal properties. They help determine whether or not the same aspect of quality is being evaluated in different evaluation experiments. The three properties characterise quality criteria in terms of (i) what type of quality is being assessed; (ii) what aspect of the system output is being assessed; and (iii) whether system outputs are assessed in their own right or with reference to some system-internal or system-external frame of reference. For full explanations see Belz et al. (2020). ## Question 4.1.1: What Type Of Quality Is Assessed By The Quality Criterion? Multiple-choice options (select one): ◦ *Correctness*: Select this option if it is possible to state, generally for all outputs, the conditions under which outputs are maximally correct (hence of maximal quality). E.g. for Grammati- cality, 3 outputs are (maximally) correct if they contain no grammatical errors; for Semantic Completeness, outputs are correct if they express all the content in the input. ◦ *Goodness*: Select this option if, in contrast to correctness criteria, there is no single, general mechanism for deciding when outputs are maximally good, only for deciding for any two outputs which is better and which is worse. E.g. for Fluency, even if outputs contain no disfluencies, there may be other ways in which any given output could be more fluent. ✓ *Feature*: Choose this option if, in terms of property X captured by the criterion, outputs are not generally better if they are more X, but instead, depending on evaluation context, more X may be either better or worse. E.g. for *Specificity*, outputs can be more specific or less specific, but it's not the case that outputs are, in the general case, better when they are more specific. ## Question 4.1.2: Which Aspect Of System Outputs Is Assessed By The Quality Criterion? Multiple-choice options (select one): ◦ *Form of output*: Choose this option if the criterion assesses the form of outputs alone, e.g. Grammaticality is only about the form, a sentence can be grammatical yet be wrong or nonsensical in terms of content. ◦ *Content of output*: Select this option if the criterion assesses the content/meaning of the output alone, e.g. *Meaning Preservation* only assesses content; two sentences can be considered to have the same meaning, but differ in form. ✓ *Both form and content of output*: Choose this option if the criterion assesses outputs as a whole, not just form or just content. E.g. *Coherence* is a property of outputs as a whole, either form or meaning can detract from it. Inherently extrinsic criteria such as *Usefulness* or Task Completion also fall in this category. Question 4.1.3: Is each output assessed for quality in its own right, or with reference to a system-internal or external frame of reference? Multiple-choice options (select one): ◦ *Quality of output in its own right*: Select this option if output quality is assessed without referring to anything other than the output itself, i.e. no system-internal or external frame of reference. E.g. *Poeticness* is assessed by considering (just) the output and how poetic it is. ✓ *Quality of output relative to the input*: Choose this option if output quality is assessed relative to the input. E.g. *Answerability* is the degree to which the output question can be answered from information in the input. ◦ **Quality of output relative to a system-external** frame of reference: Choose this option if output quality is assessed with reference to systemexternal information, such as a knowledge base, a person's individual writing style, or the performance of an embedding system. E.g. Factual Accuracy assesses outputs relative to a source of real-world knowledge. ## 4.2 Evaluation Mode Properties (Questions 4.2.1–4.2.3) Questions 4.2.1–4.2.3 record properties that are orthogonal to quality criteria (covered by questions in the preceding section), i.e. any given quality criterion can in principle be combined with any of the modes (although some combinations are more common than others). ## Question 4.2.1: Does An Individual Assessment Involve An Objective Or A Subjective Judgment? Multiple-choice options (select one): ◦ *Objective*: Choose this option if the evaluation uses objective assessment, e.g. any automatically counted or otherwise quantified measurements such as mouse-clicks, occurrences in text, etc. Repeated assessments of the same output with an objective-mode evaluation method always yield the same score/result. ✓ *Subjective*: Choose this option in all other cases. Subjective assessments involve ratings, opinions and preferences by evaluators. Some criteria lend themselves more readily to subjective assessments, e.g. *Friendliness* of a conversational agent, but an objective measure e.g. based on lexical markers is also conceivable. ## Question 4.2.2: Are Outputs Assessed In Absolute Or Relative Terms? Multiple-choice options (select one): ✓ *Absolute*: Select this option if evaluators are shown outputs from a single system during each individual assessment. ◦ *Relative*: Choose this option if evaluators are shown outputs from multiple systems at the same time during assessments, typically ranking or preference-judging them. ## Question 4.2.3: Is The Evaluation Intrinsic Or Extrinsic? Multiple-choice options (select one): ✓ *Intrinsic*: Choose this option if quality of outputs is assessed *without* considering their *effect* on something external to the system, e.g. the performance of an embedding system or of a user at a task. ◦ *Extrinsic*: Choose this option if quality of outputs is assessed in terms of their *effect* on something external to the system such as the performance of an embedding system or of a user at a task. ## 4.3 Response Elicitation (Questions 4.3.1–4.3.11) The questions in this section concern response elicitation, by which we mean how the ratings or other measurements that represent assessments for the quality criterion in question are obtained, covering what is presented to evaluators, how they select response and via what type of tool, etc. The eleven questions (4.3.1–4.3.11) are based on the information annotated in the large scale survey of human evaluation methods in NLG by Howcroft et al. (2020). 14974 Question 4.3.1: What do you call the quality criterion in explanations/interfaces to evaluators? Enter 'N/A' if criterion not named. We evaluate a summary according to 6 dimensions: Understandability, Compactness, Grammaticality, Coherence, Faithfulness, and Saliency. What to enter in the text box: the name you use to refer to the quality criterion in explanations and/or interfaces created for evaluators. Examples of quality criterion names include Fluency, Clarity, Meaning Preservation. If no name is used, state 'N/A'. Question 4.3.2: What definition do you give for the quality criterion in explanations/interfaces to evaluators? Enter 'N/A' if no definition given. For a summary S, - Understandability: can the worker understand S and is S worth being annotated. - Compactness: S does not contain duplicated information. - Grammaticality: S is free from grammatical spelling errors. - Coherence: S is presented in a clear, wellstructured, logical, and meaningful way. - Faithfulness: all of the information in S can also be found in the article; S accurately reflects the contents of the article. - Saliency: S captures the most important information of the article and does not include parts of the article that are less important. What to enter in the text box: Copy and past the verbatim definition you give to evaluators to explain the quality criterion they're assessing. If you don't explicitly call it a definition, enter the nearest thing to a definition you give them. If you don't give any definition, state 'N/A'. Question 4.3.3: Size of scale or other rating instrument (i.e. how many different possible values there are). Answer should be an integer or 'continuous' (if it's not possible to state how many possible responses there are). Enter 'N/A' if there is no rating instrument. Qualification Task: 2 (binary classification) Endurance Task: 10 (10-point EASL scale) Reference-based Task: 5 (5-point Likert scale) What to enter in the text box: The number of different response values for this quality criterion. E.g. for a 5-point Likert scale, the size to enter is 5. For two-way forced-choice preference judgments, it is 2; if there's also a no-preference option, enter 3. For a slider that is mapped to 100 different values for the purpose of recording assessments, the size to enter is 100. If no rating instrument is used (e.g. when evaluation gathers post-edits or qualitative feedback only), enter 'N/A'. Question 4.3.4: List or range of possible values of the scale or other rating instrument. Enter 'N/A', if there is no rating instrument. Qualification Task: Yes, No ![31_image_0.png](31_image_0.png) Endurance Task: 1-10 Reference-based Task: 1-5 What to enter in the text box: list, or give the range of, the possible values of the rating instrument. The list or range should be of the size specified in Question 4.3.3. If there are too many to list, use a range. E.g. for two-way forced-choice preference judgments, the list entered might be *A better, B* better; if there's also a no-preference option, the list might be *A better, B better, neither*. For a slider that is mapped to 100 different values for the purpose of recording assessments, the range *1–100* might be entered. If no rating instrument is used (e.g. when evaluation gathers post-edits or qualitative feedback only), enter 'N/A'. Question 4.3.5: How is the scale or other rating instrument presented to evaluators? If none match, select 'Other' and describe. Multiple-choice options (select one): ◦ *Multiple-choice options*: choose this option if evaluators select exactly one of multiple options. ◦ *Check-boxes*: choose this option if evaluators select any number of options from multiple given options. ◦ *Slider*: choose this option if evaluators move a pointer on a slider scale to the position corresponding to their assessment. ◦ *N/A (there is no rating instrument)*: choose this option if there is no rating instrument. ✓ *Other (please specify)*: choose this option if there is a rating instrument, but none of the above adequately describe the way you present it to evaluators. Use the text box to describe the rating instrument and link to a screenshot. Qualification Task: Multiple-choice options Endurance Task: Slider Reference-based Task: Multiple-choice options Question 4.3.6: If there is no rating instrument, describe briefly what task the evaluators perform (e.g. ranking multiple outputs, finding information, playing a game, etc.), and what information is recorded. Enter 'N/A' if there is a rating instrument. What to enter in the text box: If (and only if) there is no rating instrument, i.e. you entered 'N/A' for Questions 4.3.3–4.3.5, describe the task evaluators perform in this space. Otherwise, here enter 'N/A' if there is a rating instrument. Question 4.3.7: What is the verbatim question, prompt or instruction given to evaluators (visible to them during each individual assessment)? ## Qualification Task: - In each of the following sections, we explain the different dimensions you will evaluate and provide example summaries with ratings. You must answer each question, but these training examples are not part of the qualification. - This section contains examples of summaries and ratings for each dimension. These examples show how a summary can be good on one dimension and bad on another. Please read these examples and move on. - To make sure you understand the instructions, please summarize them briefly in your own words (2-3 sentences). This is required as part of the qualification (min. 100 characters). - This section contains the actual qualification questions. Read the documents and the corresponding summaries carefully, then annotate the summaries across the various dimensions. You will be graded using these questions, so the answers will not be shown to you. N/A. ![32_image_0.png](32_image_0.png) Endurance Task: In this task, you will evaluate the salience of different summaries of an article. First, read the article, then assign each summary a salience score from 1 to 10. A salient summary is one which captures the most important information of the article and does not include parts of the article that are less important. Please use the sliders to rate the salience of the summary from 1 to 10 (see the instructions above for the definition of salience). Reference-based Task: In this task, you will be shown a reference summary and several candidate summaries and asked to assign each candidate summary two scores from 1 to 5 based on how much you agree with the following statements: - All of the information in the candidate summary can also be found in the reference summary. - All of the information in the reference summary can also be found in the candidate summary. What is important is if the candidate summary and reference summary convey the same information, not if they use exactly the same words. Usually the reference summary and candidate summary are not exactly the same nor totally different. If the score is 1, it means that almost no information in one summary can be found in the other. If the score is 5, it means that almost all of the information in one summary can be found in the other. What to enter in the text box: Copy and paste the verbatim text that evaluators see during each assessment, that is intended to convey the evaluation task to them. E.g. *Which of these texts do you prefer?* Or Make any corrections to this text that you think are necessary in order to improve it to the point where you would be happy to provide it to a client. ## Question 4.3.8: Form Of Response Elicitation. If None Match, Select 'Other' And Describe. Multiple-choice options (select one): 4 ◦ *(dis)agreement with quality statement*: Participants specify the degree to which they agree with a given quality statement by indicating their agreement on a rating instrument. The rating instrument is labelled with degrees of agreement and can additionally have numerical labels. E.g. *This text is fluent - 1=strongly* disagree...5=strongly agree. ✓ *direct quality estimation*: Participants are asked to provide a rating using a rating instrument, which typically (but not always) mentions the quality criterion explicitly. E.g. *How fluent is* this text? - 1=not at all fluent...5=very fluent. ◦ *relative quality estimation (including ranking)*: Participants evaluate two or more items in terms of which is better. E.g. *Rank these texts in terms* of fluency; *Which of these texts is more fluent?*; Which of these items do you prefer?. ◦ *counting occurrences in text*: Evaluators are asked to count how many times some type of phenomenon occurs, e.g. the number of facts contained in the output that are inconsistent with the input. ◦ *qualitative feedback (e.g. via comments entered in a text box)*: Typically, these are responses to open-ended questions in a survey or interview. ◦ *evaluation through post-editing/annotation*: Choose this option if the evaluators' task consists of editing or inserting annotations in text. E.g. evaluators may perform error correction and edits are then automatically measured to yield a numerical score. ◦ *output classification or labelling*: Choose this option if evaluators assign outputs to categories. E.g. What is the overall sentiment of this piece of text? - Positive/neutral/negative. ◦ *user-text interaction measurements*: choose this option if participants in the evaluation experiment interact with a text in some way, and measurements are taken of their interaction. E.g. reading speed, eye movement tracking, comprehension questions, etc. Excludes situations where participants are given a task to solve and their performance is measured which comes under the next option. ◦ *task performance measurements*: choose this option if participants in the evaluation experiment are given a task to perform, and measurements are taken of their performance at the task. E.g. task is finding information, and task performance measurement is task completion speed and success rate. ◦ *user-system interaction measurements*: choose this option if participants in the evaluation experiment interact with a system in some way, while measurements are taken of their interaction. E.g. duration of interaction, hyperlinks followed, number of likes, or completed sales. ◦ *Other (please specify)*: Use the text box to describe the form of response elicitation used in assessing the quality criterion if it doesn't fall in any of the above categories. Question 4.3.9: How are raw responses from participants aggregated or otherwise processed to obtain reported scores for this quality criterion? State if no scores reported. We use raw responses to calculate InterAnnotator Agreement (IAA), but sometimes the median of scores is taken to increase IAA. What to enter in the text box: normally a set of separate assessments is collected from evaluators and is converted to the results as reported. Describe here the method(s) used in the conversion(s). E.g. macro-averages or micro-averages are computed from numerical scores to provide summary, persystem results. Question 4.3.10: Method(s) used for determining effect size and significance of findings for this quality criterion. See Section 5 and Appendix A.4 for details. What to enter in the text box: A list of methods used for calculating the effect size and significance of any results, both as reported in the paper given in Question 1.1, for this quality criterion. If none calculated, state 'None'. Question 4.3.11: Has the inter-annotator and intra-annotator agreement between evaluators for this quality criterion been measured? If yes, what method was used, and what are the agreement scores? We use Cohen's Kappa and Krippendorff's Alpha for the inter-annotator agreement between evaluators. For the agreement scores, see Section 4 for details. What to enter in the text box: the methods used to compute, and results obtained from, any measures of inter-annotator and intra-annotator agreement obtained for the quality criterion. ## 5 Ethics Questions (Questions 5.1-5.4) The questions in this section relate to ethical aspects of the evaluation. Information can be entered in the text box provided, and/or by linking to a source where complete information can be found. Question 5.1: Has the evaluation experiment this sheet is being completed for, or the larger study it is part of, been approved by a research ethics committee? If yes, which research ethics committee? This research is conducted by following the equivalent hourly rate listed here: https: //livingwage.mit.edu/counties/27053 What to enter in the text box: Typically, research organisations, universities and other highereducation institutions require some form ethical approval before experiments involving human participants, however innocuous, are permitted to proceed. Please provide here the name of the body that approved the experiment, or state 'No' if approval has not (yet) been obtained. 14978 Question 5.2: Do any of the system outputs (or human-authored stand-ins) evaluated, or do any of the responses collected, in the experiment contain personal data (as defined in GDPR Art. 4, **§1: https://gdpr.eu/article4-definitions/)? If yes, describe data and** state how addressed. In our experiments, personal data (any information relating to an identifiable natural person) was collected, processed, and stored based on certain data protection regulations, given relevant privacy concerns. What to enter in the text box: State 'No' if no personal data as defined by GDPR was recorded or collected, otherwise explain how conformity with GDPR requirements such as privacy and security was ensured, e.g. by linking to the (successful) application for ethics approval from Question 5.1. Question 5.3: Do any of the system outputs (or human-authored stand-ins) evaluated, or do any of the responses collected, in the experiment contain special category information (as defined in GDPR Art. 9, §1: https://gdpr.eu/article-9-processing-specialcategories-of-personal-data-prohibited/)? If yes, describe data and state how addressed. ## No. What to enter in the text box: State 'No' if no special-category data as defined by GDPR was recorded or collected, otherwise explain how conformity with GDPR requirements relating to special-category data was ensured, e.g. by linking to the (successful) application for ethics approval from Question 5.1. Question 5.4: Have any impact assessments been carried out for the evaluation experiment, and/or any data collected/evaluated in connection with it? If yes, summarise approach(es) and outcomes. ## No. What to enter in the text box: Use this box to describe any ex ante or *ex post* impact assessments that have been carried out in relation to the evaluation experiment, such that the assessment plan and process, as well as the outcomes, were captured in written form. Link to documents if possible. Types of impact assessment include data protection impact assessments, e.g. under GDPR.5 Environmental and social impact assessment frameworks are also available. ## Credits Questions 2.1–2.5 relating to evaluated system, and 4.3.1–4.3.8 relating to response elicitation, are based on Howcroft et al. (2020), with some significant changes. Questions 4.1.1–4.2.3 relating to quality criteria, and some of the questions about system outputs, evaluators, and experimental design (3.1.1–3.2.3, 4.3.5, 4.3.6, 4.3.9–4.3.11) are based on Belz et al. (2020). HEDS was also informed by van der Lee et al. (2019, 2021) and by Gehrmann et al. (2021)'s6 data card guide. More generally, the original inspiration for creating a 'datasheet' for describing human evaluation experiments of course comes from seminal papers by Bender and Friedman (2018), Mitchell et al. (2019) and Gebru et al. (2020). ## References Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In *Proceedings of the 7th Linguistic* Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics. 5https://ico.org.uk/for-organisations/ guide-to-data-protection/guide-to-thegeneral-data-protection-regulationgdpr/accountability-and-governance/dataprotection-impact-assessments/ 6https://gem-benchmark.com/data cards/ guide Anya Belz, Simon Mille, and David M. Howcroft. 2020. Disentangling the properties of human evaluation methods: A classification system to support comparability, meta-evaluation and reproducibility testing. In Proceedings of the 13th International Conference on Natural Language Generation, pages 183–194, Dublin, Ireland. Association for Computational Linguistics. Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604. Dallas Card, Peter Henderson, Urvashi Khandelwal, Robin Jia, Kyle Mahowald, and Dan Jurafsky. 2020. With little power comes great responsibility. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9263–9274, Online. Association for Computational Linguistics. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daume III, and Kate Crawford. 2020. ´ Datasheets for datasets. Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna Clinciu, Dipanjan Das, Kaustubh D Dhole, et al. 2021. The gem benchmark: Natural language generation, its evaluation and metrics. arXiv preprint arXiv:2102.01672. David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions. In *Proceedings of the 13th International Conference* on Natural Language Generation, pages 169–182, Dublin, Ireland. Association for Computational Linguistics. David M. Howcroft and Verena Rieser. 2021. What happens if you treat ordinal ratings as interval data? human evaluations in NLP are even more underpowered than you think. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 8932–8939, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Hans Kamp and Uwe Reyle. 2013. *From discourse to* logic: Introduction to modeltheoretic semantics of natural language, formal logic and discourse representation theory, volume 42. Springer Science & Business Media. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19, page 220–229, New York, NY, USA. Association for Computing Machinery. Anastasia Shimorina and Anya Belz. 2022. The human evaluation datasheet: A template for recording details of human evaluation experiments in NLP. In *Proceedings of the 2nd Workshop on Human Evaluation* of NLP Systems (HumEval), pages 54–75, Dublin, Ireland. Association for Computational Linguistics. Chris van der Lee, Albert Gatt, Emiel van Miltenburg, and Emiel Krahmer. 2021. Human evaluation of automatically generated text: Current trends and best practice guidelines. *Computer Speech & Language*, 67:101151. Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In *Proceedings of the 12th International Conference on Natural Language Generation*, pages 355–368. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Please see Section 7. ✓ A2. Did you discuss any potential risks of your work? Please see Section 8. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Please see the abstract and Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Please See Section 3. ✓ B1. Did you cite the creators of artifacts you used? Please see Section 3. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Please see Section 3. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Please see Section 3. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Please see Section 8. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Please see Section 3. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Please see Section 4. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Please See Section 3&4 For Details Of The Implementation And Results Involving Human Annotators. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Please see Section 3 and Appendix A.7. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Please see Section 3. ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Please see Section 8. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Please see Section 3.1.
lin-etal-2023-tavt
{TAVT}: Towards Transferable Audio-Visual Text Generation
https://aclanthology.org/2023.acl-long.836
Audio-visual text generation aims to understand multi-modality contents and translate them into texts. Although various transfer learning techniques of text generation have been proposed, they focused on uni-modal analysis (e.g. text-to-text, visual-to-text) and lack consideration of multi-modal content and cross-modal relation. Motivated by the fact that humans can recognize the timbre of the same low-level concepts (e.g., footstep, rainfall, and laughing), even in different visual conditions, we aim to mitigate the domain discrepancies by audio-visual correlation. In this paper, we propose a novel Transferable Audio-Visual Text Generation framework, named TAVT, which consists of two key components: Audio-Visual Meta-Mapper (AVMM) and Dual Counterfactual Contrastive Learning (DCCL). (1) AVMM first introduces a universal auditory semantic space and drifts the domain-invariant low-level concepts into visual prefixes. Then the reconstruct-based learning encourages the AVMM to learn {``}which pixels belong to the same sound{''} and achieve audio-enhanced visual prefix. The well-trained AVMM can be further applied to uni-modal setting. (2) Furthermore, DCCL leverages the destructive counterfactual transformations to provide cross-modal constraints for AVMM from the perspective of feature distribution and text generation. (3) The experimental results show that TAVT outperforms the state-of-the-art methods across multiple domains (cross-datasets, cross-categories) and various modal settings (uni-modal, multi-modal).
# Tavt:Towards Transferable Audio-Visual Text Generation Wang Lin∗, Tao Jin∗**, Ye Wang**∗, Wenwen Pan, Linjun Li, Xize Cheng, Zhou Zhao† Zhejiang University {linwanglw,jint_zju,yew}@zju.edu.cn {wenwenpan,lilinjun21,chengxize,zhaozhou}@zju.edu.cn ## Abstract Audio-visual text generation aims to understand multi-modality contents and translate them into texts. Although various transfer learning techniques of text generation have been proposed, they focused on uni-modal analysis (*e.g.,* text-to-text, visual-to-text) and lack consideration of multi-modal content and cross-modal relation. Motivated by the fact that humans can recognize the timbre of the same low-level concepts (*e.g.,* footstep, rainfall, and laughing), even in different visual conditions, we aim to mitigate the domain discrepancies by audiovisual correlation. In this paper, we propose a novel Transferable Audio-Visual Text Generation framework, named TAVT, which consists of two key components: Audio-Visual MetaMapper (AVMM) and Dual Counterfactual Contrastive Learning (DCCL). (1) AVMM first introduces a universal auditory semantic space and drifts the domain-invariant low-level concepts into visual prefixes. Then the reconstructbased learning encourages the AVMM to learn "which pixels belong to the same sound" and achieve audio-enhanced visual prefix. The welltrained AVMM can be further applied to unimodal setting. (2) Furthermore, DCCL leverages the destructive counterfactual transformations to provide cross-modal constraints for AVMM from the perspective of feature distribution and text generation. (3) The experimental results show that TAVT outperforms the stateof-the-art methods across multiple domains (cross-datasets, cross-categories) and various modal settings (uni-modal, multi-modal). ## 1 Introduction Audio-visual text generation bridges the gap between perception (visual and auditory) and communication (via language), and is hence becoming an increasingly important goal for artificial agents. Uni-modal text generation tasks like machine translation (Wang et al., 2022; Jin et al., 2022b; Yin Figure 1: Examples of transferable multi-modal text ![0_image_0.png](0_image_0.png) generation in the source domain (real-word) and the target domain (animation). For the same events ("speaking", "cutting"), although the visual differences are significant, the sounds are similar. et al., 2022), and image caption (Chen et al., 2017; Tewel et al., 2021; Hu et al., 2022) have already flourished as a result of the large-scale pre-training and huge model capacity. However, for audiovisual text generation tasks, data annotation is more arduous (temporal structure) and expensive (requires monitors and speakers) than uni-modal text generation. Moreover, despite the effectiveness, existing works (Iashin and Rahtu, 2020a; Le et al., 2020; Hori et al., 2021) inevitably suffer severe degradation due to varying construction conditions in different domains. In this paper, to break through the constraint, we propose a novel task, named transferable audiovisual text generation. The main challenge of this task is the multi-modal domain shifts caused by varying conditions, like the visual style and audio energy. One common approach to handle the domain shift is domain-alignment-based transfer learning. However, existing works (Sun and Saenko, 2016; Rozantsev et al., 2018; Ding et al., 2022) focus on the uni-modal analysis, which is insufficient due to the lack of consideration of crossmodal relations. We observe that while audio and visual are often correlated in natural events and jointly affect human perception, they have different characteristics. As shown in Figure 1, the fact *Timbre is an* ∗ Equal contribution. † Corresponding author 14983 intrinsic property of the object leads to sounds of the same concept ("speaking" or "cutting") being similar across domains where the appearance like background, perspective, and style is significantly different. Based on this phenomenon, domain invariant low-level concepts can be extracted from the visual with the supervision of the audio, which is pervasive, reliable, and cheaper in contrast to expensive human annotation. Grounded on the above discussions, we propose an Audio-Visual Meta-Mapper network (AVMM). The key idea of AVMM is to use a universal auditory semantic space to align low-level concepts across different visual domains. In particular, we introduce a visual prefix that serves as a multimodal bridge between the visual and the audio. Then, we reconstruct audio features from the universal auditory semantic space and produce the audio-enhanced visual prefix. Essentially, the accuracy of reconstructed audio provides a constraint for AVMM to learn the latent visual-textual alignment. The reconstruct-based paradigm has another windfall, allowing AVMM to transfer to both multimodal and uni-modal settings. While the reconstruct-based paradigm implicitly learns the visual-textual alignment, we propose Dual Counterfactual Contrastive Learning (DCCL) to directly optimize the visual-audio alignment score and promote the robustness of reconstructed audio. We introduce distribution-based contrastive learning to further improve the accuracy of reconstructed audio and dependency-based contrastive learning with token-wise diversity-aware weights to provide modality-aware constraint from the perspective of text generation. Then, we apply the above module and a base audio-visual text generation network to the meta-learning framework, named TAVT, to empower AVMM with the ability to accrue knowledge across domains, which would assist in building internal multimodal representations broadly suitable for many domains. Our main contributions are as follows: - We are the first one to study the transferable audio-visual text generation task. - We introduce the audio-visual meta-mapping network that aligns domain-invariant lowlevel concepts between visual and a universal auditory semantic space. - Experimental results on both cross-dataset and cross-category benchmarks demonstrate the effectiveness of our models. ## 2 Related Work Audio-Visual Learning. In the past few years, there have been several works that focus on audiovisual learning. (Arandjelovic and Zisserman, 2018) used a two-stream neural network to find the most similar visual area to the current audio clip. Some works (Hu et al., 2019, 2020) employed contrastive learning to match the visual and audio components. Recently, (Liu et al., 2021; Cheng et al., 2023) proposed a framework for cross-modal representation learning with a discrete embedding space that was shared amongst different modalities and promoted model interpretability. The above approaches focus on the correlation of audio-visual pairs. While we aim to align the visual of different domains with a universal auditory semantic space and enhance multi-modal transfer learning. Audio-Visual Text Generation. The mainstream audio-visual text generation task is video captioning, which has attracted many researchers (Venugopalan et al., 2015; Le et al., 2020; Ye et al., 2022; Jin et al., 2022a). (Hao et al., 2018) proposed multimodal feature fusion strategies to integrate audio information into models. (Guo et al., 2019; Iashin and Rahtu, 2020a) proposed an attention mechanism to combine the visual and audio features for better knowledge representation. (Tian et al., 2019) introduced an audio-visual controller to manipulate the parameters and generate diverse modalityaware captions. (Rahman et al., 2019) utilized the idea of cycle consistency to build a model with visual and audio inputs. (Iashin and Rahtu, 2020b) encoded the feature representation of audio and speech for a specific event proposal and produces a caption. Compared to our work, none of the above address the problem of domain shift and suffer severe degradation when deployed to a low-resource domain. Furthermore, they all take audio as supplementary information for visuals while we attempt to utilize the audio-visual correlation to minimize the domain discrepancy. ## 3 Methods We aim to train a model that can learn and quickly adapt to new multimodal domains with limited labeled data under the meta-learning setting. In the next sections, we will first define the audio-visual meta-learning setting, then explain our architecture, and finally describe how it is used during training and inference time. ![2_image_0.png](2_image_0.png) ## 3.1 Problem Formulation We define a meta-training stage, where the model is trained on the meta-train Dmeta−*train* partition and a separate meta-test stage where the performance is measured on the Dmeta−*test* partition. For the support set, there are k samples chosen for each of the N − 1 randomly sampled domains, and for the query set, there are m samples from the remaining domain, with *m > k*. The set Diis defined as Di = (v i1 , ai1 , ti1 )*, ...*(v i k , aik , tik ), where v i j is the visual feature, a i j represents the audio features, and t i j is the output text, *i.e.,* a caption to the video. ## 3.2 Model Architecture The architecture that we present in this paper is modular and consists of three components: the meta-mapper, an audio-visual encoder, and a language model as illustrated in Figure 2. Audio-Visual Meta-Mapper Network. Intuitively speaking, low-level visual concepts in different domains often share similar sounds, *e.g.,* footsteps, laughing, and rain. Therefore, we propose an audiovisual meta-mapping network (AVMM) to map different visuals across domains into a universal auditory semantic space and as well as addressing shifts in the semantic distribution. We first introduce the universal auditory semantic space which has audio clusters learned from Flickr(Thomee et al., 2015). In particular, we cut the audio into units with fixed time length T and run the clustering algorithm to find the k-centres of all audio features as the audio clusters. We define the audio clusters as M = {m1, m2*, ..., m*k}. These audio clusters could serve as a bridge between the audio and visual and assist in learning quickly new domains by observing only limited labeled examples. To map the visual features into the latent space ![2_image_1.png](2_image_1.png) of the audio clusters, inspired by prompt learning(Lester et al., 2021), we introduce a set of l learnable tokens [p1, p2*, ..., p*l]. These tokens are called visual prefix for the audio clusters. Particularly, considering that the audio and visual are aligned in temporal, we also cut the frame sequence into l clips with time length T and obtain the embedding [c1, c2*, ..., c*l] by pooling the features of each clip. Then, we apply a mapper with self-attention (SA) on the cito obtain the visual prefix pi as follows: $$\begin{array}{l}{{c_{i}={\frac{1}{|T|}}\sum_{t\in T}v_{t}}}\\ {{p_{i}=\mathrm{SA}(c_{i})}}\end{array}\qquad\qquad(1)$$ Then, we develop a reconstructor to reproduce the audio features from the visual prefix, and the accuracy of reconstructed audio provides a constraint for the self-attention layer to retrieve cross-domain invariance from the visual features ci, and accumulate it into pi. To achieve this, the reconstructor learns to combine audio clusters from M to reconstruct dynamic audio features Arec = a 1 rec, a2 rec, ..., alrec . The predicted audio features a irec is a weighted sum of audio clusters m1, m2*, ..., m*k, where the weights are predicted by applying a linear layer ϕ to pi followed by a softmax function to normalize weights, as follows: $$a_{rec}^{i}=\sum_{k=1}^{K}w_{k}m_{k},\tag{2}$$ where, $w_{k}=$ SoftMax($\phi_{k}(p_{i})$) During meta-training, we apply the mean square error (MSE) loss Lrec on the reconstructed audio Arec against the ground-truth audio A to update the parameters in the meta-mapper network and shared across all domains in Dmeta−*train*. Audio-Visual Encoder. Formally, given a sequence of video frames, we first extract a sequence of the frame features V = {v1, v2*, . . . , v*m} and the audio features A = {a1, a2*, . . . , a*m}. Then, we prepend this visual prefix to the frame features, yielding the following sequence V ′= [p1, ..., pl, v1*, ..., v*m]. After that, we adopt a selfattention module to learn visual representation f ′ v ∈ R dand audio representation f ′ a ∈ R d as follows: f ′= MHA(*f, f, f*), where, f ∈ {V′, A} and MHA(·) denotes multi-head attention (Vaswani et al., 2017). Then we apply audio-visual cross-attention to identify attention across two different kinds of feature fields. For simplicity, we formulate this stage as: $$x_{t}=\text{AV--Encoder}(f_{i},f_{j},f_{j}),\text{where},i,j\in\{v,a\}\tag{3}$$ where $x_{t}\in\{x_{av},x_{vu}\}$. Illustratively, the details of where xt ∈ {xav, xva}. Illustratively, the details of the encoder are provided in Appendix A. Language Model Generator. As opposed to the original Transformer's decoder, we introduced an α to evaluate the contribution of different modalities (audio and visual) to each word. At time step t, αtis computed by measuring the relevance between the cross-attention of each modality and the previous words Y = {y1, y2*, ..., y*t−1} as follows: $$\alpha_{t}=\sigma\left(W_{t}\left[Y,\mathrm{MHA}\left(Y,x_{t},x_{t}\right)\right]+b_{t}\right)\quad\mathrm{(4)}$$ where [·, ·] indicates concatenation, σ is the sigmoid activation and Wtis a 2d × d weight matrix. The decoder outputs caption Y˜ is defined as: $$\tilde{Y}{=}\alpha_{a v}{\cdot}\mathrm{MHA}(Y,x_{a v},x_{a v}){+}\alpha_{v a}{\cdot}\mathrm{MHA}(Y,x_{v a},x_{v a})\quad(5)$$ With αt, the model can provide interpretability for the audio-visual fusion strategy. ## 3.3 Counterfactual Contrastive Learning Although the reconstruction-based paradigm provides a constraint for AVMM, it cannot directly optimize the visual-audio alignment scores. Therefore, we propose a Dual Counterfactual Contrastive Learning (DCLL) which constructs fine-grained supervision signals from counterfactual results to directly optimize the visual-textual alignment without relying on the quality of randomly-selected negative samples. Distribution-based Contrastive Learning. Concretely, we take the reconstructed audio cues Arec as positive samples A+ and inverse the audio clusters M and weight matrix wk pairings to construct counterfactual audio cues as negative samples A−. Then, we illustrate the contrastive learning method with the causal triplet (A, A+, A−). Intuitively, we construct distribution-based contrastive learning as follows $${\mathcal{L}}_{d i s}=-\log\left({\frac{e^{\left(s\left(A,A^{+}\right)/\tau\right)}}{\sum_{i=1}^{n}e^{\left(s\left(A,A_{i}\right)/\tau\right)}}}\right)\quad\quad(6)$$ where s (*p, q*) = p Tq/ ∥p∥ ∥q∥ denotes the dot product between l2 normalized p and q; τ is the temperature parameter. The distribution-based contrastive learning further improves the accuracy and robustness of reconstructed audio. Dependency-based Contrastive Learning. For the audio-visual text generation task, there exists a modality imbalance in natural language tokens as different tokens depend on different modalities and the reconstructed audio should also show similar dependence for different tokens. We consider the dependency-based contrastive loss to maintain consistency in the distribution of scores for the original and positive samples. First, the (A, A+, A−) paired with the V are fed into the audio-visual encoder to generate the joint embeddings of them. Then, we compute the dependence score ψ (*V, A*) = αav/αva of original sample as the anchor r, the score ψ (*V, A*+) of the factual sample as the positive r + and the score ψ (*V, A*−) of the counterfactual sample as the negative r−. Concretely, the contrastive loss is formulated as follows: $${\mathcal{L}}_{d e p}=-\log\left({\frac{e^{\left(s\left(r,r^{+}\right)/\tau\right)}}{\sum_{i=1}^{n}e^{\left(s\left(r,r_{i}\right)/\tau\right)}}}\right)\qquad\qquad(7)$$ The dependency-based contrastive learning considers the different performances of audio-visual in text generation to further provide cross-modal constraints on the mapper. Token-Wise Modality-Aware Weights. In order to identify some vague concepts like talking and 14986 singing, we devise the token-wise modality-aware weights to encourage the model to use the corresponding modality in the text generation process. We obtain the association of each word with audio modality and visual modality as follows: $$W_{m a}^{i}={\frac{1}{N}}\sum_{i=1}^{N}\left({\frac{\alpha_{a v}^{i}}{\alpha_{v a}^{i}}}\right)\qquad\qquad(8)$$ $\mathfrak{g}$ Where Wima indicates the i-th word's weights and N is the data sample size. We apply the weight Wma on the cross-entropy loss, as follows: $$\mathcal{L}_{cap}\text{=}-\sum_{t=1}^{n}W_{ma}*\log(w_{t}=y_{t}|y_{1:t-1})\tag{9}$$ where $\mathcal{L}_{cap}$ is the number of $n$ elements. where log(wt = yt|y1:t−1) denotes the probability of predicting word wt given the previously generated y1:t−1. ## 3.4 Meta-Training And Inference The holistic training procedure is shown in Algorithm 1. Here, for simplicity, we assume that our full model is defined as a function fθ, which receives the visual features v and audio features a as input and produces y as output. The loss function, optimized per domain during training, is as follow: $${\mathcal{L}}={\mathcal{L}}_{c a p}+{\mathcal{L}}_{r e c}+\lambda{\mathcal{L}}_{d i s}+\mu{\mathcal{L}}_{d e p},\qquad(1)$$ where the hyper-parameter λ and µ seek a trade-off between the two counterfactual-contrastive learning losses (details about the hyper-parameter can be found in the supplementary materials). Meta-Training To meta-train the model, we randomly select K − 1 specific domains in D as support set Ds, and the remaining domain as a query set Dq. When adapting to a new domain Di, the trainable meta parameters θ become *domainspecific* parameters, namely θi. These domainspecific parameters are computed with N gradientstep updates, similar as in MAML(Finn et al., 2017), with the following rule for one gradient update: θ ′ i = θ−α∇θLDi (fθ). This is referred as the inner-loop update, where α is the hyperparameter for the step size. Next, the model meta-parameters θ are optimized for the performance of fθ ′ i , using the query set Dq samples and the domain-specific parameters θ ′ i as initialization of the model: $$\operatorname*{min}\sum{\mathcal{L}}_{D_{i}}(f_{\theta^{\prime}_{i}})=\sum{\mathcal{L}}_{D_{i}}(f_{\theta-\alpha\nabla\theta}{\mathcal{L}}_{D_{i}}(f_{\theta}))\tag{11}$$ which is called outer-loop optimization. The metaoptimization across all domains Diis performed Algorithm 1: Transferable Audio-Visual Text Generation Input: k source domains D ={D1*, ..., D*k} Output: Model parameters θ 1 **while** *not done* do 2 Randomly select (k − 1)Ds ∼ D, and the remaining domain as Dq ; 3 Sample Bi ∼ Ds ; 4 foreach Bi do ′ i = θ − α∇θLBi (fθ) 7 Sample Bq ∼ Dq ; ′ q = θ − α∇θLBq(fθ); 9 Meta-optimization; PK−1 i=1 (LDi (fθ ′ i )+LDq(fθ ′q )) 5 θ 6 end 8 θ 10 θ=θ−β∇θ 11 end 12 **Meta-Test**; 17 end **12** **M**e**a-**r**s**, **13** **if** _audio in_ $D_{target}$ **then** $\theta=\theta-\alpha\nabla_{\theta}{\cal L}(f_{\theta})$; **14** $\theta=\theta-\alpha\nabla_{\theta}{\cal L}(f_{\theta})$; **15** **else** $\theta=\theta-\alpha\nabla_{\theta}{\cal L}_{cap}(f_{\theta})$; **16** $\theta=\theta-\alpha\nabla_{\theta}{\cal L}_{cap}(f_{\theta})$; **17** **and** using stochastic gradient descent update rule, as follows: θ ← θ −β∇θ PLDi (fθ ′ i ), where β is the step size hyperparameter. Meta-Test In the meta-test stage, we consider a new domain D*target*, which also has a support set Ds for fast adaptation by fine-tuning the model meta-parameters θ to a given task, and a query set Dq to evaluate the model on this domain. Note that in audio-absent datasets like MSVD, we can freeze the parameters of the audio-visual meta-mapper network and utilize the reconstructed audio features to boost the performance. ## 4 Experiment 4.1 Datasets And Metrics. Datasets For transferable audio-visual text generation, we design two benchmark datasets. A cross-domain benchmark was constructed based on MSR-VTT(Xu et al., 2016) containing 20 categories as well as multimodal audio and video streams. We divided a new 10-domain dataset based on MSR-VTT categories as shown in Table 1 (more detailed information about the dataset can be found in Appendix B). For cross-dataset benchmark, we use MSVD(Chen and Dolan, 2011) | Domain Dmeta−train | Domain | Dmeta−test | | |----------------------|----------|--------------|-----| | News | 1727 | Animation | 816 | | Movie | 1652 | Music | 733 | | Sports | 1623 | Animal | 613 | | Cooking | 985 | Kids | 558 | | Traffic | 815 | Beauty | 478 | Table 1: The number of videos for 10 domains reorganized from MSRVTT. which consists of 1,970 video clips collected from YouTube, and MSR-VTT† which consists of the whole Dmeta−*test* as target domains. Metrics. We evaluate the methods across all four commonly used metrics for video captioning: BLEU-n (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-L (ROUGE, 2004), CIDEr (Vedantam et al., 2015). We follow the standard practice to use the Microsoft COCO evaluation server (Chen et al., 2015). ## 4.2 Train Details. Feature Extraction. For the visual feature, we use the ResNet-101 (He et al., 2016) model pre-trained on ImageNet as the backbone feature extractor for frames. The frames are used as the input of the CNN without re-sizing or cropping. The audio features are extracted by the VGGish(Hershey et al., 2017). For the sentences, to simplify the implementation, we use a joint vocabulary containing words in both the source domain and the target domain. And words appearing less than 3 times are replaced with a special token. Proposed Method Settings. The hidden size is 1024 for all the multi-head attention mechanisms. The numbers of heads and attention blocks are 8. For meta-training, we adopt Adam with a fixed inner learning rate of 0.0001 and outer learning rate of 0.001. We train the source domains with a batch size of 32. For the meta-test, we use the beam-search method with a beam width of 5 to generate the predicted sentences during testing. We train TAVT on an NVIDIA GeForce RTX 2080, for TAVT each epoch takes around 4 hours. ## 4.3 Performance Evaluation Compared Models. To the best of our knowledge, there is no work investigating transferable audio-visual text generation. So we first choose the state-of-the-art video caption approaches based on two different approaches: (1)The RNN-based models: RecNet (Wang et al., 2018), AVAF(Guo et al., 2019), MARN(Pei et al., 2019), AVIC(Tian et al., 2019), SGN(Ryu et al., 2021) and SHAN(Deng et al., 2022). (2) The Transformer-based models Att-TVT(Chen et al., 2018) and SBAT (Jin et al., 2020). Then, for a fair comparison, these methods are all trained on Dmeta−*train* **with the same** meta-learning framework as TAVT and tested on the target domain. Evaluation Results. In Table 2 we report the performance of our method in comparison with the aforementioned competitors on the cross-datasets benchmark. As it can be observed: (1) Our method outperforms all compared methods on all metrics by a large margin on MSR-VTT†and MSVD. (2) In particular, AVIC and Att-TVT focus on designing complex multimodal fusion strategies to learn visual-audio representations, but leave the audio invariance unexploited. TAVT uses audio as a supervisory signal to align visual information in different domains, focusing on transferring the invariant in audio to visual prefix. Therefore, our model outperforms them by a significant margin (+1.5%∼9.6% of CIDEr on MSR-VTT†). (3) Note that in the MSVD which has only a visual stream, we freeze the parameters of AVMM and use the reconstructed audio instead the real audio *i.e.,* TAVT(Arec). TAVT still performs well (+13.5% of CIDEr on MSVD), which indicates that the metamapper network has accumulated domain-sharing knowledge through meta-training. In other words, the frozen meta-mapper network can produce discriminative visual prefix and reconstruct informative audio features even in the absence of real audio supervision. In Table 3 we also report the performance on cross-categories benchmark. TAVT outperforms all other methods on five domains, which indicates that our method generalizes well. In particular, for some low-resource domains with only a few labeled data such as kids and beauty, other methods suffer from severe performance degradation, while TAVT outperforms them by a large margin (+3.2% on kids and +4.5% on beauty). ## 4.4 Ablation Studies To analyze the effect of different components, we conduct ablation studies on the MSR-VTT† dataset. The following variants of our method are evaluated: Effectiveness of audio-visual meta-mapper. To evaluate the advantage of AVMM and audio features. We first remove both AVMM and other sub- | Methods | →MSR-VTT† (mutli-modal) | →MSVD (uni-modal) | | | | | | | | | |----------------------------|---------------------------|---------------------|------|------|--------|--------|------|------|------|------| | BLEU-1 | BLEU-4 | M | R | C | BLEU-1 | BLEU-4 | M | R | C | | | RecNet(Wang et al., 2018) | 71.1 | 40.8 | 26.9 | 59.5 | 43.4 | 79.8 | 52.9 | 34.4 | 70.6 | 83.3 | | AVAF(Guo et al., 2019) | 72.2 | 40.7 | 27.3 | 60.0 | 44.9 | - | - | - | - | - | | MARN(Pei et al., 2019) | 73.7 | 40.2 | 27.9 | 60.2 | 47.6 | 83.6 | 50.1 | 35.7 | 72.5 | 93.7 | | AVIC(Tian et al., 2019) | 74.0 | 41.4 | 28.1 | 61.2 | 48.6 | - | - | - | - | - | | SGN(Ryu et al., 2021) | 75.8 | 41.2 | 28.1 | 60.6 | 50.2 | 84.4 | 53.3 | 35.6 | 72.6 | 95.2 | | SHAN(Deng et al., 2022) | 76.2 | 40.9 | 28.2 | 60.1 | 51.5 | 84.2 | 53.5 | 35.4 | 72.8 | 95.7 | | Att-TVT(Chen et al., 2018) | 74.9 | 40.3 | 28.1 | 60.3 | 48.5 | - | - | - | - | - | | SBAT(Jin et al., 2020) | 75.4 | 40.9 | 28.3 | 60.9 | 50.4 | 82.0 | 53.6 | 35.5 | 72.2 | 91.1 | | TAVT (Arec) | 78.3 | 41.8 | 28.3 | 61.8 | 52.6 | 84.7 | 53.9 | 36.1 | 73.3 | 96.8 | | TAVT | 78.5 | 42.1 | 28.6 | 61.9 | 53.0 | - | - | - | - | - | Table 2: Performance comparisons of two transfer tasks on the cross-datasets benchmark. All methods use the Dmeta−*train* as the source domain and transfer to →MSVD and →MSR-VTT†. The best results are bold. Domain Methods B@4 M R C →Animation SBAT 44.7 29.2 64.2 47.6 SHAN 45.5 29.6 64.7 48.6 TAVT **47.3 30.6 65.8 50.4** →Music SBAT 42.7 27.9 61.2 43.8 SHAN 43.8 28.6 62.4 44.5 TAVT **45.5 29.2 63.1 46.1** →Animal SBAT 36.6 26.0 56.1 46.7 SHAN 37.8 26.4 56.8 47.7 TAVT **38.3 27.0 58.1 48.4** →Kids SBAT 40.5 22.2 57.6 44.4 SHAN 41.1 23.9 58.0 45.6 TAVT **42.7 26.1 60.4 47.6** →Beauty SBAT 31.6 24.1 52.9 27.3 SHAN 33.1 24.5 53.5 28.5 TAVT **35.0 25.8 55.0 31.8** Method B@1 B@4 M R C w/o. audio 75.3 37.9 26.3 59.7 47.2 w/o. AVMM 77.1 39.6 27.5 60.0 50.4 w/o. real audio 78.0 41.8 28.2 61.4 52.4 TAVT **78.5 42.1 28.6 61.9 53.0** Table 4: Ablation studies about audio features. Methods B@1 B@4 M R C w/o. MAML 76.4 39.8 26.8 59.8 50.2 w/o. Lrec 76.9 40.8 27.2 61.8 51.6 w/o. Ldis 77.5 41.4 28.0 61.4 52.2 w/o. Ldep 77.3 41.6 28.5 61.9 52.4 w/o. ma 77.8 41.8 28.4 61.5 52.7 TAVT **78.5 42.1 28.6 61.9 53.0** modules which are related to audio and only use the visual information as the lower bound ( **w/o.** audio). Then, we give the results without AVMM (**w/o. AVMM**). We also report the result that retains AVMM but uses the reconstructed audio instead of real audio (**w/o. real audio**). Table 4 shows that while audio contains information that is complementary to visual and can improve performance somewhat ( **w/o. audio** vs w/o. meta-mapper), the more important for transfer learning is the cross-domain invariance which contained in audio and provided the supervised signal to align visual in different domains (**w/o. metamapper** vs TAVT). In addition, the reconstructed audio can ultimately exhibit an upper bound on performance close to that achieved using real audio, demonstrating the accuracy and validity of the meta mapper network (**w/o. real audio** vs TAVT). Effectiveness of different modules. The results Table 5: Ablation studies about different modules. in Table 5 illustrate that the constraints provided by the accuracy of the reconstructed audio features are effective and critical (**w/o.** Lrec). The counterfactual thinking is helpful and can further improve the accuracy by +0.8% on CIDEr (**w/o.** Ldis). Moreover, optimizing the cross-modal relationship between audio and visual from a text generation perspective can further improve the performance of the model (**w/o.** Ldep). In addition, **w/o. MAML** denotes the model without meta-learning. We can observe that TAVT performs significantly better than **w/o. MAML**. Effectiveness of token-wise modality-aware weight. To investigate how token-wise modalityaware weight improves the performance from the perspective of linguistics, we visualize some token's weight and the text generation process in Appendix E. The result in Table 5 shows that tokenwise modality-aware weights can further improve the performance of TAVT by optimizing the association of text and audio-visual modalities. ![7_image_0.png](7_image_0.png) ![7_image_2.png](7_image_2.png) The performance in low-resource domain. To quantitatively verify the performance in the low resource domain, we compare TAVT and previous methods under different sizes of labeled video as shown in Figure 5. We observe that TAVT consistently outperforms the other methods and can achieve close to full performance with only 40% of the training data, while other methods require about 70% of the training data. Hyper-Parameter Analysis. To seek the tradeoff between the DCCL, we introduce the hyperparameter λ and µ. Figure 6 shows that the model achieves the best performance when λ=1e-4 and µ=1e-2, suggesting that proper hyper-parameters are crucial to achieve good performance. ## 4.5 Qualitative Analysis To qualitatively verify the effectiveness of our TAVT, we display the results of TAVT in the multimodal and uni-modal settings. As shown in Figure 4, TAVT can accurately describe low-level concepts such as "chirping", "rainy" and "piano" with ![7_image_1.png](7_image_1.png) the help of audio-enhance visual prefix. When transferred to a uni-modal domain where missing audio, the AVMM can capture the correlation audio-visual correlation and reconstruct audio features that are semantically relevant to the visual. ## 5 Conclusion In this paper, we first study the task of transferable audio-visual text generation tasks. To mitigate multimodal content domain shift, we observe that low-level visual concepts have similar sounds in different domains and propose a novel framework TAVT with two technical contributions. The first one is the audio-visual meta-mapper, which can transfer the domain-invariant concept information within the universal auditory semantic space into the visual prefix. Moreover, the well-trained audio-visual meta-mapper can also reconstruct semantic audio features in audio-absent mode. We then apply dual counterfactual contrastive learning to directly optimize the visual-audio alignment. Extensive experiments on both cross-datasets and cross-domains benchmarks verify the effectiveness of our model. Furthermore, our TAVT framework can be transferred to other text generation tasks *e.g.,* video QA in a plug-and-play fashion. ## 6 Limitation We identify a few limitations of the current work. Our approach still suffers from biases in the training data and may produce incorrect output or lead to an inaccurate understanding of multi-modal content. And a large-scale audio-visual pre-trained model is a promising direction toward more advanced and cheaper approaches for transfer learning, which we leave for future study. ## 7 Ethics Statement We adopt the widely-used datasets that were produced by previous researchers and followed all relevant legal and ethical guidelines for their acquisition and use. Besides, we recognize the potential influence of our technique. When deployed our approach will have to record, store and process video and audio information related to human activities, which will have privacy implications for some application domains. We are committed to conducting our research ethically and ensuring that our research is beneficial. We hope our work can inspire more investigations for transfer learning on multi-modal tasks and wish our framework can serve as a solid baseline for further research. ## 8 Acknowledge This work was supported in part by the National Key R&D Program of China under Grant No.2022ZD0162000, National Natural Science Foundation of China under Grant No.62222211, Grant No.61836002 and Grant No.62072397, and Yiwise. ## References Relja Arandjelovic and Andrew Zisserman. 2018. Objects that sound. In Proceedings of the European conference on computer vision (ECCV), pages 435– 451. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. David Chen and William B Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies, pages 190–200. Ming Chen, Yingming Li, Zhongfei Zhang, and Siyu Huang. 2018. Tvt: Two-view transformer network for video captioning. In *Asian Conference on Machine Learning*, pages 847–862. PMLR. Tseng-Hung Chen, Yuan-Hong Liao, Ching-Yao Chuang, Wan-Ting Hsu, Jianlong Fu, and Min Sun. 2017. Show, adapt and tell: Adversarial training of cross-domain image captioner. In Proceedings of the IEEE international conference on computer vision, pages 521–530. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Xize Cheng, Linjun Li, Tao Jin, Rongjie Huang, Wang Lin, Zehan Wang, Huangdai Liu, Ye Wang, Aoxiong Yin, and Zhou Zhao. 2023. Mixspeech: Crossmodality self-learning with audio-visual stream mixup for visual speech translation and recognition. arXiv preprint arXiv:2303.05309. Jincan Deng, Liang Li, Beichen Zhang, Shuhui Wang, Zhengjun Zha, and Qingming Huang. 2022. Syntaxguided hierarchical attention network for video captioning. IEEE Transactions on Circuits and Systems for Video Technology, 32(2):880–892. Ning Ding, Yixing Xu, Yehui Tang, Chao Xu, Yunhe Wang, and Dacheng Tao. 2022. Source-free domain adaptation via distribution estimation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 7212–7222. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In *International conference on machine learning*, pages 1126–1135. PMLR. Ningning Guo, Huaping Liu, and Linhua Jiang. 2019. Attention-based visual-audio fusion for video caption generation. In *2019 IEEE 4th ICARM*, pages 839– 844. IEEE. Wangli Hao, Zhaoxiang Zhang, and He Guan. 2018. Integrating both visual and audio cues for enhanced video caption. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on* Computer Vision and Pattern Recognition, pages 770– 778. Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. 2017. Cnn architectures for largescale audio classification. In *2017 icassp*, pages 131– 135. IEEE. Chiori Hori, Takaaki Hori, and Jonathan Le Roux. 2021. Optimizing latency for online video captioningusing audio-visual transformers. arXiv preprint arXiv:2108.02147. Di Hu, Feiping Nie, and Xuelong Li. 2019. Deep multimodal clustering for unsupervised audiovisual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9248–9257. Di Hu, Zheng Wang, Haoyi Xiong, Dong Wang, Feiping Nie, and Dejing Dou. 2020. Curriculum audiovisual learning. *arXiv preprint arXiv:2001.09414*. Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2022. Scaling up vision-language pre-training for image captioning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 17980–17989. Vladimir Iashin and Esa Rahtu. 2020a. A better use of audio-visual cues: Dense video captioning with bi-modal transformer. *arXiv preprint* arXiv:2005.08271. Vladimir Iashin and Esa Rahtu. 2020b. Multi-modal dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 958–959. Tao Jin, Siyu Huang, Ming Chen, Yingming Li, and Zhongfei Zhang. 2020. Sbat: Video captioning with sparse boundary-aware transformer. *arXiv preprint* arXiv:2007.11888. Tao Jin, Zhou Zhao, Peng Wang, Jun Yu, and Fei Wu. 2022a. Interaction augmented transformer with decoupled decoding for video captioning. *Neurocomputing*, 492:496–507. Tao Jin, Zhou Zhao, Meng Zhang, and Xingshan Zeng. 2022b. Prior knowledge and memory enriched transformer for sign language translation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3766–3775. Hung Le, Doyen Sahoo, Nancy F Chen, and Steven CH Hoi. 2020. Hierarchical multimodal attention for end-to-end audio-visual scene-aware dialogue response generation. *Computer Speech & Language*, 63:101095. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*. Alexander H Liu, SouYoung Jin, Cheng-I Jeff Lai, Andrew Rouditchenko, Aude Oliva, and James Glass. 2021. Cross-modal discrete representation learning. arXiv preprint arXiv:2106.05438. James MacQueen et al. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281–297. Oakland, CA, USA. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Wenjie Pei, Jiyuan Zhang, Xiangrong Wang, Lei Ke, Xiaoyong Shen, and Yu-Wing Tai. 2019. Memoryattended recurrent network for video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8347– 8356. Tanzila Rahman, Bicheng Xu, and Leonid Sigal. 2019. Watch, listen and tell: Multi-modal weakly supervised dense event captioning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8908–8917. Lin CY ROUGE. 2004. A package for automatic evaluation of summaries. In *Proceedings of Workshop on* Text Summarization of Association for Computational Linguistics, Spain. Artem Rozantsev, Mathieu Salzmann, and Pascal Fua. 2018. Beyond sharing weights for deep domain adaptation. IEEE transactions on pattern analysis and machine intelligence, 41(4):801–814. Hobin Ryu, Sunghun Kang, Haeyong Kang, and Chang D Yoo. 2021. Semantic grouping network for video captioning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 2514–2522. Baochen Sun and Kate Saenko. 2016. Deep coral: Correlation alignment for deep domain adaptation. In European conference on computer vision, pages 443– 450. Springer. Yoad Tewel, Yoav Shalev, Idan Schwartz, and Lior Wolf. 2021. Zero-shot image-to-text generation for visual-semantic arithmetic. arXiv preprint arXiv:2111.14447. Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. 2015. The new data and new challenges in multimedia research. arXiv preprint arXiv:1503.01817, 1(8). Yapeng Tian, Chenxiao Guan, Justin Goodman, Marc Moore, and Chenliang Xu. 2019. Audio-visual interpretable and controllable video captioning. In *IEEE* Computer Society Conference on Computer Vision and Pattern Recognition workshops. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 4566–4575. Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence-video to text. In *ICCV*, pages 4534–4542. Bairui Wang, Lin Ma, Wei Zhang, and Wei Liu. 2018. Reconstruction network for video captioning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 7622–7631. Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, and Michael Lyu. 2022. Understanding and improving sequence-tosequence pretraining for neural machine translation. arXiv preprint arXiv:2203.08442. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msrvtt: A large video description dataset for bridging video and language. In *Proceedings of the IEEE conference on Computer Vision and Pattern Recognition*, pages 5288–5296. Hanhua Ye, Guorong Li, Yuankai Qi, Shuhui Wang, Qingming Huang, and Ming-Hsuan Yang. 2022. Hierarchical modular network for video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17939– 17948. Aoxiong Yin, Zhou Zhao, Weike Jin, Meng Zhang, Xingshan Zeng, and Xiaofei He. 2022. Mlslt: Towards multilingual sign language translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5109–5119. ## Appendix This appendix contains four sections. (1) Appendix A introduce the detail about audio-visual encoder. (2) Appendix B introduces the details of the benchmark construction. (3) Appendix C provides the complete performance comparison on cross-categories benchmark (3) Appendix D discusses the design of universal auditory semantic space (Appendix D.1) and visualize the universal Auditory semantic space (Appendix D.2) (4) Appendix E provide the analysis of token-wise modality-aware weights and visualize the text generation process. ## A Encoder For the visual input V ′= [p1, ..., pl, v1*, ..., v*m] and audio input A = [a1*, ..., a*m], we first adapt the self-attention layer SelfAtt to learn the visual representation and audio representation as follows: $$\begin{array}{l}{{f_{v}^{\prime}=\mathrm{SelfAtt}(V^{\prime},V^{\prime},V^{\prime})}}\\ {{f_{a}^{\prime}=\mathrm{SelfAtt}(A^{\prime},A^{\prime},A^{\prime})}}\end{array}$$ ′, V ′, V ′) (12) ′, A′, A′) (13) The SelfAtt layer contains multi-head attention which can be calculated by multiple single heads: $$\begin{array}{c}\mbox{MHA}(F,F,F)=\mbox{Concat}(h_{1},h_{2},...,hh),\\ h_{i}=\mbox{ATT}(FW_{i}^{Q},FW_{i}^{K},FW_{i}^{V})\end{array}\tag{14}$$ $\Gamma_{\mu\nu}=U_{\mu\nu}=-\frac{\mu}{\hbar}d_{\mu\nu}$. where, W Q i , WK i, WV i ∈ R d d h and W1 ∈ R d×d. hi denotes the i-th head and h is the number of heads. ATT represents scaled dot-product attention as ATT(*Q, K, V* ) = softmax( Q·KT √dk ). Then we apply audio-visual cross-attention to identify attention across two different kinds of feature fields as follows: $$\begin{array}{l l}{{x_{a v}=\mathrm{CrossAtt}(f_{v}^{\prime},f_{a}^{\prime},f_{a}^{\prime})}}&{{}}\\ {{x_{v a}=\mathrm{CrossAtt}(f_{a}^{\prime},f_{v}^{\prime},f_{v}^{\prime})}}&{{}}\end{array}$$ where CrossAtt layer is similar to SelfAtt contains multi-head attention. ## B Details About Datasets To more adequately validate the effectiveness of our proposed approach, we first regrouped the 20 categories of MSR-VTT and reorganization a new 10-domain dataset based on the MSR-VTT (Xu et al., 2016) categories information as shown in Table 6. The number of videos regrouped 10 domains is shown in Figure 7. We use five of them as source ![11_image_0.png](11_image_0.png) domains and the remaining five domains ("Animation", "Music", "Animal", "Kids" and "Beauty") as target domains. And a random sampling strategy is applied in the target domain for dataset split. $$\begin{array}{l}{(12)}\\ {(13)}\end{array}$$ | Domain | Category | |-----------|---------------------------------| | News | howto, education, science, news | | Movie | tv shows, movie, doc, ads | | Sports | people, sports, travel | | Cooking | food, cooking | | Traffic | vehicle | | Animation | gaming, animation | | Music | music | | Animal | animal | | Kids | kids | | Beauty | beauty | Table 6: Statistics of our 10 regroup captioning domains based on the MSR-VTT dataset. ## C Experiment Results C.1 Baseline Setting $$\mathbf{\Sigma}_{7}$$ To the best of our knowledge, there is no work investigating transferable audio-visual text generation, thus we choose the state-of-the-art video caption approach and use MAML on top of it to construct the baseline. We compare the performance of the TAVT with that of state-of-the-art methods based on two different approaches: (1)The RNN-based models: RecNet (Wang et al., 2018) which adds a reconstructor to reconstruct the visual features from the generated caption, AVAF(Guo et al., 2019) combined different multimodal fusion methods and attention mechanism, MARN(Pei et al., 2019) which equips with a memory consisting of words and corresponding visual contexts, AVIC(Tian et al., 2019) introduce audio-visual controller to balance the importance between audio and | Methods | →Animation | →Music | | | | | | | | | |----------------------------|--------------|----------|------|------|--------|--------|------|------|------|------| | BLEU-1 | BLEU-4 | M | R | C | BLEU-1 | BLEU-4 | M | R | C | | | RecNet(Wang et al., 2018) | 72.7 | 42.8 | 27.4 | 62.0 | 45.3 | 71.6 | 41.5 | 26.5 | 59.1 | 40.1 | | MARN(Pei et al., 2019) | 75.5 | 44.5 | 28.3 | 63.4 | 47.2 | 74.6 | 43.4 | 27.6 | 60.3 | 42.6 | | AVIC(Tian et al., 2019) | 75.2 | 44.8 | 28.7 | 63.6 | 47.1 | 74.9 | 43.0 | 27.7 | 60.5 | 42.9 | | SGN(Ryu et al., 2021) | 75.6 | 45.1 | 29.2 | 64.2 | 48.5 | 75.8 | 43.4 | 28.0 | 61.9 | 44.5 | | SHAN(Deng et al., 2022) | 76.2 | 45.5 | 29.6 | 64.7 | 48.6 | 76.0 | 43.6 | 28.2 | 61.8 | 44.2 | | Att-TVT(Chen et al., 2018) | 72.4 | 44.3 | 29.0 | 63.8 | 46.3 | 73.1 | 42.2 | 27.8 | 61.7 | 42.0 | | SBAT(Jin et al., 2020) | 72.1 | 44.7 | 29.2 | 64.2 | 47.6 | 74.8 | 42.7 | 27.9 | 61.2 | 43.8 | | TAVT | 76.8 | 47.3 | 30.6 | 65.8 | 50.4 | 79.0 | 45.5 | 29.2 | 63.1 | 46.1 | Table 7: The results of performance comparisons. All methods use the Dmeta−*train* as the source domain and transfer to vehicle and music domains. The best results are bold. Methods →Animal →Kids BLEU-1 BLEU-4 M R C BLEU-1 BLEU-4 M R C RecNet(Wang et al., 2018) 72.9 35.6 24.3 53.5 43.6 73.0 37.2 21.4 55.0 41.8 MARN(Pei et al., 2019) 74.7 36.9 25.7 55.1 45.0 75.5 38.7 22.5 55.8 43.3 AVIC(Tian et al., 2019) 75.0 36.8 25.9 55.7 45.6 74.9 39.1 22.8 56.3 43.2 SGN(Ryu et al., 2021) 76.6 38.1 26.6 56.2 47.3 76.1 40.7 23.3 57.6 45.1 SHAN(Deng et al., 2022) 76.3 37.8 26.4 56.8 47.7 76.5 41.1 23.9 58.0 45.6 Att-TVT(Chen et al., 2018) 74.7 36.0 25.8 56.3 46.2 75.3 40.2 21.7 57.1 44.3 SBAT(Jin et al., 2020) 75.8 36.6 26.0 56.1 46.7 75.1 40.5 22.2 57.6 44.4 TAVT **78.5 38.3 27.0 58.1 48.4 77.3 42.7 26.1 60.4 47.6** Table 8: The results of performance comparisons. All methods use the Dmeta−*train* as source domain and transfer to animal and kids domains. The best results are bold. visual modalities, SGN(Ryu et al., 2021) which encode a video into semantic groups and SHAN(Deng et al., 2022) which use syntax-guided hierarchical attention to integrate visual and sentence-context features. (2) The Transformer-based models AttTVT(Chen et al., 2018) which fuses the modalities from video and text with attention mechanism, SBAT (Jin et al., 2020) which uses boundary-aware pooling operation to reduce the redundancy. ![12_image_0.png](12_image_0.png) ## C.2 Performance On Cross-Categories Benchmark To further validate the generalizability of the proposed method, we also conduct experiments on five reorganized Dmeta−*test* domains. From the results in Table 7,8 and 9, we observe that our method outperforms other methods in all five target domains, which indicates that our method has good generalization and can compensate for the performance degradation caused by limited label data in low-resource domains such as kids and beauty. | Methods | →Beauty | | | | | |-----------|-----------|------|------|------|------| | BLEU-1 | BLEU-4 | M | R | C | | | RecNet | 60.7 | 28.9 | 21.8 | 51.3 | 24.6 | | MARN | 64.4 | 30.5 | 23.0 | 52.2 | 26.4 | | AVIC | 64.8 | 30.7 | 23.5 | 52.0 | 26.3 | | SGN | 65.3 | 33.3 | 24.5 | 53.7 | 28.6 | | SHAN | 65.0 | 33.1 | 24.5 | 53.5 | 28.5 | | Att-TVT | 62.5 | 31.1 | 23.7 | 52.3 | 26.8 | | SBAT | 63.4 | 31.6 | 24.1 | 52.9 | 27.3 | | TAVT | 67.8 | 35.0 | 25.8 | 55.0 | 31.8 | Cluster Number B@1 B@4 M R C 10 77.3 41.4 28.2 61.2 52.0 100 78.5 42.1 28.6 61.9 53.0 200 78.2 41.7 28.5 61.7 52.3 300 77.0 41.2 27.9 61.1 51.8 Table 10: The Comparison of different cluster number. ## C.3 Hyper-Parameter Analysis We evaluate 11 different values τ from 0.01 to 1.0 on MSRVTT†and report the results in Figure 8. It shows that the performance achieves the best when τ is set to 0.4 and becomes poor when τ is too small or too large. This result suggests that a proper τ value is crucial to achieving good performance. ## D Analysis Of Universal Auditory Semantic Space D.1 Construction Details Audio Clusters. We design our model by introducing a universal auditory semantic space consisting of a set of audio clusters. We are interested in natural environmental sounds. We download videos from videos on Flickr(Thomee et al., 2015) and extract their sounds. We downloaded over 750,000 videos from Flickr, which provides over a year (377 days) of continuous audio. The only pre-processing we do on the sound is to extract the spectrogram from the video files and subtract the mean. We extract spectrograms for approximately five seconds of audio and obtain the audio clip features by VGGish and select conv4_1 as the extraction layer. Selection of Clustering Algorithm. To obtain the audio cluster, we apply K-means (MacQueen et al., 1967) to the extracted audio features. Specifically, K-Means require a manual setting of cluster number values. Thus, we experimented with different cluster numbers. From the results shown in Table 10, we can see that when setting the cluster number as 100 achieve better results on our extracted audio features from Flickr. ## D.2 Visualization Of Audio Clusters. In Figure 9, we visualize the semantic space. We can see that there are a large number of low-level concepts shared across different domains. The sounds of these low-level concepts are similar but have significant visual differences. This provides a guarantee for the effectiveness of our approach. ## E Analysis Of Token-Wise Modality-Aware Weights Modality Imbalance. There exists a modality imbalance in natural language tokens as different tokens depend on different modalities. For example, some nouns of objects like "shirts", "lamps" and "flowers" are only visually related, but some objects like "guitar" and "alarm clock" can make sounds that are auditory related. Moreover, the verbs which indicate the low-level concepts like "talking", "hit" and "kick" are obviously auditory related. And we visualize the word clouds in Figure 10. Visualization of Text generation. To investigate how the token-wise modality aware weight improves the performance from the perspective of linguistics, we visualize the text generation process. In Figure 11, we present the modality dependency scores of each word. Firstly, we can find that words have different dependencies on different modalities in the process of generation, which proves the existence of modal imbalance. For example, some quantifiers (a, group) and some nouns (children) rely more on the visual modality, while guitar and sing rely more on the audio modality. There are also some prepositions and conjunctions that often rely on the context text for a generation. Secondly, comparing **w/o. ma** and TAVT, we can find that token-wise modality-aware weights can optimize the dependence of different words on different modalities, encouraging the model to use the correct modality to generate words, which can help identify some vague concepts such as talking and singing. ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) ![14_image_2.png](14_image_2.png) ![14_image_3.png](14_image_3.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 ✓ A2. Did you discuss any potential risks of your work? Section 7 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.2 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
prasad-etal-2023-meetingqa
{M}eeting{QA}: Extractive Question-Answering on Meeting Transcripts
https://aclanthology.org/2023.acl-long.837
With the ubiquitous use of online meeting platforms and robust automatic speech recognition systems, meeting transcripts have emerged as a promising domain for natural language tasks. Most recent works on meeting transcripts primarily focus on summarization and extraction of action items. However, meeting discussions also have a useful question-answering (QA) component, crucial to understanding the discourse or meeting content, and can be used to build interactive interfaces on top of long transcripts. Hence, in this work, we leverage this inherent QA component of meeting discussions and introduce MeetingQA, an extractive QA dataset comprising of questions asked by meeting participants and corresponding responses. As a result, questions can be open-ended and actively seek discussions, while the answers can be multi-span and distributed across multiple speakers. Our comprehensive empirical study of several robust baselines including long-context language models and recent instruction-tuned models reveals that models perform poorly on this task (F1 = 57.3) and severely lag behind human performance (F1 = 84.6), thus presenting a challenging new task for the community to improve upon.
# Meeting**Qa: Extractive Question-Answering On Meeting Transcripts** Archiki Prasad1 Trung Bui2 Seunghyun Yoon2 **Hanieh Deilamsalehy**2 Franck Dernoncourt2 **Mohit Bansal**1 1UNC Chapel Hill 2Adobe Research {archiki, mbansal}@cs.unc.edu {bui, syoon, deilamsa, dernonco}@adobe.com ## Abstract ![0_Image_0.Png](0_Image_0.Png) With the ubiquitous use of online meeting platforms and robust automatic speech recognition systems, meeting transcripts have emerged as a promising domain for natural language tasks. Most recent works on meeting transcripts primarily focus on summarization and extraction of action items. However, meeting discussions also have a useful question-answering (QA) component, crucial to understanding the discourse or meeting content, and can be used to build interactive interfaces on top of long transcripts. Hence, in this work, we leverage this inherent QA component of meeting discussions and introduce MEETINGQA, an extractive QA dataset comprising of questions asked by meeting participants and corresponding responses. As a result, questions can be open-ended and actively seek discussions, while the answers can be multi-span and distributed across multiple speakers. Our comprehensive empirical study of several robust baselines including long-context language models and recent instruction-tuned models reveals that models perform poorly on this task (F1 = 57.3) and severely lag behind human performance (F1 = 84.6), thus presenting a challenging new task for the community to improve upon.1 ## 1 Introduction Millions of meetings occur every day worldwide, which results in vast amounts of meeting transcripts. Meeting transcripts are typically long documents, often domain-specific depending on the subject matter, and contain a lot of information. Basic tasks such as catching up with a missed meeting, looking up a specific discussion or response to a query can be time-consuming. These tasks can be facilitated by NLP systems, including summarization and question-answering. To this end, several publicly available small-scale corpora of meeting 1MEETINGQA data and code is available at https:// archiki.github.io/meetingqa.html transcripts have been released (Carletta et al., 2005; Janin et al., 2003; Garofolo et al., 2004, *inter alia*). Prior NLP work on meeting transcripts mainly focuses on summarization (Oya et al., 2014; Li et al., 2019; Zhu et al., 2020, *inter alia*). However, lack of annotated data impedes research on other important NLP tasks in this domain. To address this gap, we introduce a question-answering (QA) task based on conversations in meeting transcripts. Specifically, we consider *questions asked by participants during the meeting* and aim to extract corresponding answer spans from relevant discussions among meeting participants (refer to Figure 1). This task has several practical applications such as building an interactive meeting browser/interface for navigating through transcripts and informing tasks such as meeting summarization and handling action items involving QA pairs (Kathol and Tur, 2008; August et al., 2022). 15000 While standard QA datasets consist of human generated questions either based on short supplied contexts (Rajpurkar et al., 2016, 2018; Rogers et al., 2021) or are answered using a large collection of documents (Joshi et al., 2017; Kwiatkowski et al., 2019; Zhu et al., 2021b), our task setting is challenging yet interesting in several ways. First, meeting transcripts are long documents and QA systems still struggle to understand long contexts (Pang et al., 2022; Soleimani et al., 2021). Second, successfully answering questions asked within meetings requires robust understanding of the conversation and discourse that takes place both before and after a question. Third, the multi-party spoken text falls under a different domain when compared to typical text documents. While standard long documents rarely include any meaningful (nonrhetorical) questions, multi-party meetings often involve discussions asked by one participant and answered by the rest, allowing us to use these questions to create a QA dataset. Furthermore, the conversational nature of transcribed text differs from written documents and may contain disfluencies and other artifacts. Finally, instead of using annotator-generated questions (like in Wu et al. (2022)), questions asked by participants are more open-ended and discussion-seeking, with interesting answer types that can be multi-span and/or contributed by multiple speakers (e.g., Figure 1). To this end, we first introduce our dataset MEET-INGQA, created by annotating meetings transcripts from the popular AMI (Augmented Multi-party Interaction) corpus, containing over 100 hours of meetings (Carletta et al., 2005), via a robust annotation pipeline . MEETINGQA comprises of 7,735 questions asked by participants across 166 meetings. Unlike other datasets, questions in MEET-INGQA are less concise (12 words on average) and reflect queries asked in a conversational setting. The answers include realistic situations such as rhetorical questions, multiple discontiguous spans and/or contributions from multiple speakers. Next, on MEETINGQA dataset, we test diverse models designed for long input contexts such as Longformer (Beltagy et al., 2020), and BigBird (Zaheer et al., 2020) as well as RoBERTa (Liu et al., 2019), and DeBERTa-v3 (He et al., 2020) with as much meeting context surrounding the question as possible. To incorporate the multi-span nature of answers in our dataset, we design and experiment with multi-span variants of the aforementioned models. Furthermore, we also investigate how well recent instruction-tuned large language models fare at answering questions from MEETINGQA. Lastly, we create a silver-annotation pipeline using ME-DIASUM (Zhu et al., 2021a), a corpus containing 463.6K short interview transcripts, to provide additional training data. We find that the best performance is achieved by finetuned short-context models (F1 = 57.3). Overall, we show that models struggle to identify rhetorical questions and selecting which utterances constitute the answer. Thus, model performance significantly trails behind human performance on MEETINGQA (F1 = 84.6), leaving a large potential for future improvements on this challenging task. ## 2 Our Dataset: M**Eeting**Qa We first describe our data collection process in Section 2.1 and then provide an extensive analysis of MEETINGQA in Section 2.2. ## 2.1 Data Collection Question Selection. We leverage the punctuated text to identify possible questions (ending with '?'). We also filter out questions containing ≤ 2 words as we manually find them to be either meaningless or rhetorical. While questions are marked to facilitate annotators, we encourage them to find missed potential questions due to incorrect punctuation. Answer Annotation. For each possible question, we ask annotators to label the set of sentences (each identified by a unique number) from the meeting transcript that form the answer. Additionally, we also collect meta-data about the question. First, we ask the annotators to label if the question was *meaningful*, used to filter out rhetorical, unanswered or logistical questions and incorrect punctuations. Some speakers can ask consecutive or multiple questions in the same turn that are often related and answered together. In such scenarios, we allow annotators to combine questions and provide a common answer from the meeting transcript. The annotators mark these questions using the *combined question* attribute. Finally, since our questions are conversation segments, they may not be self-contained. Hence, we ask annotators to mention the *question context* sentences (if any) separately. We refer readers to Appendix A for more details and examples from MEETINGQA. ![2_image_0.png](2_image_0.png) Annotation Process. All annotators were hired by a professional crowdsourcing company TELUS.2 The company obtained consent from the crowd workers and conducted ethical reviews. To train annotators, we provide comprehensive instructions for each type of annotation with several manually annotated examples from a small subset of transcripts and different possible scenarios curated by the first author. The annotations were collected in multiple batches, starting with the first batch containing a small subset of 250 questions. We iteratively provided extensive feedback to the crowdworkers on their annotations and resolved existing issues till the annotations were satisfactory. Next, we assigned three independent annotators to each question, and calculated Krippendorff's α = 0.73 (Krippendorff, 1980) using MASIdistance (Passonneau, 2006), indicating substantial agreement. We then collected annotations for the remaining questions in two additional batches using one annotator per question followed by a quality assurance stage to validate the outcome of the annotations. Overall, we spent $10,427 in the annotation process, amounting to $61 per meeting. For additional details refer to Appendix A. ## 2.2 Dataset Information And Analysis After filtering and quality control, we were left with a total of 7,735 questions from 166 meetings (≈ 100 hours of meeting recordings). Size and Splits. We split our dataset into train, dev, and test sets such that questions in each split come from distinct meetings. Table 1 shows dataset statistics across different answer types, namely 2https://www.telusinternational.com/ | Train | Dev | Test | | |----------------------------|-------|--------|-------| | Number of Meetings | 64 | 48 | 54 | | Number of Questions | 3007 | 2252 | 2476 | | w/ No Answer | 956 | 621 | 764 | | w/ Multi-Span Answers | 787 | 548 | 663 | | w/ Multi-Speaker Answers | 1016 | 737 | 840 | | Avg. Questions per Meeting | 46.98 | 46.92 | 45.85 | unanswerable, *multi-span*, and *multi-speaker* (described below). Due to relatively small number of meetings in the AMI corpus and diversity in meeting content, our test set contains a larger fraction of questions from the dataset as opposed to the conventional 80:10:10 split across train/dev/test sets. Question Types. Unlike most QA datasets, questions in MEETINGQA are extracted directly from the meeting transcripts. Consequently, we find that questions may not be concise, and may not begin with 'wh' prefixes, making our dataset challenging yet interesting for the community. We perform a manual analysis of question types based on 200 randomly selected questions from the test set in Figure 2 (left). First, we observe that a majority of questions in MEETINGQA are framed in a 'yes/no' manner, followed by 'what' and 'how' questions that are typically information-seeking. We find that in a discussion-heavy setting such as ours, yes/no questions elicit a detailed response that cannot be reduced to a direct 'yes/no' response in over 40% of the cases (see Figure 2 (right)). Further, manual analysis shows that nearly half the questions are subjective, i.e., seeking opinions of meeting participants, and as high as 20% of answerable questions ![3_image_0.png](3_image_0.png) are framed rhetorically. Appendix A contains additional tri-gram-based analysis of questions. Length. Figure 3 shows the distribution of the length of meeting transcripts, questions, and answers in MEETINGQA. On average, each meeting transcript comprises of 5.8K words which constitute as long documents unlikely to fit entirely in the input context of typical pretrained language models (Devlin et al., 2019; Liu et al., 2019; He et al., 2020). Further, questions and their answers contain an average of 12, and 35 words respectively. Answer Types. Due to the nature of meeting conversations and questions asked by participants, most answers are direct responses or follow-up discussions. However, some questions are rhetorical or do not elicit any discussion. These questions are unanswerable (30% of MEETINGQA). Among answerable questions, we note two scenarios of interest: *multi-span* and *multi-speaker* answers. Multispan answers contain non-consecutive and discontinuous utterances or sentences, typically in the form of relevant discussion interleaved with irrelevant chit-chat (see examples in Appendix A). Additionally, multi-speaker answers occur when multiple participants contribute to answering a question which is typical in a discussion. Note that multispeaker and multi-span answer cases are not mutually exclusive (refer to Figure 1 for an example). We find that 40% of all answers (excluding unanswerable questions) in our dataset are multi-span and 48% of answers are multi-speaker in nature. Moreover, Figure 2 (right) shows from our manual analysis that a considerable amount of disagreement exists among speakers in multi-speaker answers, with approximately 70% of cases displaying some form of disagreement. Notably, 22% of answers involve additional follow-up or action items, which are specific to the context of meetings. Human Performance. We estimate human performance on MEETINGQA using a random subsample of 250 questions from the test split. Each question is assigned a different annotator who had not previously annotated the meeting containing that question. Scoring the provided answers relative to the reference answers in our dataset, yields an F1 of 84.6. This breaks down to F1 of 80.7 and 86.3 for unanswerable and answerable questions respectively. The F1 score for multi-span and multispeaker answers is 88.1 and 87.7 respectively. ## 3 Methods In this section, we investigate the difficulty level of our new MEETINGQA for state-of-the-art QA systems and establish strong baseline results. We describe strategies for retrieving contexts from transcripts in Section 3.1, followed by different QA models in Section 3.2, and silver data annotation for data augmentation methods in Section 3.3. ## 3.1 Retrieving Contexts From Transcripts Given that meeting transcripts are very long documents, it is infeasible to input the entire transcript as context to typical QA models. Thus, we first select a smaller transcript segment that fits the model's input length limitations. We explore two strategies to retrieve contexts as described below. Location-based Context Retrieval. We use the relative location of the question in the meeting transcript to retrieve a context by fitting as many (complete) sentences as possible under a fixed length budget (measured in words). Further, we split this budget into two components: *prefix* and *suffix* referring to the sentences that precede and succeed the question respectively. We set the prefix budget to 50 words and the suffix budget to 250 words respectively, resulting in a total budget of 300 words.3 3Ensures context fits into QA models limited 512 tokens. | Retrieval Method | Answer-Span Overlap Upper Bound F1 IoU | | | |---------------------|------------------------------------------|-------|-------| | Location | 99.20 | 99.99 | 99.98 | | ROUGE-1 | 14.81 | 23.21 | 19.95 | | Embedding Cos. Sim. | 32.45 | 38.24 | 34.17 | Note that the suffix budget is significantly larger than the prefix budget since we expect to find answers in sentences following the question. The sentences before the question only provide additional context to the ongoing discussion. Score-based Context Retrieval. Alternatively, we use the question as a query and compare it to other sentences from the entire transcript via two scoring methods consistent with Pang et al. (2021). First, we retrieve sentences using ROUGE-1 score relative to the question. Second, we use cosine similarity based on sentence embeddings (Reimers and Gurevych, 2019). We concatenate sentences in the order they appear in the transcript until reaching the total length budget. Similar to location-based retrieval, we set the total budget to 300 words. Results of Context Retrieval. Table 2 compares both retrieval methods using the same total length budget on the answerable questions split. We observe that the sentence-level overlap between extracted contexts and annotated answers for score-based retrieval is significantly lower than for location-based retrieval. We use this overlap to compute the maximum achievable performance of QA systems for each type of retrieval. Correspondingly, we find similar trends in upper-bound performance metrics (discussed in Section 4) with location-based contexts (near-perfect max F1) considerably outperforming score-based contexts (max F1 < 40). Therefore, for short-context models, we henceforth use location-based contexts. ## 3.2 Models For Meeting-Based Qa We primarily focus on extractive models including both short and long-context models. Given the transcript or a segment from it (context) and the question, models are tasked with extracting answer-span(s) from the context. We use two highperforming short-context models RoBERTa and DeBERTaV3, each supporting up to 512 tokens, with extracted context from Section 3.1. Additionally, we explore Longformer and BigBird which support longer sequences of up to 4096 tokens by utilizing a combination of sliding window and global attention mechanisms. Further, the Longformer Encoder-Decoder (LED) model supports up to 16,384 input tokens. These models allow us to use most or all portions of the transcript needed for answering the questions as the context. In case of an overflow, we use as many utterances from the transcript around the question as possible and truncate the rest. Note that these models output a single answer-span by default. Therefore, for multi-span answers, we train models to predict a span starting with first utterance and ending with the last utterance of the gold answer. Multi-Span Models. In order to better model multi-span answers, we follow Segal et al. (2020) and pose multi-span QA as a sequence tagging task, predicting if each token in the context is part of the answer. For simplicity, we restrict ourselves to their proposed IO tagging. Thus, the answer prediction is a concatenation of all token-spans contiguously tagged with I. Similar to single-span models, we train multi-span variants of RoBERTa, DeBERTa, Longformer, and BigBird models. Instruction-Tuned Models. Furthermore, we use FLAN-T5 (Chung et al., 2022), a publiclyavailable instruction-tuned model, to study zeroshot performance on our MEETINGQA. Given the relatively large size of contexts and distinct nature of our task, we rely on succinct instructions instead of few-shot demonstrations. Furthermore, due to the model's generative nature, we cannot directly use the predictions for our extractive QA task. Therefore, we adapt instruction-tuned models for our setting by employing instructions that ask models to *list sentences* instead of directly generating answers that may be less faithful to the context. Next, we filter out predicted sentences not present in the context. While this is a strict selection criterion, it removes any possible hallucinations.4 ## 3.3 Silver Data Augmentation Due to high annotation costs of gold labels, and unavailability of similar QA datasets, we investigate automatic methods to annotate answers. We match the salient features of MEETINGQA, such as meaningful questions within the transcript and 4Appendix E shows these choices improve overall scores. | Model | Intermediate Train Data | Overall | No Answer | Answerable | | | |---------------------|---------------------------|-------------|-------------|--------------|-------------|-------------| | All | Multi-Span Multi-Speaker | | | | | | | F1 / IoU | F1† | F1 / IoU | F1 / IoU | F1 / IoU | | | | RoBERTa-base | − | 56.5 / 51.1 | 41.0 | 63.1 / 55.6 | 60.8 / 50.1 | 64.1 / 54.7 | | SQuADv2 | 54.1 / 49.4 | 37.4 | 61.5 / 54.7 | 50.8 / 40.2 | 56.2 / 46.9 | | | + silver | 55.4 / 50.7 | 47.4 | 58.9 / 52.2 | 57.2 / 46.9 | 60.2 / 51.4 | | | DeBERTa-base | − | 57.3 / 52.9 | 55.8 | 58.0 / 51.6 | 49.6 / 39.3 | 55.3 / 46.7 | | SQuADv2 | 56.5 / 52.1 | 51.0 | 58.9 / 52.6 | 49.6 / 39.1 | 55.7 / 47.2 | | | + silver | 55.2 / 50.4 | 46.7 | 59.0 / 52.0 | 51.4 / 40.5 | 57.7 / 48.6 | | | Longformer-base | − | 55.6 / 50.9 | 46.1 | 59.9 / 53.0 | 55.3 / 44.9 | 59.4 / 50.4 | | SQuADv2 | 54.2 / 49.1 | 31.4 | 64.4 / 56.9 | 58.0 / 47.2 | 62.6 / 53.0 | | | + silver | 54.9 / 50.2 | 51.2 | 56.6 / 49.8 | 54.5 / 44.0 | 58.6 / 49.9 | | | LED-base | − | 27.8 / 25.0 | 59.0 | 13.9 / 9.7 | 12.1 / 7.0 | 12.4 / 7.4 | | BigBird-base | − | 53.7 / 48.6 | 44.4 | 57.8 / 50.4 | 58.1 / 47.5 | 62.6 / 53.4 | | TriviaQA | 54.5 / 49.5 | 35.2 | 63.2 / 55.9 | 56.3 / 45.5 | 60.6 / 51.1 | | | + silver | 54.7 / 49.8 | 43.7 | 59.6 / 52.4 | 57.6 / 46.9 | 60.5 / 51.2 | | | Turn-based Baseline | − | 35.9 / 30.4 | 0.5 | 51.8 / 43.8 | 42.0 / 31.5 | 47.3 / 40.0 | | Human Performance − | 84.6 / 83.5 | 80.7 | 86.3 / 84.6 | 88.1 / 86.2 | 87.7 / 85.3 | | multi-speaker discussions using the MEDIASUM dataset (Zhu et al., 2021a). This dataset contains 463.6K short multi-party interview transcripts, detailed speaker information, and identifies a host or interviewer who steers the discussion via questions. We begin by identifying the host speaker and focusing on their questions. Next, we predict which speaker(s) would answer the question by identifying speaker entities mentioned in utterances or from previous dialogue turns. Finally, we search utterances from the identified speakers until a stopping criterion is met and label it as the answer. Due to the assumptions made in the above process, models trained directly on this data could overfit on spurious correlations (Jia and Liang, 2017; Wang and Bansal, 2018). Thus, we apply various perturbations to the context such as separating the question and answer utterances, converting to unanswerable questions by removing relevant sentences, creating more speaker transitions, and masking speaker names. Refer to Appendix F for additional details. ## 4 Experiments And Results Evaluation Metrics. Following Rajpurkar et al. (2016) we report macro-averaged F1 on the entire test set as well as on specific answer types (Section 2.2).5 However, F1 treats sequences as bagof-words, and thus, there can be a non-significant overlap between a random span and the target span for large span lengths. To address this, Soleimani et al. (2021) propose reporting Intersection-overUnion (IoU) defined as: $$\mathrm{IoU}=|p\cap t|{\Big/}|p\cup t|,$$ where p and t are the predicted and target spans, respectively. Since our answer spans are much longer than those in SQuAD (refer to Figure 3), we also report macro-averaged IoU to measure performance. Training Settings. We measure performance of various models in both finetuned and zero-shot settings. First, we directly finetune the base pretrained model on the model on MEETINGQA. Next, to supplement training data we explore intermediatetraining (Phang et al., 2018; Pruksachatkun et al., 2020) with SQuAD v2.0 (Rajpurkar et al., 2018) 6 or a combination including silver data from Section 3.3 prior to finetuning on MEETINGQA, increasing the training data by 5x and 10x respectively. Additional details on checkpoints, hyperparameters, and training are present in Appendix B. Turn-based Baseline. We devise a straightforward algorithm called turn-based baseline that is inspired by the automatic silver data annotation 6SQuADv2.0 is used for all models except BigBird, for which we use TriviQA due to lack of reliable existing model checkpoint on HuggingFace (Wolf et al., 2019). 5We also report exact match (EM) scores in Appendix C. | Model | Int. Train Data | Overall | No Answer | Answerable | | | |-------------------|--------------------------|-------------|-------------|--------------|-------------|-------------| | All | Multi-Span Multi-Speaker | | | | | | | F1 / IoU | F1 | F1 / IoU | F1 / IoU | F1 / IoU | | | | RoBERTa-base | − | 54.0 / 48.1 | 41.1 | 59.8 / 51.4 | 58.2 / 47.2 | 60.9 / 50.9 | | silver | 55.1 / 50.0 | 40.1 | 61.9 / 54.5 | 56.4 / 45.8 | 60.0 / 50.2 | | | DeBERTa-base | − | 54.5 / 47.9 | 35.3 | 63.0 / 53.8 | 62.9 / 51.1 | 64.9 / 53.6 | | silver | 55.1 / 49.8 | 36.1 | 63.6 / 56.1 | 63.0 / 52.7 | 66.1 / 56.5 | | | Longformer-base − | 53.8 / 48.2 | 39.4 | 60.3 / 52.3 | 58.8 / 48.3 | 62.0 / 52.0 | | | silver | 52.3 / 48.0 | 57.2 | 50.2 / 44.0 | 47.2 / 38.0 | 49.0 / 40.8 | | | BigBird-base | − | 49.6 / 43.4 | 28.3 | 59.2 / 50.2 | 57.3 / 45.5 | 60.9 / 50.2 | | silver | 53.5 / 48.0 | 36.4 | 61.2 / 53.2 | 61.3 / 50.9 | 63.9 / 54.1 | | Table 4: Comparing performance of finetuned multi-span models across evaluation metrics and answer types. algorithm explained in Section 3.3. In the turnbased baseline, when a speaker asks a question, the predicted answer includes all the subsequent utterances of other speakers until the same speaker gets another turn (stopping criterion). Note that, turn-based baseline assumes all questions can be answered and always provides single-span answers, although the predictions may be multi-speaker. ## 4.1 Results And Discussion We report performance of various fine-tuned singlespan, multi-span models in Tables 3, and 4 respectively on the test split of MEETINGQA. Further, we evaluate zero-shot performance in Table 5. We summarize our findings below and refer readers to Appendix C for additional results. Main Baselines and Comparison with Human Performance. Results from Tables 3 and 4 show that single-span models (narrowly) outperform the multi-span models, with the best overall performance achieved by single-span variant of DeBERTa-base (overall F1 = 57.3). Other singlespan variants of Longformer and BigBird achieve higher performance on answerable questions (up to F1 = 64.4) but have lesser overall performance due to lower F1 scores on unanswerable questions.7 Comparing to the human performance (overall F1 = 84.6), we find at least a 25 point difference in overall F1 of all finetuned models. Across various answer types, the difference in F1 scores is still at least 20 points. Similar trends holds for EM and IoU metrics too.8In the zero-shot setting (refer to Table 5), the difference in overall scores with respect to human performance is even greater (≥ 44 7Model predictions may be biased against (or towards) empty spans impacting score of unanswerable questions. 8Following the order EM ≤ IoU ≤ F1 for all models. points across all metrics). Furthermore, all finetuned models outperform the turn-based baseline (with the exception of LED-base), whereas the corresponding zero-shot variants fail to outperform the turn-based baseline on overall metrics. This suggests that our dataset is challenging for current QA systems, leaving significant scope for improvement via interesting future work. Impact of Long-Context Models. We observe that in a majority cases short-context models (especially RoBERTa) outperforms long-context models (Longformer and BigBird) by 1-2 points. Furthermore, the LED model that completely fits 90% of transcripts has significantly lower overall score (≈30 F1 point difference) due to poor performance on answerable questions.9 We believe that the ability to fit larger contexts is traded-off by welloptimized design of short-context models. This is consistent with the findings of Pang et al. (2022) and suggests better long-context models may be needed to outperform shorter extractive models. Impact of Multi-Span Models. Table 5 shows that in the zero-shot setting, multi-span variants slightly outperform their single-span counterparts for long-context models and slightly underperform for DeBERTa. In Appendix C, we find that within answer types zero-shot performance drops for unanswerable questions while improving for multi-span and multi-speaker answers. For finetuned models (Tables 3 and 4), the overall performance of multispan models is comparable if not slightly less than single-span variants.10 Notably, for short-context Model Int. Train Data F1 IoU ![7_image_1.png](7_image_1.png) ![7_image_3.png](7_image_3.png) RoBERTa-base (SS)SQuADv2 27.9 26.0 + silver 34.6 31.1 DeBERTa-base (SS)SQuADv2 19.8 17.5 + silver 34.2 32.1 Longformer-base (SS)SQuADv2 15.1 9.4 + silver 32.5 29.6 BigBird-base (SS)TriviaQA 7.6 3.5 + silver 33.7 31.2 RoBERTa-base (MS) silver 34.9 30.9 DeBERTa-base (MS) silver 31.6 27.5 Longformer-base (MS) silver 35.1 31.3 BigBird-base (MS) silver **35.3 31.7** FLAN-T5 XL − 33.8 26.1 FLAN-T5 XL (self ans) − 34.0 28.6 FLAN-T5 XL (ext ans) − 25.6 23.8 models, there is significant gain in performance for all answerable questions. Further, we observe that multi-span models consistently underperform on unanswerable questions (as high as 15 F1 points). Performance of multi-span model on unanswerable questions can be negatively impacted by even one false positive I tag, changing the prediction from unanswerable to answerable. While prior work on multi-span QA (Segal et al., 2020; Li et al., 2022) have found tagging-based approaches to outperform single-span variants, they only explore factoid questions on relatively shorter contexts. Future work can focus on improving multi-span QA for more open-ended questions like in MEETINGQA. Impact of Intermediate Training. Silver data augmentation is effective in zero-shot settings with ≥15 point improvement for single-span longcontext models (Table 5). For finetuned models, however, we do not observe significant improvements in overall scores from intermediate-training compared to directly finetuning. Interestingly, silver data augmentation improves performance on unanswerable questions for single-span models (except DeBERTA) and multi-span models. Instruction-Tuned Models. Lastly, Table 5 shows zero-shot performance of instruction-tuned FLAN-T5 model. We find the FLAN-T5 XL model (3B parameters) outperforms most zero-shot singlespan models and narrowly underperforms zero-shot multi-span models. Despite the design of instruc- ![7_image_0.png](7_image_0.png) ![7_image_2.png](7_image_2.png) tions and filtering (Section 3.2), the model underperforms on unanswerable questions. Thus, we add an additional step to identify answerable questions and use model responses only for predicted answerable questions. The question classification can be done zero-shot using the same FLAN-T5 model11 or by training an external supervised model.12 We observe that using the FLAN-T5 model is more effective (yields best performance) than using a supervised model (6 F1 point drop) as the predictions of the latter are biased towards the question being unanswerable. Future work can further focus on accurately identifying answerable questions to improve overall performance. Error Analysis. Next, we analyze some intriguing patterns in the errors within model predictions. Firstly, we observe that identifying rhetorical or unanswerable questions asked in a meeting is a challenging sub-task. Training a separate binary classification model that classifies whether a question is answerable based on the context from MEET-INGQA yields only an F1= 49.2 (see Appendix B). In Figure 4a, it becomes apparent that a significant portion of errors in predictions for answerable questions stem from the model incorrectly predicting that the question is rhetorical, particularly in the zero-shot setting. Additionally, in case of multispan answers, single-span models exhibit higher fraction of errors where predictions include sentences not present in the gold answer, in contrast to their multi-span counterparts (for details refer to Appendix D). This follows from the construction of single-span models, as described in Section 3.2. Lastly, for multi-speaker answers, we analyze the overlap in speakers (measured via IoU) of predicted and gold answers in Figure 4b. We find that even incorrect predictions of finetuned models contain roughly 55% speaker overlap with the gold answer, i.e., models can effectively predict which speakers answer the question. However, incorrect predictions in the zero-shot setting contain only 30% speaker overlap indicating that zero-shot models may struggle to predict which speakers answer the question. Future works can explore methods to effectively identify rhetorical questions and predict which speakers answer the question to improve overall performance. A more detailed analysis of errors can be found in Appendix D. ## 5 Related Work Our work builds upon prior work on meeting transcripts and question answering. Rogers et al. (2021) provide a comprehensive survey of several QA datasets and formats. Meeting Transcripts. Several other small-scale corpora of meeting recordings or transcripts are publicly available (Janin et al., 2003; Garofolo et al., 2004; Chen et al., 2005; Mostefa et al., 2007). We restrict ourselves the most popular and frequently used AMI corpus. Other works study various aspects of summarizing meeting transcripts (Mehdad et al., 2013; Wang and Cardie, 2013; Shang et al., 2018; Li et al., 2019; Zhu et al., 2020, *inter alia*) or extracting action-items (Morgan et al., 2006; Purver et al., 2007; Cohen et al., 2021). The work closest to ours uses Markov models to classify dialogue-acts as questions, answers or others (Kathol and Tur, 2008). QA on Conversational Text. Prior work comprises of QA datasets based on small chit-chat from TV shows (Sun et al., 2019; Yang and Choi, 2019) or domain-specific chat-rooms (Li et al., 2020). The QACONV (Wu et al., 2022) dataset builds on these works with conversations from multiple domains (including MEDIASUM). However, these works employ human annotators for generating questions based on their understanding of the conversation resulting in straight-forward questions testing local information. Consequently, the answer spans of these datasets are significantly shorter, single-span, restricted to one speaker and often correspond to simple noun phrases (as high as 80% for QACONV). In contrast, questions asked by meeting participants are more open-ended, discussionseeking, and correspond to longer answers (≈ 7x) with complex multi-span and multi-speaker scenarios. Note that our work is different from conversational QA datasets that consist of a sequence of questions and answers simulating a conversation grounded in a short paragraph (Choi et al., 2018; Reddy et al., 2019; Campos et al., 2020). Long-Context QA. Recent works show that QA models struggle to understand answer questions correctly using long contexts (Pang et al., 2022; Mou et al., 2021; Soleimani et al., 2021; Dasigi et al., 2021). However, unlike our work, the source (long) documents for building these datasets are taken from written-text domains such as books, film-scripts, research papers, or news articles. ## 6 Conclusion In this work, we present MEETINGQA, an extractive QA dataset based on meeting transcripts to identify answers to questions asked during discussion among meeting participants. Detailed analysis of the data reveals it is a challenging real-world task. Baseline experiments with a wide variety of models show the current performance lags behind human performance by at least 25 and 44 overall F1 points for finetuned and zeroshot models respectively. This demonstrates that current QA systems find our task challenging, leaving tremendous scope for improvement. We hope that future works will aim to bridge this gap and our work fosters research in NLP tasks (especially QA) on other text domains such as meeting transcripts. ## Acknowledgements We thank the reviewers and the area chairs for their helpful comments and feedback. We thank TELUS International for their help with data collection. We also thank Shiyue Zhang and Swarnadeep Saha for their helpful comments. This work was partially supported by NSF-CAREER Award 1846185, and NSF-AI Engage Institute DRL-2112635. The views contained in this article are those of the authors and not of the funding agency. ## Limitations Due to the structure of MEETINGQA, the answers to questions asked by participants (if any) are present in the transcript itself, making it an extractive task. Therefore, we do not extensively explore the use of generative models since the predictions do not stick to the sentences in the transcript and could possibly include hallucinations. However, we aim to mitigate hallucinations by using instruction-tuned generative models with suitably designed instructions and enforce a strict exact match criteria for filtering any possible hallucinations. Future work can explore how to adapt or evaluate non-instruction-tuned generative models on this task and better identify hallucinations with a more relaxed filtering to improve performance. We also do not report zero-shot performance of InstructGPT (Ouyang et al., 2022) as these models are not freely accessible. Additionally, we use a simple multi-span QA adaptation technique from Segal et al. (2020), but predicting answer spans by classifying each token can be difficult to train leading to slightly lower performance (discussed in Section 4.1). We hope our dataset provides additional motivation for future work on multi-span QA. Finally, MEETINGQA only comprises of publicly available meeting transcripts in English, but our methodology of data collection and model training (using multilingual variants) should still be applicable for other languages in future work. ## Ethical Considerations The human participants in our work were recruited by an external crowd-sourcing company that ensured annotators provided informed consent, were given fair compensation, and no personally identifiable information (PII) was collected or released. We use existing publicly available meeting transcripts collected by the AMI project (Carletta et al., 2005) in controlled scenarios and filtered for offensive/toxic content. We also conducted manual inspection of a random sample from annotated transcripts and did not find any toxic content or PII. Furthermore, the collected data and experiments are conducted in English and we do not claim generalization of our findings across languages. Given the broad nature of meetings, the content can fall into a number of domains, of which only a few are represented in the AMI corpus. Therefore, we do not expect models trained on MEETINGQA to generalize to certain domains such as judicial, ethical review, congressional proceedings, etc. which involve specific jargon and rules of engagement. ## References Tal August, Lucy Lu Wang, Jonathan Bragg, Marti A Hearst, Andrew Head, and Kyle Lo. 2022. Paper plain: Making medical research papers approachable to healthcare consumers with natural language processing. *arXiv preprint arXiv:2203.00130*. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. Jon Ander Campos, Arantxa Otegi, Aitor Soroa, Jan Deriu, Mark Cieliebak, and Eneko Agirre. 2020. DoQA - accessing domain-specific FAQs via conversational QA. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7302–7314, Online. Association for Computational Linguistics. Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, et al. 2005. The ami meeting corpus: A pre-announcement. In International workshop on machine learning for multimodal interaction, pages 28–39. Springer. Lei Chen, R Travis Rose, Ying Qiao, Irene Kimbara, Fey Parrill, Haleema Welji, Tony Xu Han, Jilin Tu, Zhongqiang Huang, Mary Harper, et al. 2005. Vace multimodal meeting corpus. In *International Workshop on Machine Learning for Multimodal Interaction*, pages 40–51. Springer. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2174–2184, Brussels, Belgium. Association for Computational Linguistics. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Amir Cohen, Amir Kantor, Sagi Hilleli, and Eyal Kolman. 2021. Automatic rephrasing of transcriptsbased action items. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 2862–2873, Online. Association for Computational Linguistics. Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, and Matt Gardner. 2021. A dataset of information-seeking questions and answers anchored in research papers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4599–4610, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. John S Garofolo, Christophe Laprun, Martial Michel, Vincent M Stanford, Elham Tabassi, et al. 2004. The nist meeting room pilot corpus. In *LREC*. Citeseer. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations. Paria Jamshid Lou and Mark Johnson. 2020. Improving disfluency detection by self-training a self-attentive model. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 3754–3763, Online. Association for Computational Linguistics. Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. 2003. The icsi meeting corpus. In *2003 IEEE International Conference on Acoustics, Speech, and* Signal Processing, 2003. Proceedings.(ICASSP'03)., volume 1, pages I–I. IEEE. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics. Andreas Kathol and Gokhan Tur. 2008. Extracting question/answer pairs in multi-party meetings. In *2008* IEEE International Conference on Acoustics, Speech and Signal Processing, pages 5053–5056. IEEE. Klaus Krippendorff. 1980. *Content analysis: An Introduction to Its Methodology*. Sage publications. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466. Haonan Li, Martin Tomko, Maria Vasardani, and Timothy Baldwin. 2022. MultiSpanQA: A dataset for multi-span question answering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1250–1260, Seattle, United States. Association for Computational Linguistics. Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, and Bing Qin. 2020. Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure. *arXiv preprint arXiv:2004.05080*. Manling Li, Lingyu Zhang, Heng Ji, and Richard J. Radke. 2019. Keep meeting summaries on topic: Abstractive multi-modal meeting summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2190– 2196, Florence, Italy. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yashar Mehdad, Giuseppe Carenini, Frank Tompa, and Raymond T. Ng. 2013. Abstractive meeting summarization with entailment and fusion. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 136–146, Sofia, Bulgaria. Association for Computational Linguistics. William Morgan, Pi-Chuan Chang, Surabhi Gupta, and Jason M. Brenier. 2006. Automatically detecting action items in audio meeting recordings. In Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, pages 96–103, Sydney, Australia. Association for Computational Linguistics. Djamel Mostefa, Nicolas Moreau, Khalid Choukri, Gerasimos Potamianos, Stephen M Chu, Ambrish Tyagi, Josep R Casas, Jordi Turmo, Luca Cristoforetti, Francesco Tobia, et al. 2007. The chil audiovisual corpus for lecture and meeting analysis inside smart rooms. *Language resources and evaluation*, 41(3):389–407. Xiangyang Mou, Chenghao Yang, Mo Yu, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, and Hui Su. 2021. Narrative question answering with cutting-edge opendomain QA techniques: A comprehensive study. Transactions of the Association for Computational Linguistics, 9:1032–1046. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155. Tatsuro Oya, Yashar Mehdad, Giuseppe Carenini, and Raymond Ng. 2014. A template-based abstractive meeting summarization: Leveraging summary and source text relationships. In Proceedings of the 8th International Natural Language Generation Conference (INLG), pages 45–53, Philadelphia, Pennsylvania, U.S.A. Association for Computational Linguistics. Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, and Samuel Bowman. 2022. QuALITY: Question answering with long input texts, yes! In *Proceedings* of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5336–5358, Seattle, United States. Association for Computational Linguistics. Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, et al. 2021. Quality: Question answering with long input texts, yes! *arXiv preprint arXiv:2112.08608*. Rebecca Passonneau. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation. In *Proceedings of the Fifth International* Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy. European Language Resources Association (ELRA). Jason Phang, Thibault Févry, and Samuel R Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088. Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 5231–5247, Online. Association for Computational Linguistics. Matthew Purver, John Dowding, John Niekrasz, Patrick Ehlen, Sharareh Noorbaloochi, and Stanley Peters. 2007. Detecting and summarizing action items in multi-party dialogue. In *Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue*, pages 18–25, Antwerp, Belgium. Association for Computational Linguistics. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. *Transactions of the Association for Computational Linguistics*, 7:249–266. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Anna Rogers, Matt Gardner, and Isabelle Augenstein. 2021. Qa dataset explosion: A taxonomy of nlp resources for question answering and reading comprehension. *arXiv preprint arXiv:2107.12708*. Elad Segal, Avia Efrat, Mor Shoham, Amir Globerson, and Jonathan Berant. 2020. A simple and effective model for answering multi-span questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3074–3080, Online. Association for Computational Linguistics. Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, and Jean-Pierre Lorré. 2018. Unsupervised abstractive meeting summarization with multi-sentence compression and budgeted submodular maximization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 664–674, Melbourne, Australia. Association for Computational Linguistics. Amir Soleimani, Christof Monz, and Marcel Worring. 2021. NLQuAD: A non-factoid long question answering data set. In *Proceedings of the 16th Conference of the European Chapter of the Association* for Computational Linguistics: Main Volume, pages 1245–1255, Online. Association for Computational Linguistics. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge data set and models for dialogue-based reading comprehension. *Transactions of the Association for Computational Linguistics*, 7:217–231. Lu Wang and Claire Cardie. 2013. Domain-independent abstract generation for focused meeting summarization. In *Proceedings of the 51st Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1395–1405, Sofia, Bulgaria. Association for Computational Linguistics. Yicheng Wang and Mohit Bansal. 2018. Robust machine comprehension models via adversarial training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 575–581, New Orleans, Louisiana. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771. Chien-Sheng Wu, Andrea Madotto, Wenhao Liu, Pascale Fung, and Caiming Xiong. 2022. QAConv: Question answering on informative conversations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5389–5411, Dublin, Ireland. Association for Computational Linguistics. Zhengzhe Yang and Jinho D. Choi. 2019. FriendsQA: Open-domain question answering on TV show transcripts. In *Proceedings of the 20th Annual SIGdial* Meeting on Discourse and Dialogue, pages 188–197, Stockholm, Sweden. Association for Computational Linguistics. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. *Advances in Neural Information* Processing Systems, 33:17283–17297. Chenguang Zhu, Yang Liu, Jie Mei, and Michael Zeng. 2021a. MediaSum: A large-scale media interview dataset for dialogue summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5927–5934, Online. Association for Computational Linguistics. Chenguang Zhu, Ruochen Xu, Michael Zeng, and Xuedong Huang. 2020. A hierarchical network for abstractive meeting summarization with cross-domain pretraining. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 194– 203, Online. Association for Computational Linguistics. Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021b. Retrieving and reading: A comprehensive survey on open-domain question answering. arXiv preprint arXiv:2101.00774. ![12_image_0.png](12_image_0.png) ## A Additional Details On M**Eeting**Qa A.1 Tri-Gram Analysis Of Question Types In contrast to most QA datasets, questions in MEET-INGQA are extracted directly from the meeting transcripts and thus are conversation segments. Consequently, we find that questions may not be concise, often use auxiliary verbs, and do not typically begin with 'wh' or 'how' prefixes, making our new QA task and dataset challenging yet interesting for the community. This makes conventional analysis of question types based on prefixes less relevant here, and instead, we compute the top-25 most common trigrams from all questions, shown in Figure 5. The three most common question patterns are: '*do you/we ...*', and '*what ...*'. Additionally, the trigrams demonstrate that our questions are open-ended and seeking opinions or thoughts from other participants that tend to elicit long responses. ## A.2 Dataset Format And Meta-Information We provide annotations for each meeting transcript at the sentence level in '.json' format, and each sentence has 4 primary attributes: displayText, speakerFaceId, sentenceId, and question which contain the sentence text, integer identifier of the speaker (unique within a meeting), integer identifier of the sentence, and information about the sentence as a question respectively. The question attribute is relevant only if the sentence is identified as a question. It contains additional attributes: possible, meaningful, questionContext, combinedQuestion, and answerSpan. First, we perform "ques- ![13_image_0.png](13_image_0.png) tion selection" described in Section 2.1 and set the possible tag as True to guide the annotators. The remaining attributes are set to default values meaningful = False, answerSpan = [], questionContext = [], and combinedQuestion = [] prior to annotation. Annotators modify the question attribute during the course of the annotation and can even mark additional questions outsider our question selection criteria by setting possible = True. They lable the remaining attributes according to the "answer annotation" steps mentioned in Section 2.1. The list type attributes questionContext, combinedQuestion, and answerSpan contain sentences specified using the value of the corresponding sentenceId attributes. The domain of meeting transcripts (from AMI corpus) is a combination of elicited scenario-driven data, and natural data. We refer interested readers to the AMI project page for more information about the topic or scenario of each meeting.13 We find that out of 7.7K questions in MEET-INGQA, only 66 (< 1%) additional questions were identified by the annotators that were missed by our question selection criteria. Further, 751 questions (9.7%) were annotated with additional context sentences via questionContext and a total of 784 (10.1%) were combined with another question via combinedQuestion attribute. Among the latter, an average of 2.2 (maximum 4) questions were combined and these questions were an average of 1.5 sentences apart. The average length of questionContext (when annotated) was 1.7 sentences (maximum 3) which preceded the question by 1.7 sentences. Note that for the purposes of QA evaluation, we only use the possible and answerSpan attributes. The remaining attributes serve as meta-information to understand the dataset better and can facilitate error analysis and/or future work. Also to come up with overall question counts, we ignore the combinedQuestion attributes and count all the questions individually. Therefore, this attribute serves as an indicator of when and why different questions share the same answer. Empirically, we note that the combined ![14_image_0.png](14_image_0.png) questions and question context typically fit within the contexts created using location-based retrieval (Section 3.1) and are present in the input fed into QA models in the vast majority of cases. ## A.3 Additional Annotated Examples Next, we show multiple examples of snippets from meeting transcripts with QA components present in MEETINGQA in Figures 6-11. Figure 7 also contains an example of an unanswerable question asked by Speaker 4 ("*my data is coming?*") which is either rhetorical or corresponds to incorrect punctuation. In such cases, annotators label meaningful = False and an empty/null answer annotation (answerSpan = []). On the other hand, Figure 11 also contains two consecutive questions asked by Speaker 1 but the annotators mark both as meaningful = True, but choose to combine them via (combinedQuestion) and share a common answer. This because the first question is more generic, and the second question builds on top of it, by providing a specific example of what is loaded and what isn't. Further, in Figure 8 we provide an example of question which needs additional context sentences annotated via questionContext. Figures 6, 9, and 11 are diverse instances of multi-speaker and multi-span answers in our dataset. ![14_image_1.png](14_image_1.png) ## A.4 More On Data Collection AMI Transcript Preprocessing. The AMI corpus is a collection of 171 meeting transcripts containing manually annotated and punctuated speakerspecific XML files for each meeting. We parse these XML files and combine utterances from multiple speakers by aligning the start times into a single transcript (with speaker information) corresponding to each meeting. We then use a disfluency detector model to identify and remove disfluencies from the utterances (Jamshid Lou and Johnson, 2020). Annotator Recruitment and Training. All annotators are hired by a professional crowdsourcing company TELUS.14 The company obtained consents from the crowdworkers before the annotation process and conducted ethical reviews. The company recruited 18 annotators, all based in the United States and native English speakers, who had previously successfully participated in text-based annotation projects. In addition to the instruction document (shared in the supplementary) curated by the first author, TELUS conducted a series of (virtual) meetings to deliver instructions, conduct example walk-through of the annotation and clarify doubts. At the end of initial training, a small batch of 5 meetings was provided to each of the annotators to calibrate performance. The responses were then compared to the good quality annotations performed by 2 project leads at TELUS manually in consultation with the authors. Feedback was pro14https://www.telusinternational.com/ ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) vided to the annotators to improve the quality of annotations, and based on their final responses the top-6 best performing annotators were selected to work on the project. Quality Control. From the pool of selected annotators, project leads recruited two of the best performing and experienced annotators to help with quality control. Any annotated meeting transcript was assigned to either of these two annotators for review. Between the two, meeting annotated by one was assigned to the other for review. After the review, minor errors in annotation were fixed directly, otherwise major errors were sent back to the respective annotators for a re-annotation of the question and in some rare cases the annotation was redone by the reviewers. At the end of annotation batch (total of 4), the transcripts were sent to the authors who extensively reviewed them and provided feedback. We also looked for typos, and other issues which were fixed promptly. The TELUS project leads did not find any toxic and offensive content at their end and no such concerns were reported in the quality control stage. Further, all communication with the annotators is done by the crowdsourcing company and no personally identifiable information (PII) is released to the authors. Additionally, their execution platform contains unique identifiers for all annotators ensuring their PII is not released along with the annotated data. Finally, based on the feedback from annotators, we removed all questions corresponding to 5 meetings as the meeting content was hard to follow. Time Taken. On an average across meeting transcript and annotators, it took ≈ 1 hour to annotate each meeting transcript which averages to ≈ 1.3 minutes per question. However, the per meeting annotation time strongly correlated with the length of the transcript (number of sentences). Among questions, annotators spent more time on answerable questions. This is because if the question is marked unanswerable (not meaningful), they did not have find exact answer span (in sentences) and annotated other meta-information. The project leads also ensured that the amount of time taken by each annotators was consistent with internal estimated by the intial batch used during annotator recruitment. The total production time for this project (for TELUS) was 186.5 hours. | Int. | Exact Match (EM) | | | | | |--------------------|----------------------|---------|------------|------|------| | Model | Train | Overall | Answerable | | | | Data | All M-Span M-Speaker | | | | | | − | 28.8 | 23.3 | 0 | 12.1 | | | RoBERTa | SQuADv2 | 31.0 | 28.2 | 0 | 12.0 | | (base) | + silver | 30.3 | 22.7 | 0 | 11.6 | | − | 34.9 | 25.6 | 0 | 12.9 | | | DeBERTa | SQuADv2 | 34.6 | 27.3 | 0 | 12.9 | | (base) | + silver | 31.4 | 24.5 | 0 | 12.2 | | − | 31.0 | 24.3 | 0 | 11.7 | | | Longformer SQuADv2 | 27.3 | 25.5 | 0 | 11.8 | | | (base) | + silver | 30.8 | 21.7 | 0 | 13.0 | | − | 31.0 | 24.3 | 0 | 11.7 | | | BigBird | TriviaQA | 34.6 | 27.3 | 0 | 12.9 | | (base) | + silver | 31.4 | 24.5 | 0 | 12.2 | | Human | − | 75.2 | 73.0 | 72.5 | 66.4 | Compensation. The annotators were compensated through a fixed hourly rate defined for each participant. No additional bonus was provided to incentivize faster turnaround times. The average hourly wage for participants was roughly $20/hour in compliance with all the federal and local local laws to ensure fair payment. ## B Experimental Details GPU Compute. For training and/or inference we used a combination of 6 NVIDIA A10 24 GB GPUs and 2 NVIDIA RTX A6000 48GB GPUs. Directly finetuning on MEETINGQA starting from a pretrained checkpoints is quite fast and takes not more than 4 GPU hours (depending on the batch size). Intermediate training on the silver-annotated data (5 epochs) takes about 12-18 GPU hours (training time is higher for long-context models). Hyperparameters. For the short-context models (RoBERTa and DeBERTa-v3) we used max sequence length of 512, (stride of 128 but not utilized due to location-based context retrieval), and batch size of 16. For long context models, we used max sequence length of 4096, stride of 128, and batch size varied between 8 and 16 (depending on GPU availability). Note that there is no stride for multispan models. For model training on MEETINGQA, we use a learning rate of 3e-5, warmup ratio of 0.2, and train for 15 epochs with an early stopping | Int. | Exact Match (EM) | | | | | |------------|----------------------|------------|------|------|------| | Model | Train Overall | Answerable | | | | | Data | All M-Span M-Speaker | | | | | | RoBERTa | − | 26.0 | 19.3 | 7.0 | 11.3 | | (base) | silver | 30.3 | 26.0 | 8.0 | 12.8 | | DeBERTa | − | 25.2 | 20.6 | 7.9 | 11.2 | | (base) | silver | 29.2 | 26.1 | 13.3 | 17.9 | | Longformer | − | 26.4 | 20.6 | 7.6 | 10.1 | | (base) | silver | 32.0 | 20.7 | 5.0 | 12.0 | | BigBird | − | 20.0 | 16.3 | 7.7 | 9.1 | | (base) | silver | 25.8 | 21.0 | 4.8 | 12.9 | | Human | − | 75.2 | 73.0 | 72.5 | 66.4 | criteria set to a patience of 2 epochs (F1 on dev split). For intermediate training with silver data, we use the same hyperparameters except we train for 8 epochs with the same early stop criteria. The hyperparameters values used are pretty standard and were not tuned explicitly for MEETINGQA. Model Sizes. RoBERTa base and large models comprise of 125M and 355M parameters respectively, while DeBERTa base and large models comprise of 86M and 304M parameters. On the other hand, Longformer-base comprises of 149M parameters. Since BigBird is initialized with the RoBERTa checkpoints they share the same model size. Finally, instruction-tuned models FLAN-T5 LARGE, XL and XXL consist of 770M, 3B, and 11B model parameters respectively. Pretrained Checkpoints. For models without intermediate-training we use the standard checkpoints for all models available on HuggingFace. For the score-based context retrieval in Section 3.1, we use HuggingFace's evaluate library for computing ROUGE1 and the multi-qa-MiniLM-L6-cos-v1 model from the sentence-transformers python package for embedding cosine similarity. During silver data annotation, we used the en_core_web_sm from spacy package for NER. For intermediate training (SS) we used the following pretrained checkpoints (base size): - RoBERTa: deepset/roberta-base-squad2 - DeBERTa: deepset/deberta-v3-base-squad2 | Model | Int. Train Data | Overall | No Answer | Answerable | | | |--------------------------|--------------------|--------------------|--------------------|--------------------------------------------------------|-------------------------------------|-------------------| | All | Multi-Span | Multi-Speaker | | | | | | F1 / EM / IoU | F1† | F1 / EM / IoU | F1 / EM / IoU | F1 / EM / IoU | | | | RoBERTa-base (SS) | SQuADv2 | 27.9 / 25.9 / 26.0 | 80.2 | 4.6 / 1.6 / 1.8 | 2.9 / 0 / 1.2 | 3.6 / 0.1 / 1.6 | | + silver | 34.6 / 20.7 / 31.1 | 32.6 | 35.4 /15.4 / 30.4 | 26.3 / 0 / 19.2 | 28.1 / 1.7 / 21.0 | | | DeBERTa-base (SS) | SQuADv2 | 19.8 / 16.2 / 17.5 | 50.3 | 6.2 / 1.0 / 2.9 | 5.6 / 0 / 2.6 | 6.0 / 0 / 2.9 | | + silver | 34.2 / 25.4 / 32.1 | 63.1 | 21.3 / 8.6 / 18.3 | 15.5 / 0 / 11.6 | 16.5 / 1.4 / 12.6 | | | Longformer-base (SS) | SQuADv2 | 15.1 / 0 / 9.4 | 0.1 | 21.8 / 0 / 13.5 | 32.8 / 0 / 21.3 | 28.8 / 0 / 18.3 | | + silver | 32.5 / 20.5 / 29.6 | 39.8 | 29.2 / 11.9 / 25.0 | 23.3 / 0 /17.4 | 23.3 / 1.8 / 17.5 | | | BigBird-base (SS) | SQuADv2 | 7.6 / 0.8 / 3.5 | 0.1 | 10.9 / 1.2/ 5.0 | 9.6 / 0 / 5.0 | 10.7 / 0 / 5.5 | | + silver | 33.7 / 23.8 / 31.2 | 53.3 | 25.0 / 10.7 / 21.4 | 18.4 / 0 / 13.5 | 20.0 / 1.9 / 15.1 | | | RoBERTa-base (MS) | silver | 34.9 / 19.8 / 30.9 | 24.0 | 39.8 / 17.9 / 34.0 29.2 / 0.3 / 20.9 29.4 / 0.5 / 21.2 | | | | DeBERTa-base (MS) | silver | 31.6 / 17.0 / 27.5 | 15.8 | 38.6 / 17.5 / 32.7 27.2 / 0.2 / 19.4 | 27.4 / 0 / 19.2 | | | Longformer-base (MS) | silver | 35.1 / 21.3 / 31.3 | 32.6 | 36.2 / 16.3 / 30.9 | 25.7 / 0 / 18.5 | 27.3 / 0.4 / 19.9 | | BigBird-base (MS) | silver | 35.3 / 20.9 / 31.7 | 35.5 | 35.2 / 14.4 / 30.1 26.4 / 0.2 / 19.2 28.1 / 2.0 / 20.8 | | | | FLAN-T5 LARGE | − | 26.0 / 12.4 / 20.6 | 17.4 | 29.8 / 10.2 / 22.0 23.9 / 0.2 / 15.7 25.6 / 1.6 / 17.1 | | | | FLAN-T5 LARGE (self ans) | − | 26.3 / 13.0 / 21 | 20.0 | 29.1 9.9 / 21.4 / | 23.5 / 0.2 / 15.4 24.9 / 1.6 / 16.6 | | | FLAN-T5 LARGE (ext ans) | − | 22.8 / 20.9 / 21.7 | 62.0 | 5.7 / 2.9 / 4.1 | 2.5 / 0 / 1.7 | 3.5 / 0.1/ 2.3 | | FLAN-T5 XL | − | 33.8 / 17 / 26.1 | 15.6 | 41.9 / 17.6 / 30.8 20.8 / 0.2 / 13.3 23.7 / 2.3 / 16.3 | | | | FLAN-T5 XL (self ans) | − | 34.0 / 22.2 / 28.6 | 45.3 | 28.9 / 11.9 / 21.1 20.8 / 0.2 / 13.3 23.7 / 2.3 / 16.3 | | | | FLAN-T5 XL (ext ans) | − | 25.6 / 22.8 / 23.8 | 62.0 | 9.4 / 5.3 / 6.8 | 4.4 / 0 / 2.8 | 5.0 / 0.6 / 3.5 | | FLAN-T5 XXL | − | 31.0 / 15.1 / 24.2 | 26.2 | 33.1 / 10.2 / 23.3 32.7 / 0.3 / 22.6 31.4 / 1.0 / 21.4 | | | | FLAN-T5 XXL (self ans) | − | 31.6 / 19.3 / 26.2 | 44.6 | 25.7 / 7.9 / 18.0 | 25.1 / 0.3 / 17.3 24.4 / 0.7 / 16.5 | | | FLAN-T5 XXL (ext ans) | − | 24.3 / 22.2 / 22.8 | 62.0 | 6.0 / 2.9 / 3.8 | 3.5 / 0 / 2.3 | 3.7 / 0 / 2.4 | Table 8: Comparing performance of zero-shot models for different answer-types. Single-span and multi-span models trained on intermediate training data are denoted by SS and MS respectively. Identifying answerable questions using FLAN-T5 is denoted by 'self ans' whereas 'ext ans' denotes use of external supervised model. †F1, EM and IoU are the same for unanswerable questions as the reference is an empty string. - Longformer: mrm8488/longformer-base-4096finetuned-squadv2 - BigBird: google/bigbird-base-trivia-itc Licensing. We used the AMI dataset that has CC-BY-4.0 license. Our released data will have the CC-BY-NC license. We do not violate the constraints put in the MEDIASUM dataset to use interview files for reasearch purposes only. Instructions/Prompts. For instruction-tuned FLAN models we use the following prompt template to generate sentences from the context that answer the question. [CONTEXT] Based on the conversation above, which sentences from the converstation answer [SPEAKER]'s question: [QUESTION] Here, [.] is a placeholder filled in separately for each instance/question. Additionally, for the 'self ask' setting, we first use a prompt (shown below) to get the model to output if the question is answerable. If the model outputs "no", filter out those questions and use an empty string as the predictions. We find answers to the remaining questions using the prompt above. [CONTEXT] Based on the conversation above, did anyone answer [SPEAKER]'s question: [QUESTION] Respond "yes" if answered, "no" otherwise. Binary Answerable Classification Model. As mentioned in Section 4.1, we train a separate supervised RoBERTa-base model to detect if a question is answerable. This is formulated as a binary classification task, therefore we train the sequence classification head on questions from MEETINGQA. We use the same hyperparameters as for single-span RoBERTa models mentioned above. The final performance of this model, is not as strong with an overall F1 = 49.2. This indicates that even a simple binary task formulation from MEETINGQA is challenging and requires thorough understanding of meeting discussions. ## C Additional Results Building on Tables 3 and 4, which contain F1 and IoU scores, we present the exact match (EM) scores | Model | Int. Train Data | Overall | No Answer | Answerable | | | |------------------------------|--------------------|--------------------|---------------------------------------------------------|--------------------------------------|--------------------|--------------------| | All | Multi-Span | Multi-Speaker | | | | | | F1 / EM / IoU | F1† | F1 / EM / IoU | F1 / EM / IoU | F1 / EM / IoU | | | | RoBERTa-large (SS) | − | 54.7 / 34.5 / 50.7 | 59.8 | 52.4 / 23.2 / 46.6 | 45.6 / 0 / 36.5 | 50.8 / 11.1 / 43.1 | | SQuADv2 | 55.6 / 32.3 / 51.0 | 59.4 | 53.8 / 20.2 / 47.3 | 52.8 / 0 / 43.3 | 54.9 / 10.0 / 46.2 | | | + silver | 55.3 / 31.7 / 50.7 | 63.4 | 51.7 / 17.5 / 45.1 | 53.4 / 0 / 43.9 | 54.4 / 7.7 / 45.5 | | | Finetuned DeBERTa-large (SS) | − | 56.1 / 30.9 / 51.0 | 43.1 | 61.9 / 25.5 / 54.5 | 55.2 / 0 / 44.3 | 59.6 / 12.4 / 50.1 | | SQuADv2 | 57.2 / 32.0 / 52.4 | 52.0 | 59.5 / 23.0 / 52.6 | 55.6 / 0 / 45.8 | 59.4 / 11.8 / 50.9 | | | + silver | 55.7 / 31.5 / 51.0 | 52.1 | 57.3 / 22.4 / 50.5 | 55.2 / 0 / 44.9 | 58.5 / 10.4 / 49.3 | | | RoBERTa-large (MS) | − | 48.6 / 18.3 / 38.7 | 47.2 | 49.2 / 5.4 / 34.9 | 50.2 / 1.1 / 34.9 | 50.1 / 1.2 / 34.5 | | silver | 49.4 / 19.3 / 40.1 | 48.9 | 50.6 / 6.8 / 35.7 | 52.0 / 2.1 / 36.2 | 51.7 / 2.3 / 35.8 | | | DeBERTa-large (MS) | − | 56.8 / 29.4 / 50.2 | 47.1 | 61.2 / 21.5 / 53.0 58.8 / 6.0 / 47.9 | 62.2 / 9.9 / 52.0 | | | silver | 57.5 / 31.2 / 51.4 | 48.4 | 62.3 / 22.9 / 54.2 59.6 / 7.1 / 48.3 63.5 / 10.7 / 52.9 | | | | | SQuADv2 | 31.7 / 29.7 / 29.9 | 92.4 | 4.6 / 1.8 / 2.0 | 2.6 / 0 / 1.1 | 3.4 / 0 / 1.5 | | | RoBERTa-large (SS) | + silver | 33.4 / 20.6 / 30.0 | 38.0 | 31.3 / 12.8 / 26.4 | 24.2 / 0 / 17.3 | 24.7 / 0.1 / 17.9 | | Zero-shot | SQuADv2 | 27.3 / 23.5 / 24.8 | 72.4 | 7.1 / 1.7 / 3.5 | 6.0 / 0 / 3.0 | 6.0 / 0.1 / 3.0 | | DeBERTa-large (SS) | + silver | 35.8 / 24.4 / 32.9 | 52.2 | 28.5 / 12.0 / 24.3 | 21.0 / 0 / 15.2 | 21.7 / 1.0 / 16.3 | | RoBERTa-large (MS) | silver | 35.2 / 20.3 / 31.7 | 25.1 | 39.7 / 17.2 / 33.6 | 28.5 / 0 / 20.2 | 29.2 / 0 / 20.7 | | DeBERTa-large (MS) | silver | 32.1 / 17.4 / 28.2 | 16.6 | 38.3 / 16.9 / 32.2 | 27.1 / 0 / 19.3 | 27.2 / 0 / 18.7 | ![18_image_0.png](18_image_0.png) in Tables 6 and 7 for finetuned single-span and multi-span respectively. While the relative trends across the models remains the same, we find that the EM scores are the lowest because it reflects predictions that perfectly match the reference. Another noteworthy observation is that the EM scores of all single-span models on the multi-span split is 0. This can be explained by the training procedure of single-span models (described in Section 3.2). The models are trained to predict a single "super-span" starting from the first sentence in the reference to the last sentence in the reference. Therefore, even in the theoretical best-case-scenario, the models would predict a single super-span containing all the reference sentences interleaved by irrelevant sentences for questions with multi-span answers. We analyze errors due to this in Appendix D. In Table 5, we only present the overall scores for various models. All the scores on different splits are given in Table 8. We observe that for all single-span models (except RoBERTa on unanswerable questions) adding silver data in intermediate training helps improve performance across all splits. Furthermore, the multi-speaker and multispan splits consistently pose a challenge for all models evaluated in a zero-shot setting. Also, within answer-types performance drops for unanswerable questions while improving for multi-span and multi-speaker answers. The challenge posed by unanswerable questions can be explained by the multi-span adaptation (Section 3.2). By posing question answering as a token-classification task, even one false positive (I tag instead of all Os) in token label, changes the answer prediction from unanswerable to answerable. Finally, we note that when using a pipeline-approach of isolation unanswerable questions separately, we find the errors in this step cascade and are reflected in the per- ![19_image_0.png](19_image_0.png) ![19_image_1.png](19_image_1.png) formance on answerable questions of the overall system. These systems perform better on unanswerable questions (not identified by regular instruction and filtering), however the false-positives decrease performance on answerable questions, even more so when using an external supervised model. In Table 8, we see clear scaling of performance as we move from FLAN-T5 large (770M) to FLANT5 XL (3B). However, for FLAN-T5 XXL (11B) the performance of unanswerable, multi-span and multi-speaker questions increases (≥ 8 F1 points) but performance on other answerable questions decreases (≈ 9 F1 points) which in turn reduces the overall performance as compared to FLAN-T5 xl. Table 9 evaluates the performance of RoBERTalarge and DeBERTa-large architectures for singlespan and multi-span models in both finetuned and zero-shot settings. The corresponding performance of the base models can be found in Tables 3, 4, and 5 respectively. We do not observe any significant increase in performance when using the larger checkpoints, thus leaving ample room for future work to bridge the gap between model and human performance on MEETINGQA. ## D Error Analysis In this section, we analyze error patterns across models discussed in Sections 3.2 and 4 in detail. First, we note that for the unanswerable questions split, any error corresponds to the model predicting a non-empty answer span. The frequency of this for a given model can be calculated by 100 − F1 score for this split (provided in Tables 3, 4, and 8). However, for answerable questions, errors in model predictions are diverse as categorized below. I. Prediction is an empty-span (unanswerable) II. Predicted span contains a sentence not present in the gold or annotated reference span III. At least one of the sentences in the reference span is not present in the predicted span IV. Combination of errors with respect to reference span (both II and III) Therefore, whenever the model prediction does not exactly match the annotated reference span, we can put it in one of the above 4 categories. We perform this analysis for various finetuned single-span, finetuned multi-span models as well as zeroshot single-span, multi-span and instruction tuned models discussed in Sections 3.2 and 4. For brevity, we ![20_image_0.png](20_image_0.png) | Model | Training Data | Multi-Span Split | Multi-Speaker Split | | |----------------------|------------------|--------------------|-----------------------|-------| | % Preds. | % Preds. | Speaker IoU | | | | RoBERTa-base (SS) | MEETINGQA | 0.0 | 65.04 | 58.71 | | DeBERTa-base (SS) | MEETINGQA | 0.0 | 43.44 | 48.88 | | Longformer-base (SS) | MEETINGQA | 0.0 | 52.29 | 52.04 | | Bigbird-base (SS) | MEETINGQA | 0.0 | 63.86 | 57.15 | | RoBERTa-base (MS) | MEETINGQA | 52.58 | 48.66 | 54.72 | | DeBERTa-base (MS) | MEETINGQA | 70.86 | 54.48 | 63.76 | | Longformer-base (MS) | MEETINGQA | 53.98 | 50.54 | 57.27 | | Bigbird-base (MS) | MEETINGQA | 71.12 | 42.72 | 55.70 | | RoBERTa-base (SS) | SQuADv2 + silver | 0.0 | 8.72 | 31.71 | | Longformer-base (SS) | SQuADv2 + silver | 0.0 | 12.36 | 27.19 | | RoBERTa-base (MS) | silver | 5.30 | 3.47 | 34.35 | | Longformer-base (MS) | silver | 1.51 | 1.2 | 30.30 | | FLAN-T5 | − | 11.01 | 10.71 | 31.11 | | FLAN-T5 (self ans) | − | 9.50 | 8.69 | 22.75 | | FLAN-T5 (ext ans) | − | 1.51 | 2.02 | 4.85 | Finetuned ![20_image_1.png](20_image_1.png) pick representative models from different possible combinations of intermediate training data. This is illustrated in Figures 12-15 with error I shown in red, error II shown in yellow, error III shown in blue, and error IV shown in green. Table 3 shows very similar performance different intermediate training data configurations for a given model architecture. Thus, we present error distribution for single-span models directly finetuned on MEETINGQA in Figure 12. We find that most of the errors belong to categories II-IV. The DeBERTa model has a relatively high unanswerable prediction error which is primarily because its predictions skew towards unanswerable as explained by the F1 score on No Answer (unanswerable) split in Table 3. Next, in Figure 13 we show the error distributions on the corresponding multispan models finetuned directly on MEETINGQA. We observe that, for all model (except RoBERTa) the frequency of incorrectly predicting unanswerable goes down as well as the prediction span containing sentences outside the reference (error II). However, the frequency of hybrid error IV increases significantly. This can partly be explained by the design of single-span and multi-span models. As mentioned in Section 3.2, training data of the single span model involves creating a single "super-span" starting from the first sentence in the reference to the last sentence in the reference. This by construction involves error II and supervision on this data directs the model to include irrelevant sentences in the answer span if it is sandwiched between two relevant sentences. Also, for multi-span models an unanswerable prediction implies all tokens are labeled with the O tag, and even one false positive (I tag) would make the prediction answerable. Due to this, one can expect these models to mispredict empty spans less frequently. Interestingly, when we look at zero-shot performance of single-span and multi-span models in Figure 14, we find relatively high frequency of error I and very low frequency of error II. The errors III Zero-shot and hybrid IV also become more common. These models which are trained only on intermediate data, do not generalize in their ability to predict when a question is unanswerable. Further, we find that in the zero-shot evaluation setting, model predictions are shorter than the reference span by at least one sentence on average (≈ 2 sentences for multispan split). This indicates in zero-shot evaluation models are more likely to predict a part of the answer than output spans that covers all sentences in the reference, containing extra sentences that lie outside the reference. Finally, we look at the errors of instruction-tuned FLAN-T5 models in Figure 15. When using FLANT5 with appropriate instruction and filtering we find that most of the errors are hybrid, i.e. predicted sentences do not cover the reference span entirely and also contains irrelevant sentences. When we add the self-ans pipeline on top of it with additional instructions to spot unanswerable questions, the predictions contain more empty spans (relatively) which is reflected in the increase in frequency of error I and the No Answer F1 (in Table 8). Surprisingly, when we use an external supervised model to predict unanswerable questions, it contributes to the vast majority of errors in the pipeline (error I). This is consistent with the fact that the test F1 score of this model on the task of classifying questions as answerable or not was only 49.2. So far, we have analyzed errors in predictions for all the answerable questions. Next, we focus our attention on questions with multi-span and multispeaker answers. Within the multi-span split, we calculate the fraction of incorrect predictions (as per exact match) that are multi-span, denoted by multi-span preds (%). Similarly, for multi-speaker split, we calculate the fraction of incorrect predictions (as per exact match) that are multi-speaker in nature, denoted by *multi-speaker preds (%)*. Further, we compare the list of speakers in the reference and predicted spans using Jaccard similarity (IoU) denoted as *speaker IoU*. We compute and report these metrics for all the aforementioned models in Table 10. As expected, due to the single-span training, none of the predictions of the single-span models are multi-span in nature. On the other hand, even incorrect predictions of the finetuned multispan models are multi-span in nature at least half of the times. However, a significant fraction, between 29-46%, of the errors in this split can be attributed | Method | F1 | EM | IoU | |--------------------|------|------|-------| | list instruction | 33.4 | 13.0 | 25.3 | | + filtering | 35.1 | 17.6 | 27.6 | | direct instruction | 14.0 | 5.4 | 7.9 | | + filtering | 28.0 | 20.6 | 22.4 | to single-span predictions for various models. For zero-shot models, over 90% of incorrect predictions are single span (also vast majority of all predictions are single span). On the multi-speaker split, the incorrect predictions of finetuned models are multi-speaker in nature. However, the speaker IoU (< 65) indicates that predicted spans often miss utterances from relevant speakers in the reference and also include irrelevant utterances from other speakers. Zero-shot models on the other hand, only tend to give single-speaker responses which is the primary source for errors. Note that, relatively high frequency of error I or prediction unanswerable spans also contributes in driving down the values of these metrics (empty spans have no span or speaker information). ## E Instruction-Tuned Model Ablations In Section 3.2, we describe adaptation of generative FLAN-T5 model to our extractive setting by (i) designing instructions that ask models to *list* which sentences from the context contain the answer (mentioned in Section B); and (ii) filtering out all sentences from the model response that are not present in the context to remove any possible hallucinations. To show the importance of both these steps, we first compare with an instruction eliciting a direct response (answer) from the model, mentioned below. ## [Context] Based on the conversation above, state the answer(s) given to [SPEAKER]'s question: [QUESTION] We call this *direct instruction* as opposed to the *list instruction* mentioned used in Sections 3.2 and 4.1. Further, we examine the importance of filtering by comparing raw model responses to their filtered counterparts. The comparison on a random subset of 500 question from the dev splits is shown in Table 11. We find that our chosen type of instruction (list) significantly outperforms the direct ID Transcript Segment | CNN-130961 | ... SPEAKER 3: He wants to spread it across the country. SPEAKER 2: Let me go back to the Confederate flag issue that you raised a few moments ago, Faye. I'm getting a couple of e-mails on that where "Republican policies have done little to support AfricanAmericans. Isn't Bush the same man who was indifferent about the Confederate flag? We do not forget so soon". And also Peter in New York says: "Bush may be trying to appeal to black voters, but if he goes down in the polls, don't be surprised if he wraps himself in that Confederate flag". This is an issue that will not go away with these candidates and with these races. So I mean, how much of an influence is this going to be? SPEAKER 0: Well, it's going to be a big influence. It's an issue that ought not to go away. And actually we don't have to look to the future. We already know what happened when Bush went down in the polls, when it was following New Hampshire, when he was nervous about the South Carolina primary when he wrapped himself in the Confederate flag. So there's already a bit of a history there. And even though in the scheme of things the Confederate flag is a very symbolic issue, but symbols do matter. Try telling Jewish Americans if they should move on, that they should forget about a swastika that a candidate who would remain silent on a swastika is deserving of their support. SPEAKER 3: Well, look, Bobbie, I think what I want to establish right here is that George Bush has made it very clear that he thinks that there is only one flag that matters, that there is only one flag that represents freedom and that there's only one flag that one should die for in this country, and that is Old Glory. And I think that he has said that he has a personal point of view on the flag, and he thinks that it is a matter that should be resolved at the state level. The state has taken down the flag due to voices... SPEAKER 2: But how does that illustrate that we're one country under one flag? ... | |--------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | CNN-116 | ... SPEAKER 7: That poll that just flashed up on the screen suggests that she did do well because that poll seems to have been taken around the time of the Letterman interview, and it shows almost a dead heat statistically, if you take the margin of error into account. So if that poll is accurate, then she's moved up from where she was according to other polls. SPEAKER 9: Yes, she has moved up. SPEAKER 1: But she is down from a year ago. So I'm curious as to why you think that she may have lost her edge a little bit from a year ago or people don't seem to be quite as enthusiastic about her? SPEAKER 7: People always like politicians better when they're not running for office. As soon as you decide that you are actually going to run, there are a certain number of people who decide they don't like you anymore. They liked you when you were a proposed candidate, and they don't like you are your ambition comes to show. SPEAKER 9: And Bobbie, she benefited I think from the entire impeachment situation. She was, you know, standing in support of Bill Clinton, and I think her poll numbers went up. But then when it looked like she was going to be just a typical politico, and people started looking at her views on the issues, and her stumbles in Israel and so forth, the numbers just started to go down. SPEAKER 1: Well she had that victim status when she first started thinking about entering this race. SPEAKER 9: Yes, I think ... | Table 12: Representative examples from the silver annotated data. The question and the automatically identified answers are highlighted in red and blue respectively. In the first example, Speaker 0's utterances are not automatically annotated since the entity in the utterance of Speaker 2 ("Faye") corresponds to Speaker 3's name. Thus, the algorithm predicts only Speaker 3 would answer. instruction (tends to be more abstractive). Furthermore, filtering consistently improves overall performance especially when using direct instructions possibly due to higher number of hallucinations in the corresponding model answers. ## F Silver Data Augmentation Details Section 3.3 describes the silver data annotation process to annotate publicly available interview transcripts from CNN and NPR (Zhu et al., 2021a) for extractive QA task similar to MEETINGQA. We first identify the subset of speakers that act as the host or interviewer and focus on questions asked by these speakers to generate answer annotations. Based on the utterance containing the question, we first automatically identify speaker(s) answer the question using a rule-based approach. This is done by (i) finding speakers mentioned in the question sentence or utterance using an off-the-shelf NER model (mentioned in Appendix B), (ii) identifying speakers from previous speaker turns if the same speaker takes the turn after the host speaker assuming this is a case of follow-up questions, or (iii) in the absence of first two conditions, all speaker that take turns after the host. Finally, we search utterances corresponding to identified speakers in the transcript until a stopping criterion (max number of utterances, or reach the next host utterance) is met and label it as the answer. From the 463.6K transcripts, only 15% of the files have identify host who steer the interview and have sufficiently high frequency of questions. However, each of these transcripts result in roughly 20 annotated questions on average. For intermediate training, we sample a total of 150K questions from this set and split it randomly into train, dev, test splits in 80 : 10 : 10 ratio. Table 12 shows a few examples of silver answer annotations for questions asked in MEDIASUM interviews. Perturbations. First, we add random sentences between the question and answer utterances to prevent a location bias in which model predicts sentences that immediately follow the question as the answer. Second, we create scenarios where the question is unanswerable by removing annotated answer spans from the context. Third, we replace speaker names in the context with a numeric identifier because information about speaker names are not always available in the transcript including AMI dataset. For multi-span models, we further insert random sentences from elsewhere in the transcript in between annotated answers to facilitate better span selection. Finally, the number of speaker turns in MEDIASUM are 10x smaller than those in AMI dataset (refer to Table 2 of Zhu et al. (2021a)). Therefore, we create more speaker transitions by splitting a long speaker utterance into shorter utterances by multiple speakers. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✓ A2. Did you discuss any potential risks of your work? Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 2, 3 and 4 verify the claims made. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2 B1. Did you cite the creators of artifacts you used? No response. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix B ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 8, Appendix B ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 8, Appendix A ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 2, 8, Appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 2 ## C ✓ **Did You Run Computational Experiments?** Section 3-4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B, Sec 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B, Sec 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sec 4, Appendix B, C ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix B D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 2, Appendix A ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix A and supplementary data folder ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 2, Appendix A ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 2, Appendix A ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Ethical Review was conducted by crowd-sourcing company. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Appendix A
sivakumar-moosavi-2023-fermat
{FERMAT}: An Alternative to Accuracy for Numerical Reasoning
https://aclanthology.org/2023.acl-long.838
While pre-trained language models achieve impressive performance on various NLP benchmarks, they still struggle with tasks that require numerical reasoning. Recent advances in improving numerical reasoning are mostly achieved using very large language models that contain billions of parameters and are not accessible to everyone. In addition, numerical reasoning is measured using a single score on existing datasets. As a result, we do not have a clear understanding of the strengths and shortcomings of existing models on different numerical reasoning aspects and therefore, potential ways to improve them apart from scaling them up. Inspired by CheckList (Ribeiro et al., 2020), we introduce a multi-view evaluation set for numerical reasoning in English, called FERMAT. Instead of reporting a single score on a whole dataset, FERMAT evaluates models on various key numerical reasoning aspects such as number understanding, mathematical operations, and training dependency. Apart from providing a comprehensive evaluation of models on different numerical reasoning aspects, FERMAT enables a systematic and automated generation of an arbitrarily large training or evaluation set for each aspect. The datasets and codes are publicly available to generate further multi-view data for ulterior tasks and languages.
# Fermat: An Alternative To Accuracy For Numerical Reasoning Jasivan Alex Sivakumar and **Nafise Sadat Moosavi** Department of Computer Science University of Sheffield United Kingdom {jasivakumar1|n.s.moosavi}@sheffield.ac.uk ## Abstract While pre-trained language models achieve impressive performance on various NLP benchmarks, they still struggle with tasks that require numerical reasoning. Recent advances in improving numerical reasoning are mostly achieved using very large language models that contain billions of parameters and are not accessible to everyone. In addition, numerical reasoning is measured using a single score on existing datasets. As a result, we do not have a clear understanding of the strengths and shortcomings of existing models on different numerical reasoning aspects and therefore, potential ways to improve them apart from scaling them up. Inspired by CheckList (Ribeiro et al., 2020), we introduce a multi-view evaluation set for numerical reasoning in English, called FERMAT. Instead of reporting a single score on a whole dataset, FERMAT evaluates models on various key numerical reasoning aspects such as number understanding, mathematical operations, and training dependency. Apart from providing a comprehensive evaluation of models on different numerical reasoning aspects, FERMAT enables a systematic and automated generation of an arbitrarily large training or evaluation set for each aspect.The datasets and codes are publicly available to generate further multi-view data for ulterior tasks and languages.1 ## 1 Introduction Numerical reasoning is an aspect that is often forgotten despite being an integral part of natural language. It is the ability to interact with numbers using the fundamental mathematical properties and thus model an area of human cognitive thinking (Saxton et al., 2019). Better understanding of numbers in language models would benefit various tasks like fact-checking (Vlachos and Riedel, 2015), text generation (Moosavi et al., 2021; Suadaa et al., 2021), and educational tools 1https://github.com/jasivan/FERMAT (Mandal et al., 2022). Current models' performance are still too weak with respect to numerical accuracy to then be used in downstream tasks like Infotabs (Gupta et al., 2020) which requires identifying numbers in tables and then performing operations to correctly label statements causing factuality errors in such tasks. Recently, we have observed improved performances on relevant datasets about numerical reasoning using very large language models (Wei et al., 2022b; Lewkowycz et al., 2022; Kojima et al., 2022). However, there are two main limitations to this recent trend. First, as models become larger their access becomes restricted to fewer users, i.e., users with the computational resources of large companies. For example, using one of the best mathematical models, the 540B parameter model Minerva (Lewkowycz et al., 2022), would require over 2212G of memory for inference only. Second, the numerical reasoning capabilities of existing models are measured using a single score, i.e., mostly accuracy on common benchmarks like GSM8K (Cobbe et al., 2021). Therefore, their strengths and shortcomings in different aspects of numerical reasoning compared to other models are not clear. As a result, it is unclear what numerical reasoning aspects should be improved to improve their performance on datasets requiring numerical reasoning. Motivated by CheckList (Ribeiro et al., 2020), which is a behavioral test set concerning various linguistic aspects of the input language, we propose a unique and open Flexible Evaluation set for Representating Multiviews of Arithmetic Types,2 FERMAT, for evaluating the numerical reasoning capabilities of models based on multiple key aspects . It evaluates models according to (a) different ranges and representations of numbers, (b) different mathematical operations, and (c) the dependence of models on the fine-tuning data. In 2We use the terms type, aspect and view interchangeably. 15026 addition, it contains a tool to automatically generate new instances for each of its aspects. FERMAT enables (a) the identification of the strength and shortcomings of models according to its aspects, and (b) the automatic creation of additional training and evaluation instances using expert written templates that reflect FERMAT's categories. FERMAT complements the recently proposed L¯ILA benchmark (Mishra et al., 2022a) for mathematical reasoning. L¯ILA evaluates high-level aspects, e.g. whether performing mathematical reasoning also depends on commonsense knowledge or how the performance changes depending on the difficulty of the input language. However, even the best-performing model on the L¯ILA benchmark, i.e., a 2.7B parameter model that is fine-tuned on mathematical datasets, only achieves an accuracy of around 20-30 points when the input is formulated using a simple language and the test data is from a different distribution than that of the training, and it is not clear how to further improve this performance. FERMAT, on the other hand, takes a deeper look at more fine-grained aspects by diving into the core mathematical abilities of the models and reporting which specific operations a model can or cannot perform and on which numbers. It also provides templates for creating more instances for each aspect, e.g., to generate additional data to further train or evaluate models on certain aspects. FERMAT formulates the evaluation of numerical reasoning using the question answering format, which is commonly used in NLP for evaluating various skills (Tafjord et al., 2019; Dasigi et al., 2019; Jin et al., 2019). We use FERMAT to highlight that single accuracy scores fail to give a holistic understanding of a model, that template diversity has a high impact in improving performance, and that number encodings play an important part in numerical reasoning. The FERMAT framework could subsequently be adapted for different tasks according to the target application,3to give a more targeted approach to improving models. Moreover, while the expertwritten templates in FERMAT are written in English, they can easily be translated to be adapted to other languages. ## 2 Related Work 2.1 Datasets Mathematical datasets focus on exploring different levels of difficulties and areas of maths. Some look at general symbolic maths, where the questions at least involve algebraic notations. A certain group of datasets explores numerical reasoning in context, but the answers may not exclusively be numerical. Unlike FERMAT, all these datasets evaluate models' performances on the whole dataset based on a single score. Moreover, as a result of the availability of many datasets, new benchmarks have also been created based on regrouping the existing datasets according to specific criteria. Such benchmarks are created based on high-level aspects, e.g., how the performance changes when solving maths also depends on commonsense reasoning, when the maths is presented using equations, a simple language, or a complex language, or when the input is presented using a different task format. However, the performance of existing general-purpose models is very low, even on the simplest aspects, e.g., when the maths is presented using a simple language without requiring external knowledge. FERMAT, on the other hand, focuses on a fine-grained analysis of numerical reasoning by aiming to decipher models' ability to understand numbers, operations, and their reliance on the training data. ## 2.1.1 General Maths Dolphin18K (Huang et al., 2016), DeepMind Mathematics (Saxton et al., 2019) and AQUA (Ling et al., 2017) are datasets that have a focus on solving algebraic problems and therefore use algebraic notation. These datasets are too complex for existing general purpose language models, mainly because they expect multi-hop reasoning.4 For instance, Wei et al. (2022b) only report an accuracy around 25% for AQUA with a large, 62B parameter, model. ## 2.1.2 Numerical Context Instead of the algebraic notation, some datasets are worded problems but are formulated as multiple choice questions, e.g. McTaco (Zhou et al., 2019) and AQUA. This multiple choice format simplifies the task into a classification which prevents working with the continuous essence of numbers. Even if these are formatted into generative output tasks they then sometimes expect textual outputs like 4E.g. [(6 × 8) − (3 × 6)] ÷ (6 + 4) (Ling et al., 2017). 15027 DROP (Dua et al., 2019). DROP has textual answers that can be extracted from the context which, similarly to the multiple choice questions, are disjoint from the numerical reasoning skill. ## 2.1.3 Numerical Solutions The only datasets with textual input that solely expect numerical answers are GSM8K (Cobbe et al., 2021), MAWPS (Koncel-Kedziorski et al., 2016), CommonCore (Roy and Roth, 2015) and Illinois (Roy and Roth, 2016). GSM8K provides textual explanation for the solutions which has been effectively used by Wei et al. (2022b). However, similar to AQUA, GSM8K is very difficult for general purpose language models with reported results below 5% accuracy using an 8B parameter model (Wei et al., 2022b). Likewise, MAWPS requires some use of algebra to solve the problems. However, CommonCore and Illinois, which are subsets of MAWPS, are constituted of simpler one or twohop problems.5 Since FERMAT is designed to gain better insight by focusing on more accessible problems, CommonCore and Illinois are the ideal datasets. ## 2.1.4 View-Based Evaluation Sets Ribeiro et al. (2020) explain the motivation to move away from raw accuracy but towards more informative evaluation sets which give better insight into a given model. They look at different aspects of a test set; the skills needed to correctly solve the problem, in their case, linguistic phenomena like negation in sentiment analysis. NumGLUE (Mishra et al., 2022b), on the other hand, is a multi-task benchmark that involves numerical reasoning. It combines different tasks like commonsense, domain specific language, quantitative expressions, with arithmetic understanding to create a more challenging benchmark. It also uses different question format such as fill-in-the-blanks, textual entailment, multiple choice questions, span extraction and numerical outputs. A more mathematically expansive set is the recently introduced L¯ILA dataset (Mishra et al., 2022a) where they regroup 20 existing datasets into 23 reasoning tasks including some of NumGLUE. These tasks are split into maths domains (e.g. geometry or arithmetics), language complexity (e.g. only maths, simple language, or long passages involving co-reference), question format (e.g. gener5An n-hop problem is one with the combination of, at most, n of the basic operations. ative answer or fill in the blank), and background knowledge required (e.g. knowledge of formulae or commonsense). However, as mentioned, existing models struggle even with simple aspects that do not require background knowledge or do not contain complex language or maths. FERMAT complements L¯ILA by looking in-depth at more fine-grained numerical reasoning aspects . It also contains expert-written templates associated with each aspect that can be used to generate an arbitrary number of new instances to address the identified shortcomings or generate more evaluation instances. We design FERMAT for arithmetic problems presented using simple language. However, our methodology can be tailored to refine the analysis of L¯ILA's other aspects. ## 2.2 Improving Numerical Reasoning The literature has two main ways of improving numerical reasoning: (a) by designing task-specific models capable of numerical reasoning (Kumar et al., 2021, 2022; Liang et al., 2022; Dua et al., 2019; Andor et al., 2019; Yang et al., 2021), and (b) by scaling up (Brown et al., 2020; Chowdhery et al., 2022; Chen et al., 2021). Both methods also attempt to further pre-train existing models on maths related data (Geva et al., 2020; Cobbe et al., 2021; Wei et al., 2022b; Lewkowycz et al., 2022; Zhou et al., 2022). Other existing ways include using better number encoding (Muffo et al., 2022) or objective functions (Petrak et al., 2022). ## 2.2.1 Task-Specific Models: Maths Solvers Some models have been specifically created to solve maths problems by outputting expressions (Kumar et al., 2021, 2022; Patel et al., 2021) or pseudo-programs (Liang et al., 2022; Dua et al., 2019) which are then evaluated using an external module. Notwithstanding the performance of these models, they can only be used to solve maths problems that, moreover, need to be represented in a closed arithmetic form. This restricts the versatility of these models both in terms of the maths and tasks that they can solve. Unlike the other maths solvers, GenBERT (Geva et al., 2020) and NT5 (Yang et al., 2021) generate the final output as text, making them more generalpurpose. Both are pre-trained on numerical and textual tasks to solve mathematical problems. Both of these models are evaluated on DROP (Dua et al., 2019) which only provides an accuracy score, so their general numerical skill performance is not ## 2.2.2 Improving Maths By Scaling More general-purpose models that perform well with respect to mathematical reasoning are GPT3 (175B) (Brown et al., 2020), PaLM (540B) (Chowdhery et al., 2022) and Codex (175B) (Chen et al., 2021) where their parameter size is given in brackets. GPT3 was fine-tuned by Cobbe et al. (2021) on GSM8K to achieve state of the art results. Similar works using PaLM and Codex investigate prompting (Wei et al., 2022b; Zhou et al., 2022) and extended training (Lewkowycz et al., 2022). All of these models are general-purpose so are able to do more than solve maths problems but are not well understood. Some ablation studies analyse specific aspects of specific models. For instance, Lewkowycz et al. (2022) conducted a digit study and highlighted that Minerva is unable to perform any multiplication of numbers with more than seven digits. However, their sizes make it impossible for many research and industry communities to utilise them, even just at inference time. We do not have the computation resources or access for running these large models. However, FERMAT, which is publicly available and easily accessible, can be used to perform a more comprehensive analysis of these models to further identify their strengths and shortcomings. ## 3 Multi-View Evaluation Set: Fermat FERMAT gives a holistic view of a model by evaluating fine-detailed aspects of numerical reasoning. It is akin to Ribeiro et al. (2020)'s CheckList, which focuses on linguistic variations for defining its aspects. FERMAT is used to interpret models by evaluating them on three orthogonal views including (a) Number Understanding, (b) Mathematical Operations, and (c) Training Dependency. It also provides an automated method of generating new training or evaluation examples for a given number type or operation. We collect the initial instances for creating the FERMAT evaluation set using the established Illinois (Roy and Roth, 2016) and CommonCore (Roy and Roth, 2015) datasets. After removing duplicates, we collect 1111 unique instances from these two datasets which we name the *Original* set.7 We choose instances from CommonCore and Illinois because they perfectly fit with FERMAT's design by providing one or two-hop questions. Moreover, their extensive annotation is supplemented with an alignment between the numbers in the question and the corresponding expression that the solution is calculated from. We leverage these annotations in FERMAT to create different variations of the same problem for different aspects. ## 3.1 Number Understanding Each instance of the *Original* set is used to generate 18 different numerical types where the numbers ![3_image_0.png](3_image_0.png) change but the language is fixed. These are categorised as (a) Alternative Representations, and (b) Range of Numbers. Examples of each is given in Table 1. ## 3.1.1 Alternative Representations Alternative Representations transforms the numbers into 11 different forms. The first four categories (rows 1 to 4) have the same number as the Original set but represented differently whereas the next five categories (rows 5 to 9) use the same digits in the same order but by varying the magnitude of the number. The last two (rows 10 and 11) form the digit grouping subcategory where comma and space separators are used between groups of three digits.8 This would give insight into the breadth of representations a model can accommodate, independent of the specific digit used, for instance, elucidate whether a model would be able to equally answer "12×34", "34×12" and "1.2×3.4". Note that the commutative category (row 4) refers only to operations that are invariant to operand permutation and thus only has 611 associated questions instead of 1111. ## 3.1.2 Range Of Numbers The *Original* set has a highly skewed distribution towards smaller integers with 94.89% of numbers being 1 or 2 digit integers. Therefore, a random number generator is used to create 7 sub-categories of a "Range of Numbers" split into integers (rows 12 to 16) with large integers (greater than 1000), small integers (less than 1000) and 2, 3 and 4 digit integers, and decimals (rows 17 and 18) with 1 or 2 decimal place numbers. ## 3.2 Mathematical Operations The operations sought by the model plays a vital role in numerical reasoning. A one-hop problem which requires a single operation, to a human, would seem much easier than a two-hop problem where an intermediate calculation would need to be computed first. With regards to this, we consider 9 operation sets generated using basic operations (addition, subtraction, multiplication and division). Their distribution is given in Appendix A. ## 3.3 Training Dependency Classification The frequency of the occurrence of a number in pre-training data has a great impact on the performance of the model on those numbers (Razeghi et al., 2022). Motivated by this, FERMAT also includes a view for training dependency, but at the fine-tuning or prompting-level only. Despite the test being unseen, a model could be learning the training data and focalise on seen numbers or seen operations. Therefore, we include a Training Dependency Classification aspect to FERMAT using the following classes based on what was seen during training:9 (a) *Exact*: all the numbers and operations are seen with the same operations modulo commutativity, e.g. "(3 + 2) × 5", (b) *All Numbers*: all the numbers are seen but with different operations, e.g. "(5 − 2) ÷ 3", 9All the examples are associated to the test expression, "5 × (2 + 3)". (c) *Number & Operation*: at least one number and operation are seen, e.g. "(5 + 3) ÷ 4", the "5" and the addition are at least seen, (d) *One Number*: at least one number is seen with none of the operations, e.g. "9−5", the "5" is seen but nor with the "9", nor with subtraction, (e) *One Operation*: at least one operation is seen without any numbers, e.g. "4+7", the addition is seen but not with these numbers. It is important to note that all operations from the test set are seen in the training set, therefore according to our classification criteria, the least common class is always *One Operation*. Future work may have more complicated mathematical operations in the test set that are never seen at training time such as powers or trigonometric functions, but we believe these to be too difficult for the models to learn without prior exposure. ## 3.4 Generating Training Data In addition to the evaluation set, FERMAT also provides a solution for generating an arbitrary length dataset that targets specific number or operation types.10 This dataset is generated based on templates that come from three separate sources that are completely independent to the FERMAT evaluation set. The first set comprises of 100 questions written by two professional secondary school mathematics teachers and reviewed by a third one. The distribution of the templates generated reflect a uniform distribution over the operations. The second and third sources are GSM8K and AQUA where 155 and 71 templates were selected respectively. Only the questions that used at most two basic operations were extracted and the numbers were replaced by place holders to transform them into templates. These templates are only used in Section 5.4 to enhance the linguistic and mathematical variety of the templates. The distribution of operations used in the templates alongside some examples are given in Appendix B. ## 4 Experimental Setup To demonstrate the effectiveness of our evaluation set, FERMAT, we will perform the evaluations in two settings, (a) zero-shot, where we evaluate existing models, and (b) fine-tuned, where we further 10In this work, it is used for training but it could also be used for evaluation. train the models on arithmetic data generated using our training data in Section 3.4. ## 4.1 Zero-Shot Evaluation For zero-shot performance, we evaluate the following models on FERMAT without any training:11 T0 (3B) (Sanh et al., 2022), FLAN-XL (3B) (Wei et al., 2022a), BHASKARA (2.7B) ( ¯ Mishra et al., 2022a), FLAN-large (770M), FLAN-base (220M), T5-base (220M) (Raffel et al., 2020), BARTbase (140M) (Lewis et al., 2020), and NT5 (3M) (Yang et al., 2021), where the size of the models is given in brackets. A zero-shot evaluation is appropriate because these models are intended to be used as off-the-shelf multi-purpose models. T0, FLAN, BHASKARA and NT5 have been ¯ trained using prompts, so we also test them with and without prompts. We select the prompts by consulting the original papers and judge which fit closest with our question answering task (see Appendix C for the exact prompts used). From the models we considered, BHASKARA, FLAN and ¯ NT5 are the ones that have also been trained for maths related datasets. BHASKARA is trained on ¯ L¯ILA and reaches near state of the art performance, thus is a reliable model to compare numerical reasoning capabilities. However, since L¯ILA contains lots of existing data, BHASKARA has seen 46.89% ¯ of the *Original* test set (Mishra et al., 2022a) at training time. It also includes DeepMind Mathematics (Saxton et al., 2019) in its pre-training data. FLAN has also seen DeepMind Mathematics in training. NT5 is pre-trained on synthetic numerical tasks involving non-worded problems with integers up to 20000, decimals, negatives and percentages and textual tasks as described by Geva et al. (2020), and then fine-tuned on DROP. ## 4.2 Fine-Tuned Evaluation For this setting, we create a training data called Base (see Section 4.2.1) on which we fine-tune the following models: FLAN-large, FLAN-base, T5-base , BART-base and NT5 accessed from Huggingface (Wolf et al., 2020). We also use a digit tokeniser as implemented by Petrak et al. (2022) which gives more promising results in finetuning experiments compared to using the default tokeniser for numbers.12 Due to limitations in computational resources, we are unable to use the 3B parameter models for fine-tuning. Moreover, despite BHASKARA being advertised as a good start- ¯ ing point for maths related data, it is still too big for us to train.13 ## 4.2.1 Training Data The templates described in Section 3.4 were used to generate the *Base* training set of 200K questions with a uniform distribution over four common number types, i.e. integers and decimals with 1 or 2 decimal places all between 0 and 1000, and integers between 1000 and 1000000. This distribution also means that each of these types have 50K questions, so we would suspect that all 1000 integers between 0 to 1000 and most of the 10000 1 decimal place numbers would appear in the training set whereas all 100000 and 999900 respectively from the other two categories cannot be seen. Furthermore, all of the expert templates were used therefore the operation distribution is the same as the one for the template set (see Appendix B). The same methodology was used to create a development set of 1K questions. This was used to decide on hyperparameters which are described in Appendix D. ## 5 Results Table 2 illustrates the zero-shot and fine-tuning performance of eight models on FERMAT with green highlighting the stronger performances for a given arithmetic type and red the poorer ones. For models that use prompts (T0, BHASKARA, ¯ FLAN and NT5), for each type, we report their mean accuracy using all the prompts and no-prompt settings. For these models, the standard deviation between the prompted and non-prompted results is below 1.5%, therefore the reported results are representative (see Appendix E for the full results). ## 5.1 Zero-Shot Evaluation Firstly, from Table 2's sea of red, we can deduce that most of these models, especially T0 and the base models, tend to perform poorly at arithmetic reasoning, irrespective of size. The bestperforming models, BHASKARA and FLAN-XL, ¯ are ones trained on maths data. But their performance is only respectable for a variant of the Orig- ![6_image_0.png](6_image_0.png) inal set where nearly half of the numbers are single digits. Secondly, the accuracy level for *Original* is always part of the highest values, expect for NT5, so it is not a representative test set for numerical reasoning despite being derived from existing benchmarks. This could also be due to the poor diversity of the *Original* set as stressed in Section 3.1.2. Contrastingly, NT5 has its highest accuracy for addition and subtraction meaning that it is generally learning operations over specific number types. Thirdly, even the larger models that are explicitly trained on maths datasets, i.e., BHASKARA ¯ and FLAN-XL, perform poorly on numbers that contain more than one digit indicating a limitation for their use in real-world tasks where the numbers can be of any range. This is in line with previous studies showing the shortcomings of models on longer digits (Lewkowycz et al., 2022; Muffo et al., 2022). ## 5.2 Evaluation After Fine-Tuning As expected, with many greener cells, the finetuned models are better than their zero-shot counterparts and demonstrate more consistent performance across all the types. FERMAT's training and evaluation set templates, while covering similar aspects, are from completely independent sources. However, we observe that fine-tuning smaller commonly used models on this training data outperforms larger models like BHASKARA that are ¯ fine-tuned on various maths datasets, for instance BHASKARA is trained on over 1.32K distinct ¯ questions and programs. This underlines the benefit of creating the training data based on a diverse set of mathematical aspects. The larger FLAN is the only model to consistently improve on the two-hop questions suggesting that more parameters may be required to learn more complex reasoning as observed by Xiong et al. (2021). Similarly, NT5 only makes significant improvement with addition and subtraction, which it was pre-trained on with synthetic questions. Therefore, as a smaller model, NT5 is only able to better generalise mathematical addition and subtraction but struggles to learn new operations during finetuning. However, instead of its size, this could also be due to the complexity of mathematics it has seen at pre-training. In addition, we observe that models' performances on the "Commuted" aspect within the "Same numbers" subset are considerably lower than the other aspects. This indicates a potential for developing better number encodings that learn similar representations for the same number regardless of the position or input representation, e.g., "three" and 3, and 3.0. ## 5.3 Training Dependency Of Performance ![6_Image_1.Png](6_Image_1.Png) It is important to understand why our fine-tuned models are better across multiple types. For this, we class the expression required to answer the test sets using the Training Dependency Classification described in Section 3.3. Figure 1 presents the dependency of the training data for the FLAN-large (left bars) and T5-base (right bars) models. For each bar, the ratio of correct (orange) and incorrect (blue) predicted samples are identified (the full results are given in Appendix F). The bars' monotonic trend suggests that if more of a test expression is seen at training, the model is more likely to answer it correctly. However, even for the exact match category, the performance is only 46%. This is because the language that is used to describe the targeted equation may be different in different instances, e.g. the words "another" and "increases" are only two possible terms suggesting an addition (see Appendix B for their use in context), indicating that the model needs exposure to a variety of different ways maths is expressed and that enriching the training data with higher language diversity can be beneficial. In addition, the accuracy for *Exact* and *All Numbers* classes are similar for both models highlighting that seeing numbers during training, and therefore having a correct encoding for them, plays an important role in solving their corresponding maths operations, e.g. 89 and 30 appear both in the training set, "*Stacey prints 30 letters to post. The printer* was filled with 89 sheets of paper. How many more letters could she print?", and in the 2 digit test set, "*89 beavers were working on their home. 30 went* for a swim. How many beavers are still working on their home?". This could be seconded by FLANlarge having higher accuracy than T5-base for each class as is has seen more maths at pre-training. ## 5.4 Impact Of Training Templates As eluded in Section 5.3, linguistic and mathematical diversity seem to be key to the improvement of numerical reasoning. Therefore, we investigate a model's performance when trained with the different templates, thus diverse language and mathematics. We fix the distribution of the aspects used in all those training instances to equal amounts of "Integers 0 to 1000", "1000+ random", "1dp random" and "2dp random". We use FLAN-base for the experiments of this section as it still has particularly low performances in mainly two-hop aspects according to the results of Table 2, even after finetuning. Moreover, it is a small enough model to train on larger datasets. In this section, we consider the following three training sets to compare the effect of template diversity (see Appendix G for detailed distribution): (1) *Base* is the 200K training data from Section 4.2.1 which only uses the expert templates, (2) Base Scaled Up is *Base* with an addition 100K instances from the same distribution of aspects. To make a fair comparison with the next training set, the language and mathematics is fixed as it only uses the expert templates, (3) *Base Diversified* starts with *Base* and also adds 100K instances from the same distribution of aspects. However, unlike all the other training sets which purely use the expert templates, this augments the initial set using templates recovered from GSM8K and AQUA (see Section 3.4) which enhances the language and mathematics seen. We compare FLAN-base fine-tuned on the above training set along with the model's zero-shot baseline performance. Figure 2 illustrates the results of these experiments. ![7_image_0.png](7_image_0.png) First, as already established, training on diverse templates over a variety of aspects is beneficial by the shear difference illustrated by Figure 2 between Zero-shot (black) and the fine-tuned performance (blue, orange, green). In contrast, when comparing *Base* (blue) and *Base Scaled Up* (orange), we remark that despite seeing 100K more combinations of numbers and operations, the learning stagnates when using the same templates meaning that the model has learnt as much as it could from the breadth of the available templates. Consequently, either linguistic or mathematical diversity is required to make a sufficient contribution. This phenomenon is, in fact, displayed by the improvement generated by *Base Diversified* (green), in certain aspect by over 21%. The diversity helps the model map the language used to describe particular mathematics better, for instance "share" to mean "division", and possibly observing more variety of this in different context seems to improve the model. Therefore, a diversity in the templates used is important, suggesting that a large variety of language may be required to attempt to further ameliorate the performance. Nevertheless, the mathematical diversity seems to also play a more important role as the diverse templates from GSM8K and AQUA have more two-hop operations (see Appendix B). Relatedly, the mean percentage increase of one-hop operations from Base to *Base Diversified* is approximately 95% which is about half the mean percentage increase for two-hop operations, i.e. 187%. This suggests that mathematical variation may be more central than language diversity. Second, the variance in accuracy between "1dp random" and "2dp random" and analogously "Integers 0 to 1000" and "1000+ random" is also intriguing. Despite having the same number of training instances with these aspects the accuracy is always lower for "2dp random" and "1000+ random" respectively, the reason for this is that these aspects involve harder skill for which either the additional 100K examples or the size of the examined model is not enough to learn this skill.14 On the other hand, for a simpler aspect like "2 digit" representation, the model's performance improves considerably using the additional training instances. We can conclude that template diversity alone may not improve the models and that work on generalisation over larger sequence of integers (i.e. integers larger than 1000, more than two decimal places) such as tokenisation and representation of numbers is critical. Third, a noteworthy observation is that *Base Diversified* (green) performs worse than *Base* (blue) only on the "Original 2dp no 0" aspect, e.g., using ".32" instead of "0.32". When further analysing the model's output of this aspect for *Base Diversified*, we note that the model, on top of the 19.8% accuracy, produces an additional 19.7% of outputs containing correct digits but an incorrect magnitude, e.g., the correct answer might be "1.8", but the model predicts "0.18". The model might be disturbed by the decimal place or the absence of zero, implying that number encoding including positioning is vital, and thus, an accurate encoding of numbers is crucial. ## 6 Conclusion The majority of existing datasets for numerical reasoning evaluate models based on a single score, making it impossible to identify their strengths and shortcomings to further improve them. Multi-view benchmarks are the alternative for a more comprehensive and informative evaluation of models. In this direction, we introduce FERMAT, a multi-view evaluation set that enables a fine-grained analysis of models based on three key aspects including number understanding, mathematical operations, and training dependency. FERMAT's aspects are associated with separate templates for generating instances for both evaluation and training sets, which are collected from completely independent sources and domains. Our results confirm that comparing a single accuracy score, as with all existing maths datasets, is not representative of the performance on various numerical reasoning aspects as the evaluation dataset may be skewed towards a specific data distribution. Based on our results, a wider language and mathematical variation can improve even smaller models. However, an apparent future direction is to focus on improving number encodings in existing models and understanding how these affect performance. ## 7 Limitations Three main limitations with regards to certain aspects of this paper are the comparison against very large models, the distribution of the *Original* set, and the restriction of the output length. Firstly, due to the lack of computational resources and availability of some models, we were unable to make a rigorous comparison of our finetuned models' as described in Section 5.2 against very large models like Minerva (Lewkowycz et al., 2022) or even Codex (Chen et al., 2021). However, these larger models can still be evaluated as FERMAT is made publicly available. Secondly, another limitation of FERMAT is its use of Illinois and CommonCore which have highly skewed distributions of numbers (see Section 3.1.2) and their answers are mainly integers which is not representative of the real-world. This undesired effect is mirrored in the number types that use the same numbers as *Original*. However, this was part of our design for FERMAT as the alternative would have been to combined all the ranges of numbers used with the representation, creating too many aspects but mainly conflicting with non-independent analyses between representation and range of numbers. Therefore, we chose to use the same numbers as *Original*, and since the templates will be openly accessible, they can be used to generate more combinations for wider aspects. Lastly, when generating training questions, despite our best intentions, we had to limit the length of the output to an arbitrary length of 12 digits, therefore some number combination were not possible, for example 1÷3 = 0.3333... . This practical implication could have been avoided with the use of fractions or rounding. But we judged that it would have added an extra layer of difficulty for the models and decided to restrict the output length instead. ## Acknowledgements This work was supported by the Centre for Doctoral Training in Speech and Language Technologies (SLT) and their Applications funded by UK Research and Innovation [grant number EP/S023062/1]. Additional thanks to our mathematics teachers Ana Maria Ocampo Lucumi and Liz Scott for creating and checking the expert templates. A further acknowledgement to Constantinos Karouzos, Mugdha Pandya and Valeria Pastorino for their continued feedback in this research. ## References Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. 2019. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5947– 5952, Hong Kong, China. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. *arXiv* preprint arXiv:2107.03374. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*. Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Noah A. ´ Smith, and Matt Gardner. 2019. Quoref: A read- ing comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5925–5932, Hong Kong, China. Association for Computational Linguistics. Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. Mor Geva, Ankit Gupta, and Jonathan Berant. 2020. Injecting numerical reasoning skills into language models. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 946–958, Online. Association for Computational Linguistics. Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, and Vivek Srikumar. 2020. INFOTABS: Inference on tables as semi-structured data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2309–2324, Online. Association for Computational Linguistics. Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 887–896, Berlin, Germany. Association for Computational Linguistics. Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567– 2577, Hong Kong, China. Association for Computational Linguistics. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In *ICML 2022* Workshop on Knowledge Retrieval and Language Models. Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1152–1157, San Diego, California. Association for Computational Linguistics. Vivek Kumar, Rishabh Maheshwary, and Vikram Pudi. 2021. Adversarial examples for evaluating math word problem solvers. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 2705–2712, Punta Cana, Dominican Republic. Association for Computational Linguistics. Vivek Kumar, Rishabh Maheshwary, and Vikram Pudi. 2022. Practice makes a solver perfect: Data augmentation for math word problem solvers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4194–4206, Seattle, United States. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models. In *Advances in Neural Information* Processing Systems. Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin, Yunshi Lan, Jie Shao, and Xiangliang Zhang. 2022. MWP-BERT: Numeracy-augmented pre-training for math word problem solving. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 997–1009, Seattle, United States. Association for Computational Linguistics. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 158–167, Vancouver, Canada. Association for Computational Linguistics. Sourav Mandal, Swagata Acharya, and Rohini Basak. 2022. Solving arithmetic word problems using natural language processing and rule-based classification. International Journal of Intelligent Systems and Applications in Engineering, 10(1):87–97. Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. 2022a. LILA: A unified benchmark for mathematical reasoning. In *Proceedings of* the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5807–5832, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. 2022b. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3505–3523, Dublin, Ireland. Association for Computational Linguistics. Nafise Moosavi, Andreas Rücklé, Dan Roth, and Iryna Gurevych. 2021. Scigen: a dataset for reasoningaware text generation from scientific tables. In *Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks*, volume 1. Matteo Muffo, Aldo Cocco, and Enrico Bertino. 2022. Evaluating transformer language models on arithmetic operations using number decomposition. In Proceedings of the Language Resources and Evaluation Conference, pages 291–297, Marseille, France. European Language Resources Association. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094, Online. Association for Computational Linguistics. Dominic Petrak, Nafise Sadat Moosavi, and Iryna Gurevych. 2022. Improving the numerical reasoning skills of pretrained language models. *arXiv preprint* arXiv:2205.06733. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. 2022. Impact of pretraining term frequencies on few-shot numerical reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 840–854, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902– 4912, Online. Association for Computational Linguistics. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In *Proceedings of the 2015* Conference on Empirical Methods in Natural Language Processing, pages 1743–1752, Lisbon, Portugal. Association for Computational Linguistics. Subhro Roy and Dan Roth. 2016. Illinois math solver: Math reasoning on the web. In *Proceedings of the* 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 52–56, San Diego, California. Association for Computational Linguistics. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations. David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. In International Conference on Learning Representations. Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura, and Hiroya Takamura. 2021. Towards table-to-text generation with numerical reasoning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1451–1465, Online. Association for Computational Linguistics. Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal. 2019. Quarel: A dataset and models for answering questions about qualitative relationships. In *Proceedings of the ThirtyThird AAAI Conference on Artificial Intelligence and* Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19. AAAI Press. Andreas Vlachos and Sebastian Riedel. 2015. Identification and verification of simple claims about statistical properties. In *Proceedings of the 2015 Conference* on Empirical Methods in Natural Language Processing, pages 2596–2601, Lisbon, Portugal. Association for Computational Linguistics. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022a. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Wenhan Xiong, Xiang Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021. Answering complex open-domain questions with multi-hop dense retrieval. In International Conference on Learning Representations. Peng-Jian Yang, Ying Ting Chen, Yuechan Chen, and Daniel Cer. 2021. Nt5?! training t5 to perform numerical reasoning. *arXiv preprint arXiv:2104.07307*. Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. "going on a vacation" takes longer than "going for a walk": A study of temporal commonsense understanding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3363–3369, Hong Kong, China. Association for Computational Linguistics. Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi. 2022. Teaching algorithmic reasoning via in-context learning. *arXiv preprint arXiv:2211.09066*. ## Appendix A Distribution Of Mathematical Operations B Templates | Hops | Expression | Frequency | |-------------|--------------|-------------| | a + b | 154 | | | One-hop | a − b | 162 | | a × b | 113 | | | a ÷ b | 102 | | | (a + b) − c | 190 | | | a × (b + c) | 100 | | | Two-hop | (a + b) ÷ c | 90 | | a × (b − c) | 100 | | | (a − b) ÷ c | 100 | | | Total | 1111 | | | Operations | Freq | Operations | Freq | |--------------|--------|--------------|--------| | a + b | 16 | a − b | 28 | | a × b | 28 | a ÷ b | 35 | | a + b + c | 9 | a + b − c | 23 | | a × (b + c) | 20 | a × (b − c) | 13 | | (a + b) ÷ c | 20 | (a − b) ÷ c | 17 | | a − b − c | 3 | (a ÷ b) + c | 3 | | (a × b) + c | 13 | (a × b) − c | 5 | | (a × b) × c | 10 | (a × b) ÷ c | 51 | | a ÷ (b + c) | 6 | a ÷ (b − c) | 8 | | a × (b ÷ c) | 6 | (a ÷ b) × c | 12 | | Total | 326 | | | Table 3 gives the distribution of the various operations that exist in the *Original* set and thus FERMAT's evaluation set. Table 3: Distribution of the mathematical operations for the *Original* set. The templates' operation distribution is given by Table 4. Table 4: Table of operations present in the training templates with their corresponding frequency. The ones in bold are the ones present in the expert templates. Exemplar templates from each of three sources are given below where number place holders are in bold: Expert Template: Britney has **num1** knitting needles. She buys another **num2** . How many needles does she have? Expert Expression: num1 + num2 GSM8K Template: a trader sells **num1** meters of cloth for $ **num2** . what is the cost price of one metre of cloth ? GSM8K Expression: ( num2 / num1) AQUA Template: the average weight of **num1** persons increases by **num2** kg when a new person comes in place of one of them weighing **num3** kg . what might be the weight of the new person ? AQUA Expression: ( num3 +( num1*num2 )) ## C Prompts Examples of the prompts used for the respective models are given below. In the examples, the underlined text is the prompt. Model: T0 Prompt name: Trivia Example: Answer the following question. What is 2 plus 3? Model: T0, FLAN Prompt name: WebQA Example: Question: What is 2 plus 3? Answer: Model: FLAN Prompt name: Trivia Example: Please answer this question: What is 2 plus 3? Model: NT5 Prompt name: NT5 prompt Example: answer_me: What is 2 plus 3? ## D Hyperparameters The hyperparameters were tested on a smaller set for efficiency. During fine-tuning, we used 100 epochs with an early stopping patience of 10 and threshold of 1.0. The best model was based on accuracy of the evaluation set. All experiments were conducted with a learning rate of 5e-5, weight decay of 0.005, warm-up of 100, float32 and 3 generation beams. The rest of the hyperparameters were as the default setting in Huggingface. The max input length was 512 and max target length, 16 which is above the 12 digit limit we restrained ourselves to for the answers when generating questions. The resource used was an Nvidia Tesla V100 with 32G. ## E Zero-Shot Results With And Without Prompts The full results for each model including when prompts were used for all the arithmetic types are given by Table 6. ## F Training Dependency Results The full results for the Training Dependency classification is shown in Table 5. ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) ![13_image_2.png](13_image_2.png) Table 5: Training Dependency for all fine-tuned models. ## G Distribution Of Training Sets Table 7 shows the distribution of the training set created from the templates, with raw numbers of instances generated based on the specific number aspect and mathematical operation design. The bold mathematical operations are the ones present in the expert templates. ## H Flan-Base Template Diversity Table 8 shows the results of FLAN-base for each numerical reasoning aspects as a zero-shot performance and when fine-tuned on different . Accuracy is given as a percentage. Green cells indicate higher accuracy and red poorer performance. ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) ![14_image_2.png](14_image_2.png) | tegers | 2dp | | | |-----------|-------|-------|--------| | Coding to | Total | | | | 6250 | 6250 | 6250 | 25000 | | 6250 | 6250 | 6250 | 25000 | | 6250 | 6250 | 6250 | 25000 | | 6250 | 6250 | 6250 | 25000 | | 3750 | 3750 | 3750 | 15000 | | 6250 | 6250 | 6250 | 25000 | | 6250 | 6250 | 6250 | 25000 | | 6250 | 6250 | 6250 | 25000 | | 6250 | 6250 | 6250 | 25000 | | 1250 | 1250 | 1250 | 5000 | | 3750 | 3750 | 3750 | 15000 | | 1250 | 1250 | 1250 | 5000 | | 1250 | 1250 | 1250 | 5000 | | 1250 | 1250 | 5000 | | | 1250 | 25000 | | | | 6250 | 6250 | 6250 | | | 1250 | 1250 | 1250 | 5000 | | 1250 | 1250 | 1250 | 5000 | | 1250 | 1250 | 1250 | 5000 | | 1250 | 1250 | 1250 | 5000 | | 1250 | 1250 | 1250 | 5000 | | 75000 | 75000 | 75000 | 300000 | ![15_image_0.png](15_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 - Limitations ✗ A2. Did you discuss any potential risks of your work? We do not believe our work to have potential risks, instead we aim to reduce environmental impact by looking at alternative to large models. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 - Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 - Multi-View Evaluation Set: Fermat ✓ B1. Did you cite the creators of artifacts you used? Section 2 - Related Work ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We aim to provide this when these artifacts are made available in an open repository. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 1 - Introduction, Section 3 - Multi-view Evaluation Set: FERMAT ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We do not use data with sensitive information, all names are randomly generated ones. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Abstract, Section 1 - Introduction, Section 3 - Multi-view Evaluation Set: FERMAT and Appendix ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 - Experimental Setup, Section 5 - Results and Appendix C ✓ **Did you run computational experiments?** Section 4 - Experimental Setup, Section 5 - Results ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 - Experimental Setup, Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 - Experimental Setup, Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 - Results, Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 - Experimental Setup, Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
finch-etal-2023-dont
Don{'}t Forget Your {ABC}{'}s: Evaluating the State-of-the-Art in Chat-Oriented Dialogue Systems
https://aclanthology.org/2023.acl-long.839
Despite tremendous advancements in dialogue systems, stable evaluation still requires human judgments producing notoriously high-variance metrics due to their inherent subjectivity. Moreover, methods and labels in dialogue evaluation are not fully standardized, especially for open-domain chats, with a lack of work to compare and assess the validity of those approaches. The use of inconsistent evaluation can misinform the performance of a dialogue system, which becomes a major hurdle to enhance it. Thus, a dimensional evaluation of chat-oriented open-domain dialogue systems that reliably measures several aspects of dialogue capabilities is desired. This paper presents a novel human evaluation method to estimate the rates of many{pasted macro {`}LN{'}} dialogue system behaviors. Our method is used to evaluate four state-of-the-art open-domain dialogue systems and compared with existing approaches. The analysis demonstrates that our behavior method is more suitable than alternative Likert-style or comparative approaches for dimensional evaluation of these systems.
# Don'T Forget Your Abc'S: Evaluating The State-Of-The-Art In Chat-Oriented Dialogue Systems Sarah E. Finch∗and **James D. Finch**∗and **Jinho D. Choi** Department of Computer Science Emory University Atlanta, GA, USA {sfillwo, jdfinch, jinho.choi}@emory.edu ## Abstract Despite tremendous advancements in dialogue systems, stable evaluation still requires human judgments producing notoriously high-variance metrics due to their inherent subjectivity. Moreover, methods and labels in dialogue evaluation are not fully standardized, especially for opendomain chats, with a lack of work to compare and assess the validity of those approaches. The use of inconsistent evaluation can misinform the performance of a dialogue system, which becomes a major hurdle to enhance it. Thus, a dimensional evaluation of chat-oriented opendomain dialogue systems that reliably measures several aspects of dialogue capabilities is desired. This paper presents a novel human evaluation method to estimate the rates of many dialogue system behaviors. Our method is used to evaluate four state-of-the-art open-domain dialogue systems and compared with existing approaches. The analysis demonstrates that our behavior method is more suitable than alternative Likert-style or comparative approaches for dimensional evaluation of these systems. ## 1 Introduction Recent work in human-computer chat has made remarkable progress. Multi-turn open-domain (MTOD) models are capable of holding engaging conversations with humans (Roller et al., 2021; Adiwardana et al., 2020). However, there remain a number of challenges facing MTOD chatbots such as hallucinations (Shuster et al., 2021), commonsense violations (Zhou et al., 2021), and consistency issues (Nie et al., 2021). A significant obstacle for research that addresses these challenges is the difficulty in formulating an appropriate evaluation methodology due to the inherent subjectivity in determining chat quality (van Miltenburg et al., 2021). Since existing automatic evaluation metrics have been shown to be biased measures of chat quality (Liu et al., 2016; Sai et al., 2019; Deriu *Contributed equally to this work as first authors. et al., 2022), evaluation using human judgments is standard, although the type of human judgments varies widely across works (Finch and Choi, 2020). Overall, there are few works comparing and assessing the validity of various human evaluation methods. The result of this gap in the literature is that the relative sensitivity, interpretability, and importance of the metrics used to evaluate chat models are not well understood. A dimensional approach for evaluating chat models that measures different aspects of chat quality would surely aid progress (van Miltenburg et al., 2021). However, to our knowledge, no work has investigated the coverage of a comprehensive set of evaluation metrics. Consequently, existing chat model evaluation results provide an incomplete picture of the strengths and weaknesses of MTOD chatbots. This paper addresses these limitations of previous work through the following three contributions: 1. A novel, dimensional human evaluation method that measures the rate of chatbot behaviors impacting chat quality (Section 5). 2. A detailed validation of human evaluation methods, including likert scales and pairwise comparisons (Section 7). 3. A comprehensive evaluation of four MTOD chatbots using validated metrics (Section 8). By presenting a detailed picture of MTOD chatbot performance and standard methods to evaluate them, we aid future work's efforts to further understand and improve human-computer interaction. Our evaluation platform, analyses, and data are available at https://github.com/emorynlp/ ChatEvaluationPlatform. ## 2 Chatbots To evaluate the strengths and weaknesses of MTOD models, we select the chatbots for our study using a 15044 two-stage process: (1) a literature review to identify chatbot candidates, and (2) a pilot evaluation to select the final set of bots for our full study. Literature Review To promote diversity among the selected chatbots, we focus our review on four popular themes of the human-computer chat: (1) Knowledge-grounded chat, (2) Empathetic chat, (3) Self-consistent chat, and (4) General open-domain chat with large pre-training resources like Reddit. Candidate chatbots are selected from each theme using the following criteria: 1. The bot must demonstrate state-of-the-art performance in a task related to the theme.1 2. The implementation must be provided.2 3. The response latency of the bot must be <10 seconds using modern GPU hardware. This review yields the 6 chatbot candidates in Table 1: Blender-Decode (Nie et al., 2021), Blender2 (Weston and Shuster, 2021), BART-FiD-RAG (Shuster et al., 2021), Emora (Finch et al., 2020), DukeNet (Meng et al., 2020), and CEM (Sabour et al., 2022). Appendix A presents details of our literature review and selection process. | Model | Theme | N | Q | Pass | |----------------|-------------|-----|-----|--------| | Blender-Decode | Consistency | 10 | 4.1 | ✓ | | Blender2 | General | 10 | 3.8 | ✓ | | BART-FiD-RAG | Knowledge | 10 | 3.5 | ✓ | | Emora | General | 10 | 3.3 | ✓ | | DukeNet | Knowledge | 9 | 1.9 | ✗ | | CEM | Empathy | 12 | 1.1 | ✗ | Chatbot Selection A pilot evaluation using the 6 chatbot candidates is conducted in order to verify the multi-turn dialogue capability of the chatbot candidates. Appendix B provides details on the implementations of each chatbot candidate. 10 students majoring in Computer Science or Linguistics are invited to interact with randomly assigned chatbots in 3-5 text-based conversations,3each of 1Note that selection occurred in October 2021. 2We accepted either a trained English model or codebase with a fully-specified procedure to replicate the model. 3We use the web interface provided by ParlAI (Miller et al., 2017) hosted on our local webserver. which consisted of 30 turns.4 At the end of each conversation, students are asked to rate the quality from 1 (least) to 5 (most). Based on the pilot results (Table 1), DukeNet and CEM are excluded from our full study because they are unable to hold satisfying multi-turn conversations, despite their reasonable single-turn response generation capabilities. Appendix C shows example dialogues from these systems. ## 3 Conversation Collection The conversation dataset used for the full study is collected using human interactors in a text-based conversation setting. 46 undergraduates are recruited as interactors. Each interactor is compensated with a $5 gift card for every 6 conversations, and allowed to complete up to 18 conversations. Conversations are collected remotely using ParlAI's interactive web interface, and links to the web interface are sent to each interactor with instructions to be completed within 2 weeks. For each link, the interactor completes two conversations with a random pair of chatbots, for a minimum of 30 turns per conversation. We impose a similar open-ended, topic-free chatting environment to Adiwardana et al. (2020). Interactors are asked to rate 8 dimensions (Table 3) of each conversation after its completion on a 1-5 Likert scale, and to select the higher-quality conversation along the same 8 dimensions after each conversation-pair (ties allowed). Our final conversation dataset includes 400 human-bot dialogues (100 dialogues per chatbot), averaging 30.3 turns per dialogue (11.3 tokens per user turn). ## 4 Evaluation Methods For a comprehensive evaluation of MOTD chatbots, a robust dimensional evaluation of their chat capabilities is crucial (van Miltenburg et al., 2021). To have confidence that any evaluation metric yields useful information, its interpretability and sensitivity require validation. In addition, it is important to verify that each evaluation metric provides distinct information relative to the others. Several previous works propose sets of evaluation metrics that could be used for a dimensional evaluation but with insufficient analyses to validate them. Finch and Choi (2020) present an exhaustive set of metrics based on a literature survey of human 4A "turn" is defined as ONE message from a single interactor. evaluation methods, but do not quantitatively validate its interpretability, sensitivity, or per-metric distinctness. Mehri and Eskenazi (2020a) present a set of Likert metrics and analyze their relationship to overall dialogue quality, but do not validate the sensitivity or distinctness of the individual metrics. Mehri and Eskenazi (2020b) present 5 Likert metrics and evaluate their coverage with respect to explaining single response quality, but do not validate their sensitivity or distinctness. Similarly, some works look to identify common chatbot errors. Sanguinetti et al. (2020) and Higashinaka et al. (2021) present error taxonomies empirically grounded by error analyses, but do not present distinctness or sensitivity results for their error categories. See and Manning (2021) identify errors for one dialogue model and analyze the impact of each error on overall quality but do not attempt to verify the generalizability of their results. Furthermore, various works propose novel evaluation methods with varying degrees of validation of the reliability and effectiveness of such methods. Deriu et al. (2020) present Spot the Bot, a pairwise evaluation approach that uses survival analysis to rank bots based on self-chats, but do not directly compare to alternative methodologies other than for cost. Sedoc and Ungar (2020) apply Item-Response Theory (IRT) (Lord and Novick, 2008) to pairwise comparison dialogue evaluation, by using a latent variable Bayesian model to estimate both the ability of the evaluated systems and the informativeness of inputs in the static evaluation set. Their analysis of the utility of IRT for dialogue evaluation does not include comparisons to existing approaches or a dimensional focus since they exclusively consider overall response quality. Ji et al. (2022) propose a continuous-scale method for evaluating multi-turn dialogue systems with quality control measures for mitigating artifacts from human annotators. They validate their proposed method on various dialogue dimensions using replication studies, a sensitivity analysis, and a correlation analysis between dimensions, although they explicitly acknowledge that their set of dimensions is not intended to be comprehensive. Phy et al. (2020) assert 3 dimensions (understandability, sensibleness, and likability) are sufficient for capturing the quality of a dialogue and validate their claims using agreement, correlation analysis, and distinctness analysis on human annotations of their dimensions, although they are not applied to multi-turn dialogues. Two studies, Li et al. (2019a) and Smith et al. (2022), compare pairwise comparison and Likert evaluation methods via a sensitivity analysis. However, neither of them target a high-coverage set of dimensional metrics, as their studies were limited to 4 and 3 metrics respectively. Lee et al. (2020) also investigates pairwise evaluation using the ChatEval platform. However, this is not a multi-turn evaluation setup and it does not target a dimensional analysis since the comparisons are based exclusively on the overall quality of the responses. Finch and Choi (2020) ✓ ✓ ✓ Mehri and Eskenazi (2020a) ✓ ✓ ✓ ✓ ✓ Mehri and Eskenazi (2020b) ✓ ✓ ✓ Sanguinetti et al. (2020) ✓ ✓ ✓ Higashinaka et al. (2021) ✓ ✓ ✓ ✓ See and Manning (2021) ✓ ✓ ✓ Deriu et al. (2020) ✓ ✓ ✓ ✓ Sedoc and Ungar (2020) ✓ ✓ Ji et al. (2022) ✓ ✓ ✓ ✓ Phy et al. (2020) ✓ ✓ ✓ Li et al. (2019a) ✓ ✓ ✓ Smith et al. (2022) ✓ ✓ ✓ Lee et al. (2020) ✓ ✓ This Work ✓ ✓ ✓ ✓ ✓ ✓ ✓ ![2_image_0.png](2_image_0.png) Overall, the relative validity of human evaluation metrics requires further investigation before a comprehensive and reliable dimensional evaluation of human-computer chat is achieved. Table 2 summarizes the goals and contributions of the previous evaluation works. Our study addresses all existing gaps by conducting a detailed validation study of 4 different human evaluation methods and a wide range of fine-grained metrics. ## 4.1 Selected Methods Four human evaluation methods are chosen for our study. Since MTOD chat model evaluation is our goal, any domain- or approach-specific methods or single-response evaluation methods providing chatbots with a specific context are excluded.5 We also focus on external human evaluation methods, where human evaluators judge conversations they do not participate in. Three of the selected methods represent popular approaches: Dialogue Likert, Turn Likert, and Comparative. The fourth method, ABC-Eval, is our novel evaluation approach. | Label | Dialogue | Turn | Comparative | |----------------------------|------------|--------|---------------| | Likert | Likert | | | | Consistency | Cond | Cont | Conc | | Emotion | Emod | Emot | Emoc | | Understanding Engagingness | Engd | Engt | Engc | | Grammaticality | Grad | Grat | Grac | | Informativeness | Infd | Inft | Infc | | Quality | Quad | Quat | Quac | | Proactivity | Prod | Prot | Proc | | Relevance | Reld | Relt | Relc | Table 3: The 8 labels for Likert and Comparative evaluations (taken from Finch and Choi (2020)), henceforth referred to using their abbreviations and colors. Dialogue Likert Annotators provide dialoguelevel ratings from 1 (least) to 5 (most) for the 8 labels shown in Table 3. We use the dimension set proposed in Finch and Choi (2020) which results from a detailed survey of characteristics used in chat evaluation and has better coverage than alternatives like the set used in ACUTE-Eval (Li et al., 2019a). Bot-level metrics are calculated as the mean rating across all bot dialogues. Turn Likert Annotators provide turn-level ratings on the same scale and labels as those used for Dialogue Likert. The dialogue-level metric is measured as the mean rating of a single dialogue's turns. The bot-level metric is calculated as the mean rating of all turns in all bot dialogues. Comparative Annotators select the dialogue in which chatbot responses better fit a label definition from a side-by-side pair of dialogues, also using the labels in Table 3. A "neither" option is allowed, only for cases where the evaluator cannot distinguish which dialogue was a better fit. Bot-level metrics are calculated as bot pair win/tie/loss proportions between pairing of their dialogues. Behavior Classification: ABC-Eval Annotators provide binary labels on the turn-level indicating 5We do not include a turn-level comparative evaluation because controlled comparisons require comparing turns with identical historical contexts which is not viable for real human-bot dialogues like those used in this work. the presence or absence of a particular chat characteristic. The included chat characteristics are defined in Table 4. Dialogue-level metrics are calculated as the proportion of turns that display the characteristic of the dialogue. Bot-level metrics are calculated as the proportions of turns that display the characteristic over all bot dialogues. ABC-Eval is described in detail next in Section 5. ## 5 Abc-Eval Design We hypothesize that binary turn-level behavior labels provide more reliable and informative metrics for quantifying fine-grained aspects of chat quality than alternative approaches such as Likert or Comparative scoring. Our novel method, the Annotation of Behaviors in Chat Evaluation (ABC-Eval), is developed in three stages: (1) collecting a set of behavior label candidates, (2) developing and piloting our annotation instructions and procedure, and (3) selecting a subset of behavior labels based on the validation study results in Section 7. Collecting Behavior Label Candidates Based on a review of recent work in chat-oriented dialogue modeling and evaluation, we identify characteristics of chatbot responses relevant to conversation quality. These characteristics include those presented as error cases, evaluation metrics, or desirable response features. We then curate binarized definitions of these characteristics to create an initial set of behavior label candidates, which are revised through an iterative piloting and development process. Due to its high coverage of error categories, Higashinaka et al. (2021) is the primary source of inspiration for many of our behavior labels. However, we improve upon their presented taxonomies by considering additional labels based on characteristics of chat presented by other work, and by further refining their error categories to improve average Inter-Annotator Agreement (Section 7.1). Table 4 presents the final set and definitions of the 16 candidate behavior labels used in our full study, along with selected works from our review that inspired their inclusion. Appendix D details in full our development process. Annotation Procedure The ABC-Eval procedure includes 16 binary behavior labels divided between 8 independent annotation tasks (Table 4). In each task, human evaluators are provided with definitions and examples of the behavior labels associated with that task and asked to annotate every | Label | Abbr. | Description | Inspired by | |--------------------------|---------|----------------------------------------------------------------------------------------------|-------------------| | Uninterpretable | !Intb | It is difficult to understand the intended meaning of part or all of the response. | 1, 2, 3, 4, 5, 6 | | Antisocial | !Socb | The response is insulting, hateful, or excessively vulgar. | 2, 7, 8, 9 | | Preference Info | Preb | The response expresses the bot's preferences, wishes, or values. | 10, 11 | | Life Info | Lifb | The response shares information about the bot's life or experiences. | | | Empathetic | Empb | The response shows an understanding and reacts appropriately to someone's emotions. | 11, 12, 13 | | Lack of Empathy | !Empb | The bot misunderstands or reacts inappropriately to someone's emotions. | | | Commonsense | !Comb | The response misunderstands or contradicts common knowledge. | 2, 14, 15, 16 | | Contradiction Fact Usage | Facb | The response accurately incorporates encyclopedic or expert knowledge. | 1, 2, 11, 17, 18, | | Fact Contradiction | !Facb | The response hallucinates or inaccurately presents encyclopedic or expert knowledge. | 19, 20 | | Self Contradiction | !Selb | The bot contradicts something it said earlier in the dialogue. | 2, 3, 6, 20, 21, | | Partner Contradiction | !Parb | The bot contradicts or misremembers something the user said earlier in the dialogue. | | | Redundant | Redb | The response inappropriately repeats information presented earlier in the dialogue. | 22, 23 | | Ignore | Ignb | The response ignores what the user just said. | 1, 2, 3, 6, 24 | | Irrelevant | !Relb | The response interrupts the current topic of discussion by presenting unrelated information. | | | Follow-up | Folb | The response explores, elaborates on, or asks about the ideas shared in the previous turn. | | | Topic Switch | Topb | The response introduces a new topic of conversation. | | Table 4: The 16 behavior labels within ABC-Eval. Row separators denote evaluation task groupings. **Bold** indicates behavior labels kept in final set. [1] Gopalakrishnan et al. (2019), [2] Higashinaka et al. (2021), [3] Mehri and Eskenazi (2020a), [4] Mehri and Eskenazi (2020b), [5] Phy et al. (2020), [6] Sanguinetti et al. (2020), [7] Beattie et al. (2022), [8] Sun et al. (2022), [9] Xu et al. (2021), [10] Rashkin et al. (2021), [11] Smith et al. (2020), [12] Majumder et al. (2020), [13] Rashkin et al. (2019), [14] Zhong et al. (2021), [15] Zhou et al. (2021), [16] Zhou et al. (2022), [17] Gupta et al. (2022), [18] Honovich et al. (2021), [19] Santhanam et al. (2021), [20] Shuster et al. (2021), [21] Li et al. (2021), [22] Nie et al. (2021), [23] Welleck et al. (2019), [24] Xu et al. (2022) . chatbot turn in a given human-chatbot conversation with each behavior label. Evaluators complete these tasks using a custom web application based on the ParlAI evaluation interface (Appendix G). Training and Screening To improve annotation consistency and detect poorly performing evaluators, we develop automated training sessions each annotation task inspired by van Miltenburg et al. (2021). Each session consists of 3 conversations that evaluators annotate using an identical procedure and web interface to the corresponding task. The 3 conversations used for each session are handcrafted by the authors to represent a variety of positive and negative examples of the behavior labels for the corresponding task (Appendix D). The gold annotations for each training conversation are hidden from evaluators during the annotation; however, after completing each training conversation, any disagreements between the evaluator's annotations and gold labels are displayed along with an explanation to help the evaluator improve. We use the evaluator's performance on the third conversation of each training session to screen evaluators, where performance is measured by the number of turns where their annotations disagree with gold labels. Evaluators are eligible to complete the work on a task if they make mistakes on fewer than 2 turns for the antisociality and uninterpretability tasks, or on fewer than 3 turns for the other 6 tasks. ## 6 Evaluation Study Our full study consists of the collection of 40 labels per conversation. This collection was split into 18 independent evaluation tasks as follows: - 8 ABC-Eval tasks, each composed of 1 to 4 labels as denoted by groupings in Table 4 - 1 Dialogue Likert task, composed of all 8 labels from Table 3 completed in random order - 8 Turn Likert tasks, each composed of 1 label from Table 3 - 1 Comparative task, composed of all 8 labels from Table 3 completed in random order The 18 evaluation tasks are posted on SurgeHQ's annotation platform6to be completed by dedicated remote workers (Surgers) with experience in NLP annotation. Each time an evaluator connects to one of our tasks, they are assigned a randomly selected conversation to annotate. We are allocated a group of 125 Surgers, chosen by a SurgeHQ employee based on high annotation performance on past projects. Evaluators are compensated per annotated conversation per task, at an estimated rate of $20/hr7. We allow evaluators to annotate up to 60 conversations per task. 6https://www.surgehq.ai; Appx. E details annotator selection. 7Per-task payment rates provided in Appendix F. ![5_image_0.png](5_image_0.png) Our final evaluation dataset consists of 400 conversations, each with results for all 40 labels.8 Additionally, a randomly-selected subset of 100 conversations (and 50 of the conversation pairs) is evaluated a second time by a different Surger in order to measure IAA. ## 7 Metric Analysis 7.1 Interpretability We measure the reliability of interpreting each metric's annotation instructions by calculating IAA using our set of 100 double-annotated conversations (Figure 1). High agreement between annotators demonstrates that different people can reliably come to the same conclusions about how a metric's definition applies to each chatbot response. Our results suggest that the definitions of most ABC-Eval metrics can be interpreted more reliably than the definitions of most Dialogue Likert, Turn Likert, and Dialogue Comparison metrics. Likertstyle and comparison-style annotations appear to have similar interpretability, although Quac was a notable exception that produced higher agreement than Quad. ## 7.2 Importance The importance of each metric is estimated by a predictive validity analysis that measures the extent, to which the metric can predict conversation quality (Figure 2). We use Quad and Quac from interactors that participated in the conversations (Section 3) to avoid cases where the same evaluator produced the quality label and explanatory metric. 8Only 192 of our 200 dialogue pairs were evaluated with Comparative labels due to a collection mistake 9Bias-corrected and accelerated confidence intervals with k =10,000 Monte Carlo case resamples. 10!Socb and !Intb's confidence intervals are largely due to a low rate of positive examples (see Figure 4). The predictive validity of each metric was measured by fitting univariate linear or logistic regression models to predict Quad or Quac, respectively. Quac was represented as a binary encoding, where 0 and 1 represent choosing the first and second conversation, respectively. We excluded any conversation pairs in which the interactor could not distinguish a difference in quality between conversations, and fitted models on the remaining set of 184 conversations. To use non-comparative predictors for predicting Quac, the difference in metric value between each pair of conversations was used. Our results suggest that dialogue quality is substantially related to emotional understanding metrics (Emo, Empb, !Empb), relevance-related metrics (Rel, !Relb, Ignb), and consistency metrics (Con, !Selb, Redb, !*P ar*b). Within these metric groupings, ABC-Eval metrics were overall more predictive of quality than their Likert or comparative analogs, while comparative metrics were least predictive of quality. Chatbots' ability to express knowledge (Inf, F acb, !F acb Lifb, *P ref*b) was an overall poor predictor of quality; however, commonsense knowledge errors (!Comb) was one of the strongest predictors. ## 7.3 Sensitivity We investigate the sensitivity of each metric using two analyses. First, we use the fitness of the univariate regression models described in the previous section as one source of evidence for metric sensitivity, since a metric must be sufficiently sensitive in order to distinguish conversations of low and high quality. Second, we follow Li et al. (2019a) and run hypothesis tests to count the number of statistically significant differences each metric is able to detect between the 6 pairings of our 4 chatbots (Table 5). To make results comparable, we ![6_image_0.png](6_image_0.png) $\frac{\frac{5}{6}}{\frac{5}{6}}\;\frac{\frac{5}{6}}{\frac{5}{6}}\;\frac{\frac{5}{6}}{\frac{5}{6}}\;\frac{\frac{5}{6}}{\frac{5}{6}}\;\frac{\frac{5}{6}}{\frac{5}{6}}\;\frac{\frac{5}{6}}{\frac{5}{6}}\;\frac{\frac{5}{6}}{\frac{5}{6}}\;\frac{\frac{5}{6}}{\frac{5}{6}}$ . $\widehat{\mathfrak{M}}$ $\frac{1}{2}$ 2. $\mathfrak{H}$ $$\mathbf{\hat{x}}$$ α!Socb!CombFacbEmpbFolbIgnb!Facb!Relb!EmpbLifb!ParbPrebRedb!SelbTopb!IntbContEmotEngtGratInftProtQuatReltCondEmodEngdGradInfdProdQuadReldConcEmocEngcGracInfcProcQuacRelc 0.01 0 1 4 3 5 1 4 2 4 2 0 4 2 5 5 2 1 1 2 4 4 4 4 2 1 2 1 0 0 3 1 0 1 0 1 0 1 1 0 0 0.05 0 2 5 3 5 2 6 2 5 2 1 4 3 5 5 3 2 1 3 5 4 4 4 3 2 3 2 3 2 3 1 2 1 0 1 0 1 2 1 0 0.1 1 3 5 3 5 2 6 2 5 3 1 4 3 5 5 3 2 3 3 5 4 4 4 4 2 4 2 5 3 3 3 2 2 0 1 0 2 2 1 2 $\downarrow$ . downsample the conversations used for hypothesis testing to 32 conversations per bot for our Dialogue Likert, Turn Likert, and ABC-Eval metrics to match the 32 conversation-pairs per bot-pair produced by our Comparative evaluation. Our results show that the Likert evaluations were more sensitive than the Comparative evaluation for most labels. ABC-Eval metrics have a wide range of sensitivity, with some ABC-Eval metrics appearing to be more sensitive analogs of similar likert metrics. For example, the results suggest that !Selb and Redb are more sensitive than Con, that *F ac*b and !*F ac*b are more sensitive than Inf, and that Empb and !Empb are more sensitive than Emo. On the other hand, the likert-style Rel metric shows similar or slightly superior sensitivity compared to the analogous Ign and !Rel behavior metrics. ## 7.4 Coverage & Distinctness We investigate the coverage and distinctness of our metrics via incremental validity analysis. For this analysis, we perform backwards stepwise regression that determines (1) the ability of an evaluation method as a whole to explain conversation quality, and (2) whether each metric contributes distinct information about quality above and beyond other metrics (Figure 3). Specifically, we fit a multivariate regression model for each of our 4 evaluation methods. These models are fit similarly to those $${\hat{\Xi}}\ {\hat{\Xi}}\ {\hat{\Xi}}\ {\hat{\Xi}}\ {\hat{\Xi}}\ {\hat{\Xi}}\ {\hat{\Xi}}$$ $${\tilde{\Xi}}\ {\tilde{\mathcal{S}}}\ {\tilde{\Xi}}\ {\tilde{\mathcal{S}}}\ {\tilde{\mathcal{S}}}$$ $\begin{array}{cccccccc}\frac{\frac{8}{9}}{1}&\frac{8}{9}&\frac{4}{9}&\frac{5}{9}&\frac{6}{9}&\frac{7}{9}\\ \\[-4mm]\hline0&1&0&1&1&0&0\\ 0&1&0&1&2&1&0\\ 0&1&0&2&2&1&2\end{array}$ . $\square$ $$\begin{array}{l l l l}{{0}}&{{0}}&{{3}}&{{1}}\\ {{3}}&{{2}}&{{3}}&{{1}}\\ {{5}}&{{3}}&{{3}}&{{3}}\end{array}$$ $\dfrac{7}{4}$ 4. $\frac{2}{2}$ 2. $$\mathbf{\Sigma}_{4}^{5}\left|\mathbf{\Sigma}_{2}^{2}\right|$$ $\square$ presented in Section 7.2, but include all non-quality metrics within an evaluation method as predictors. Then, we remove predictors from each model one at a time based on a beam search (k=100) of which removed predictor results in the smallest decrease in model fitness (adjusted R2 or adjusted pseudo-R2). We perform this stepwise regression analysis twice to predict both Quad and Quac given by interactors, similar to our analysis in Section 7.2. Our results suggest that ABC-Eval has overall better coverage than other evaluation methods for explaining conversation quality. Furthermore, most ABC-Eval metrics that have a strong relationship with conversation quality appear to be appropriately distinct in the information they provide, especially !Empb, !Selb, Redb, !Relb, Empb, !Comb, and Ignb. Similar distinctness can also be seen in Turn Likert metrics, whereas dialogue-level metrics show relatively low distinctness. ## 7.5 Final Abc-Eval Metrics Given the results of our metric analysis, we select the final set of ABC-Eval metrics bolded in Table 4. In our analyses, this final set had better interpretability (Section 7.1), a wider coverage of distinct characteristics of chat that impact quality (Section 7.2 and Section 7.4), and overall higher measurement sensitivity (Section 7.3) than alternative evaluation methods. Furthermore, the fi- ![7_image_0.png](7_image_0.png) nal ABC-Eval metrics are less costly11 (a median of 15.2 min/dialogue) to collect than Turn Likert metrics (19.9 min/dialogue). Although dialoguelevel evaluations are least costly (2.8 min/dialogue for Dialogue Likert, 4.4 min/dialogue for Comparative), our results suggest that dialogue-level judgements may be ill-suited for dimensional evaluation, since the dialogue-level metrics we tested had worse coverage and distinctness (Section 7.4). ## 8 Chatbot Evaluation To evaluate the strengths and weaknesses of our 4 selected chatbots, we present results for the 400 collected conversations across all ABC-Eval metrics (Figure 4 and Figure 5), Likert Dialogue metrics (Figure 6), Likert Turn metrics (Figure 8), and Comparative metrics (Figure 7). We focus our discussion on the final set of ABC-Eval metrics since they performed best in our metric analysis. The results highlight the notable recent progress in human-computer chat, as the vast majority of chatbot turns are interpretable, relevant responses to the dialogue context. Less than 1% of re-11See Appendix F for detailed cost results. sponses have interpretability issues, and Blender2 and BART-FiD-RAG each achieve a relevant response rate of nearly 90%. Blender2 specifically is also able to incorporate factual knowledge into about 20% of its responses while hallucinating factual information at a remarkably low rate, less than 1%. Furthermore, the chatbots almost never produce responses with offensive language.12 The chatbots also show a high rate of emotional understanding, with 40% of their responses containing emotionally-appropriate reactions to the user. Despite these strengths, our results also show several clear directions for improvement. Commonsense violations are present in about 15-20% of the bots' responses. Consistency issues are prevalent across all bots: self-contradictions, partner contradictions, and redundancies appear in about 5% of the bots' responses overall. Also, all chatbots have a substantial rate of violating natural dialogue structure: about 10% of responses are judged as ignoring the user, and depending on the chatbot, ![8_image_0.png](8_image_0.png) around 10-20% of responses include irrelevant contributions to the dialogue. Additionally, 5-15% of the chatbots' responses show a lack of empathy or other emotional misunderstandings. The reality of these observed rates of problematic behaviors is that, in most 30-turn conversations with these chatbots, a human interactor is likely to experience several issues that impact conversation quality. ![8_image_1.png](8_image_1.png) Figure 5: Proportions of turns expressing desirable behaviors, with 95% Wilson score confidence intervals. ![8_image_2.png](8_image_2.png) Figure 6: Average Dialogue Likert ratings of the conversations, with 95% Student's t confidence intervals. ![8_image_3.png](8_image_3.png) ![8_image_4.png](8_image_4.png) ## 9 Conclusion As illustrated here, dialogue quality is a complex construct with many dimensions. Depending on the approach, dialogue systems can have markedly different weaknesses among these quality dimensions. Our research highlights several outstanding challenges, especially regarding the relevance, consistency, common-sensibility, and emotional understanding of chat model responses. Our analyses not only demonstrate that these four dimensions have a high impact on conversation quality, but also that current chatbots have substantial response error rates in these areas. To efficiently address the challenges facing opendomain dialogue models, we need a reliable, dimensional evaluation method; however, our results show that popular evaluations such as dialoguelevel Likert and comparative methods may not be suitable. The presented ABC-Eval serves as a promising alternative in this direction. Although the popular dialogue-level likert evaluation method may be the most cost-effective and robust method for measuring overall dialogue quality, we recommend that researchers additionally use the final set of ABC-Eval metrics, or a subset relevant to their scientific goals, to evaluate specific strengths and weaknesses of new chat models. Overall, we hope future work can use insights from our study to make better-informed decisions about which evaluation method to use, and to tackle the challenges facing current chatbots. ## 10 Limitations There are several characteristics of the presented analyses that limit the scope of conclusions that can be drawn. We discuss how each of these limitations affect the takeaways of our results below. Number of Chatbots The generalizability of our metric analysis results (Section 7) is constrained by the fact that we were only able to include conversations from 4 chatbots in our analyses. We did our best to choose chatbots representative of the field and seem to have selected a fairly diverse group of models (Section 8). However, it is possible that not all results we found in our metric analyses will generalize when evaluating other chat models. One possible example is the number of partner contradictions we observed among our 4 chatbots (Figure 4), which may be similar by coincidence. If other chatbot models indeed differ more substantially in partner contradiction rates, our sensitivity metric analysis may have underestimated the sensitivity of our partner contradiction metric (Section 7.3). In general, including a larger number of chatbots in a metric analysis will improve the chance that its results will apply to new chatbot models. Future work that performs metric analyses like those we presented, but with different chatbots than the 4 selected in this work, would aid further analysis of our results' generalizability. Use of Surgers as Evaluators We perform our analyses using only a single evaluator group (Surgers). This choice of evaluator group does not harm the replicability of our methods, as other researchers have access to use of SurgeHQ or similar third-party annotation companies. However, several other evaluator groups are more popularly used for chat model evaluation, such as university students and Amazon Mechanical Turkers (MTurkers). We attempted to carry out our study with three evaluator groups (see Appendix E for details), but were unable to proceed with student and MTurker evaluator groups due to time constraints. Consequently, it is unclear to what extent our metric analysis results will generalize to other choices of evaluator. Number of Collected Conversations As with any study involving a sampling procedure, resource constraints limit the number of collected samples, which in turn limits the statistical power of the study's analyses. Our study included 400 conversations, which provided more than adequate statistical power for most of our analyses. For example, our investigation of each metric's predictive validity (Section 7.2) relied on a simple linear regression analyses. At a significance level of α=0.05, our 400 conversation samples would yield a statistical power of 1-β=0.80 to detect effect sizes of f 2=0.142 by F-test for each metric's regression. However, our analyses with the weakest statistical power are our dialogue-level analyses that compare bots with only 100 samples per bot. At 100 samples per bot, and assuming a standard deviation of 1.0 Likert points,13 a two-tailed t-test of mean Dialogue Likert rating would have a statistical power of 1-β=0.80 to detect differences of an effect size of Cohen's d=0.40. This is still a reasonable amount of statistical power, but leaves room for our study to produce inconclusive results when the true differences between chatbots are small. ## 11 Ethics Statement The presented work aims towards improving the scientific methodology of chat model evaluation. To this end, we present a battery of analyses comparing several aspects of metric validity for four different evaluation methods (Section 7). Our results allow other researchers in the field to make betterinformed decisions regarding appropriate evaluation methodology in human-computer chat. To ensure replicability of our methods we publicly release the annotation software and chatbot implementations used to collect our conversation and evaluation data. Additionally, we provide full transparency in our analyses by releasing the code for all our presented analyses. Finally, to aid future research efforts in human-computer chat modelling and evaluation, we release an anonymized version of our conversation and evaluation data. One ethical consideration involved in our work involved managing human workers in our data collection processes. All worker participation in our study was voluntary and involved zero subjective screening processes, with a complete description of worker tasks, workload, and timeframe provided before work was assigned. Workers could opt out of our study at any time for any reason. As compensation for work completed, we targeted a compensation rate of $10/hour for student14 and Amazon Mechanical Turk workers, and a rate of $20/hour for Surgers. We compensated on a pertask-completed basis to ensure timely completion of work, but verified that target hourly rates were reasonably approximated throughout the course of the study by measuring workers' median task completion times (see Appendix F for details). These measures ensured that all human work in our study 13Smith et al. (2022) reports standard deviations of Likert metrics between 0.8 and 1.3 14Students' compensation is given as an Amazon Gift Card for convenience; students are informed of this prior to any work being completed was fair, transparent, and mutually-beneficial. Other ethical considerations arise in our study's conversation collection. Unlike the collection of evaluation or annotation data, collecting interactive conversation data from human-computer interaction poses a small but meaningful risk that sensitive, damaging, or personally identifying information could get collected. We mitigated this risk in three ways. First, students were notified in multiple email communications and before each conversation that their conversations with our chatbots would be publicly released. Included in these notices was the instruction to refrain from releasing any personally identifiable or damaging information. Our instructions suggest that students fabricate personal information at any time during the conversations if it would make them feel more comfortable. Second, we hand-checked all 400 conversations to ensure the non-presence of any sensitive information. Third, we anonymize all data before public release. Our study's collection and analysis of conversation data did not investigate interactors as human subjects, and we did not seek institutional review board approval. Finally, there is a concern in our study about the potential of the chatbots to respond to student interactors with toxic, insensitive, or vulgar language. The data-driven nature of some of our evaluated chat models means the chatbots are prone to reflecting any biases, toxicity, and vulgarity present in the training data (see Dinan et al. (2022) for a quantitative analysis). A high rate of antisocial behaviors among our evaluated models could potentially make human interactors' experience talking with the bots quite uncomfortable, and would poorly reflect on the research field's potential for social good. To mitigate this risk, the authors extensively hand-tested all evaluated chat models, as well as conducting a pilot evaluation among the authors' lab group. As confirmed further in our results (Section 8), our chatbots exhibited negligible rates of antisocial behavior. ## 12 Acknowledgements Thank you to Bradley Webb, Scott Heiner, and the rest of the SurgeHQ team for their guidance in running our annotation projects on their platform. We are also grateful to our colleagues at Emory for their participation in piloting the bots and refining the annotation interfaces. Lastly, a thank you to our reviewers for their helpful feedback. ## References Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. 2020. Towards a Human-like OpenDomain Chatbot. ArXiv:2001.09977 [cs, stat]. Hedin Beattie, Lanier Watkins, William H. Robinson, Aviel Rubin, and Shari Watkins. 2022. Measuring and Mitigating Bias in AI-Chatbots. In *2022 IEEE International Conference on Assured Autonomy (ICAA)*, pages 117–123. Xiuyi Chen, Fandong Meng, Peng Li, Feilong Chen, Shuang Xu, Bo Xu, and Jie Zhou. 2020. Bridging the Gap between Prior and Posterior Knowledge Selection for Knowledge-Grounded Dialogue Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3426–3437, Online. Association for Computational Linguistics. Jan Deriu, Don Tuggener, Pius von Däniken, Jon Ander Campos, Alvaro Rodrigo, Thiziri Belkacem, Aitor Soroa, Eneko Agirre, and Mark Cieliebak. 2020. Spot The Bot: A Robust and Efficient Framework for the Evaluation of Conversational Dialogue Systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3971–3984, Online. Association for Computational Linguistics. Jan Deriu, Don Tuggener, Pius Von Däniken, and Mark Cieliebak. 2022. Probing the Robustness of Trained Metrics for Conversational Dialogue Systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 750–761, Dublin, Ireland. Association for Computational Linguistics. Emily Dinan, Gavin Abercrombie, A Bergman, Shannon L Spruit, Dirk Hovy, Y-Lan Boureau, and Verena Rieser. 2022. Safetykit: First aid for measuring safety in open-domain conversational systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4113–4133. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-Powered Conversational Agents. In *Proceedings of the International Conference on Learning Representations*. Sarah E. Finch and Jinho D. Choi. 2020. Towards Unified Dialogue System Evaluation: A Comprehensive Analysis of Current Evaluation Protocols. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 236–245, 1st virtual meeting. Association for Computational Linguistics. Sarah E. Finch, James D. Finch, Ali Ahmadvand, Ingyu, Choi, Xiangjue Dong, Ruixiang Qi, Harshita Sahijwani, Sergey Volokhin, Zihan Wang, Zihao Wang, and Jinho D. Choi. 2020. Emora: An Inquisitive Social Chatbot Who Cares For You. Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tür. 2019. Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations. In *Interspeech 2019*, pages 1891–1895. ISCA. Prakhar Gupta, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022. DialFact: A Benchmark for Fact-Checking in Dialogue. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3785–3801, Dublin, Ireland. Association for Computational Linguistics. Ryuichiro Higashinaka, Masahiro Araki, Hiroshi Tsukahara, and Masahiro Mizukami. 2021. Integrated taxonomy of errors in chat-oriented dialogue systems. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 89–98, Singapore and Online. Association for Computational Linguistics. Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. Q2: Evaluating Factual Consistency in KnowledgeGrounded Dialogues via Question Generation and Question Answering. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 7856–7870, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Tianbo Ji, Yvette Graham, Gareth Jones, Chenyang Lyu, and Qun Liu. 2022. Achieving Reliable Human Assessment of Open-Domain Dialogue Systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6416–6437, Dublin, Ireland. Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Chandra Khatri, Behnam Hedayatnia, Anu Venkatesh, Jeff Nunn, Yi Pan, Qing Liu, Han Song, Anna Gottardi, Sanjeev Kwatra, Sanju Pancholi, Ming Cheng, Qinglang Chen, Lauren Stubel, Karthik Gopalakrishnan, Kate Bland, Raefer Gabriel, Arindam Mandal, Dilek Hakkani-Tur, Gene Hwang, Nate Michel, Eric King, and Rohit Prasad. 2018. Advancing the State of the Art in Open Domain Dialog Systems through the Alexa Prize. Consistency in Dialogues through Pragmatic SelfConsciousness. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 904–916, Online. Association for Computational Linguistics. Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. 2021. Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2227–2240, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-Augmented Dialogue Generation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 8460–8478, Dublin, Ireland. Association for Computational Linguistics. Seolhwa Lee, Heuiseok Lim, and João Sedoc. 2020. An Evaluation Protocol for Generative Conversational Systems. ArXiv:2010.12741 [cs]. Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, and Jason Weston. 2020. Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4715– 4728, Online. Association for Computational Linguistics. Margaret Li, Jason Weston, and Stephen Roller. 2019a. ACUTE-EVAL: Improved Dialogue Evaluation with Optimized Questions and Multi-turn Comparisons. Zekang Li, Cheng Niu, Fandong Meng, Yang Feng, Qian Li, and Jie Zhou. 2019b. Incremental Transformer with Deliberation Decoder for Document Grounded Conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 12–21, Florence, Italy. Association for Computational Linguistics. Zekang Li, Jinchao Zhang, Zhengcong Fei, Yang Feng, and Jie Zhou. 2021. Addressing Inquiries about History: An Efficient and Practical Framework for Evaluating Open-domain Chatbot Consistency. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 1057–1067, Online. Association for Computational Linguistics. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132, Austin, Texas. Association for Computational Linguistics. Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. 2020. Will I Sound Like Me? Improving Persona Frederic M Lord and Melvin R Novick. 2008. *Statistical* theories of mental test scores. IAP. Navonil Majumder, Pengfei Hong, Shanshan Peng, Jiankun Lu, Deepanway Ghosal, Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020. MIME: MIMicking Emotions for Empathetic Response Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8968–8979, Online. Association for Computational Linguistics. Shikib Mehri and Maxine Eskenazi. 2020a. Unsupervised Evaluation of Interactive Dialog with DialoGPT. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 225–235, 1st virtual meeting. Association for Computational Linguistics. Shikib Mehri and Maxine Eskenazi. 2020b. USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 681–707, Online. Association for Computational Linguistics. Chuan Meng, Pengjie Ren, Zhumin Chen, Weiwei Sun, Zhaochun Ren, Zhaopeng Tu, and Maarten de Rijke. 2020. DukeNet: A Dual Knowledge Interaction Network for Knowledge-Grounded Conversation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, pages 1151–1160, New York, NY, USA. Association for Computing Machinery. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A Dialog Research Software Platform. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84, Copenhagen, Denmark. Association for Computational Linguistics. Yixin Nie, Mary Williamson, Mohit Bansal, Douwe Kiela, and Jason Weston. 2021. I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1699–1713, Online. Association for Computational Linguistics. Timo Partala and Veikko Surakka. 2004. The effects of affective interventions in human–computer interaction. *Interacting with Computers*, 16(2):295–309. Vitou Phy, Yang Zhao, and Akiko Aizawa. 2020. Deconstruct to Reconstruct a Configurable Evaluation Metric for Open-Domain Dialogue Systems. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 4164–4178, Barcelona, Spain (Online). International Committee on Computational Linguistics. Helmut Prendinger and Mitsuru Ishizuka. 2005. The Empathic Companion: A Character-Based Interface That Addresses Users' Affective States. *Applied Artificial Intelligence*, 19:267–285. Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, Eric King, Kate Bland, Amanda Wartick, Yi Pan, Han Song, Sk Jayadevan, Gene Hwang, and Art Pettigrue. 2018. Conversational AI: The Science Behind the Alexa Prize. Hannah Rashkin, David Reitter, Gaurav Singh Tomar, and Dipanjan Das. 2021. Increasing Faithfulness in Knowledge-Grounded Dialogue with Controllable Features. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 704–718, Online. Association for Computational Linguistics. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards Empathetic Opendomain Conversation Models: A New Benchmark and Dataset. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5370–5381, Florence, Italy. Association for Computational Linguistics. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for Building an Open-Domain Chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325, Online. Association for Computational Linguistics. Sahand Sabour, Chujie Zheng, and Minlie Huang. 2022. CEM: Commonsense-Aware Empathetic Response Generation. *Proceedings of the AAAI Conference on* Artificial Intelligence, 36(10):11229–11237. Number: 10. Ananya B. Sai, Mithun Das Gupta, Mitesh M. Khapra, and Mukundhan Srinivasan. 2019. Re-Evaluating ADEM: A Deeper Look at Scoring Dialogue Responses. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):6220–6227. Number: 01. Manuela Sanguinetti, Alessandro Mazzei, Viviana Patti, Marco Scalerandi, Dario Mana, and Rossana Simeoni. 2020. Annotating Errors and Emotions in Human-Chatbot Interactions in Italian. In *Proceedings of the 14th Linguistic Annotation Workshop*, pages 148–159, Barcelona, Spain. Association for Computational Linguistics. Sashank Santhanam, Behnam Hedayatnia, Spandana Gella, Aishwarya Padmakumar, Seokhwan Kim, Yang Liu, and Dilek Hakkani-Tur. 2021. Rome was built in 1776: A Case Study on Factual Correctness in Knowledge-Grounded Response Generation. ArXiv:2110.05456 [cs]. João Sedoc and Lyle Ungar. 2020. Item Response Theory for Efficient Human Evaluation of Chatbots. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 21–33, Online. Association for Computational Linguistics. Abigail See and Christopher Manning. 2021. Understanding and predicting user dissatisfaction in a neural generative chatbot. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 1–12, Singapore and Online. Association for Computational Linguistics. Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval Augmentation Reduces Hallucination in Conversation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3784–3803, Punta Cana, Dominican Republic. Association for Computational Linguistics. Eric Smith, Orion Hsu, Rebecca Qian, Stephen Roller, Y-Lan Boureau, and Jason Weston. 2022. Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents. In *Proceedings of the 4th* Workshop on NLP for Conversational AI, pages 77– 97, Dublin, Ireland. Association for Computational Linguistics. Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 2021–2030, Online. Association for Computational Linguistics. Haoyu Song, Yan Wang, Kaiyan Zhang, Wei-Nan Zhang, and Ting Liu. 2021. BoB: BERT Over BERT for Training Persona-based Dialogue Models from Limited Personalized Data. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167–177, Online. Association for Computational Linguistics. Haoyu Song, Yan Wang, Wei-Nan Zhang, Xiaojiang Liu, and Ting Liu. 2020. Generate, Delete and Rewrite: A Three-Stage Framework for Improving Persona Consistency of Dialogue Generation. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5821–5831, Online. Association for Computational Linguistics. Zhenqiao Song, Xiaoqing Zheng, Lu Liu, Mu Xu, and Xuanjing Huang. 2019. Generating Responses with a Specific Emotion in Dialog. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 3685–3695, Florence, Italy. Association for Computational Linguistics. Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, and Minlie Huang. 2022. On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3906–3923, Dublin, Ireland. Association for Computational Linguistics. Emiel van Miltenburg, Miruna Clinciu, Ondˇrej Dušek, Dimitra Gkatzia, Stephanie Inglis, Leo Leppänen, Saad Mahamood, Emma Manning, Stephanie Schoch, Craig Thomson, and Luou Wen. 2021. Underreporting of errors in NLG output, and what to do about it. In *Proceedings of the 14th International Conference* on Natural Language Generation, pages 140–153, Aberdeen, Scotland, UK. Association for Computational Linguistics. Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue Natural Language Inference. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3731–3741, Florence, Italy. Association for Computational Linguistics. Jason Weston and Kurt Shuster. 2021. Blender Bot 2.0: An open source chatbot that builds long-term memory and searches the internet. Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2021. Bot-Adversarial Dialogue for Safe Conversational Agents. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 2950–2968, Online. Association for Computational Linguistics. Jing Xu, Arthur Szlam, and Jason Weston. 2022. Beyond Goldfish Memory: Long-Term Open-Domain Conversation. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5180–5197, Dublin, Ireland. Association for Computational Linguistics. Hao-Tong Ye, Kai-Lin Lo, Shang-Yu Su, and Yun-Nung Chen. 2020. Knowledge-Grounded Response Generation with Deep Attentional Latent-Variable Model. Computer Speech & Language, 63:101069. Haolan Zhan, Lei Shen, Hongshen Chen, and Hainan Zhang. 2021. CoLV: A Collaborative Latent Variable Model for Knowledge-Grounded Dialogue Generation. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 2250–2261, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large-Scale Generative Pre-training for Conversational Response Generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations, pages 270–278, Online. Association for Computational Linguistics. Peixiang Zhong, Di Wang, Pengfei Li, Chen Zhang, Hao Wang, and Chunyan Miao. 2021. CARE: Commonsense-Aware Emotional Response Generation with Latent Concepts. *Proceedings of the AAAI* Conference on Artificial Intelligence, 35(16):14577– 14585. Number: 16. Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, and Dilek Hakkani-Tur. 2021. CommonsenseFocused Dialogues for Response Generation: An Empirical Study. In *Proceedings of the 22nd Annual* Meeting of the Special Interest Group on Discourse and Dialogue, pages 121–132, Singapore and Online. Association for Computational Linguistics. Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, and Dilek Hakkani-Tur. 2022. Think Before You Speak: Explicitly Generating Implicit Commonsense Knowledge for Response Generation. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1237–1252, Dublin, Ireland. Association for Computational Linguistics. ## A Chatbot Selection Details This appendix discusses the details of our literature review for each of the four chosen research themes. General Our work focuses primarily on opendomain chat. Large-scale language modeling using dialogue-like pretraining data can produce surprisingly human-like conversation on virtually any popular conversation topic (Roller et al., 2021). Of these approaches, we chose Blender2 (Weston and Shuster, 2021), which reportedly outperformed its previous iteration Blender (Roller et al., 2021) who had surpassed DialoGPT (Zhang et al., 2020) and Meena (Adiwardana et al., 2020). There had been also several chatbots produced by the Amazon Alexa Prize Socialbot Grand Challenge (Ram et al., 2018) focusing on general, opendomain chat, most of which incorporate rule-based methods to ensure interesting and consistent responses (Khatri et al., 2018). Since these chatbots performed well in practice but lack comparison to SOTA data-driven models, we selected the bot with the all-time highest final score, Emora15 (Finch et al., 2020), as one of our candidates. Knowledge Grounding chat with supplementary knowledge resources is a common way to improve engagingness and control the topic of conversation (Li et al., 2019b; Ye et al., 2020). ColV (Zhan et al., 2021) achieved SOTA performance in knowledgegrounded dialogue response generation on the popular WoW dataset (Dinan et al., 2019); however, no implementation was publicly available. DukeNet (Meng et al., 2020) and PIPM (Chen et al., 2020) report next-best performance in this task. DukeNet's implementation was available while PIPM's was not, therefore we selected DukeNet as a candidate. BART-FiD-RAG also reported compelling performance for knowledge-grounded chat (Shuster et al., 2021), but did not compare to other SOTA models we identified. Since BART-FiD-RAG's inclusion in ParlAI provided easy replication, we included it in our bot pilot. Consistency Improving consistency of chatbot responses is noted as a challenge and addressed in several works (Welleck et al., 2019; Nie et al., 2021; Li et al., 2021). DECODE (Nie et al., 2021) reported SOTA performance for general inconsistency avoidance, improving upon an approach that used unlikelihood training with dialogue natural 15https://github.com/emora-chat/emora_ap3_parlai language inference data (Li et al., 2020). Note that there were several works focusing specifically on persona consistency (Song et al., 2020; Kim et al., 2020; Song et al., 2021), which we did not consider due to their narrower contradiction scope. Empathy Several works demonstrated the importance of emotional understanding in chat (Partala and Surakka, 2004; Prendinger and Ishizuka, 2005; Kim et al., 2021; Sabour et al., 2022). To provide contrast with our knowledge-grounded candidates, we selected CEM (Sabour et al., 2022), which reported SOTA results in empathetic response generation. Many related works investigated controllable emotional response generation (Song et al., 2019; Zhong et al., 2021), but we did not consider models requiring an emotion label as input. ## B Chatbot Implementation Details For each selected candidate model, a brief overview of the implementation details required to use them as interactive models in this work is below: Emora We implement a ParlAI agent using the interactive chatting mode provided for the Emora system (Finch et al., 2020). BART-FiD-RAG An interactive chatting mode for BART-FiDRAG is provided through ParlAI. We use the default arguments, except for compressed-indexer-nprobe and beam-context-block-ngram which we set to 128 and 3, respectively, following internal testing by the authors before the pilot. Blender2 An interactive chatting mode for Blender216 is provided through ParlAI; however, the internet-search augmented functionality as described in Komeili et al. (2022) is not provided. Although there is an available alternative internet search server17 that is compatible with Blender2, it increased the response latency of Blender2 such that it no longer satisfied our response latency requirement. As a result, we use the 3B parameter version of Blender2 that does not include the internet search functionality. Blender-Decode We implement a custom agent in the Parlai framework following the bestperforming setup of Nie et al. (2021) of finetuning a consistency classification model and using it to 16https://parl.ai/projects/blenderbot2/ 17https://github.com/JulesGM/ParlAI_SearchEngine rerank generated responses from Blender (Roller et al., 2021) such that the least contradictory response is selected. DukeNet The published code for DukeNet is missing an interactive chatting mode, although a trained model on the multi-topic Wizard of Wikipedia (WoW) (Dinan et al., 2019) is provided. We modified the published code to support an interactive chatting mode. However, one major difference is that the relevant knowledge documents need to be retrieved in interactive chatting mode, whereas they are predetermined in the training and evaluation environments. To remedy this, we use a trained Dense Passage Retrieval model (Karpukhin et al., 2020) provided through Sentence-Transformers18 to select knowledge at each dialogue turn. We implement a ParlAI agent using the available DukeNet model and our custom interactive chatting mode. CEM The published code for CEM is missing both an interactive chatting mode and a trained model. We trained a CEM model using the default arguments from the repository and modified the published code to support an interactive chatting mode. We implement a ParlAI agent using this trained model and our custom interactive chatting mode. | User | Hi! | | |--------|----------------------------------------------------------------------------------------|------------------------| | Bot | What is happening? | | | User | I am just getting my day started. | Fin | | ished breakfast and booted up my computer to begin working. What about you? | | | | Bot | That is great! | I wish you the best of | | luck! | | | | User | Yeah, I think it will be a good day and I am excited for it. What are you doing today? | | | Bot | I am sure you will be fine! | | | User | I'm not too worried about it. | | | Bot | I am so happy for you! | | | User | So, what do you do for work? | | | Bot | I am sure you will do great! | | Table 6: Chatbot pilot conversation: CEM | User | Hi! | | | | | | |----------------------------------------------|-----------------------------------------------------------------------------------------------------------------|-------|-------|--------|-----|---------| | Bot | Hello, how are you? | | | | | | | User | I am pretty good what about you | | | | | | | Bot | I'm | doing | well, | thanks | for | asking. | | How's the weather in your neck of the woods? | | | | | | | | User | pretty warm | | | | | | | Bot | That's good. Do you have any plans for the weekend? I'm thinking of going to the beach. | | | | | | | User | cool I have a gathering with my church friends | | | | | | | Bot | That sounds like a lot of fun. I haven't been to church in a long time. How long have you been going to church? | | | | | | | User | many years since I was very young | | | | | | | Bot | That's cool. I don't think I've ever been to a church service in my life. | | | | | | | User | what do you do for fun | | | | | | | Bot | I like to watch movies and play video games. What do you like to do in your free time? | | | | | | ## C Bot Pilot Examples Tables 6 and 7 show two of our chatbot pilot (Section A) conversations, one from CEM (Sabour et al., 2022) and one from Blender2 (Weston and Shuster, 2021), that exemplify the difference between single-turn and multi-turn dialogue response generation models. The CEM model is trained to give an empathetic response to a dialogue context, and achieves good performance towards this goal. However, as shown in the example, this response policy does not translate well for multi-turn interaction with a human. By contrast, Blender2 is trained and evaluated specifically to achieve multi-turn dialogue. ## D Pilots And Development The final ABC-Eval label set, annotation procedure, and software application are created using an iterative process of development and piloting. 14 students are invited to serve as evaluators for piloting the evaluation. To avoid overfitting the evaluation design, our pilots evaluated conversations collected between Blender (Roller et al., 2021) and one of the authors, and a new set of conversations was used for each pilot round. We ran 4 pilot rounds, making revisions after manually reviewing each round's annotation. ![17_image_0.png](17_image_0.png) Table 8 presents a summary of the major changes made in each pilot round and IAA metrics. It is important to note that each annotator performed all of the annotation tasks in one sitting in sequence for each pilot. These piloting rounds are not necessarily directly comparable to one another when taken as a whole, since the annotator groups and dialogues to be annotated varied between each round. Instead, we will discuss below the major takeaways afforded by different splits of the pilots that informed the final design of ABC-Eval. Subtask Formulation The decision to format ABC-Eval into several small annotation subtasks, each with a tailored subset of the behavior labels, was made from the results of Pilot 1. In Pilot 1, we divided the initial set of annotation labels into 3 annotation subtasks each with 4-9 labels: errors, information usages (commonsense, world knowledge, etc.), and utterance types (request, presentation, etc.). Each annotator performed the annotation tasks in one sitting in sequence. The overall interannotator agreement was quite low (α = 0.18), which was concerning. Based on ad-hoc feedback from the pilot annotators, the consensus was that each subtask demanded an unreasonable cognitive load on annotators due to the large number of labels to keep track of. For Pilot 2 we increased the number of annotation tasks such that each covered a small and related scope of behavior labels, with 1-4 labels per task. Table 8 shows the boost to interannotator agreement between Pilots 1 and 2. However, this agreement increase could have resulted from an increase in the quality of the annotators (as Pilot 2 was composed primarily of annotators with a graduate-level education whereas Pilot 1 was more evenly split between annotators with an undergraduate-level education and graduate-level education). To remove this confound, we calculated the agreement in Pilots 1 and 2 when only considering graduate-level annotators. Although it was less dramatic, there remained an increase in agreement from 0.39 to 0.50, which encouraged the decision to maintain the smaller annotation subtasks. Dividing the annotation into tailored subtasks seemed to reduce the cognitive load on annotators, thus allowing them to perform more accurate annotations per task. Training and Screening Manual analysis of the pilot annotations from Pilots 1 and 2 revealed some recurring annotation mistakes, arising from misunderstandings of the guidelines for the tasks. In an attempt to correct such misunderstandings, a training procedure was introduced for each task. Each round of training consists of 1 curated conversation with ground-truth labels and explanations that are shown as feedback to the annotator after they complete the training round. We used the results of Pilots 1 and 2 in order to develop these curated conversations as follows: 1. **Label Specifications:** We constructed a label specification that consisted of a comprehensive enumeration of positive and negative cases of the label with the goal of defining a decision boundary that the annotators should strive towards. We especially focused on the utterances for which several of the annotators failed to produce labels that matched the ground truth annotations we had defined for each of the Pilots. 2. **Training Conversation Selection:** We selected 3 conversations between Blenderbot and a human (from a collection within our lab) for each label to be used as training conversations for it. This selection was manually done by ranking the conversations on their coverage of the label specification. 3. **Training Conversation Modification:** We heavily revised the selected conversations by hand to ensure that all of the cases identified in the specification were adequately represented, most often by inserting new utterances that corresponded to any underrepresented cases. To evaluate the utility of this training process, a third pilot was conducted using 4 undergraduates. ![18_image_0.png](18_image_0.png) We observed a general upwards trend in annotation performance between the training rounds for the annotators, suggesting that the training was aiding in the annotation accuracy for the annotators. The final agreements were 0.43 and 0.45 between all annotators and annotators who passed the training, respectively, on the annotated conversations. Due to the small nature of this pilot, we are unable to conclude whether this difference is meaningful. However, ad-hoc feedback from the annotators suggested that the training rounds were useful towards their understanding of the tasks, although the amount of training did increase the overall workload of participation. Accordingly, the decision was made to treat each subtask independently, rather than require all subtasks to be completed for one dialogue in a single sitting for each annotator. General Revisions Throughout each of these pilot rounds the annotation instructions, examples, and training rounds were updated based on manual review of the annotations in an attempt to correct any unclear or misleading information. ## E Evaluator Training And Screening We attempted to use three different groups of evaluators for our full evaluation study: Students Undergraduate students were recruited from the authors' university via word-of-mouth and email advertisements sent to computer science, psychology, and quantitative methods departmental mailing lists.19 MTurkers Our 20 evaluation tasks were posted to the Amazon Mechanical Turk crowdsourcing platform.20 Surgers Our 20 evaluation tasks were posted on SurgeHQ's annotation platform21 to be completed by dedicated workers with experience in NLP annotation. A group of 125 Surgers were qualified to participate in our tasks, chosen by a SurgeHQ employee on the basis of high annotation performance on past projects. All three groups were compensated per task per annotated conversation, at an estimated rate of $10/hr for Students and MTurkers, and $20/hr for Surgers. To check the viability of each worker group to produce evaluation data for our full study, we released a random 5 conversations out of our set of 400, to be fully evaluated by each worker group in each of our 8 ABC-Eval tasks. After a two week period, Surgers were the only worker group that were able to fully evaluate the 5 conversations in all 8 ABC-Eval tasks. This was due to an overall lack of participation from the Student group, and due to low training pass rates from the MTurk group (see Figure 9 for quantitative outcomes). Although worker group differences in work rate and training performance might be explained by the difference in compensation structure, we decided to proceed with the Surgers group only for our full study to collect our evaluation data in a timely manner. ## F Collection Cost Compensation rates are based on per-task completion times from an internal pilot run. The rates per task paid to Surgers are shown in Table 9. We also present the real and theoretical costs for collecting each method included in our evaluation data (Table 10). As expected, turn-level annotation tasks are an order of magnitude more expensive to collect than dialogue-level tasks. Notably, the final set of ABC-Eval labels (Table 4) are, on average, less expensive to collect than turn-level Likert labels. 21https://www.surgehq.ai/ | Task | Payment | Task | Payment | |-----------------------|------------------------|-------------|-----------| | Uninterpretable | $0.63 | Antisocial | $0.44 | | Preference Info | $0.70 | Empathetic | $1.15 | | Life Info | Lack of Empathy | | | | Commonsense | $0.92 | Fact Usage | $1.96 | | Contradiction | Fact Contradiction | | | | Self Contradiction | Ignore | $1.87 | | | Partner Contradiction | Irrelevant | | | | $0.87 | | | | | Redundant | Follow-up Topic Switch | | | | Dialogue Likert | $0.60 | Turn Likert | $0.70 | | Comparative | $1.43 | | | Table 9: Payment per annotation task in USD. The payment for Turn Likert is per label whereas the indicated payment for Dialogue Likert and Comparative covers all labels, due to how the annotation tasks were constructed (Section 6). | Metric | TI | TP | EC | OC | |---------------------|-------|-------|---------|---------| | Dialogue Collection | 8.08 | 7.43 | 1077.14 | 333.33 | | Dialogue Likert | 2.81 | 21.37 | 374.36 | 240.00 | | Comparative | 4.35 | 13.81 | 289.68 | 286.67 | | Turn Likert | 19.94 | 3.01 | 2658.40 | 2240.00 | | ABC-Evalall | 25.60 | 2.34 | 3413.58 | 3422.67 | | ABC-Evalf inal | 15.17 | 3.95 | 2022.98 | - | ## G Evaluation Interfaces Examples of the annotation interfaces for each annotation task of ABC-Eval are provided in Figures 10 - 17, and an example for the conversation collection interface is provided in Figure 21. Examples of the annotation interfaces for Dialogue Likert, Turn Likert, and Comparative evaluations are provided in Figures 18, 19, and 20, respectively. The definitions that were shown to the annotators in the interface for each of the 8 dimensions of Dialogue Likert, Turn Likert, and Comparative are taken verbatim from Finch and Choi (2020). ![20_image_0.png](20_image_0.png) ![20_image_1.png](20_image_1.png) ![21_image_0.png](21_image_0.png) ![22_image_0.png](22_image_0.png) ![23_image_0.png](23_image_0.png) ![23_image_1.png](23_image_1.png) ![24_image_0.png](24_image_0.png) ![24_image_1.png](24_image_1.png) ![24_image_2.png](24_image_2.png) ![25_image_0.png](25_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 10 ✓ A2. Did you discuss any potential risks of your work? Section 11 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 3, 5, And 6 ✓ B1. Did you cite the creators of artifacts you used? Sections 2 and 5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 1 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Sections 2, 4, 5, and 11 and Appendix A, B, D, G ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 11 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Sections 3 and 6 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Sections 3 and 6 ## C ✓ **Did You Run Computational Experiments?** Sections 3, 7, And 8 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 2 and Appendix B for models and parameters used. Computational budget (GPU hours) and computing infrastructure was not provided because experiments were of low-resource intensity (GPU only required for decoding steps of 300 conversations and the bulk of computational experiments had quick runtimes and did not require GPU infrastructure). The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Computational experiments were mainly on evaluation methodologies which did not require hyperparameter tuning; Appendix B provides hyperparameter specifications for dialogue models used in this work ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sections 7 and 8 ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? All code, software, data, and procedural descriptions for our study will be released at the time of publication. Our list of used packages is well-documented in the files, including settings used. Additionally, Appendix B reports model implementation details with parameter settings. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Sections 2, 3, And 6 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 3, 5 and Appendix D, E, G ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 3, 6 and 11 and Appendix E, F ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 11 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Sec. 11 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3 and 6
cui-etal-2023-decoder
Decoder Tuning: Efficient Language Understanding as Decoding
https://aclanthology.org/2023.acl-long.840
With the evergrowing sizes of pre-trained models (PTMs), it has been an emerging practice to only provide the inference APIs for users, namely model-as-a-service (MaaS) setting. To adapt PTMs with model parameters frozen, most current approaches focus on the input side, seeking powerful prompts to stimulate models for correct answers. However, we argue that input-side adaptation could be arduous due to the lack of gradient signals and they usually require thousands of API queries, resulting in high computation and time costs. Specifically, DecT first extracts prompt-stimulated output scores for initial predictions. On top of that, we train an additional decoder network on the output representations to incorporate posterior data knowledge. By gradient-based optimization, DecT can be trained within several seconds and requires only one PTM query per sample. Empirically, we conduct extensive natural language understanding experiments and show that DecT significantly outperforms state-of-the-art algorithms with a 200x speed-up. Our code is available at \url{https://github.com/thunlp/DecT}.
# Decoder Tuning: Efficient Language Understanding As Decoding Ganqu Cui1, Wentao Li1, Ning Ding1, Longtao Huang2, Zhiyuan Liu1,3∗**, Maosong Sun**1,3∗ 1 NLP Group, DCST, IAI, BNRIST, Tsinghua University, Beijing 2 Alibaba Group 3 IICTUS, Shanghai cgq22@mails.tsinghua.edu.cn ## Abstract ![0_Image_0.Png](0_Image_0.Png) With the evergrowing sizes of pre-trained models (PTMs), it has been an emerging practice to only provide the inference APIs for users, namely model-as-a-service (MaaS) setting. To adapt PTMs with model parameters frozen, most current approaches focus on the input side, seeking for powerful prompts to stimulate models for correct answers. However, we argue that input-side adaptation could be arduous due to the lack of gradient signals and they usually require thousands of API queries, resulting in high computation and time costs. In light of this, we present Decoder Tuning (DecT), which in contrast optimizes task-specific decoder networks on the output side. Specifically, DecT first extracts prompt-stimulated output scores for initial predictions. On top of that, we train an additional decoder network on the output representations to incorporate posterior data knowledge. By gradientbased optimization, DecT can be trained within several seconds and requires only one PTM query per sample. Empirically, we conduct extensive natural language understanding experiments and show that DecT significantly outperforms state-of-the-art algorithms with a 200× speed-up. Our codes are available at https://github.com/thunlp/DecT. ## 1 Introduction Recent advances in pre-trained models (PTMs) demonstrate the power of the "pre-trainingfine-tuning" paradigm, which empowers broad downstream NLP tasks with a single backbone model (Devlin et al., 2019; Raffel et al., 2020; Radford et al., 2019). Given the million even billionscale models, model-as-a-service (MaaS) has become an emerging practice in deploying massive PTMs, where users can only get access to model inference APIs (Brown et al., 2020; Sun et al., 2022b). Under such a scenario, PTMs' parameters ∗Corresponding Author. are frozen, and users cannot fine-tune the model on downstream tasks for adaptation. To find an alternative way, researchers have studied MaaS PTM adaptation methods extensively. Most existing approaches in this line are based on *prompts*, which modify inputs with specific patterns. By wrapping inputs into cloze-style questions or prepending inputs with a few demonstrative examples, PTMs could produce the right outputs directly and show strong "in-context" learning abilities (Petroni et al., 2019; Brown et al., 2020) without any parameter update. Besides heuristic prompt design, some recent works try to optimize the input prompts without gradients. Among them, Black-box Tuning (BBT) (Sun et al., 2022b) and BBTv2 (Sun et al., 2022a) apply evolutionary algorithm (Hansen and Ostermeier, 2001) on continuous prompt tokens, while RLPrompt (Deng et al., 2022) adopts reinforcement learning to find discrete prompt tokens. Nevertheless, gradientfree optimization is rather difficult and these input15072 side methods need to query the PTMs thousands of times for optimization, which leads to huge inference costs in terms of time and computation resources. Moreover, their final performance is not satisfying as well. Given the flaws of *input-side* adaptation, we turn to *output-side* adaptation, which builds tunable decoder networks on model outputs. Comparatively, output-side adaptation enjoys two major advantages: (1) We can directly tune decoder networks on top of model outputs with back-propagation rather than arduous alternatives. (2) We can reduce thousands of model queries to only once per sample. However, designing decoder networks is not straightforward. Past studies have shown that merely tuning an MLP or LSTM (Hochreiter and Schmidhuber, 1997) over output features cannot provide satisfying results (Sun et al., 2022a,b), leaving this path underexplored. In this work, we aim to solve the performance issue for output-side adaptation, and we argue that there are two critical reasons behind it: (1) Simply utilizing PTMs as feature extractors ignores the infilling ability of PTMs, which is a strong prior for adaptation. (2) MLP and LSTM are not proper networks especially when training data is not sufficient. Based on these findings, we present Decoder Tuning (DecT), an enhanced output-side adaptation method. Specifically, DecT has two crucial design choices to address the above issues. First, DecT queries the PTM with prompts and adopts model output scores as the initial predictions, which takes advantage of internal model knowledge. Second, on top of the output representations, we select a Prototypical Network (ProtoNet) (Snell et al., 2017) as the decoder network and train it to fit the training data, which is more suitable for few-shot learning. In this way, DecT modifies the initial model scores with subsequent training data, thus achieving better performance. Through few-shot learning experiments on ten language understanding datasets, we highlight three advantages of DecT (see Figure 1). (1) DecT achieves over 3% absolute accuracy improvement on average, greatly outperforming previous works. (2) DecT is highly efficient. Compared with major prompt engineering baselines, DecT dramatically reduces the average adaptation time from over 9,800 seconds (BBTv2) to 3 seconds. (3) DecT only requires one PTM query for each example, while other input-side optimization methods need about 104 calls. This advantage is vital when PTM calls are not for free. In addition, we conduct extensive ablation studies and validate the impact of each component of DecT. ## 2 Preliminaries Given a set of training data Dtrain = {(xi, yi)}Ni=1 and PTM M, we need to predict the label y ∈ {1*,...,K*} for sample x, where K is the number of classes. We assume that each class has the same amount of n training samples. In the MaaS setting, M is a black-box inference API with fixed parameters. Therefore, we can only query the model with input x and get corresponding outputs. To better utilize the PTMs, it has been a common practice to wrap input samples into prompts. Specifically, we enclose each input x into a template T with a [MASK] token (here we assume using a masked language model). Then, we query M with T (x) and get the final layer hidden states h at the [MASK] position and scores s = SM(T (x)) ∈ RK over label words V. Take sentiment analysis as an example, we can use T (x) = x In summary, it was [MASK]. as the template with V = {bad, great} as label words for negative and positive sentiment respectively. The output scores on these label words further correspond to the classes. ## 3 Methodology In this section, we elaborate on our proposed Decoder Tuning (DecT) method for the classification task. We start with reviewing current input-side adaptation methods, then give an overview of DecT and finally detail it step-by-step. ## 3.1 Input-Side Adaptation Previous MaaS adaptation methods seek for optimal prompts that stimulate PTMs to output correct answers1. Without loss of generality, we formulate these methods with a transformation function f(·) which pre-processes the input x. f(·) can be specialized by adding demonstrations (Brown et al., 2020), discrete prompt tokens (Deng et al., 2022) or soft ones (Sun et al., 2022a,b). Denote the final score as q(x) and probability as 1BBTv2 (Sun et al., 2022a) further optimizes prompt tokens in the intermediate layers, but we omit this here. ![2_image_0.png](2_image_0.png) P(y|x) = Softmax(q(x)), these methods define q(x) = SM(f(x)) and optimize f(·) for correct predictions. Although optimizing f(·) without model gradients is possible, we argue that it is highly burdensome. Forwarding through a large "black box" model M, it is rather challenging to find corresponding inputs for specific outputs without the guidance of gradient signals. As a result, users may get suboptimal performance with expensive query costs. We empirically validate it in experiments. ## 3.2 Overview Of Dect For more effective and efficient PTM adaptation, we turn to output-side adaptation rather than inputside. Overall, output-side adaptation can be viewed as a post-processing of model outputs which uses another function g(·) to process the model outputs, and get the final scores q(x) = g(SM(T (x))). Different from input-side ones, output-side adaptation is easy-to-optimize with gradient descent, and for each sample, we only need to query the PTM once. For DecT, as shown in Figure 2, we model the post-processing as decoding, which refers to a post-modification to the initial model predictions. Specifically, we first query the PTM with promptenclosed inputs to get model outputs, including the scores for each class and hidden states. Intuitively, output scores contain prior knowledge inside the PTM, so we retain them as part of the final scores.Then, we tune an additional decoder function on the hidden states to fit the training data and make final predictions. Next, we describe how we query the model and then specify the implemen- ## Tation Of The Score Function. 3.3 Querying With Prompts To get model outputs, we simply follow the procedure in Section 2 and query the model with manual template-wrapped inputs. We then process the scores by calibration. Calibration. As stated in Zhao et al. (2021), PTMs tend to assign higher probabilities on those frequent label words, leading to biased output scores. To eliminate the prediction bias, we further calibrate the output scores with empty input xc ="" following (Zhao et al., 2021). Querying the model with xc, we can obtain the calibaration scores sc and normalize them by sc/mean(sc). Then we calibrate s by $${\hat{\mathbf{s}}}=\mathrm{diag}(\mathbf{s}_{c}/\mathrm{mean}(\mathbf{s}_{c}))^{-1}\mathbf{s}.$$ $\left(1\right)$. After that, the calibrated scores ˆs are balanced over classes. ## 3.4 Tuning The Outputs After getting the hidden states and calibrated scores, we perform DecT outside the PTM to modify the output scores fitting the training data. Denote the final score on class k as q(*x, k*), we calculate it by the following function: $$q(x,k)=\mathrm{Dec}({\bf h},k)+\lambda\hat{\bf s}_{k},$$ $$(2)$$ where Dec(·) is a trainable decoder function, λ is a hyperparameter controlling the weight of PTM scores and ˆsk is the k-th logit in ˆs. By tuning Dec(·), the final predictions incorporate training 15074 data on top of PTM outputs, which combine both knowledge effectively. The design choice of Dec(·) is fairly flexible. In practice, we select Prototypical Networks (ProtoNet) (Snell et al., 2017) due to their simplicity and remarkable performance in few-shot learning and prompt-based tuning (Cui et al., 2022). For this, we project the hidden states with a linear layer parameterized by W and get sample representation v = Wh. (3) On prototypes, classical approaches model them as points in the embedding space, which overlook the different class characteristics. Inspired by Ding et al. (2022a), we model prototypes as hyperspheres with an additional radius parameter. Concretely, the prototype for class k contains two parameters, center position vector zk and radius scalar rk. We randomly initialize zk and initialize rk as the average distance between zk and instances in class k: ${\ r_k=\frac{1}{N_k}\sum_{i}^{y_i=k}\|\mathbf{v}_i-\mathbf{z}_k\|_2.}$ - access function - we calculate the - $$\mathrm{il\logit\;is}$$ As for the score function, we calculate the Euclidean distances between instances and prototypes. $$\mathrm{Dec}(\mathbf{h},k)=-\|\mathbf{W}\mathbf{h}-\mathbf{z}_{k}\|_{2}+r_{k}.$$ According to Eq. 2, the final logit is $$q(x,k)=-\|\mathbf{Wh}-\mathbf{z}_{k}\|_{2}+r_{k}+\lambda\hat{\mathbf{s}}_{k}.$$ From a geometric view, the score function calculates the distance from instance x to the "surface" of the prototype, where rk+λˆsk is the whole radius acting like the bias term. With the scores, we can calculate the predicted probability by the Softmax function: Then $\ P(y=k|x)=\frac{\exp(q(x,k))}{\sum_{k'=1}^K\exp(q(x,k'))},$ we can optimize W and $r_k$ by the cr. and we can optimize W and rk by the crossentropy loss $$\mathcal{L}=-\frac{1}{N}\sum_{i=1}^{N}\log P(y_{i}|x_{i}).\tag{8}$$ **Experiments** $\mathbf{M}$ In this section, we first introduce the experimental settings (Section 4.1), then discuss the results for few-shot experiments (Section 4.2), efficiency comparison (Section 4.3), and experiment results for more training data (Section 4.4). ## 4.1 Experimental Settings Datasets. We conduct experiments on four typical natural language understanding tasks. For sentiment analysis, we select SST2 (Socher et al., 2013), Yelp P. (Zhang et al., 2015) and IMDB (Maas et al., 2011). For text classification, we use AG's News, Yahoo (Zhang et al., 2015) and DBPedia (Lehmann et al., 2015). For natural language inference (NLI), we adopt RTE (Dagan et al., 2005; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018). For entity typing, we experiment on FewNERD (Ding et al., 2021b). We report dataset statistics in Appendix A.1. Splits. We randomly sample n = 1, 4, 16 data instances for each class from the training set for few-shot learning, and sample same amount data for validation. For datasets in GLUE (Wang et al., 2019) (SST2, RTE, MNLI) and SNLI, we use the original validation sets as test sets following Zhang et al. (2021). For other datasets, we evaluate on their original test sets. $$\quad(4)$$ $$({\mathfrak{H}})$$ $$(6)$$ Baselines. We compare with representative MaaS PTM adaptation methods. **Prompt** refers to directly performing zero-shot classification with template-wrapped examples. **In-context learning** (ICL) (Brown et al., 2020) further concatenates some exemplars before the test samples. BBT (Sun et al., 2022b) optimizes soft prompt tokens with an evolutionary algorithm, and **BBTv2** (Sun et al., 2022a) further inserts deep prompts to intermediate layers for better performance. **RLPrompt** (Deng et al., 2022) is another recent algorithm that optimizes discrete prompts with reinforcement learning. **PromptBoosting** (Hou et al., 2022) is a concurrent work that applies boosting algorithm for prompt ensembling. We report the details of baselines in Appendix A.2. $$\mathbf{\Pi}(T)$$ Environments. For all experiments, we use NVIDIA A100 and RTX 2080 Ti GPUs. We implement DecT with PyTorch (Paszke et al., 2019), HuggingFace Tansformers (Wolf et al., 2020), and OpenPrompt (Ding et al., 2022b). Implementation Details. For all methods, we use the same RoBERTaLARGE (Liu et al., 2019) as the backbone model. For DecT, we set the representation dimension to 128 and optimize the parameters for 30 epochs with Adam optimizer (Kingma and Ba, 2015). The learning rate is 0.01. On the n **Method SST2 IMDB Yelp AG DB Yahoo RTE SNLI MNLI-m/mm NERD Avg.** 0 **Prompt** 83.3 89.4 87.1 80.9 68.4 49.9 52.4 40.7 50.8/51.7 21.1 61.4 ICL 81.53.7 65.611.4 81.110.6 66.74.8 71.72.6 53.26.2 45.04.7 46.15.3 53.60.5/53.90.8 34.73.3 59.44.9 BBT 83.41.3 89.00.1 89.70.1 75.40.8 59.11.7 31.22.7 52.31.4 38.50.8 43.42.5/42.93.3 14.12.3 56.31.5 BBTv2 83.32.5 89.00.2 89.90.2 74.33.2 74.25.2 34.03.5 48.25.7 38.64.0 44.23.2/44.34.5 29.00.8 59.03.0 RLPrompt 63.56.3 65.06.5 66.36.9 72.54.5 65.65.5 38.15.8 53.85.3 36.53.0 40.32.0/41.02.1 14.51.8 50.64.7 PromptBoosting 86.72.6 82.46.1 88.72.5 58.711.8 73.04.8 23.77.0 50.05.9 43.56.1 36.81.6/36.32.3 22.00.8 54.74.6 DecT 90.80.2 91.20.3 94.80.1 79.91.1 78.80.9 55.20.8 56.02.7 47.74.1 52.22.7/53.33.0 35.71.5 66.91.6 ICL 60.39.8 80.46.6 77.414.6 65.15.4 71.76.5 49.99.9 42.73.9 42.13.2 44.75.9/45.26.0 31.74.8 55.67.0 BBT 84.51.2 89.80.9 90.20.6 79.02.1 67.73.5 42.90.6 48.44.0 40.51.3 41.21.7/40.72.0 19.41.5 58.61.8 BBTv2 86.62.2 89.40.6 90.30.5 79.12.1 89.01.7 46.01.4 46.22.3 40.84.3 44.00.9/44.81.6 31.91.4 62.61.7 RLPrompt 80.77.5 75.810.1 78.87.3 76.14.8 76.35.9 45.03.1 53.52.9 36.32.6 44.42.9/45.53.8 16.72.4 57.44.8 PromptBoosting 88.92.3 83.05.2 92.32.1 78.26.8 90.10.7 36.45.1 53.55.9 53.43.4 39.84.5/40.35.7 40.92.5 63.44.0 DecT 87.61.6 89.60.9 94.80.7 81.92.6 89.10.6 59.92.1 56.72.7 53.22.9 52.22.3/53.42.4 46.71.7 69.51.9 | 1 4 16 | |----------| ICL 71.515.8 80.66.0 73.714.5 64.46.0 71.89.1 52.65.7 43.87.0 42.06.3 51.43.0/52.13.3 35.12.6 58.17.2 BBT 89.6b0.3 89.30.4 91.5b0.2 81.5b0.8 87.8b3.0 48.31.4 52.6b2.2 46.6b1.3 40.02.6/39.92.9 17.81.4 62.31.5 BBTv2 90.3a1.7 88.62.1 92.9a0.6 85.3a0.5 93.6a0.7 52.01.4 56.7a3.3 57.3a2.3 50.12.4/51.73.2 33.31.0 68.31.7 RLPrompt 87.02.6 87.62.4 95.1c1.0 80.2c0.7 80.83.3 48.12.2 54.32.8 41.15.0 43.33.9/44.34.5 17.51.4 61.82.7 PromptBoosting 87.6d3.0 86.23.1 94.71.0 85.2d0.9 95.00.5 46.62.4 60.0d5.5 61.3d3.5 52.5d1.5/50.45.1 52.12.6 70.12.6 DecT 91.00.5 91.00.9 95.40.3 86.40.4 94.60.5 64.20.7 59.71.8 60.50.8 55.31.3/56.81.5 53.51.8 73.51.0 selection of λ, we directly set λ = 1/n for most datasets based on the intuition that λ should decrease as the amount of training data increases. On MNLI and FewNERD, we tune λ on the validation set and select λ = 1 and λ = 1/16 respectively. We give the templates and label words in Appendix A.3. ## 4.2 Main Results Table 1 presents the main few-shot learning results. From the results, we have these observations: Overall, DecT outperforms the state-of-theart baseline methods by a large margin (more than 3% **on average),** especially under extreme data scarcity, showing its superior performance. Across different tasks, DecT and baselines obtain similar results on some easy sentiment analysis and topic classification tasks, but we highlight that DecT is much more favorable on difficult datasets, such as Yahoo and FewNERD. While other baseline methods struggle to optimize well, DecT surpasses them significantly (about 10% on Yahoo and 20% on FewNERD under 16-shot setting compared with BBTv2 and ICL). On stability, DecT also has consistently low variance and some baselines (ICL, RLPrompt and PromptBoosting) are unstable. Given the difficulty of few-shot PTM adaptation, it is of great significance that the adaptation method is robust to random seeds. On baselines, optimization-free methods, i.e. zero-shot prompt and ICL are strong baselines. However, as shown in the table, ICL gives the best results in the 1-shot setting, and it can hardly improve with more training data due to the input length restriction. To compare, merely optimizing the input prompts (BBT and RLPrompt) can hardly outperform them, showing the limitation of input-side prompt optimization. In contrast, two other baselines, BBTv2 and PromptBoosting, are more powerful because they either inserts additional learnable prompt tokens inside the PTM or ensembles the outputs of different prompts. With the superior results of DecT, we argue that outputside optimization is a promising way for MaaS PTM adaptation. ## 4.3 Efficiency Comparison Despite the superior performance, another major advantage of DecT is its high efficiency. In Fig- | Method | Tr. Time (s) | # Query | # Param. (K) | |----------------|----------------|-----------|----------------| | ICL | 0 | 0 | 0 | | BBT | 10,512 | 8,000 | 0.5 | | BBTv2 | 9,856 | 8,000 | 12 | | RLPrompt | 65,579 | 12,000 | 3,100 | | PromptBoosting | 644 | 10 | 0.4 | | DecT | 3 | 1 | 130 | | n | Method | SST2 | IMDB | Yelp | AG | DB | Yahoo | RTE | SNLI | MNLI-m/mm | NERD | Avg. | |------|--------------|---------|---------|---------|---------|---------|---------|---------|-----------------|-----------------|---------|---------| | 64 | Fine-tuning† | 92.51.9 | 86.33.8 | 94.51.4 | 87.40.6 | 98.20.2 | 69.00.7 | 67.73.2 | 66.66.4 | 65.62.9/67.74.0 | 67.60.8 | 78.52.4 | | DecT | 92.40.5 | 91.30.5 | 94.90.5 | 89.20.3 | 97.00.1 | 69.30.4 | 65.71.7 | 67.21.0 | 62.01.4/63.31.3 | 56.10.8 | 77.10.8 | | | 256 | Fine-tuning† | 92.00.9 | 92.10.2 | 94.30.3 | 89.60.3 | 98.50.2 | 70.20.4 | 79.81.0 | 84.40.4 | 77.20.2/78.70.3 | 71.40.5 | 84.40.4 | | DecT | 92.70.2 | 92.10.1 | 95.60.1 | 90.30.1 | 97.40.1 | 71.30.1 | 69.21.0 | 69.70.4 | 68.00.3/69.40.3 | 56.20.3 | 79.30.3 | | ure 1, we plot average accuracy versus training time for each method under different shots. We also provide detailed statistics of training time, query numbers, and parameter numbers for 16-shot experiments in Table 2. From Figure 1 and Table 2, we clearly see that DecT can be optimized quickly and only requires one model query per training sample, **which is** about 200×faster and queries 10×**fewer than all** prompt optimization methods. For BBT, BBTv2, and RLPrompt, users have to query the model near 104 times and spend several hours for sufficient optimization even in the few-shot scenario. When the inference API is not for free such as OpenAI API 2, using these methods would be expensive, and this further burdens their usage in the scenarios of rich data and large models. In terms of tunable parameters, DecT demands 130K additional parameters for the linear projection layer, which is less than 0.04% of RoBERTaLARGE (355M) that largely saves storage space. Although some other methods (BBT, BBTv2 and PromptBoosting) require fewer parameters, DecT is much easier to optimize. ## 4.4 Beyond Few-Shot As shown in Section 4.3, the simple architecture and high efficiency enable DecT to scale on more training data, while baseline methods struggle to finish training within acceptable time limits. To explore the scalability of DecT beyond the few-shot setting, we conduct experiments with increased training data (n = 64 and 256). For reference, we compare DecT with fine-tuning, the strongest baseline which update full model parameters. The detailed results are presented in Figure 1 and Table 3 and we have the following conclusions. (1) DecT continually improves its performance on more training data at a low cost. The average accuracy gains 6% from 16-shot to 256-shot while the average training time is less than 100 seconds. (2) Compared with fine-tuning, DecT is even on par with it in the 64-shot scenario and gradually falls behind in the 256-shot setting, which is reasonable as we only tune a small portion of parameters outside the model. Through further task-level observation, we find DecT still performs well on sentiment analysis and topic classification, but cannot catch up with fine-tuning on NLI and entity typing, which are identified as harder tasks as they require complex reasoning or fine-grained semantic understanding. (3) In experiments, we find fine-tuning is more sensitive to random seeds in the few-shot setting due to the huge amount of trainable parameters and relatively few loss signals, which is evidenced by the high variance in the 64-shot setting. In such scenario, DecT has lower variances due to most parameters are frozen. Therefore, the stability advantage of DecT has been verified again. To conclude, we take the first step to applying MaaS methods beyond few-shot learning. The results show that DecT is competitive against fine-tuning on regular classification tasks, but is limited on difficult tasks. How to adapt PTMs on challenging tasks without parameter updates still needs further exploration. ## 5 Analysis In addition to main experiments, we further provide more analytical experiments for understanding DecT. We conduct ablation study on several components in Section 5.1. Then we evaluate the scaling effect (Section 5.2), the impact of hyperparameter λ (Section 5.3) and templates (Section 5.4) respectively. We further conduct transferability experiments in Appendix B. ## 5.1 Ablation Study To validate each component of our proposed DecT, especially the effect of model scores s, radius parameter r, and ProtoNet, we conduct extensive ablation studies. We present results in Table 4 and Figure 4. | s | r | Average Accuracy | | | |-----|-----|--------------------|---------|---------| | 1 | 4 | 16 | | | | ✗ | ✗ | 54.06.3 | 66.32.5 | 73.01.2 | | ✓ | ✗ | 64.82.6 | 69.31.7 | 73.51.0 | | ✗ | ✓ | 54.06.2 | 67.22.0 | 73.01.1 | | ✓ | ✓ | 66.91.6 | 69.51.9 | 73.51.0 | ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) Ablating model scores. Apparently, model scores contribute largely to the few-shot performance of DecT, especially when the training data is extremely scarce (1-shot), which illustrates that model scores contain beneficial prior model knowledge for language understanding. Also, incorporating training data reduces the variance. When there are more training data, model scores bring less enhancement, which is reasonable as the relative weights of model and ProtoNet scores change. Ablating radius. Meanwhile, the radius is also helpful under low-shot scenarios, which characterizes the difference across classes. But as the number of training data increases, ProtoNet dominates model predictions and the impact of r is weakened as well. Ablating decoder. As stated previously, the design choice of the decoder function is flexible. We replace ProtoNet with a two-layer MLP and evaluate the performance. In Figure 3 we can see that ProtoNet significantly outperforms MLP in the 1-shot setting, which matches the advantages of ProtoNet in the few-shot setting. In 4shot and 16-shot experiments, ProtoNet still gets higher scores, but with smaller margins. On stability, ProtoNet achieves consistently lower standard deviation scores, which serve as another advantage. Overall, we find ProtoNet is a vital component in DecT, and simply replacing it with MLP would worsen the performance. ## 5.2 Model Scaling In this section, we explore how DecT applies to PTMs with different architecture and scales. We select T5 (Raffel et al., 2020), an encoderdecoder PTM, at different scales, from T5Small, T5Base, T5Large to T53B. Table 5 presents the full results of T5 models. First of all, DecT has been successfully deployed on T5, a generative language model, which verifies its transferability across PTMs. Furthermore, we can observe an apparent trend of the scaling effect where larger models consistently perform better. We also evaluate the DecT on CPM-Bee3, which is a bilingual generative pre-trained language model with 10B parameters. Table 6 presents the results of CPM-Bee in different settings. The results show that DecT strongly enhances the adaptation of large PLM on downstream tasks. Moreover, CPM-Bee achieves great performance on NLI tasks, which flags that DecT could deal with more difficult tasks with powerful backbone models. ## 5.3 Impact Of Λ As a hyperparameter, λ controls the relative importance of model scores and prototype scores. Here we examine its impact on AG's News and SST2. In Figure 4, we can observe that: (1) λ largely affects DecT in the 1-shot settings. As λ increases, DecT gradually performs better and stabler, which illustrates the importance of model knowledge in this case. (2) With the shot number increases, the impact of varying λ is weakened, and the best practices become smaller. These observations validate our selection strategy in Section 4.1, which effectively balances model and data knowledge. ## 5.4 Impact Of Templates Although DecT is an output-side adaptation method, the choice of templates also affects the final performance. To assess the influence of templates, we conduct experiments on AG's News and SST2 and show results in Table 7. Overall, DecT does not rely much on templates. While different templates may induce fluctuant zero-shot perfor3https://live.openbmb.org/models/bee | Model | SST2 | IMDB | Yelp | AG | DB | Yahoo | RTE | SNLI | MNLI-m/mm | NERD | Avg. | |---------|---------|---------|---------|---------|---------|---------|---------|---------|-----------------|---------|---------| | T5Small | 73.41.8 | 68.82.1 | 79.51.4 | 79.10.6 | 76.80.7 | 57.50.7 | 51.90.8 | 38.72.1 | 38.60.4/39.00.3 | 35.11.9 | 58.01.2 | | T5Base | 83.81.1 | 86.30.9 | 91.50.6 | 84.30.6 | 93.50.4 | 61.10.8 | 54.01.6 | 44.91.6 | 47.80.6/49.40.7 | 50.23.0 | 67.91.1 | | T5Large | 92.30.4 | 92.00.4 | 94.41.3 | 85.50.8 | 94.90.3 | 63.60.9 | 55.92.3 | 49.51.9 | 49.71.4/50.81.9 | 53.21.6 | 71.11.2 | | T53B | 89.90.4 | 92.70.7 | 94.92.0 | 87.70.8 | 96.20.3 | 66.50.7 | 55.82.2 | 52.01.9 | 52.81.6/52.22.1 | 51.91.4 | 72.11.3 | Table 5: Experiment results (16-shot) for our method on different versions of T5 (Raffel et al., 2020). We run each experiment over 5 random seeds and report average accuracy and standard deviation (%). Shot SST2 IMDB Yelp AG DB Yahoo RTE SNLI MNLI-m/mm NERD Avg. 0 80.5 89.1 96.6 74.6 71.3 46.7 84.1 45.4 45.6/45.6 1.6 61.9 4 91.2 96.5 97.8 80.5 81.1 56.5 82.2 77.8 66.0/66.5 52.9 77.2 16 92.7 96.2 97.5 85.5 89.8 65.2 86.0 86.4 76.3/76.3 54.6 82.4 64 94.3 96.5 98.3 88.5 93.5 68.7 87.1 88.9 78.0/79.0 59.8 84.8 256 94.5 96.7 98.4 89.7 94.2 69.9 87.7 89.4 81.7/80.6 59.1 85.6 Table 6: Experiment results for our method on CPM-Bee. We run each experiment over 5 random seeds and report ![7_image_0.png](7_image_0.png) average accuracy (%). mance, DecT largely moderates the gaps between them. Additionally, we try two templates searched from RLPrompt (Deng et al., 2022) and they both achieve satisfying results. On SST2, the template from RLPrompt is even better than manually designed ones. **Therefore, we highlight that DecT** is complementary with input-side adaptation algorithms, and they can work together for better performance. ## 6 Related Work Our work explores how to efficiently adapt large PTMs. In this section, we review three lines of research for *prompt-based tuning* (data efficiency), parameter-efficient tuning (parameter efficiency), and MaaS adaptation methods respectively. ## 6.1 Prompt-Based Tuning Prompt-based tuning aims to bridge the gap between pre-training and downstream tasks for data- Table 7: Accuracy of prompt (zero-shot) and DecT (16shot) across different templates. Templates marked with c are taken from Deng et al. (2022). efficient model adaptation. The major practice for prompt-based tuning (Liu et al., 2021) is wrapping text pieces into human-designed templates. By this means, prompt-based tuning converts downstream tasks to pre-training tasks (e.g. masked language modeling) and greatly enhances the zero/few-shot learning ability of PTMs. Prompt-based tuning is first applied in knowledge probing (Trinh and Le, 2018; Petroni et al., 2019), and it has been adopted broadly in NLP (Schick and Schütze, 2021; Hu et al., 2022b; Ding et al., 2021a; Han et al., 2021; Cui et al., 2022). Despite manually designed prompts, other works also investigate automatic and learnable prompts (Shin et al., 2020; Gao et al., 2021; Hambardzumyan et al., 2021; Schick et al., | Template | Prompt | DecT | | |--------------------------------------------------------|----------|--------|------| | SST2 | | | | | x In summary, it was [MASK]. | 83.3 | 91.0 | | | x It was [MASK]. | 73.3 | 88.4 | | | x AgentMediaGrade | c | 90.4 | 92.2 | | Officials Grade [MASK]. AG's News [ Topic : [MASK] ] x | 80.9 | 86.4 | | | [ Category : [MASK] ] x | 78.6 | 86.8 | | | [MASK] Alert Blog Dialogue Diary Accountability x c | 78.8 | 86.0 | | 2020; Lester et al., 2021) to alleviate the prompt engineering efforts. However, the optimization of prompts is a non-trivial problem (Ding et al., 2022c; Lester et al., 2021) which sometimes leads to more computation costs and suboptimal performance. Thus in our work, we adopt manual prompts to stimulate model knowledge and help data-efficient model adaptation. ## 6.2 Parameter-Efficient Tuning Another line of work explores tuning a small fraction of model parameters to reduce computation and storage budgets, namely parameterefficient tuning (PET) (Ding et al., 2022c). Typical PET methods include inserting tunable modules (Houlsby et al., 2019; Li and Liang, 2021; Hu et al., 2022a), adding soft prompt tokens (Lester et al., 2021) and specifying certain parameters (Zaken et al., 2022). Although PET methods achieve remarkable performance with few parameter updates, they still require model gradients, which are unavailable in the MaaS setting. ## 6.3 Maas Adaptation With inference-only APIs, there are also works that adapt models without tuning any model parameters. Brown et al. (2020) present in-context learning, which concatenates test inputs with several exemplars. Further improvements focus on reliving the instability issues caused by model biases (Zhao et al., 2021; Han et al., 2022) and examplar orders (Lu et al., 2022). PromptBoosting (Hou et al., 2022) employs boosting algorithm for prompt ensembling, giving strong performance. Other approaches try to optimize prompts with either blackbox optimization methods (Sun et al., 2022a,b) or reinforcement learning (Deng et al., 2022). However, due to the lack of gradient signals, they need thousands of model queries, resulting in high costs when the model is large and API calls are not for free. Different from the abovementioned methods, we adapt models at the output side, which need not optimize the distant input prompts. We demand only one API call for each training sample and achieve better results across tasks. ## 7 Conclusion In this paper, we present DecT, which performs both data and parameter-efficient adaptation with off-shelf PTMs. By fusing prior model knowledge and posterior data knowledge, DecT achieves superior performance on ten language understanding tasks. Meanwhile, DecT exceeds existing baselines by three orders of magnitude in terms of training time and the number of queries, highlighting its advantages in real-world deployment. In future works, we are eager to explore how to combine input and output-side adaptation methods for better PTM adaptation, and how to extend this line of research to more challenging scenarios. ## Limitation DecT explores how to adapt black-box PTMs on downstream tasks. As we show in Section 4.4, our method is not comparable to fine-tuning on hard tasks with increased data points. Moreover, we only focus on classification tasks in this work and do not testify DecT on free-form generation tasks. In the future, we will work toward more general MaaS adaptation strategies across tasks. ## Ethical Statement As large language models are getting more and more popular in NLP research and application, DecT provides a cost-efficient way to adapt these large models. However, we need also to be cautious about the improper adaptation of large language models, such as generating toxic and biased speeches. ## Acknowledgements This work is supported by the National Key R&D Program of China (No.2022ZD0116312), Institute Guo Qiang at Tsinghua University and sponsored by Tsinghua-Toyota Joint Research Fund. Ganqu Cui made the original research proposal and wrote the paper. Ganqu Cui and Wentao Li conducted experiments. Ning Ding revised the paper and participated in the discussion. Longtao Huang, Zhiyuan Liu and Maosong Sun advised the project. ## References Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In TAC. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of EMNLP*. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proceedings of NeurIPS. Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, and Zhiyuan Liu. 2022. Prototypical verbalizer for prompt-based few-shot tuning. In Proceedings of ACL. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In *Machine learning challenges workshop*. Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P Xing, and Zhiting Hu. 2022. Rlprompt: Optimizing discrete text prompts with reinforcement learning. In Proceedings of EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*. Ning Ding, Yulin Chen, Ganqu Cui, Xiaobin Wang, HaiTao Zheng, Zhiyuan Liu, and Pengjun Xie. 2022a. Few-shot classification with hypersphere modeling of prototypes. *arXiv preprint arXiv:2211.05319*. Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, and Hong-Gee Kim. 2021a. Prompt-learning for fine-grained entity typing. arXiv preprint arXiv:2108.10604. Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, and Maosong Sun. 2022b. Openprompt: An open-source framework for prompt-learning. In *Proceedings of ACL*. Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Zhiyuan Liu, Hai-Tao Zheng, Jianfei Chen, Yang Liu, Jie Tang, Juanzi Li, and Maosong Sun. 2022c. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. In *arXiv*. Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Haitao Zheng, and Zhiyuan Liu. 2021b. Few-nerd: A few-shot named entity recognition dataset. In *Proceedings of ACL*, pages 3198–3213. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In *Proceedings of ACL*, pages 3816–3830. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In *Proceedings of the* ACL-PASCAL workshop on textual entailment and paraphrasing. R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second pascal recognising textual entailment challenge. In *Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual* Entailment, volume 7. Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. 2021. WARP: word-level adversarial reprogramming. In *Proceedings of ACL*, pages 4921–4933. Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. Ptr: Prompt tuning with rules for text classification. *arXiv preprint* arXiv:2105.11259. Zhixiong Han, Yaru Hao, Li Dong, Yutao Sun, and Furu Wei. 2022. Prototypical calibration for fewshot learning of language models. arXiv preprint arXiv:2205.10183. Nikolaus Hansen and Andreas Ostermeier. 2001. Completely derandomized self-adaptation in evolution strategies. *Evolutionary computation*, 9(2):159–195. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735– 1780. Bairu Hou, Joe O'Connor, Jacob Andreas, Shiyu Chang, and Yang Zhang. 2022. Promptboosting: Black-box text classification with ten forward passes. *arXiv* preprint arXiv:2212.09257. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In *Proceedings of ICML*. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022a. Lora: Low-rank adaptation of large language models. In *Proceedings of ICLR*. Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juanzi Li, and Maosong Sun. 2022b. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. In *Proceedings of ACL*. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *Proceedings* of ICLR. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167–195. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of EMNLP*. Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing: System Demonstrations. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of ACL-IJCNLP. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *Proceedings of ICLR*. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In *Proceedings of ACL*. Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of ACL. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Proceedings of NeurIPS*, pages 8024–8035. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In *Proceedings of EMNLPIJCNLP*. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Timo Schick, Helmut Schmid, and Hinrich Schütze. 2020. Automatically identifying words that can serve as labels for few-shot text classification. In *Proceedings of COLING*. Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of EACL*. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In Proceedings of EMNLP. Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. In *Proceedings of NIPS*. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of EMNLP*. Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, and Xipeng Qiu. 2022a. BBTv2: Towards a gradient-free future with large language models. In *Proceedings of EMNLP*. Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, and Xipeng Qiu. 2022b. Black-box tuning for language-model-as-a-service. In *Proceedings of* ICML. Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of ICLR*. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of NAACL-HLT*. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of EMNLP*, pages 38–45, Online. Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of ACL. Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2021. Revisiting few-sample bert fine-tuning. In *Proceedings of ICLR*. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In *Proceedings of NIPS*. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In Proceedings of ICML. ## A Experiment Details A.1 Dataset Statistics We provide dataset statistics in Table 8. We obtain AG's News, Yahoo, and Yelp P. from https:// github.com/zhangxiangxiao/Crepe under the BSD-3-Clause license. We get FewNERD from https://ningding97.github. io/fewnerd/ with CC BY-SA 4.0 license. Other datasets are downloaded using Huggingface Datasets (Lhoest et al., 2021). | Task | Dataset | # Class | # Test | |----------------|-----------|-------------|----------| | SST2 | 2 | 872 | | | Sentiment | Yelp | 2 | 38,000 | | Analysis | IMDB | 2 | 25,000 | | AG's News | 4 | 7,600 | | | Topic | Yahoo | 10 | 60,000 | | Classification | DBPedia | 14 | 70,000 | | RTE | 2 | 277 | | | NLI | SNLI | 3 | 9,842 | | MNLI-m/mm | 3 | 9,815/9,832 | | | Entity Typing | FewNERD | 66 | 96,853 | ## A.2 Baselines In-context Learning (ICL). To guarantee meaningful results, we randomly permute the demonstrations and prepend them before input prompts. Due to the input length limit, we truncate the demonstrations which exceed input length. BBT and BBTv2. We reproduce BBT and BBTv2 using the official codes4. For datasets adopted in their work, we follow the original implementations including templates, label words, and hyperparameters. For other datasets, we reproduce with our templates, label words, and default hyperparameters. We take existing 16-shot experiment results from the paper and run other experiments with 5 random seeds for a fair comparison. RLPrompt. We also use the official codes5 for reproduction and take some experiment results from their original paper. It is worth noting that RLPrompt adopts the test set of SST2 and we use the ![12_image_0.png](12_image_0.png) validation set, so we report the reproduced results on the SST2 validation set. PromptBoosting. We follow the official codes6 for reproduction. Since the number of additional parameters is related to number of classes, we compute the average numbers across datasets. Fine-tuning. We adopt prompt-based fine-tuning which uses the same templates and label words with DecT. We tune the whole model for 5 epochs with AdamW optimizer (Loshchilov and Hutter, 2019) using a 3 × 10−5 learning rate. ## A.3 Templates And Label Words We report the used prompt templates and label words in Table 9. Most of them are taken from OpenPrompt (Ding et al., 2022b). ## B Transferability We conduct transferability experiments on sentiment analysis and present the results in Figure 5. We see that DecT is highly transferable across datasets. On SST2 and IMDB, DecT trained on other datasets even surpasses the original performance. More surprisingly, we find DecT trained on Yelp, a restaurant review dataset, performs best on SST2 and IMDB, which are two movie review datasets. This result further shows the great domain generalization ability of DecT. | Dataset | Template | Label Words | |-----------------|-------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | SST2 Yelp | x In summary, it was [MASK]. | bad, great | | IMDB AG's News | [ Topic : [MASK] ] x | politics, sports, business, technology society, science, health, education, | | Yahoo | [ Topic : [MASK] ] x | computers, sports, business, entertainment, family, politics company, school, artist, athlete, politics, transportation, building, river, village, animal, plant, album, film, book | | RTE | No, Yes | | | SNLI | No, Maybe, Yes | | | x1 ? [MASK], x2 | | | | MNLI-m/mm | No, Maybe, Yes | | | DBPedia | x1 x2 The category of x1 is [MASK]. | actor, director, artist, athlete, politician, scholar, soldier, person, show, religion, company, team, school, government, media, party, sports, organization, geopolitical, road, water, park, mountain, island, location, software, food, game, ship, train, plane, car, weapon, product, theater, facility, airport, hospital, library, hotel, restaurant, building, championships, attack, disaster, election, protest, event, music, written, film, painting, broadcast, art, biology, chemical, living, astronomy, god, law, award, disease, medical, language, currency, educational | | FewNERD | x [ENT] is [MASK]. | | Table 9: The templates and label words used in our experiments. For each dataset, x, x1, and x2 represent the input sentences or sentence pairs. [ENT] copies the entity mentioned in the input sentence. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In the last section A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** In Experiments We Use Datasets ✓ B1. Did you cite the creators of artifacts you used? Section 4.1 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix A.1 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4.1 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A.1 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
arodi-etal-2023-kitmus
The {KITMUS} Test: Evaluating Knowledge Integration from Multiple Sources
https://aclanthology.org/2023.acl-long.841
Many state-of-the-art natural language understanding (NLU) models are based on pretrained neural language models. These models often make inferences using information from multiple sources. An important class of such inferences are those that require both background knowledge, presumably contained in a model{'}s pretrained parameters, and instance-specific information that is supplied at inference time. However, the integration and reasoning abilities of NLU models in the presence of multiple knowledge sources have been largely understudied. In this work, we propose a test suite of coreference resolution subtasks that require reasoning over multiple facts. These subtasks differ in terms of which knowledge sources contain the relevant facts. We also introduce subtasks where knowledge is present only at inference time using fictional knowledge. We evaluate state-of-the-art coreference resolution models on our dataset. Our results indicate that several models struggle to reason on-the-fly over knowledge observed both at pretrain time and at inference time. However, with task-specific training, a subset of models demonstrates the ability to integrate certain knowledge types from multiple sources. Still, even the best performing models seem to have difficulties with reliably integrating knowledge presented only at inference time.
# The Kitmus Test: Evaluating Knowledge Integration From Multiple Sources Akshatha Arodi12∗ , Martin Pömsl12∗ , Kaheer Suleman3, Adam Trischler3, Alexandra Olteanu3**, Jackie Chi Kit Cheung**12 1 McGill University 2 Mila Quebec AI Institute 3 Microsoft Research {akshatha.arodi@mail, martin.pomsl@mail, jcheung@cs}.mcgill.ca {kaheer.s}@gmail.com {adam.trischler, alexandra.olteanu}@microsoft.com ## Abstract Many state-of-the-art natural language understanding (NLU) models are based on pretrained neural language models. These models often make inferences using information from multiple sources. An important class of such inferences are those that require both background knowledge, presumably contained in a model's pretrained parameters, and instance-specific information that is supplied at inference time. However, the integration and reasoning abilities of NLU models in the presence of multiple knowledge sources have been largely understudied. In this work, we propose a test suite of coreference resolution subtasks that require reasoning over multiple facts. These subtasks differ in terms of which knowledge sources contain the relevant facts. We also introduce subtasks where knowledge is present only at inference time using fictional knowledge. We evaluate state-of-the-art coreference resolution models on our dataset. Our results indicate that several models struggle to reason on-thefly over knowledge observed both at pretrain time and at inference time. However, with taskspecific training, a subset of models demonstrates the ability to integrate certain knowledge types from multiple sources. Still, even the best performing models seem to have difficulties with reliably integrating knowledge presented only at inference time. ## 1 Introduction Progress on natural language understanding (NLU) benchmarks has recently been driven by pretrained large language models (LLMs), which can be adapted to specific tasks via finetuning (Peters et al., 2018; Devlin et al., 2019; Le Scao and Rush, 2021). These models draw on a variety of knowledge sources, such as knowledge given in inputs at inference time and knowledge contained in their parameters, usually acquired via pretraining. 1. **Servin** is a judge. Kea is a baker. Servin and Kea met at a park. After a long day at work deciding cases in a law court, he was happy to relax. [Answer: **Servin**] 2. Schwing is a gladiower. The work of a gladiower is inwaging ledmonly. The work of a popesmer is chodoling larely. Bate is a popesmer. At the coffee shop, **Schwing** and **Bate** connected. The coffee was excellent. She shared experiences from a career of chodoling larely. [Answer: **Bate**] Figure 1: Examples from KITMUS showing coreference cases that require real (1.) and fictional (2.) knowledge. To resolve the pronoun (in red), a model needs to draw on entity-specific knowledge about an entity's occupation as well as on background knowledge about what kind of work the occupation entails. Recent work suggests that models can use pretrain-time knowledge in tasks like translation and question answering to obtain performance gains (Brown et al., 2020; Roberts et al., 2020). However, natural language understanding often requires knowledge that is only supplied at inference time because of, e.g., time sensitivity or instance specificity. Consider the passage "John saw the newly elected president on TV." Pretrained parameters can conceivably contain information about what presidents do and what a TV is, but they cannot contain reliable knowledge about who John is (since "John" is an instance-specific identifier) or who the president is (because the president might have changed since pretraining). It follows that successful models for knowledge-intensive NLU tasks might require the ability to use both pretrain-time and inference-time knowledge. To effectively use these two knowledge sources, models must (1) retrieve relevant information from each knowledge source, (2) adjudicate between potentially conflicting information, and (3) integrate multiple units of information from both knowledge sources and reason over them on the fly. For example, pretrained parameters might contain the knowledge that Donald Trump is the president of ∗ Equal contribution 15088 the United States, but inference-time inputs might state that Joe Biden is the president. Based on the contextual information available in a task, models must infer the correct president. Studying whether current models can use multiple knowledge sources effectively can help identify and debug errors that models make when relying on outdated sources. To this end, we introduce a coreference resolution task designed to probe models' ability to draw on knowledge available in different sources. Recent work by Longpre et al. (2021) examined the effects of knowledge conflicts across different knowledge sources. In our work, we aim to more broadly examine the behaviour of NLU models in the presence of multiple knowledge sources. While Longpre et al. (2021) study how models handle conflicting facts, our goal is to evaluate whether models can combine complementary knowledge drawn from multiple sources rather than choose between sources. In our task, the resolution of a given pronoun requires two types of knowledge (see Figure 1): 1) entity-specific knowledge, e.g., "Servin is a judge" and 2) background knowledge, e.g., "Judges decide cases in law courts." Generally, background knowledge is learned during the pretraining of LLMs i.e., at pretrain-time, while entity-specific knowledge is typically observed at inference time. We vary the availability of the required information such that it may either be found in a single source or in multiple sources. We evaluate a model's ability to integrate and reason over the two knowledge types (entity-specific and background knowledge), given in two knowledge sources (pretrain-time and inference-time). We propose KITMUS, 1a diagnostic test suite. The KITMUS tests evaluate Knowledge InTegration from MUltiple Sources. KITMUS's distinguishing feature is that it contains texts in which we methodically vary the mapping of knowledge types to knowledge sources, which allows us to pinpoint the specific strengths and limitations of models. We also analyze the behaviour of models when knowledge is available only at inference-time by introducing variants where a model needs to reason over fictional knowledge, which is presumably not contained in the parameters. Unlike previous reasoning datasets, where inference-time knowledge is retrieved (Onoe et al., 2021), we provide the knowledge necessary to solve the task in each instance of KITMUS. This allows for a more controlled setting where we can focus on knowledge integration, rather than on retrieval, which we hold out as a separate problem. In a study with human participants, we empirically validated that both entity-specific and background knowledge are required to perform well on KITMUS, and that the automatically generated labels are consistent with human annotations. We evaluate state-of-the-art coreference resolution models on the KITMUS. In our experiments, many established models appear unable to integrate knowledge from two different knowledge sources and reason over them. With task-specific training, two models—BERT4Coref (Joshi et al., 2019) and C2F (Lee et al., 2018)—demonstrate the ability to reason over both knowledge observed at pretrain time and at inference time. However, we find that the ability to integrate knowledge from different sources seems to depend on the knowledge type in that source. While knowledge integration through concatenation at inference time seems to be effective for entity-specific knowledge, experiments with fictional knowledge indicate that even the best performing models cannot reliably integrate all types of background knowledge when provided only at inference time. ## 2 Related Work Coreference resolution as a reasoning task: There has been extensive work to study NLU models' ability to exploit linguistic knowledge that involves shallow cues such as gender, position, and number cues (Durrett and Klein, 2013), as well as other properties like semantic roles (Baker et al., 1998; Chambers and Jurafsky, 2009). The Winograd Schema Challenge (WSC) (Levesque et al., 2012) inspired a number of specialized datasets such as GAP (Webster et al., 2018) and Winogrande (Sakaguchi et al., 2020) where coreference resolution is used as a test bed for reasoning over knowledge and cases cannot be solved with shallow features (Emami et al., 2019; Rahman and Ng, 2012). Following this line of work, we use templates that omit shallow cues, such that a model must integrate knowledge about the world to determine the coreference. While WSC and KnowRef focus on abstract external knowledge that is valid independent of the specific entities involved (Emami et al., 2019), KITMUS is more diverse and allows both entity-specific and background knowledge. World knowledge for reasoning tasks: Prior work has shown that integrating world knowledge can lead to improvement in coreference solvers. Bean and Riloff (2004) learn caseframe co-occurrence statistics, which they use to predict coreference. Rahman and Ng (2012); Zhang et al. (2019); Aralikatte et al. (2019); Emami et al. (2019) showed improved results using data augmentation. Longpre et al. (2021) recognized the distinction between pretrain-time and inference-time knowledge, but they call them parametric and contextual knowledge. In the context of our work, the term "contextual" has many different interpretations and could consequently lead to misunderstandings. Therefore, we instead focus on the time at which the knowledge is typically observed in order to distinguish the two knowledge sources. Chan et al. (2022) show that transformers exhibit different inductive biases in how they represent and generalize from the information in pretraintime and inference-time knowledge sources using synthetic sequences of characters. Mallen et al. (2022) probe LMs on factual knowledge memorization using open-domain question answering and show improved results with retrieved knowledge augmentation. Complementing prior tasks that require background knowledge found in off-theshelf knowledge bases, KITMUS instances require both entity-specific and background knowledgewe map a mentioned entity to its occupation and occupations to situations. Onoe et al. (2021) pose fact-checking tasks that require combining entity knowledge with commonsense knowledge. In contrast to our dataset, they do not provide the required knowledge, and expect models to either use only pretrain-time knowledge in a closed-book setting or to retrieve the knowledge from an external knowledge base at inference time. In our work, the knowledge associated with each instance is generated in a controlled way and provided as part of the inputs. Reasoning over knowledge with Transformers: Zhou et al. (2021) present a dataset that evaluates the ability of pretrained Transformer language models to make inferences over axioms stated in natural language. Similarly, Clark et al. (2020) study the limits of reasoning in Transformer models with an approach where classical logic facts and rules are stated using natural language instead of a formal representation. Though our task is presented as a natural language text that requires reasoning, and is evaluated on Transformer models (among others), ![2_image_0.png](2_image_0.png) our work differs from Zhou et al. (2021) and Clark et al. (2020)'s in that the prediction target is the resolution of pronoun coreferences within a text. This requires identifying those mentions of an entity in a text that corefer with a pronoun using both pretrain-time and inference-time knowledge. In contrast, Zhou et al. (2021) and Clark et al. (2020) predict whether a conclusion is consistent with a preceding premise. ## 3 The Kitmus **Test Suite** We evaluate the knowledge integration capability of coreference resolution models from different knowledge sources: 1) pretrain-time: knowledge accumulated in the parameters during (pre-)training and 2) inference-time: knowledge observed in an input text. To design KITMUS, we formulate a coreference resolution task which requires access to two facts. We systematically vary the presence of these facts across the knowledge sources to evaluate the models. As an instantiation of the idea of presenting two facts, we experiment with the following two knowledge types: - **Entity-specific**: occupation of an entity e.g., "Rosenow is an architect." - **Background**: situation typical for an occupation e.g., "architects design building and houses." For example, consider the following task to predict whether Mujica or Rosenow is the correct antecedent of the pronoun "he." Mujica is a model. Rosenow is an architect. At the bus station, **Mujica** and **Rosenow** connected. Public transports are eco-friendly. He shared experiences from a career of designing buildings and houses. [Answer: **Rosenow**] Here, the occupations are *model* and *architect*, and the situational cue is designing building and houses. Both knowledge types are required in order to resolve this coreference. An illustration of this knowledge schema can be found in Figure 2. (a) BACKGROUND-PRETRAIN (b) BACKGROUND-BOTH (c) BACKGROUND-INFERENCE ![3_image_0.png](3_image_0.png) Figure 3: Variants of KITMUS based on the source of background knowledge. ![3_image_1.png](3_image_1.png) We explore three main variants of the dataset as shown in Figure 3. With entity-specific knowledge always provided in the instance, the variants differ based on when and where background knowledge is available: - BACKGROUND-PRETRAIN: Background knowledge is available only in the model parameters - BACKGROUND-BOTH: Background knowledge is available in the model parameters and explicitly provided in the instance - BACKGROUND-INFERENCE: Background knowledge is available only in the instance Each instance of the KITMUS task consists of two fragments of text that are concatenated: 1) a knowledge text—containing the inference-time knowledge that models are given access to—and 2) a task text—consisting of the coreference task that models solve. ## 3.1 Background-P**Retrain** In this variant, entity-specific knowledge is provided at inference time and background knowledge about occupations is assumed to be pretrain-time knowledge, since information such as "architects design buildings and houses" is likely to have been observed during pretraining. An example is shown in the previous section. Here, the entity-specific knowledge about Mujica and Rosenow is inferencetime; however, the knowledge about the occupations of a model and architect is pretrain-time. With this variant, we evaluate whether models have the ability to integrate and reason over both pretraintime and inference-time knowledge effectively. ## 3.2 Background-Both In this variant, background knowledge is provided at both inference-time and assumed to be captured by the parameters. Entity-specific and background facts are present in the same knowledge source. They both represent inference-time knowledge being listed in the knowledge text as part of the inference-time inputs. For example: Chichester is a politician. The work of a politician is seeking an elected seat in government. Klose is an astronomer. The work of an astronomer is studying the stars and the universe. Chichester and **Klose** met at the train station. After a long day at work seeking an elected seat in government, she was happy to relax. [Answer: **Chichester**] ## 3.3 Background-I**Nference** In order to evaluate whether a model can solve this task using exclusively inference-time knowledge (i.e., in the absence of pretrain-time knowledge), we introduce fictional "knowledge." Fictional knowledge such as "the work of a mornisdeiver is gupegaing advaily" is unlikely to have been observed during pretraining, in contrast to real-world knowledge which is likely to have been observed. The entities in all variants are always fictional, ensuring that entity-specific knowledge about them was not observed at pretrain time. Thus, in this variant, both knowledge types are fictional and not contained in the pretrained parameters. For example: The work of a johumker is toncing ignaftedly. The work of a fangher is sparluing gobewly. Amezcua is a johumker. Hundley is a fangher. **Hundley** and **Amezcua** met at the yoga class. Yoga is best done in silence. He reflected on whether sparluing gobewly for a living was a good career choice. [Answer: **Hundley**] Background knowledge about occupations maps occupations to situations that are typical for the occupation, such as "astronomer" and "studying the stars and the universe." To make background knowledge fictional, either the occupation, the situation, or both have to be fictional. For situations, we furthermore distinguish between levels of fictionality and define two sub-variants: 1) word-level fictional situations that use existing words but describe novel occupations, and 2) character-level fictional situations that use novel words. The methods we use to generate these fictional occupations and situations are detailed in the following section. Example texts resulting from different forms of fictionality can be seen in Table 1. Var. Occupation Situation Example | BB Real | Real | The work of a politician is seeking an elected seat in government. Chichester is a politician[...] | | |-----------|----------|------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------| | BI | Real | CharFict | The work of a politician is ehemting smorbtly. Chichester is a politician[...] | | BI | Real | WordFict | The work of a politician is controlling the pool of an aircraft by using its directional flight controls. Chichester is a politician[...] | | BI | CharFict | Real | The work of a mirituer is seeking an elected seat in government. Chichester is a mirituer[...] | | BI | CharFict | CharFict | The work of a mirituer is ehemting smorbtly. Chichester is a mirituer[...] | | BI | CharFict | WordFict | The work of a mirituer is controlling the pool of an aircraft by using its directional flight controls. Chichester is a mirituer. [...] | ## 4 Dataset Creation To construct KITMUS, we manipulate which entities are mentioned in each instance, what occupations those entities have, what situations those occupations pertain to, what contexts they are mentioned in, and whether noise is present. Each entry is structured to first (1) introduce the entities, (2) then place them in the same location, and (3) finally, place one of them in a situation related to their occupation. In the BACKGROUND-BOTH and BACKGROUND-INFERENCE variants, this is preceded by a knowledge text mapping entities to their respective occupations using the phrase "is a." The dataset entries are generated using handcrafted English-language templates and sampling from a variety of resource pools to fill the template slots. The use of templates facilitates control over the source a certain type of knowledge is stored in, which may not be possible to do with a natural dataset. We aim to minimize the likelihood of models learning to exploit any spurious correlations in the templates or resources and promote data diversity using the following methods: - We use multiple templates for each sentence. Examples are shown in Table 4 in the Appendix. - We sample from diverse resource pools to fill template slots as detailed in Section 4.1. - We include location-dependent noise statements that act as distractors and serve to vary the distance between entities. - We create canonical train, validation, and test splits for each variant that are generated using disjunct subsets of templates and resources. With these measures, we ensure that all entity names, occupations, situations, locations, templates, and noise statements that occur in the test instances do not occur in the train instances. ## 4.1 Resource Pools We collect 20,000 last names as entities, 60 common occupations as occupations and their associated job descriptions as situations and 112 common meet-up places as locations from a mix of governmental and other publicly available resources (see Appendix A.2.3 for more details). We manually filter for gender and semantic cues. For example, we remove the occupations that provide referential gender cues such as "fire-man" and locations that might provide surface cues related to an entity's occupation. Pronouns are sampled randomly from both the gendered pronouns he and she as well as genderindefinite pronouns such as singular they and the neopronouns ey and ze following the genderinclusive coreference resolution dataset GICoref (Cao and Daumé III, 2020). Ideally, we would want the distribution of pronouns to approximate the frequency in naturally occurring text, but few reliable statistics exist to estimate them. We include 40% he, 40% she, 10% they, and 10% neopronouns. Noise statements are sampled randomly from a collection of statements based on the selected location in order to maintain a natural flow of the text. Each location is associated with 25 noise sentences. These sentences are generated using GPT-2 (Radford et al., 2019), and then manually verified by the authors not to include cues related to any entity or occupation. ## 4.2 Fictional Occupations To create fictional background knowledge that maps occupations to situations, we create fictional occupations and fictional situations. Following the methodology of Malkin et al. (2021), we generate 60 names of fictional occupation by sampling from a character-level LSTM language model. ## 4.3 Dataset Formats Each variant in KITMUS consists of three subtasksbased on the number of entities—with increasing difficulty: two entity, three entity, and four entity subtasks. Each subtask has train, validation and test splits with 2000, 400, and 2000 examples respectively. The size of KITMUS is comparable to that of the GAP dataset (Webster et al., 2018), which similarly tests for a specific phenomenon in ambiguous pronoun coreference resolution. We provide the test suite in two different formats which are commonly used by state-of-the-art coreference solvers: the CoNLL 2012 format (Pradhan et al., 2012) and the GAP format (Webster et al., 2018). The CoNLL format allows for a comprehensive annotation of mentions of an entity—including in the knowledge text. The GAP format, however, allows for the annotation of only two entities and only one mention per entity. ## 4.4 Human Validation To investigate whether human assessors agree on the resolution of our test cases and whether this resolution is in agreement with our automatically generated labels, we conduct a human validation study. We also investigate whether our assumption that both background and entity-specific knowledge are required to resolve the cases by including instances where the knowledge text is not provided to human participants. We created a multiple-choice questionnaire by randomly selecting an instance from each variant of our dataset (e.g., BACKGROUND-PRETRAIN), from each subtask (e.g., two entities), and from each split (e.g., validation). Additionally, we included one instance from each variant and from each subtask where the participants were only given the task text and not the accompanying knowledge text. A total of 96 sampled instances were presented to six different participants in random order. A high inter-annotator agreement of 0.938 as measured by Fleiss' Kappa (Fleiss et al., 2003) leads us to believe that human participants agree on the resolution of KITMUS test cases. We use accuracy as a measure of agreement with the automatically generated labels and find that mean accuracy aggregated over all participants and subtasks is higher than 0.9 for all variants when the knowledge text is given. As expected, when neither background nor entity-specific knowledge are given, accuracy is below 0.1 for all variants, since most participants indicate that the question cannot be answered. This suggests that there are no inadvertent cues that can be exploited by humans to solve the task without having access to the entity-specific knowledge and background knowledge contained in the knowledge text. Additional details on collection and processing of resource pools, fictional occupations, dataset formats and human validation are in the Appendix. ## 5 Experimental Setup 5.1 Model Selection In this work, we focus on state-of-the-art and wellknown coreference resolution models. We experiment with two families of coreference resolution models: 1) general coreference models and 2) pronoun coreference models. Models that focus on general coreference resolution are often trained on the large Ontonotes corpus in the CoNLL 2012 format (Pradhan et al., 2012). We include BERT4Coref (Joshi et al., 2019) as an example of a state-of-the-art models on CoNLL 2012, C2F (Lee et al., 2018), which is the direct successor to the first end-to-end neural coreference resolution model (Lee et al., 2017), and Stanford's statistical (Clark and Manning, 2015) and neural (Clark and Manning, 2016) models. Models that focus on pronoun coreference resolution are trained on the smaller GAP dataset in the GAP format (Webster et al., 2018). We include GREP (Attree, 2019), the winner of the GAP Kaggle competition and PeTra (Toshniwal et al., 2020), an efficient memory-augmented model. ## 5.2 Training We conduct task-specific training with all models on the train split of KITMUS using their default hyperparameters. The larger general coreference models BERT4Coref and C2F are conventionally not trained on datasets with just 2000 train instances such as GAP or KITMUS, but rather trained on Ontonotes and then evaluated on smaller datasets (Joshi et al., 2019). Since coreference cases in KIT-MUS diverge significantly from those in Ontonotes, we test these models both in the Ontonotes-trained setting and KITMUS-trained setting. For these models, we report mean metrics over 6 runs. We use only the pretrained versions of the Stanford models, since they are conventionally used off-theshelf. We train the GAP-based models—PeTra and GREP—only on the two entity subtasks following the GAP format constraints outlined earlier. Additional training details are in Appendix A.5. ## 5.3 Evaluation We evaluate all models on the KITMUS test split of each subtask. We use two metrics to assess each model performance: antecedent classification F1 and pronoun accuracy. Antecedent classification F1 is typically used for GAP format datasets. It considers the coreference between each candidate antecedent mention and the pronoun as a binary classification decision i.e., for a text with two entities, it considers two binary predictions and calculates the scores accordingly. Pronoun accuracy considers for each pronoun whether the correct candidate antecedent is predicted by the model, so independent from the number of entities in a text, only one decision is made among all possible candidate antecedents. We compare against a random baseline, which is implemented as random choice among the gold candidate mentions. ## 6 Experimental Results 6.1 Background-P**Retrain** Table 2 shows that none of the evaluated models are able to outperform the random baseline without task-specific training on KITMUS. Some models exhibit below random performance, indicating that they may fail to recognize and choose the correct mentions that could be antecedents. When trained on KITMUS, BERT4Coref (all) and C2F (for the four-entities subtask) perform significantly better than random, as shown in Table 2b. The high performance of BERT4Coref and C2F on the BACKGROUND-PRETRAIN variant suggests that both models have the ability to draw background knowledge from their parameters, entity-specific knowledge from the inference-time inputs, and reason over them on-the-fly with task-specific training. The performance of all models we experimented with generally decreases as the number of entities increases; which is unsurprising since the more candidate entities there are, the less likely the accidental selection of the correct entity becomes. Moreover, we observe high variance across the six runs of KITMUS-trained C2F (see Table 7 in the Appendix A.6). In order to explore the effect of noise statements, we conduct additional experiments on the BACKGROUND-PRETRAIN variant without noise. The removal of noise does not result in a significant performance change (see Table 8 in the Appendix). ## 6.2 Background-Both And Background-I**Nference** We conduct additional experiments on the BACKGROUND-BOTH and BACKGROUNDINFERENCE variants with BERT4Coref and C2F, since they demonstrate the ability to learn the BACKGROUND-PRETRAIN variant of the task. In Table 3, we report results on the four-entity subtask, which Table 2 suggests to be the most challenging. While BERT4Coref's performance on the BACKGROUND-BOTH is comparable to its BACKGROUND-PRETRAIN variant results, C2F's performance is much worse, suggesting that it cannot effectively absorb the background knowledge provided at inference time and is distracted by it. On the BACKGROUND-INFERENCE variant, BERT4Coref seems to be able to integrate background knowledge about fictional occupations by outperforming the random baseline. However, it shows the ability to integrate word-level fictional, but not character-level fictional knowledge. ## 7 Discussion Models trained on "general" coreference datasets fail on **KITMUS**: The poor performance of Ontonotes-trained models suggests that when trained on general coreference resolution datasets, models learn to exploit surface cues, which does not help when testing on KITMUS where such cues are removed. Another factor might be the structure of the texts in KITMUS, which are designed to place knowledge in specific knowledge sources. This might affect models' abilities to form useful representations resulting in poor performance of Ontonotes-trained models. These failures suggest that training on (what are meant to be) "general" datasets is not enough to induce knowledge integration from multiple sources and task-specific training is required. Effect of dataset format and size: We observe that the models that accept input in the CoNLL format (Pradhan et al., 2012) perform better than those models that accept the GAP format (Webster et al., 2018). This indicates that mention annotations | Model | 2 Entities | 3 Entities | 4 Entities | Model | 2 Entities | 3 Entities | 4 Entities | |-----------------|--------------|--------------|--------------|---------|--------------|--------------|--------------| | BERT4Coref | 0.99 | 0.98 | 0.94 | | | | | | C2F | 0.52 | 0.28 | 0.48 | | | | | | GREP† | 0.49 | - | - | | | | | | PeTra† | 0.01 | - | - | | | | | | Random | 0.50 | 0.33 | 0.25 | | | | | | BERT4Coref | 0.43 | 0.18 | 0.14 | | | | | | C2F | 0.34 | 0.18 | 0.13 | | | | | | Stanford Neural | 0.20 | 0.10 | 0.09 | | | | | | Stanford Stat. | 0.05 | 0.02 | 0.01 | | | | | | Random | 0.50 | 0.33 | 0.25 | | | | | (a) Ontonotes-trained (b) KITMUS-trained | Var. | Occupation | Situation | BERT4Coref | C2F | |----------|--------------|-------------|--------------|-------| | BB | Real | 0.96 | 0.09 | | | BI | CharFict | 0.25 | 0.18 | | | Real | | | | | | BI | WordFict | 0.48 | 0.08 | | | BI | Real | 0.43 | 0.08 | | | BI | CharFict | 0.26 | 0.18 | | | CharFict | | | | | | BI | WordFict | 0.38 | 0.11 | | in the knowledge text—which only the CoNLL format provides—might be significant. To evaluate whether the failure cases are due to the small train set size of 2000, we repeat experiments with a train set size of 5000. While we do see some improvements, the general trends persist and our observations remain consistent with the previous results (see the limitations section for additional discussion). This suggests that further scaling of the train set size might not be sufficient to improve performance on cases where existing models are currently failing. Performance of current pretrained LLMs: BERT4Coref seems to consistently outperform C2F. This might be due to the difference in pretrained LLMs: BERT4Coref uses the Transformer architecture (Vaswani et al., 2017), which has been shown to be effective at reasoning tasks presented in natural language form (Clark et al., 2020) and utilizing information presented in inference-time contexts (Petroni et al., 2020), while C2F uses ELMo (Peters et al., 2018). To verify that BERT and ELMo contain background knowledge mapping occupations to situations, we ran a LAMA probe (Petroni et al., 2020). We find that BERT is more likely to contain the background knowledge compared to ELMo (see Section 8 for details). This corroborates the better performance of BERT on knowledge intensive tasks such as KITMUS. ## Integration Of Fictional Knowledge: As Shown in Table 3, BERT4Coref performs consistently poorly on character-level fictional situations compared to real and word-level fictional situations. An example of character-level fictional occupation knowledge erroneously answered by BERT4Coref is shown below: The work of a remaller is socring clatodemnly. Nims is a mamser. **Formica** is a remaller. The work of a mamser is slimbing murstly. At the birthday party, **Nims** and **Formica** ran into each other. The party is filled with local and national celebrities and entertainers. She shared experiences from a career of socring clatodemnly. [Correct answer: **Formica**; BERT4Coref: **Nims**] One possible reason could be BERT's tokenization strategy, which involves pooling subword representations (Devlin et al., 2019). In character-level fictional words, the subwords are meaningless, rendering their representations unhelpful. This is consistent with previous work showing that representations of LLMs for character-level fictional "Jabberwocky" words are less useful (Kasai and Frank, 2019) and that the presence of out-of-vocabulary (OOV) words decreases performance of neural models for NLU tasks (Schick and Schütze, 2020; Moon and Okazaki, 2020; He et al., 2021). Despite the character-fictional occupations and situations, we expect the models to resolve the coreferences successfully in this setting. In the given example, the pronoun "she" can be resolved by matching the situation "socring clatodemnly" to the occupation "remaller" (using the word overlap between the situations and the occupation descriptions) and identifying the correct entity associated with the occupation i.e, Formica. Humans can successfully make these inferences by matching fictional occupations and situations. However, the current models do not perform better than a random baseline in this setting. Our hope is that eventually, models should be able to handle even knowledge presented in previously unknown terms. Given that languages are forever growing, robustness to neologisms is crucial, considering that OOV words e.g., new occupations like "TikToker" develop constantly. Effects of knowledge type: Experiments on the BACKGROUND-PRETRAIN variant indicate that BERT4Coref is able to integrate fictional entityspecific knowledge observed at inference time reliably, yet this does not seem to be the case for fictional background knowledge. This suggests that models' ability to integrate and reason over the knowledge on-the-fly depends on the knowledge type—whether the knowledge is background or entity-specific—and not on whether it is fictional or not. One possible explanation could be that LMs observed different frequencies of unseen entities, occupations, and situations during pretraining, which result in a difference in their ability to adapt to novel instances of those categories. ## 8 Conclusion We investigated the ability of models to integrate knowledge from multiple knowledge sources to resolve linguistic ambiguities in a coreference resolution task. We formulated a task that requires access to two knowledge types, entity-specific and background knowledge, and controlled for two knowledge sources that knowledge is available in, pretrain-time and inference-time. Our results show that with task- and datasetspecific training, some models have the ability to reason over both knowledge observed at pretrain time and at inference time. For these models, knowledge can be integrated by concatenating textual knowledge to the model inputs. Furthermore, our findings imply that supplying additional information (e.g., from a retriever) at inference time to models can be successful even if the knowledge required for the task has not been observed before. However, in our task this ability seems to require task-specific training and depend on the type of knowledge being supplied. Future work could explore finetuning models on KITMUS to encourage knowledge integration across different sources. One might also consider extending the KITMUS test suite to other languages or to create a multilingual test suite. Instructions for using our code and adapting the templates and resources to other languages can be found in Appendix A.1. ## Limitations Data diversity: As a template-generated dataset, KITMUS does not reflect the full diversity of natural data. However, we do not attempt to emulate the diversity of natural datasets. Using templates over natural data for diagnostic purposes has a few advantages. Templates facilitate control over the source of a certain type of knowledge, which may not be possible to do with more natural datasets like Ontonotes. This allows us to isolate the model behavior we want to probe. We also take several steps to add diversity, like using multiple templates, sampling from large resource pools, random shuffling of entities, addition of noise sentences, and canonical data splits with non-overlapping templates and resources. To prevent spurious factors at lexical level, the templates are hand-crafted to remove surface cues and validated in a study with human participants. Background Knowledge Assumption in LMs: The results of our work is based on the assumption that pretrained LMs have access to background knowledge about real occupations. To verify that the pretrained LMs evaluated in this work contain background knowledge mapping occupations to situations, we ran a LAMA probe (Petroni et al., 2020) on BERT and ELMo. Given the template "The work of a [MASK] is [SITUATION].", we compared the probabilities the LMs assigned to all single-token occupation names used in KITMUS (probing for multi-token words is not supported by LAMA). BERT assigned higher probabilities to the correct occupation than to any other occupation for 90% of occupations. ELMo assigned the highest probability to the correct occupation for only 45% occupations, which might contribute to explaining why the ELMo-based model C2F generally performs worse than BERT4Coref on the BACKGROUND-PRETRAIN variant KITMUS, which requires such knowledge about occupations. Root Word Overlap: One potential limitation of testing for non-fictional background knowledge like "firefighters put out fires" is that the natural occurrence of the root word "fire" in both occupation and situation might enable models to solve the task without having access to background knowledge. An analysis of trigram overlaps in all occupationsituation pairs shows that 45% of non-fictional occupations have at least one overlapping root word. However, a comparison of performances on those samples with and without root word overlap showed neither systematic increase nor decrease for any model, indicating that models do not rely on the root word mappings. Results split up by root word overlap can be found in Table 10. Train Set Size: The size of the train set for KIT-MUS, 2000, was chosen to mirror that of GAP (Webster et al., 2018). To evaluate whether the failure of models to learn the task is due to the relatively small number of samples observed during training, we re-generated all variants with 5000 train examples and repeated all experiments. We observe an increase in the magnitude of performance both in BERT4Coref and C2F on those variants where performance was higher than random performance with 2000 examples, but not on those that were equal to or below random performance. Consistent with previous results, BERT4Coref performs well on BACKGROUNDPRETRAIN and BACKGROUND-BOTH, but not on all fictional BACKGROUND-INFERENCE variants (Tables 7 and 13). We release the KITMUS generation code to enable experimentation with other train set sizes in future work. ## Ethical Considerations While KITMUS is intended as a diagnostic tool, users should be aware of the possibility of unintended biases when interpreting model performances on this dataset. To document these in more detail, our dataset release will be accompanied by a datasheet (Gebru et al., 2018) which is included in Appendix A.7. Despite the synthetic nature, depending on its use, KITMUS might also have adverse impacts. The randomized sampling of resources to fill slots is meant to minimize bias in terms of the demographic cues that might be associated with the entities referenced in our tests (e.g., gender and nationality). The names and occupation descriptions in our test suite are drawn from United States governmental resources or English-language websites. This means that our test suite is not representative and likely skewed in terms of names, locations, occupations, and situations more common in the e.g., anglophone world. Additional resources such as noise statements and fictional entities were generated using word-level and character-level language models trained on English-language texts, which are known to reproduce a variety of biases found in natural data (Bordia and Bowman, 2019; Solaiman et al., 2019). ## Acknowledgements We would like to thank the anonymous reviewers for their valuable suggestions. This work was supported by Microsoft Research. Jackie Chi Kit Cheung is supported by the Canada CIFAR AI Chair program, and is also a consulting researcher for Microsoft Research. The authors acknowledge the material support of NVIDIA in the form of computational resources. This research was enabled in part by compute resources provided by Mila (mila.quebec). ## References Rahul Aralikatte, Heather Lent, Ana Valeria Gonzalez, Daniel Herschcovich, Chen Qiu, Anders Sandholm, Michael Ringaard, and Anders Søgaard. 2019. Rewarding coreference resolvers for being consistent with world knowledge. In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1229–1235, Hong Kong, China. Association for Computational Linguistics. Sandeep Attree. 2019. Gendered ambiguous pronouns shared task: Boosting model confidence by evidence pooling. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 134–146, Florence, Italy. Association for Computational Linguistics. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In *36th Annual Meeting of the Association for Computational* Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 86–90, Montreal, Quebec, Canada. Association for Computational Linguistics. David Bean and Ellen Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 297–304, Boston, Massachusetts, USA. Association for Computational Linguistics. Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 7–15, Minneapolis, Minnesota. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Yang Trista Cao and Hal Daumé III. 2020. Toward gender-inclusive coreference resolution. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4568–4595, Online. Association for Computational Linguistics. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In *Proceedings of the Joint Conference* of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 602–610, Suntec, Singapore. Association for Computational Linguistics. Stephanie CY Chan, Ishita Dasgupta, Junkyung Kim, Dharshan Kumaran, Andrew K Lampinen, and Felix Hill. 2022. Transformers generalize differently from information stored in context vs in weights. *arXiv* preprint arXiv:2210.05675. Kevin Clark and Christopher D. Manning. 2015. Entitycentric coreference resolution with model stacking. In *Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th* International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1405– 1415, Beijing, China. Association for Computational Linguistics. Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2256–2262, Austin, Texas. Association for Computational Linguistics. Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2020. Transformers as soft reasoners over language. *arXiv* preprint arXiv:2002.05867. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In *Proceedings of the 2013 Conference on Empirical Methods* in Natural Language Processing, pages 1971–1982, Seattle, Washington, USA. Association for Computational Linguistics. Ali Emami, Paul Trichelair, Adam Trischler, Kaheer Suleman, Hannes Schulz, and Jackie Chi Kit Cheung. 2019. The KnowRef coreference corpus: Removing gender and number cues for difficult pronominal anaphora resolution. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 3952–3961, Florence, Italy. Association for Computational Linguistics. Joseph L. Fleiss, Bruce Levin, and Myunghee Cho Paik. 2003. *The Measurement of Interrater Agreement*, chapter 18. John Wiley and Sons, Ltd. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2018. Datasheets for datasets. Keqing He, Yuanmeng Yan, and Weiran Xu. 2021. From context-aware to knowledge-aware: Boosting oov tokens recognition in slot tagging with background knowledge. *Neurocomputing*, 445:267–275. Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference resolution: Baselines and analysis. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5803–5808, Hong Kong, China. Association for Computational Linguistics. Jungo Kasai and Robert Frank. 2019. Jabberwocky parsing: Dependency parsing with lexical noise. In Proceedings of the Society for Computation in Linguistics (SCiL) 2019, pages 113–123. Teven Le Scao and Alexander Rush. 2021. How many data points is a prompt worth? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2627–2636, Online. Association for Computational Linguistics. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing, pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics. Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-tofine inference. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)*, pages 687–692, New Orleans, Louisiana. Association for Computational Linguistics. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In *Thirteenth International Conference on the Principles of* Knowledge Representation and Reasoning. Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 7052–7063, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Nikolay Malkin, Sameera Lanka, Pranav Goel, Sudha Rao, and Nebojsa Jojic. 2021. GPT perdetry test: Generating new meanings for new words. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 5542–5553, Online. Association for Computational Linguistics. Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2022. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. *arXiv preprint* arXiv:2212.10511. Mary Ann Marcinkiewicz. 1994. Building a large annotated corpus of english: The penn treebank. Using Large Corpora, page 273. Sangwhan Moon and Naoaki Okazaki. 2020. Patchbert: Just-in-time, out-of-vocabulary patching. In *Proc. of* EMNLP, pages 7846–7852. Yasumasa Onoe, Michael J. Q. Zhang, Eunsol Choi, and Greg Durrett. 2021. Creak: A dataset for commonsense reasoning over entity knowledge. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2020. How context affects language models' factual predictions. In Automated Knowledge Base Construction. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In *Joint Conference on* EMNLP and CoNLL - Shared Task, pages 1–40, Jeju Island, Korea. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The Winograd schema challenge. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 777–789, Jeju Island, Korea. Association for Computational Linguistics. Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418–5426, Online. Association for Computational Linguistics. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. In Proc. of AAAI, volume 34, pages 8732–8740. Timo Schick and Hinrich Schütze. 2020. Rare words: A major problem for contextualized embeddings and how to fix it by attentive mimicking. In Proc. of AAAI, volume 34, pages 8766–8774. Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, Miles McCain, Alex Newhouse, Jason Blazakis, Kris McGuffie, and Jasmine Wang. 2019. Release strategies and the social impacts of language models. Shubham Toshniwal, Allyson Ettinger, Kevin Gimpel, and Karen Livescu. 2020. PeTra: A Sparsely Supervised Memory Model for People Tracking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5415– 5428, Online. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *In NeurIPS*, volume 30. Curran Associates, Inc. Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. *Transactions of the Association for Computational Linguistics*, 6:605–617. Hongming Zhang, Yan Song, Yangqiu Song, and Dong Yu. 2019. Knowledge-aware pronoun coreference resolution. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 867–876, Florence, Italy. Association for Computational Linguistics. Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel Ho, Jay Pujara, and Xiang Ren. 2021. RICA: Evaluating robust inference capabilities based on commonsense axioms. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7560–7579, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. ## A Appendix A.1 Creating A Custom Dataset Our code can be used to create a custom dataset in different languages by using custom resources in place of the canonical resources listed in A.2. Detailed instructions for how to do this can be found in the code repository's README2 file. ## A.2 Dataset-Specific Resources This section details the resources that were used to create the KITMUS dataset. ## A.2.1 Templates Table 4 shows the sets of templates used to to introduce and refer to entities. ## A.2.2 Fictional Occupations And Situations We generally follow the methodology of Malkin et al. (2021) in creating fictional occupations and siutations. To bias the model towards strings that can be used as occupation names, we train it on a reversed sequence of characters and prompt with the suffix er. We manually filter the words to eliminate unpronounceable or pre-existing English words. We employ the following two methodologies to generate fictional situations: 1) character-level fictional—like the fictional occupations—is generated with the suffix prompts ing and ly, and 2) word-level fictional is generated by randomly shuffling existing words with the same POS tags followed by manual filtering based on semantic plausibility. Examples are shown in Table 1. ## A.2.3 Resource Pools Entities are sampled from a pool of the 20,000 most frequent last names in the 2010 U.S. census.3 We use last names as entity names in order to avoid introducing gender-related cues. We discard those last names that are also first names. The order of entities within a template is also randomized. We 2https://github.com/mpoemsl/kitmus/blob/main/ README.md 3https://www.census.gov/topics/population/ genealogy/data/2010_surnames.html assume that there is no confounding pretrain-time knowledge based on the entity names in the models. Occupations consist of a curated list of 60 common occupations compiled by scraping a career website4and the US Labor census data.5 Following Cao and Daumé III (2020), we remove referential gender cues from the occupations such as "fireman." The jobs pertaining to very specific domains or related to one of the locations where entities meet are removed from the list. Situations are assembled using the occupation descriptions of the scraped occupations. We manually filter the pairs of situations that are semantically similar, such as an accountant and an analyst. Locations are derived from a curated list of 112 locations scraped from a website of common meetup places.6 We manually filter out locations that could provide inadvertent surface cues related to the entities' occupation, nationality, or gender. ## A.3 Dataset Format The CoNLL format contains token and sentence boundaries, Penn Treebank POS tags (Marcinkiewicz, 1994), and gold coreference clusters for all entity mentions. This means that all mentions of an entity—including in the knowledge text—are annotated in a single cluster. Models that operate on the CoNLL format predict these clusters, which involves both detecting mentions and clustering them. In contrast, the GAP format allows for the annotation of only two entities and only one mention per entity (excluding the pronoun), so entity mentions in the knowledge text remain unannotated. Models that operate on the GAP format are presented with exactly two mentions and for each of them make a binary decision whether or not they are coreferring with a pronoun. The GAP format task is more restricted in that models do not have to detect mentions and there are at most two entities per instance. ## A.4 Human Validation The participants were undergraduate and graduate students with fluency in English which were recruited via an institution-wide open call. The participants were compensated with the equivalent At {location}, {mentions} met. At {location}, {mentions} ran into each other. At {location}, {mentions} started a conversation. At {location}, {mentions} came across each other. At {location}, {mentions} encountered each other. At {location}, {mentions} bumped into each other. At {location}, {mentions} connected. {mentions} met at {location}. {mentions} ran into each other at {location}. {mentions} started a conversation at {location}. {mentions} came across each other at {location}. {mentions} encountered each other at {location}. {mentions} bumped into each other at {location}. {mentions} connected at {location}. (a) Meet Sentence Templates After a long day at work {situation}, {pronoun} was happy to relax. {pronoun} told anecdotes from a career of {situation}. {pronoun} reflected on whether {situation} for a living was a good career choice. When a question related to {situation} arose, {pronoun} offered a professional opinion. {pronoun} was relieved to unwind after a demanding day at work {situation}. {pronoun} was glad to unwind after a long day at work {situation}. {pronoun} shared experiences from a career of {situation}. {pronoun} pondered whether choosing {situation} as a career was a wise decision. (b) Pronoun Sentence Templates Table 4: Templates used to introduce ("Meet Sentence") and refer to ("Pronoun Sentence") entities in KITMUS task. of 12 USD for their participation.7 The study was approved by the institution's ethics review board and the participants gave their written consent via a form. The participants were tasked to resolve the coreferences in a randomly sampled subset of KITMUS texts. The task is presented to the participants as a multiple choice questionnaire. The participants are given gold mentions and have to select the antecedent that is referred to by the pronoun. The answer options include the names of all mentioned entities and a "can't say" option to indicate that the question is not answerable. The questionnaire contained 96 questions to be completed in 60 minutes, which was generous for most participants. The human validation was conducted using Google forms. The participants are introduced to the task with examples as shown in Figure 4. This is followed by 96 questions where the participants have to choose one option among all entity names and the option "can't say," which indicates that the task cannot be solved for this instance. The aggregated results of the validation study are shown in Table 5. ## A.5 Training Details We train our models in a compute cluster infrastructure on Nvidia Quadro RTX 8000 GPUs. For BERT4Coref, training on the train split of one KIT-MUS subtask took about 8 hours per run. For C2F it took about 16 hours, the training of the ensemble model GREP took 18 hours. The training of smaller models and inference on pretrained models took about 4 hours per run. 7Matches the minimum wage in the participants' demographic ## A.6 Additional Experiments As a supplement to our main experiments, we report the following experiment results on the BACKGROUND-PRETRAIN variant: - F1 score in Table 6 - Accuracy with 5000 instead of 2000 train examples in Table 7 - Accuracy without noise in Table 8 - Accuracy on train set in Table 9 - Accuracy with and without root word overlap in Table 10 On the BACKGROUND-BOTH and BACKGROUND-INFERENCE variants, we report: - F1 score in Table 11 - Accuracy on train set in Table 12 - Accuracy with 5000 instead of 2000 train examples in Table 13 ## A.7 Datasheet A.7.1 Motivation For What Purpose Was The Dataset Created? The KITMUS dataset was created to enable research on reasoning over knowledge for the task of coreference resolution - i.e. given a piece of text, identify mentions and determine whether or not they co-refer. The dataset was created with the intention to focus on those cases of coreference resolution that require knowledge about specific entities and their occupations to accomplish the task. ## Who Created The Dataset And On Behalf Of Which Entities? The dataset was created by the authors of this paper. ## Who Funded The Creation Of The Dataset? (a) Top Half (b) Bottom Half ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) Figure 4: Human validation questionaire introduction (split into two halves because of space constraints). Table 6: Antecedent F1 on BACKGROUND-PRETRAIN variant of KITMUS. Models marked with † operate on GAP format which only allows for the annotation of two entities. All other models operate on the CoNLL format. PeTra has higher F1 scores than pronoun accuracy, since it defaults to always predicting true for each antecedent, which results in a recall of 1.00 and a thus a high F1 score. | Variant | Occupation | Situation | With Knowledge | Without Knowledge | |-------------------------------------------------------------------------------------------------------------------|--------------|-------------|------------------|---------------------| | BACKGROUND-PRETRAIN | Real | Real | 0.93 | 0.00 | | BACKGROUND-PRETRAIN without noise | 0.91 | 0.00 | | | | BACKGROUND-BOTH | Real | 1.00 | 0.00 | | | BACKGROUND-INFERENCE | CharFict | 1.00 | 0.00 | | | Real | | | | | | BACKGROUND-INFERENCE | WordFict | 0.98 | 0.00 | | | BACKGROUND-INFERENCE | Real | 0.98 | 0.00 | | | BACKGROUND-INFERENCE | CharFict | 0.98 | 0.00 | | | CharFict | | | | | | BACKGROUND-INFERENCE | WordFict | 0.96 | 0.06 | | | Table 5: Accuracy on all variants aggregated over subtasks, splits, and participants. Random performance is 0.25. | | | | | | Model | 2 Entities | 3 Entities | 4 Entities | Model | 2 Entities | 3 Entities | 4 Entities | |-----------------------|--------------------|--------------|--------------|------------|--------------|--------------|--------------| | BERT4Coref | 0.49 | 0.24 | 0.19 | | | | | | C2F | 0.48 | 0.33 | 0.25 | | | | | | Stanford Neural | 0.29 | 0.15 | 0.13 | | | | | | Stanford Stat. | 0.09 | 0.04 | 0.02 | | | | | | Random | 0.50 | 0.33 | 0.25 | BERT4Coref | 0.99 | 0.99 | 0.94 | | C2F | 0.52 | 0.35 | 0.48 | | | | | | GREP† | 0.49 | - | - | | | | | | PeTra† | 0.67 | - | - | | | | | | Random | 0.50 | 0.33 | 0.25 | | | | | | (a) Ontonotes-trained | (b) KITMUS-trained | | | | | | | Funding was provided by multiple sources as mentioned in the acknowledgements in section 8. Any other comments? None. ## A.7.2 Composition What do instances that comprise the dataset represent? The dataset consist of text pairs that were generated to capture knowledge about entities, occupations, and situations, as well as coreference cases whose resolution depends on this knowledge. The labels are clusters of tokens in the text. How many instances are there in total? There are 4400·3·(2+1+5) = 105600 instances in total: 4400 instances for each of the three entity numbers for variants BACKGROUND-PRETRAIN (also without noise), BACKGROUND-BOTH, and | 2 Entities | 3 Entities | 4 Entities | | | | | | |--------------|--------------|--------------|-------------|-------------|-------------|-------------|------| | Model | Train Data | 2k | 5k | 2k | 5k | 2k | 5k | | PeTra | 0.00 | 0.01 | - | - | - | - | | | GREP | 0.49 | 0.50 | - | - | - | - | | | KITMUS | | | | | | | | | BERT4Coref | 0.99 ± 0.00 | 1.00 ± 0.00 | 0.98 ± 0.01 | 0.97 ± 0.00 | 0.94 ± 0.01 | 0.94 ± 0.02 | | | C2F | 0.52 ± 0.02 | 0.58 ± 0.06 | 0.28 ± 0.08 | 0.63 ± 0.03 | 0.48 ± 0.06 | 0.24 ± 0.08 | | | Random | - | 0.50 | 0.50 | 0.33 | 0.33 | 0.25 | 0.25 | Table 7: Accuracy on BACKGROUND-PRETRAIN variant of KITMUS with 2000 (2k) and 5000 (5k) train examples. Standard deviation is given after ±. | 2 Entities | 3 Entities | 4 Entities | | | | | | |--------------|--------------|--------------|----------|-------|----------|-------|----------| | Model | Train Data | Noise | No Noise | Noise | No Noise | Noise | No Noise | | BERT4Coref | 0.43 | 0.43 | 0.18 | 0.23 | 0.14 | 0.13 | | | C2F | 0.34 | 0.34 | 0.18 | 0.18 | 0.13 | 0.14 | | | Ontonotes | | | | | | | | | Stfd. Neural | 0.20 | 0.33 | 0.10 | 0.15 | 0.09 | 0.14 | | | Stfd. Stat. | 0.05 | 0.15 | 0.02 | 0.06 | 0.01 | 0.06 | | | PeTra | 0.00 | 0.01 | - | - | - | - | | | GREP | 0.49 | 0.49 | - | - | - | - | | | KITMUS | | | | | | | | | BERT4Coref | 0.99 | 1.00 | 0.98 | 0.98 | 0.94 | 0.92 | | | C2F | 0.52 | 0.52 | 0.28 | 0.34 | 0.48 | 0.24 | | | Random | - | 0.50 | 0.50 | 0.33 | 0.33 | 0.25 | 0.25 | Table 8: Accuracy on BACKGROUND-PRETRAIN variant of KITMUS with and without noise. 2 Entities 3 Entities 4 Entities Model Train Data Test Train Test Train Test Train PeTra KITMUS 0.00 0.01 - - - - GREP 0.49 0.51 - - - - BERT4Coref 0.99 1.00 0.98 1.00 0.94 1.00 C2F 0.52 0.96 0.28 1.00 0.48 1.00 Random - 0.50 0.50 0.33 0.33 0.25 0.25 Table 9: Test and train accuracy on BACKGROUND-PRETRAIN variant of KITMUS. Table 10: Accuracy on BACKGROUND-PRETRAIN variant of KITMUS with and without root word overlap. five versions of BACKGROUND-INFERENCE with different degrees of fictionality. ## Does The Dataset Contain All Possible Instances Or Is It A Sample Of Instances From A Larger Set? The dataset contains all instances that we generated. They are generated by filling slots in a template by sampling from a pool of resources. The pool of resources only contains a subset of resources in the world, and the sampling process selects a random subset of the pool of resources. ## What Data Does Each Instance Consist Of? The instances are pairs of template-generated texts: one knowledge text and one task text. The knowledge text contains knowledge about fictional | 2 Entities | 3 Entities | 4 Entities | | | | | | |--------------|--------------|--------------|------------|---------|------------|---------|------------| | Model | Train Data | Overlap | No Overlap | Overlap | No Overlap | Overlap | No Overlap | | BERT4Coref | 0.43 | 0.45 | 0.18 | 0.19 | 0.15 | 0.14 | | | C2F | 0.34 | 0.36 | 0.17 | 0.19 | 0.13 | 0.12 | | | Ontonotes | | | | | | | | | Stfd. Neural | 0.20 | 0.19 | 0.11 | 0.08 | 0.08 | 0.09 | | | Stfd. Stat. | 0.05 | 0.04 | 0.02 | 0.01 | 0.01 | 0.00 | | | PeTra | 0.00 | 0.01 | - | - | - | - | | | GREP | 0.47 | 0.52 | - | - | - | - | | | KITMUS | | | | | | | | | BERT4Coref | 0.99 | 0.99 | 0.99 | 0.97 | 0.95 | 0.92 | | | C2F | 0.53 | 0.50 | 0.29 | 0.26 | 0.49 | 0.46 | | | Random | - | 0.50 | 0.50 | 0.33 | 0.33 | 0.25 | 0.25 | | Var. | Occupation | Situation | C2F | BERT4Coref | |----------|--------------|-------------|-------|--------------| | BB | Real | 0.11 | 0.96 | | | BI | CharFict | 0.20 | 0.25 | | | Real | | | | | | BI | WordFict | 0.10 | 0.49 | | | BI | Real | 0.09 | 0.43 | | | BI | CharFict | 0.21 | 0.27 | | | CharFict | | | | | | BI | WordFict | 0.14 | 0.39 | | entities and real or fictional occupations in text form. The task text contains a case of coreference involving the same fictional entities. Labels for the coreferences are given in the form of coreference clusters over tokens. Is there a label associated with each instance? Yes. The label is a coreference cluster that represents the true resolution of the coreference presented in the text. Is any information missing from individual ## Instances? No. Are Relationships Between Individual Instances Made Explicit? Yes. The entities are fictional and created separately for each instance. Instances are completely independent from each other and are not consistent across the dataset, i.e. conflicting knowledge may be given for the same fictional entity across different instances in the dataset. ## Are There Recommended Data Splits? Yes. Each subcategory of the dataset is provided in recommended data splits of 2000 train instances, 400 validation instances, and 2000 test instances. The numbers are chosen for size comparability with other coreference resolution datasets such as GAP (Webster et al., 2018). Resources are disjunct across the splits for each subcategory, which enables the evaluation of the ability of models to generalize beyond observed resources. ## Are There Any Errors, Sources Of Noise, Or Redundancies In The Dataset? None that we are aware of. Since the dataset is template-generated, only the intentionally provided noise in the appropriate subcategory is present. We control for redundancies in the dataset. A human validation has not brought to light any errors in the dataset, however, due to the synthetic nature of the dataset texts can appear wooden and non-natural to readers. ## Is The Dataset Self-Contained, Or Does It Link To or otherwise rely on external resources? The dataset is created using external resources to fill slots in templates, but the finished dataset is entirely self-contained. ## Does The Dataset Contain Data That Might Be Considered Confidential? The dataset contains only information about fictional entities and public knowledge about occupations which is not confidential. Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? Both the templates and the resources used to fill the slots were manually inspected for content that might cause anxiety to viewers. The dataset does not contain any text that might cause anxiety to viewers. ## Does The Dataset Identify Any Subpopulations? The fictional entities have neither an explicit age nor gender. The only distinguishing features of the entities are their names and occupations, which are uniformly sampled, and their pronoun use, which is sampled according to the following distribution: 40% he, 40% she, 10% they, and 10% neopronouns. ## Is It Possible To Identify Individuals Either Directly Or Indirectly? No. Since the entities are entirely fictional, any similarities to existing individuals are due to chance. Does the dataset contain data that might be sensitive in any way? No. ## Any Other Comments? None. ## A.7.3 Collection Process How Was The Data Associated With Each Instance Acquired? The data was generated by filling slots in templates that were hand-engineered. The slot-filling resources were obtained from publicly available raw text sources such as governmental name statistics and professional job websites. Noise sentences were generated with the language model GPT-2 (Radford et al., 2019) and manually edited and verified to conform with the rest of the dataset. Fictional occupation names and descriptions were created by random sampling from a character-level LSTM language model following methodology of Malkin et al. (2021). C2F BERT4Coref Variant Occupation Situation Test Train Test Train BB Real Real 0.09 1.00 0.96 1.00 BI CharFict 0.18 0.97 0.25 0.88 BI WordFict 0.08 0.95 0.48 0.73 BI CharFict Real 0.08 0.96 0.43 0.97 BI CharFict 0.18 0.83 0.26 0.78 BI WordFict 0.11 1.00 0.38 0.96 Table 12: Train and test accuracy on BACKGROUND-BOTH (BB) and BACKGROUND-INFERENCE (BI) variants of KITMUS. Random performance is 0.25. C2F BERT4Coref Variant Occupation Situation 2k 5k 2k 5k BB Real Real 0.09 0.49 0.96 0.97 BI CharFict 0.18 0.25 0.25 0.27 BI WordFict 0.08 0.26 0.48 0.78 BI CharFict Real 0.08 0.21 0.43 0.57 BI CharFict 0.18 0.25 0.26 0.26 BI WordFict 0.11 0.25 0.38 0.59 Table 13: KITMUS-trained accuracy on BACKGROUND-BOTH (BB) and BACKGROUND-INFERENCE (BI) variants of KITMUS with four entities with 2000 (2k) and 5000 (5k) train examples. Random performance is 0.25. ## What Mechanisms Or Procedures Were Used To Collect The Data? The dataset was generated using Python scripts, which will be made publicly available in a GitHub repository. If the dataset is a sample from a larger set, what was the sampling strategy? Not applicable. The entire dataset will be released. Who was involved in the data collection process and how were they compensated? Not applicable. There was no human involved in the dataset creation prcoess. Over what timeframe was the data collected? The dataset was created immediately prior to the submission of this draft for review. Were any ethical review processes conducted for the data collection process? Not applicable, data was not collected. The human evaluation study used to evaluate the dataset was approved by an institutional review board. Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources? The dataset was created via templates. The resources were collected directly from publicly available data online. Were the individuals in question notified about the data collection? The resources were collected directly online from institutions and authors who made the resources available publicly. The authors and institutions were not explicitly informed about the way their resources are used in this dataset. Did the individuals in question consent to the collection and use of their data? Not applicable. If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? Not applicable. Has an analysis of the potential impact of the dataset and its use on data subjects been conducted? No. Any other comments? None. ## A.7.4 Preprocessing Was Any Preprocessing/Cleaning/Labeling Of The Data Done? The template building blocks were manually tokenized and POS tagged with the Stanford CoreNLP pipeline, which was then manually verified. In terms of resources, the occupations were filtered manually to avoid overlaps in descriptions. Referential gender cues such as "fireman" were removed from the occupations. Occupations pertaining to very specific domains or related to location were removed from the list. GPT-2 generated noise sentences were manually checked for coherence and also tokenized and POS tagged with the Stanford CoreNLP pipeline. Fictional occupation names and descriptions were likewise manually checked for coherence and suitability. Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data? No. Is the software that was used to preprocess/clean/label the data available? The Stanford CoreNLP pipeline is available here: https://stanfordnlp.github.io/CoreNLP/. Any other comments? None. A.7.5 Uses Has the dataset been used for any tasks already? None. Is there a repository that links to any or all papers or systems that use the dataset? Not applicable. What (other) tasks could the dataset be used for? The dataset could potentially be used for research on mention detection, cross-document coreference resolution, or entity linking, since the annotations are compatible with these tasks as well. Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? Due to its template-generated nature, the data does not consist of naturally occurring texts and should not be used for purposes which require naturally occurring texts. Are there tasks for which the dataset should not be used? The entities in the texts are entirely fictional and have an arbitrary distribution of attributes. Consequently, the information in this dataset should not be used to make decisions about real people. Any other comments? None. A.7.6 Distribution Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset was created? Yes, the dataset will be available publicly on the internet. How will the dataset be distributed? The dataset will be released in the GitHub repository for this paper. When will the dataset be distributed? Upon publication of the corresponding paper. Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? The dataset and the code used to generate it will be distributed under the license specified in the GitHub repository for the dataset. In the repository, we will also request to cite the corresponding paper if the dataset is used. Have any third parties imposed IP-based or other restrictions on the data associated with the instances? None that we are aware of. Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? None that we are aware of. Any other comments? No. A.7.7 Maintenance Who will be supporting/hosting/maintaining the dataset? The first authors will support and maintain the dataset. How can the owner/curator/manager of the dataset be contacted? Contact the first authors. Is there an erratum? No. Future updates and known errors will be specified in the README.md of the repository. Will the dataset be updated? Currently, no updates are planned. If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances? Not applicable, since the entities are fictional. Will older versions of the dataset continue to be supported/hosted/maintained? In the case of updates, the original version of the dataset will always be available on GitHub via a tagged release. If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? Suggestions for the augmentation of the dataset can be made via GitHub pull requests. Any other comments? None. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations section ✓ A2. Did you discuss any potential risks of your work? A.8.2 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 ✓ B1. Did you cite the creators of artifacts you used? 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? A.8.6 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? A.8.5 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? A.8.2 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? A.8.2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. A.8.2 ## C ✓ **Did You Run Computational Experiments?** 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? A.6 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? A.8.4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 4.4 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? A.5 ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? A.5 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? A.5 ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? A.5 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? A.5
treviso-etal-2023-crest
{CREST}: A Joint Framework for Rationalization and Counterfactual Text Generation
https://aclanthology.org/2023.acl-long.842
Selective rationales and counterfactual examples have emerged as two effective, complementary classes of interpretability methods for analyzing and training NLP models. However, prior work has not explored how these methods can be integrated to combine their complementary advantages. We overcome this limitation by introducing CREST (ContRastive Edits with Sparse raTionalization), a joint framework for selective rationalization and counterfactual text generation, and show that this framework leads to improvements in counterfactual quality, model robustness, and interpretability. First, CREST generates valid counterfactuals that are more natural than those produced by previous methods, and subsequently can be used for data augmentation at scale, reducing the need for human-generated examples. Second, we introduce a new loss function that leverages CREST counterfactuals to regularize selective rationales and show that this regularization improves both model robustness and rationale quality, compared to methods that do not leverage CREST counterfactuals. Our results demonstrate that CREST successfully bridges the gap between selective rationales and counterfactual examples, addressing the limitations of existing methods and providing a more comprehensive view of a model{'}s predictions.
# Crest: A Joint Framework For Rationalization And Counterfactual Text Generation Marcos Treviso1,2∗ , Alexis Ross3, Nuno M. Guerreiro1,2**, André F. T. Martins**1,2,4 1Instituto de Telecomunicações, Lisbon, Portugal 2Instituto Superior Técnico & LUMLIS (Lisbon ELLIS Unit), Lisbon, Portugal 3Massachusetts Institute of Technology 4Unbabel, Lisbon, Portugal ## Abstract ![0_Image_0.Png](0_Image_0.Png) Selective rationales and counterfactual examples have emerged as two effective, complementary classes of interpretability methods for analyzing and training NLP models. However, prior work has not explored how these methods can be integrated to combine their complementary advantages. We overcome this limitation by introducing CREST (ContRastive Edits with Sparse raTionalization), a joint framework for selective rationalization and counterfactual text generation, and show that this framework leads to improvements in counterfactual quality, model robustness, and interpretability. First, CREST generates valid counterfactuals that are more natural than those produced by previous methods, and subsequently can be used for data augmentation at scale, reducing the need for human-generated examples. Second, we introduce a new loss function that leverages CREST counterfactuals to regularize selective rationales and show that this regularization improves both model robustness and rationale quality, compared to methods that do not leverage CREST counterfactuals. Our results demonstrate that CREST successfully bridges the gap between selective rationales and counterfactual examples, addressing the limitations of existing methods and providing a more comprehensive view of a model's predictions. ## 1 Introduction As NLP models have become larger and less transparent, there has been a growing interest in developing methods for finer-grained interpretation and control of their predictions. One class of methods leverages **selective rationalization** (Lei et al., 2016; Bastings et al., 2019), which trains models to first select*rationales*, or subsets of relevant input tokens, and then make predictions based only on the selected rationales. These methods offer increased interpretability, as well as learning benefits, such as improved robustness to input perturbations (Jain et al., 2020; Chen et al., 2022). Another class of methods generates **counterfactual examples**, or modifications to input examples that change their labels. By providing localized views of decision boundaries, counterfactual examples can be used as explanations of model predictions, contrast datasets for fine-grained evaluation, or new training datapoints for learning more robust models (Ross et al., 2021; Gardner et al., 2020; Kaushik et al., 2020). This paper is motivated by the observation that selective rationales and counterfactual examples allow for interpreting and controlling model behavior through different means: selective rationalization improves model transparency by weaving interpretability into a model's internal decisionmaking process, while counterfactual examples provide external signal more closely aligned with human causal reasoning (Wu et al., 2021). We propose to combine both methods to leverage their complementary advantages. We introduce CREST (*ContRastive Edits with Sparse raTionalization*), a joint framework for rationalization and ∗Correspondence to: marcos.treviso@tecnico.pt 15109 ![1_image_0.png](1_image_0.png) counterfactual text generation. CREST first generates high-quality counterfactuals (Figure 1), then leverages those counterfactuals to encourage consistency across "flows" for factual and counterfactual inputs (Figure 2). In doing so, CREST unifies two key important dimensions of interpretability introduced by Doshi-Velez and Kim (2017, §3.2), forward simulation and counterfactual simulation. Our main contributions are:1 - We present **CREST-Generation** (Figure 1), a novel approach to generating counterfactual examples by combining sparse rationalization with span-level masked language modeling (§3), which produces valid, fluent, and diverse counterfactuals (§4, Table 1). - We introduce **CREST-Rationalization** (Figure 2), a novel approach to regularizing rationalizers. CREST-Rationalization decomposes a rationalizer into factual and counterfactual flows and encourages agreement between the rationales for both (§5). - We show that CREST-generated counterfactuals can be effectively used to increase model robustness, leading to larger improvements on contrast and out-of-domain datasets than using manual counterfactuals (§6.2, Tables 2 and 3). - We find that rationales trained with CRESTRationalization not only are more plausible, but also achieve higher forward and counterfactual simulabilities (§6.3, Table 4). 1Code at https://github.com/deep-spin/crest/. Overall, our experiments show that CREST successfully combines the benefits of counterfactual examples and selective rationales to improve the quality of each, resulting in a more interpretable and robust learned model. ## 2 Background 2.1 Rationalizers The traditional framework of rationalization involves training two components cooperatively: the generator—which consists of an encoder and an explainer—and the *predictor*. The generator encodes the input and produces a "rationale" (e.g., word highlights), while the predictor classifies the text given only the rationale as input (Lei et al., 2016). Assume a document x with n tokens as input. The encoder module (enc) converts the input tokens into d-dimensional hidden state vectors H ∈ R n×d, which are passed to the explainer (expl) to generate a latent mask z ∈ {0, 1} n. The latent mask serves as the rationale since it is used to select a subset of the input x ⊙ z, which is then passed to the predictor module (pred) to produce a final prediction yˆ ∈ Y, where Y = {1*, ..., k*} for k-class classification. The full process can be summarized as follows: $$z=\exp(\operatorname{enc}(x;\phi);\gamma),\qquad\qquad(1)$$ $$\hat{y}=\operatorname{pred}(x\odot z;\theta),\qquad\qquad(2)$$ where *ϕ, γ, θ* are trainable parameters. To ensure that the explainer does not select all tokens (i.e., zi = 1, ∀i), sparsity is usually encouraged in the 15110 rationale extraction. Moreover, explainers can also be encouraged to select contiguous words, as there is some evidence that it improves readibility (Jain et al., 2020). These desired properties may be encouraged via regularization terms during training (Lei et al., 2016; Bastings et al., 2019), or via application of sparse mappings (Treviso and Martins, 2020; Guerreiro and Martins, 2021). In this work, we will focus specifically on the SPECTRA rationalizer (Guerreiro and Martins, 2021): this model leverages an explainer that extracts a deterministic structured mask z by solving a constrained inference problem with SparseMAP (Niculae et al., 2018). SPECTRA has been shown to achieve comparable performance with other rationalization approaches, in terms of end-task performance, plausibility with human explanations, and robustness to input perturbation (Chen et al., 2022). Moreover, it is easier to train than other stochastic alternatives (Lei et al., 2016; Bastings et al., 2019), and, importantly, it allows for simple control over the properties of the rationales, such as sparsity via its constrained inference formulation: by setting a budget B on the rationale extraction, SPECTRA ensures that the rationale size will not exceed ⌈Bn⌉ tokens. ## 2.2 Counterfactuals In NLP, counterfactuals refer to alternative texts that describe a different outcome than what is encoded in a given factual text. Prior works (Verma et al., 2020) have focused on developing methods for generating counterfactuals that adhere to several key properties, including: - **Validity**: the generated counterfactuals should encode a different label from the original text. - **Closeness**: the changes made to the text should be small, not involving large-scale rewriting of the input. - **Fluency**: the generated counterfactuals should be coherent and grammatically correct. - **Diversity**: the method should generate a wide range of counterfactuals with diverse characteristics, rather than only a limited set of variations. While many methods for automatic counterfactual generation exist (Wu et al., 2021; Robeer et al., 2021; Dixit et al., 2022), our work is mostly related to MiCE (Ross et al., 2021), which generates counterfactuals in a two stage process that involves masking the top-k tokens with the highest ℓ1 gradient attribution of a pre-trained classifier, and infilling tokens for masked position with a T5-based model (Raffel et al., 2020). MiCE further refines the resultant counterfactual with a binary search procedure to seek strictly *minimal* edits. However, this process is computationally expensive and, as we show in §4.2, directly optimizing for closeness can lead to counterfactuals that are less valid, fluent, and diverse. Next, we present an alternative method that overcomes these limitations while still producing counterfactuals that are close to original inputs. ## 3 Crest-Generation We now introduce CREST (ContRastive Edits with Sparse raTionalization), a framework that combines selective rationalization and counterfactual text generation. CREST has two key components: (i) **CREST-Generation** offers a controlled approach to generating counterfactuals, which we show are valid, fluent, and diverse (§4.2); and (ii) **CREST-Rationalization** leverages these counterfactuals through a novel regularization technique encouraging agreement between rationales for original and counterfactual examples. We demonstrate that combining these two components leads to models that are more robust (§6.2) and interpretable (§6.3). We describe CREST-Generation below and CREST-Rationalization in §5. Formally, let x = ⟨x1*, ..., x*n⟩ represent a factual input text with a label yf . We define a counterfactual as an input x˜ = ⟨x1*, ..., x*m⟩ labeled with yc such that yf ̸= yc. A counterfactual generator is a mapping that transforms the original text x to a counterfactual x˜. Like MiCE, our approach for generating counterfactuals consists of two stages, as depicted in Figure 1: the mask and the edit stages. Mask stage. We aim to find a mask vector z ∈ {0, 1} nsuch that tokens xi associated with zi = 1 are relevant for the factual prediction yˆf of a particular classifier C. To this end, we employ a SPECTRA rationalizer as the **masker**. Concretely, we pre-train a SPECTRA rationalizer on the task at hand with a budget constraint B, and define the mask as the rationale vector z ∈ {0, 1} n(see §2.1). Edit stage. Here, we create edits by infilling the masked positions using an **editor** module G, such as a masked language model: x˜ ∼ GLM(x ⊙ z). In order to infill spans rather than single tokens, we follow MiCE and use a T5-based model to infill | IMDB | SNLI | | | | | | | | | | |--------------------|--------|--------|--------|--------|-------|--------|--------|--------|--------|------| | Method | val. ↑ | fl. ↓ | div. ↓ | clo. ↓ | #tks | val. ↑ | fl. ↓ | div. ↓ | clo. ↓ | #tks | | Chance baseline | 50.20 | - | - | - | - | 52.70 | - | - | - | - | | References | 97.95 | 66.51 | - | - | 184.4 | 96.75 | 63.52 | - | - | 7.5 | | Manual edits | 93.44 | 72.89 | 81.67 | 0.14 | 183.7 | 93.88 | 65.25 | 35.82 | 0.42 | 7.7 | | PWWS | 28.07 | 101.91 | 74.56 | 0.16 | 179.0 | 17.97 | 160.11 | 31.81 | 0.36 | 6.8 | | CFGAN | - | - | - | - | - | 34.46 | 155.84 | 68.94 | 0.23 | 7.0 | | PolyJuice | 36.69 | 68.59 | 56.41 | 0.45 | 94.6 | 41.80 | 62.62 | 39.01 | 0.40 | 11.6 | | MiCE (bin. search) | 72.13 | 76.72 | 73.76 | 0.20 | 171.3 | 76.17 | 63.94 | 42.18 | 0.35 | 7.9 | | MiCE (30% mask) | 76.80 | 79.35 | 49.64 | 0.39 | 161.3 | 77.26 | 59.71 | 34.08 | 0.40 | 8.3 | | MiCE (50% mask) | 83.20 | 89.92 | 20.71 | 0.65 | 115.7 | 84.48 | 68.32 | 24.27 | 0.52 | 7.6 | | CREST (30% mask) | 75.82 | 67.29 | 57.58 | 0.33 | 180.9 | 75.45 | 62.00 | 41.36 | 0.29 | 7.4 | | CREST (50% mask) | 93.24 | 50.69 | 23.08 | 0.66 | 193.9 | 81.23 | 62.60 | 30.53 | 0.41 | 7.3 | spans for masked positions. During training, we fine-tune the editor to infill original spans of text by prepending gold target labels yf to original inputs. In order to generate counterfactual edits at test time, we prepend a counterfactual label yc instead, and sample counterfactuals using beam search. Overall, our procedure differs from that of MiCE in the mask stage: instead of extracting a mask via gradient-based attributions and subsequent binary search, we leverage SPECTRA to find an optimal mask. Interestingly, by doing so, we not only avoid the computationally expensive binary search procedure, but we also open up new opportunities: as our masking process is differentiable, we can optimize our masker to enhance the quality of both the counterfactuals (§4.2) and the selected rationales (§6.3). We will demonstrate the latter with our proposed CREST-Rationalization setup (§5). All implementation details for the masker and the editor can be found in §B. ## 4 Evaluating Crest Counterfactuals This section presents an extensive comparison of counterfactuals generated by different methods. ## 4.1 Experimental Setting Data and evaluation. We experiment with our counterfactual generation framework on two different tasks: sentiment classification using IMDB (Maas et al., 2011) and natural language inference (NLI) using SNLI (Bowman et al., 2015). In sentiment classification, we only have a single input to consider, while NLI inputs consist of a premise and a hypothesis, which we concatenate to form a single input. To assess the quality of our automatic counterfactuals, we compare them to manually crafted counterfactuals in the revised IMDB and SNLI datasets created by Kaushik et al. (2020). More dataset details can be found in §A. Training. We employ a SPECTRA rationalizer with a T5-small architecture as the masker, and train it for 10 epochs on the full IMDB and SNLI datasets. We also use a T5-small architecture for the editor, and train it for 20 epochs with early stopping, following the same training recipe as MiCE. Full training details can be found in §B.3. Generation. As illustrated in Figure 1, at test time we generate counterfactuals by prepending a contrastive label to the input and passing it to the editor. For sentiment classification, this means switching between positive and negative labels. For NLI, in alignment with Dixit et al. (2022), we adopt a refined approach by restricting the generation of counterfactuals to entailments and contradictions only, therefore ignoring neutral examples, which have a subtle semantic meaning. In contrast, our predictors were trained using neutral examples, and in cases where they predict the neutral class, we default to the second-most probable class. Baselines. We compare our approach with four open-source baselines that generate counterfactuals: PWWS (Ren et al., 2019), PolyJuice (Wu et al., 2021), CounterfactualGAN (Robeer et al., 2021),2 2Despite many attempts, CounterfactualGAN did not converge on IMDB, possibly due to the long length of the inputs. ![4_image_0.png](4_image_0.png) and MiCE (Ross et al., 2021). In particular, to ensure a fair comparison with MiCE, we apply three modifications to the original formulation: (i) we replace its RoBERTa classifier with a T5-based classifier (as used in SPECTRA); (ii) we disable its validity filtering;3(iii) we report results with and without the binary search procedure by fixing the percentage of masked tokens. Metrics. To determine the general **validity** of counterfactuals, we report the accuracy of an off-the-shelf RoBERTa-base classifier available in the HuggingFace Hub.4 Moreover, we measure **fluency** using perplexity scores from GPT-2 large (Radford et al., 2019) and **diversity** with selfBLEU (Zhu et al., 2018). Finally, we quantify the notion of **closeness** by computing the normalized edit distance to the factual input and the average number of tokens in the document. ## 4.2 Results Results are presented in Table 1. As expected, manually crafted counterfactuals achieve high validity, significantly surpassing the chance baseline and establishing a reliable reference point. For IMDB, we find that CREST outperforms other methods by a wide margin in terms of validity and fluency. At the same time, CREST's validity is comparable to the manually crafted counterfactuals, while surprisingly deemed more fluent by GPT-2. Moreover, we note that our modification of disabling MiCE's minimality search leads to counterfactuals that are more valid and diverse but less fluent and less close to the original inputs. For SNLI, this modification allows MiCE to achieve the best overall scores, closely followed by CREST. However, when controlling for closeness, we observe that CREST outperforms MiCE: at closeness of ∼0.30, CREST (30% mask) outperforms MiCE with binary search in terms of fluency and diversity. Similarly, at a closeness of ∼0.40, CREST (50% mask) surpasses MiCE (30% mask) across the board. As detailed in §C, CREST's counterfactuals are more valid than MiCE's for all closeness bins lower than 38%. We provide examples of counterfactuals produced by CREST and MiCE in Appendix G. Finally, we note that CREST is highly affected by the masking budget, which we explore further next. Sparsity analysis. We investigate how the number of edits affects counterfactual quality by training maskers with increasing budget constraints (as described in §2.1). The results in Figure 3 show that with increasing masking percentage, generated counterfactuals become less textually similar to original inputs (i.e., less close) but more valid and fluent. This inverse relationship demonstrates that strict minimality, optimized for in methods like MiCE, comes with tradeoffs in counterfactual quality, and that the sparsity budget in CREST can be used to modulate the trade-off between validity and closeness. In Figure 3 we also examine the benefit of manually crafted counterfactuals in two ways: (i) using these examples as additional training data; and (ii) upon having a trained editor, further finetuning it with these manual counterfactuals. The results suggest that at lower budget percentages, exploiting a few manually crafted counterfactuals to fine-tune CREST can improve the validity of counterfactuals without harming fluency. Validity filtering. As previously demonstrated by Wu et al. (2021) and Ross et al. (2022), it is possible to filter out potentially disfluent or invalid counterfactuals by passing all examples to ![5_image_0.png](5_image_0.png) a classifier and discarding the subset with incorrect predictions. In our case, we use the predictor associated with the masker as the classifier. We found find that applying this filtering increases the validity of IMDB counterfactuals from 75.82 to 86.36 with B = 0.3, and from 93.24 to 97.36 with B = 0.5. For SNLI, validity jumps from 75.45 to 96.39 with B = 0.3, and from 81.23 to 96.67 with B = 0.5. These results indicate that CREST can rely on its predictor to filter out invalid counterfactuals, a useful characteristic for doing data augmentation, as we will see in §6.2. ## 4.3 Human Study We conduct a small-scale human study to evaluate the quality of counterfactuals produced by MiCE and CREST with 50% masking percentage. Annotators were tasked with rating counterfactuals' validity and *naturalness* (e.g., based on style, tone, and grammar), each using a 5-point Likert scale. Two fluent English annotators rated 50 examples from the IMDB dataset, and two others rated 50 examples from SNLI. We also evaluate manually created counterfactuals to establish a reliable baseline. More annotation details can be found in §D. The study results, depicted in Figure 4, show that humans find manual counterfactuals to be more valid and natural compared to automatically generated ones. Furthermore, CREST's counterfactuals receive higher ratings for validity and naturalness compared to MiCE, aligning with the results obtained from automatic metrics. ## 5 Crest-Rationalization Now that we have a method that generates highquality counterfactual examples, a natural step is to use these examples for data augmentation. However, vanilla data augmentation does not take advantage of the paired structure of original/contrastive examples and instead just treats them as individual datapoints. In this section, we present CREST's second component, CREST-Rationalization (illustrated in Figure 2), which leverages the relationships between factual and counterfactual inputs through a SPECTRA rationalizer with an **agreement regularization** strategy, described next. ## 5.1 Linking Counterfactuals And Rationales We propose to incorporate counterfactuals into a model's functionality by taking advantage of the fully differentiable rationalization setup. Concretely, we decompose a rationalizer into two flows, as depicted in Figure 2: a **factual flow** that receives factual inputs x and outputs a factual prediction yˆ, and a **counterfactual flow** that receives counterfactual inputs x˜ and should output a counterfactual prediction y˜ ̸= ˆy. As a by-product of using a rationalizer, we also obtain a factual rationale z ∈ {0, 1} nfor x and a counterfactual rationale z˜ ∈ {0, 1} m for x˜, where n = |x| and m = |x˜|. Training. Let Θ = {*ϕ, γ, θ*} represent the trainable parameters of a rationalizer (defined in §2.1). We propose the following loss function: $$\begin{array}{c}{{{\mathcal{L}}(\Theta)={\mathcal{L}}_{f}(y_{f},\hat{y}(\Theta))+\alpha{\mathcal{L}}_{c}(y_{c},\tilde{y}(\Theta))}}\\ {{\qquad+\lambda\Omega(z(\Theta),\tilde{z}(\Theta)),}}\end{array}$$ where Lf (·) and Lc(·) represent cross-entropy losses for the factual and counterfactual flows, respectively, and Ω(·) is a novel penalty term to encourage factual and counterfactual rationales to focus on the same positions, as defined next. α ∈ R and λ ∈ R are hyperparameters. Agreement regularization. To produce paired rationales for both the factual and counterfactual flows, we incorporate regularization terms into the training of a rationalizer to encourage the factual explainer to produce rationales similar to those originally generated by the *masker* z ⋆, and the counterfactual explainer to produce rationales that focus on the tokens modified by the *editor* z˜ ⋆. We derive the ground truth counterfactual rationale z˜ ⋆ by aligning x to x˜ and marking tokens that were inserted or substituted as 1, and others as 0. The regularization terms are defined as: $$\Omega(\mathbf{z},{\tilde{\mathbf{z}}})=\|\mathbf{z}(\Theta)-\mathbf{z}^{\star}\|_{2}^{2}+\|{\tilde{\mathbf{z}}}(\Theta)-{\tilde{\mathbf{z}}}^{\star}\|_{2}^{2}\,.\tag{4}$$ To allow the counterfactual rationale z˜ to focus on all important positions in the input, we adjust the budget for the counterfactual flow based on the length of the synthetic example produced by the counterfactual generator. Specifically, we multiply the budget by a factor of ∥z˜ ⋆∥0 ∥z⋆∥0 . | Setup | IMDB | rIMDB | cIMDB | RotTom | SST-2 | Amazon | Yelp | |---------------------------------------------------------------|------------|------------|------------|------------|------------|------------|------------| | F | 91.1 ± 0.3 | 91.4 ± 0.8 | 88.5 ± 0.9 | 76.5 ± 1.6 | 79.8 ± 1.6 | 86.0 ± 0.7 | 88.5 ± 0.7 | | With data augmentation: F + CH 90.9 ± 0.5 | 92.9 ± 0.9 | 90.4 ± 1.6 | 76.6 ± 1.5 | 80.7 ± 1.3 | 86.3 ± 1.0 | 89.1 ± 1.2 | | | F + CS,V | 91.0 ± 0.2 | 91.2 ± 1.0 | 89.3 ± 0.8 | 76.8 ± 0.9 | 79.3 ± 0.3 | 85.2 ± 0.9 | 88.0 ± 1.0 | | F + CS | 90.8 ± 0.2 | 91.6 ± 1.3 | 89.2 ± 0.4 | 76.7 ± 1.0 | 80.6 ± 0.6 | 86.4 ± 0.6 | 89.1 ± 0.5 | | With agreement regularization: F & CS,V 90.7 ± 0.5 92.2 ± 0.7 | 88.9 ± 1.0 | 76.3 ± 1.4 | 80.2 ± 1.3 | 86.3 ± 0.7 | 88.9 ± 0.7 | | | | F & CS | 91.2 ± 0.5 | 92.9 ± 0.5 | 89.7 ± 1.1 | 77.3 ± 2.3 | 81.1 ± 2.4 | 86.8 ± 0.8 | 89.3 ± 0.7 | ## 6 **Exploiting Counterfactuals For Training** In this section, we evaluate the effects of incorporating CREST-generated counterfactuals into training by comparing a vanilla data augmentation approach with our CREST-Rationalization approach. We compare how each affects model robustness (§6.2) and interpretability (§6.3). ## 6.1 Experimental Setting We use the IMDB and SNLI datasets to train SPECTRA rationalizers with and without counterfactual examples, and further evaluate on in-domain, contrast and out-of-domain (OOD) datasets. For IMDB, we evaluate on the revised IMDB, contrast IMDB, RottenTomatoes, SST-2, Amazon Polarity, and Yelp. For SNLI, we evaluate on the Hard SNLI, revised SNLI, break, MultiNLI, and Adversarial NLI. Dataset details can be found in §A. To produce CREST counterfactuals, which we refer to as "synthetic", we use a 30% masking budget as it provides a good balance between validity, fluency, and closeness (cf. Figure 3). We tune the counterfactual loss (α) and agreement regularization (λ) weights on the dev set. We report results with α = 0.01 and λ = 0.001 for IMDB, and α = 0.01 and λ = 0.1 for SNLI. ## 6.2 Robustness Results Tables 2 and 3 show results for counterfactual data augmentation and agreement regularization for IMDB and SNLI, respectively. We compare a standard SPECTRA trained on factual examples (F) with other SPECTRA models trained on augmentated data from human-crafted counterfactuals (F + CH) and synthetic counterfactuals generated by CREST (F + CS), which we additionally postprocess to drop invalid examples (F + CS,V ). Discussion. As shown in Table 2, CRESTRationalization (F & CS) consistently outperforms vanilla counterfactual augmentation (F + CS) on all sentiment classification datasets. It achieves the top results on the full IMDB and on all OOD datasets, while also leading to strong results on contrastive datasets—competitive with manual counterfactuals (F + CH). When analyzing the performance of CREST-Rationalization trained on a subset of valid examples (F & CS,V ) versus the entire dataset (F & CS), the models trained on the entire dataset maintain a higher level of performance across all datasets. However, when using counterfactuals for data augmentation, this trend is less pronounced, especially for in-domain and contrastive datasets. In §E, we explore the impact of the number of augmented examples on results and find that, consistent with previous research (Huang et al., 2020; Joshi and He, 2022), augmenting the training set with a small portion of valid and diverse synthetic counterfactuals leads to more robust models, and can even outweigh the benefits of manual counterfactuals. Examining the results for NLI in Table 3, we observe that both counterfactual augmentation and agreement regularization interchangeably yield top results across datasets. Remarkably, in contrast to sentiment classification, we achieve more substantial improvements with agreement regularization models when these are trained on valid counterfactuals, as opposed to the full set. Overall, these observations imply that CRESTRationalization is a viable alternative to data augmentation for improving model robustness, especially for learning contrastive behavior for sentiment classification. In the next section, we explore the advantages of CREST-Rationalization for improving model interpretability. ## 6.3 Interpretability Analysis In our final experiments, we assess the benefits of our proposed regularization method on model inter- | Setup | SNLI | SNLI-h | rSNLI | break | MNLI-m | MNLI-mm | ANLI | |---------------------------------------------------------------|------------|------------|------------|------------|------------|------------|------------| | F | 86.6 ± 0.2 | 73.7 ± 0.2 | 71.1 ± 0.8 | 69.5 ± 1.5 | 64.6 ± 1.1 | 65.9 ± 0.9 | 32.6 ± 0.7 | | With data augmentation: F + CH 86.6 ± 0.3 | 74.9 ± 1.1 | 72.4 ± 0.3 | 70.1 ± 1.9 | 64.2 ± 0.9 | 65.8 ± 0.9 | 31.8 ± 0.4 | | | F + CS,V | 86.5 ± 0.3 | 75.8 ± 1.2 | 71.8 ± 1.0 | 69.1 ± 2.0 | 64.4 ± 0.3 | 65.9 ± 0.4 | 32.2 ± 0.2 | | F + CS | 86.6 ± 0.3 | 74.7 ± 1.1 | 71.6 ± 0.8 | 71.2 ± 1.4 | 64.5 ± 0.4 | 66.4 ± 0.6 | 32.2 ± 1.0 | | With agreement regularization: F & CS,V 86.8 ± 0.1 75.3 ± 0.8 | 66.8 ± 0.7 | 68.2 ± 2.1 | 64.6 ± 0.7 | 66.1 ± 0.6 | 32.8 ± 0.6 | | | | F & CS | 86.6 ± 0.1 | 75.5 ± 1.3 | 67.0 ± 1.3 | 69.9 ± 1.7 | 64.2 ± 1.1 | 66.0 ± 0.7 | 32.5 ± 0.5 | pretability. We evaluate effects on rationale quality along three dimensions: plausibility, forward simulability, and counterfactual simulability. Plausibility. We use the MovieReviews (DeYoung et al., 2020) and the e-SNLI (Camburu et al., 2018) datasets to study the human-likeness of rationales by matching them with human-labeled explanations and measuring their AUC, which automatically accounts for multiple binarization levels.5 Forward simulability. Simulability measures how often a human agrees with a given classifier when presented with explanations, and many works propose different variants to compute simulability scores in an automatic way (Doshi-Velez and Kim, 2017; Treviso and Martins, 2020; Hase et al., 2020; Pruthi et al., 2022). Here, we adopt the framework proposed by Treviso and Martins (2020), which views explanations as a message between a classifier and a linear student model, and determines simulability as the fraction of examples for which the communication is successful. In our case, we cast a SPECTRA rationalizer as the classifier, use its rationales as explanations, and train a linear student on factual examples of the IMDB and SNLI datasets. High simulability scores indicate more understandable and informative explanations. Counterfactual simulability. Building on the manual simulability setup proposed by Doshi-Velez and Kim (2017), we introduce a new approach to automatically evaluate explanations that interact with counterfactuals. Formally, let C be a classifier that when given an input x produces a prediction yˆ and a rationale z. Moreover, let G be a pre-trained counterfactual editor, which receives x and z and produces a counterfactual x˜ by infilling spans on positions masked according to z (e.g., via masking). 5We determine the explanation score for a single word by calculating the average of the scores of its word pieces. We define *counterfactual simulability* as follows: $${\frac{1}{N}}\sum_{n=1}^{N}[[C(\mathbf{x}_{n})\neq C(G(\mathbf{x}_{n}\odot\mathbf{z}_{n}))]],\quad\quad(5)$$ where [[·]] is the Iverson bracket notation. Intuitively, counterfactual simulability measures the ability of a rationale to change the label predicted by the classifier when it receives a contrastive edit with infilled tokens by a counterfactual generator as input. Therefore, a high counterfactual simulability indicates that the rationale z focuses on the highly contrastive parts of the input. Results. The results of our analysis are shown in Table 4. We observe that plausibility can substantially benefit from synthetic CREST-generated counterfactual examples, especially for a rationalizer trained with our agreement regularization, which outperforms other approaches by a large margin. Additionally, leveraging synthetic counterfactuals, either via data augmentation or agreement regularization, leads to a high forward simulability score, though by a smaller margin—within the standard deviation of other approaches. Finally, when looking at counterfactual simulability, we note that models that leverage CREST counterfactuals consistently lead to better rationales. In particular, agreement regularization leads to strong results on both tasks while also producing more plausible rationales, showing the efficacy of CRESTRationalization in learning contrastive behavior. ## 7 Related Works Generating counterfactuals. Existing approaches to generating counterfactuals for NLP use heuristics (Ren et al., 2019; Ribeiro et al., 2020), leverage plug-and-play approaches to controlled generation (Madaan et al., 2021), or, most relatedly, fine-tune language models to | Sentiment Classification | Natural Language Inference | | | | | | |------------------------------------------------------------------|------------------------------|---------------|---------------|---------------|--------------|--------------| | Setup | Plausibility | F. sim. | C. sim. | Plausibility | F. sim. | C. sim. | | F | 0.6733 ± 0.02 | 91.70 ± 0.92 | 81.18 ± 2.79 | 0.7735 ± 0.00 | 59.26 ± 0.41 | 70.01 ± 0.44 | | With data augmentation: F + CH 0.6718 ± 0.04 | 91.44 ± 1.46 | 80.53 ± 4.17 | 0.7736 ± 0.01 | 59.51 ± 0.86 | 69.90 ± 0.57 | | | F + CS | 0.6758 ± 0.01 | 91.68 ± 0.59 | 84.54 ± 1.09 | 0.7779 ± 0.00 | 59.54 ± 0.08 | 70.76 ± 0.54 | | With agreement regularization: F & CS 0.6904 ± 0.02 91.93 ± 0.83 | 86.43 ± 1.56 | 0.7808 ± 0.00 | 59.31 ± 0.20 | 70.69 ± 0.29 | | | generate counterfactuals (Wu et al., 2021; Ross et al., 2021, 2022; Robeer et al., 2021). For instance, PolyJuice (Wu et al., 2021) finetunes a GPT-2 model on human-crafted counterfactuals to generate counterfactuals following pre-defined control codes, while CounterfactualGAN (Robeer et al., 2021) adopts a GAN-like setup. We show that CREST-Generation outperforms both methods in terms of counterfactual quality. Most closely related is MiCE (Ross et al., 2021), which also uses a two-stage approach based on a masker and an editor to generate counterfactuals. Unlike MiCE, we propose to relax the minimality constraint and generate masks using selective rationales rather than gradients, resulting not only in higher-quality counterfactuals, but also in a fully-differentiable set-up that allows for further optimization of the masker. Other recent work includes Tailor (Ross et al., 2022), a semantically-controlled generation system that requires a human-in-the-loop to generate counterfactuals, as well as retrieval-based and prompting approaches such as RGF (Paranjape et al., 2022) and CORE (Dixit et al., 2022). Training with counterfactuals. Existing approaches to training with counterfactuals predominantly leverage data augmentation. Priors works have explored how augmenting with both manual (Kaushik et al., 2020; Khashabi et al., 2020; Huang et al., 2020; Joshi and He, 2022) and automatically-generated (Wu et al., 2021; Ross et al., 2022; Dixit et al., 2022) counterfactuals affects model robustness. Unlike these works, CREST-Rationalization introduces a new strategy for training with counterfactuals that leverages the paired structure of original and counterfactual examples, improving model robustness and interpretability compared to data augmentation. Also related is the training objective proposed by Gupta et al. (2021) to promote consistency across pairs of examples with shared substructures for neural module networks, and the loss term proposed by Teney et al. (2020) to model the factual-counterfactual paired structured via gradient supervision. In contrast, CREST can be used to *generate* paired examples, can be applied to non-modular tasks, and does not require second-order derivatives. Rationalization. There have been many modifications to the rationalization setup to improve task accuracy and rationale quality. Some examples include conditioning the rationalization on pre-specified labels (Yu et al., 2019), using an information-bottleneck formulation to ensure informative rationales (Paranjape et al., 2020), training with human-created rationales (Lehman et al., 2019), and replacing stochastic variables with deterministic mappings (Guerreiro and Martins, 2021). We find that CREST-Rationalization, which is fully unsupervised, outperforms standard rationalizers in terms of model robustness and quality of rationales. ## 8 Conclusions We proposed CREST, a joint framework for selective rationalization and counterfactual text generation that is capable of producing valid, fluent, and diverse counterfactuals, while being flexible for controlling the amount of perturbations. We have shown that counterfactuals can be successfully incorporated into a rationalizer, either via counterfactual data augmentation or agreement regularization, to improve model robustness and rationale quality. Our results demonstrate that CREST successfully bridges the gap between selective rationales and counterfactual examples, addressing the limitations of existing methods and providing a more comprehensive view of a model's predictions. ## Limitations Our work shows that CREST is a suitable framework for generating high-quality counterfactuals and producing plausible rationales, and we hope that CREST motivates new research to develop more robust and interpretable models. We note, however, two main limitations in our framework. First, our counterfactuals are the result of a large language model (T5), and as such, they may carry all the limitations within these models. Therefore, caution should be exercised when making statements about the quality of counterfactuals beyond the metrics reported in this paper, especially if these statements might have societal impacts. Second, CREST relies on a rationalizer to produce highlights-based explanations, and therefore it is limited in its ability to answer interpretability questions that go beyond the tokens of the factual or counterfactual input. ## Acknowledgments This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by EU's Horizon Europe Research and Innovation Actions (UTTER, contract 101070631), by P2020 project MAIA (LISBOA-01-0247- FEDER045909), by the Portuguese Recovery and Resilience Plan through project C645008882-00000055 (NextGenAI, Center for Responsible AI), and by contract UIDB/50008/2020. We are grateful to Duarte Alves, Haau-Sing Lee, Taisiya Glushkova, and Henrico Brum for the participation in human evaluation experiments. ## References Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963–2977, Florence, Italy. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. In Advances in Neural Information Processing Systems 31, pages 9539–9549. Curran Associates, Inc. Howard Chen, Jacqueline He, Karthik Narasimhan, and Danqi Chen. 2022. Can rationalization improve robustness? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3792–3805, Seattle, United States. Association for Computational Linguistics. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443–4458, Online. Association for Computational Linguistics. Tanay Dixit, Bhargavi Paranjape, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. CORE: A retrieve-thenedit framework for counterfactual data generation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2964–2984, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608v2. Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307–1323, Online. Association for Computational Linguistics. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–655, Melbourne, Australia. Association for Computational Linguistics. Nuno M. Guerreiro and André F. T. Martins. 2021. SPECTRA: Sparse structured text rationalization. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6534–6550, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Nitish Gupta, Sameer Singh, Matt Gardner, and Dan Roth. 2021. Paired examples as indirect supervision in latent decision models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5774–5785, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. Peter Hase, Shiyue Zhang, Harry Xie, and Mohit Bansal. 2020. Leakage-adjusted simulatability: Can models generate non-trivial explanations of their behavior in natural language? In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 4351–4367, Online. Association for Computational Linguistics. William Huang, Haokun Liu, and Samuel R. Bowman. 2020. Counterfactually-augmented SNLI training data does not yield better generalization than unaugmented data. In *Proceedings of the First Workshop on* Insights from Negative Results in NLP, pages 82–87, Online. Association for Computational Linguistics. Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C. Wallace. 2020. Learning to faithfully rationalize by construction. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 4459–4473, Online. Association for Computational Linguistics. Nitish Joshi and He He. 2022. An investigation of the (in)effectiveness of counterfactually augmented data. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 3668–3681, Dublin, Ireland. Association for Computational Linguistics. Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In *International Conference on Learning Representations*. Daniel Khashabi, Tushar Khot, and Ashish Sabharwal. 2020. More bang for your buck: Natural perturbation for robust question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 163–170, Online. Association for Computational Linguistics. Eric Lehman, Jay DeYoung, Regina Barzilay, and Byron C. Wallace. 2019. Inferring which medical treatments work from reports of clinical trials. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3705–3717, Minneapolis, Minnesota. Association for Computational Linguistics. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117, Austin, Texas. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Nishtha Madaan, Inkit Padhi, Naveen Panwar, and Diptikalyan Saha. 2021. Generate your counterfactuals: Towards controlled counterfactual generation for text. In *Proceedings of the AAAI Conference on Artificial* Intelligence, pages 13516–13524. Vlad Niculae, Andre Martins, Mathieu Blondel, and Claire Cardie. 2018. Sparsemap: Differentiable sparse structured inference. In *International Conference on Machine Learning*, pages 3799–3808. PMLR. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4885–4901, Online. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115–124, Ann Arbor, Michigan. Association for Computational Linguistics. Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. An information bottleneck approach for controlling conciseness in rationale extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1938– 1952, Online. Association for Computational Linguistics. Bhargavi Paranjape, Matthew Lamm, and Ian Tenney. 2022. Retrieval-guided counterfactual generation for QA. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 1670–1686, Dublin, Ireland. Association for Computational Linguistics. Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, and William W. Cohen. 2022. Evaluating explanations: How much do explanations from the teacher aid students? *Transactions of the Association for Computational Linguistics*, 10:359–375. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085– 1097, Florence, Italy. Association for Computational Linguistics. Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902– 4912, Online. Association for Computational Linguistics. Marcel Robeer, Floris Bex, and Ad Feelders. 2021. Generating realistic natural language counterfactuals. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3611–3625, Punta Cana, Dominican Republic. Association for Computational Linguistics. Alexis Ross, Ana Marasovic, and Matthew Peters. 2021. ´ Explaining NLP models via minimal contrastive editing (MiCE). In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3840–3852, Online. Association for Computational Linguistics. Alexis Ross, Tongshuang Wu, Hao Peng, Matthew Peters, and Matt Gardner. 2022. Tailor: Generating and perturbing text with semantic controls. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3194–3213, Dublin, Ireland. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Damien Teney, Ehsan Abbasnedjad, and Anton van den Hengel. 2020. Learning what makes a difference from counterfactual examples and gradient supervision. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020,* Proceedings, Part X 16, pages 580–599. Springer. Marcos Treviso and André F. T. Martins. 2020. The explanation game: Towards prediction explainability through sparse communication. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 107–118, Online. Association for Computational Linguistics. Sahil Verma, John Dickerson, and Keegan Hines. 2020. Counterfactual explanations for machine learning: A review. *arXiv preprint arXiv:2010.10596v3*. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2021. Polyjuice: Generating counterfactuals for explaining, evaluating, and improving models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6707–6723, Online. Association for Computational Linguistics. Mo Yu, Shiyu Chang, Yang Zhang, and Tommi Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4094– 4103, Hong Kong, China. Association for Computational Linguistics. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. *Advances in neural information processing* systems, 28. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '18, page 1097–1100, New York, NY, USA. Association for Computing Machinery. ## A Datasets The revised IMDB and SNLI datasets, which we refer to as rIMDB and rSNLI respectively, were created by Kaushik et al. (2020). They contain counterfactuals consisting of revised versions made by humans on the Amazon's Mechanical Turk crowdsourcing platform. For both datasets, the authors ensure that (a) the counterfactuals are valid; (b) the edited reviews are coherent; and (c) the counterfactuals do not contain unnecessary modifications. For SNLI, counterfactuals were created either by revising the premise or the hypothesis. We refer to (Kaushik et al., 2020) for more details on the data generation process. Table 5 presents statistics for the datasets used for training models in this work. | Train | Val. | Test | | | | | |---------|--------|--------|------|------|------|------| | Dataset | docs | tks | docs | tks | docs | tks | | IMDB | 22.5K | 6M | 2.5K | 679K | 25K | 6M | | rIMDB | 3414 | 629K | 490 | 92K | 976 | 180K | | SNLI | 549K | 12M | 10K | 232K | 10K | 231K | | rSNLI | 4165 | 188K | 500 | 24K | 1000 | 48K | Additionally, we incorporate various contrastive and out-of-domain datasets to evaluate our models. For IMDB, we use the contrast IMDB (Gardner et al., 2020), RottenTomatoes (Pang and Lee, 2005), SST-2 (Socher et al., 2013), Amazon Polarity and Yelp (Zhang et al., 2015). For SNLI, we evaluate on the Hard SNLI (Gururangan et al., 2018), break (Glockner et al., 2018), MultiNLI (Williams et al., 2018), and Adversarial NLI (Nie et al., 2020). We refer to the original works for more details. ## B Crest Details B.1 Masker For all datasets, the masker consists of a SPECTRA rationalizer that uses a T5-small encoder as the backbone for the encoder and predictor (see §2.1). Our implementation is derived directly from its original source (Guerreiro and Martins, 2021). We set the maximum sequence length to 512, truncating inputs when necessary. We employ a contiguity penalty of 10−4for IMDB and 10−2for SNLI. We train all models for a minimum of 3 epochs and maximum of 15 epochs along with early stopping with a patience of 5 epochs. We use AdamW (Loshchilov and Hutter, 2019) with a learning rate of 10−4and weight decay of 10−6. ## B.2 Editor For all datasets, CREST and MiCE editors consist of a full T5-small model (Raffel et al., 2020), which includes both the encoder and the decoder modules. We use the T5 implementation available in the *transformers* library (Wolf et al., 2020) for our editor. We train all models for a minimum of 3 epochs and maximum of 20 epochs along with early stopping with a patience of 5 epochs. We use AdamW (Loshchilov and Hutter, 2019) with a learning rate of 10−4and weight decay of 10−6. For both CREST and MiCE, we generate counterfactuals with beam search with a beam of size 15 and disabling bigram repetitions. We post-process the output of the editor to trim spaces and repetitions of special symbols (e.g., </s>). ## B.3 Spectra Rationalizers All of our SPECTRA rationalizers share the same setup and training hyperparameters as the one used by the masker in §4, but were trained with distinct random seeds. We tuned the counterfactual loss weight α within {1.0, 0.1, 0.01, 0.001, 0.0001}, and λ within {1.0, 0.1, 0.01, 0.001} for models trained with agreement rationalization. More specifically, we performed hyperparameter tuning on the validation set, with the goal of maximizing in-domain accuracy. As a result, we obtained α = 0.01 and λ = 0.001 for IMDB, and α = 0.01 and λ = 0.1 for SNLI. ## C Validity Vs. Closeness To better assess the performance of CREST and MiCE by varying closeness, we plot in Figure 5 binned-validity scores of CREST and MiCE with 30% masking on the revised SNLI dataset. Although CREST is deemed less valid than MiCE overall (cf. Table 1), we note that CREST generates more valid counterfactuals in lower minimality ranges. This provides further evidence that CREST remains superior to MiCE on closeness intervals of particular interest for generating counterfactuals in an automatic way. ![13_image_0.png](13_image_0.png) ## D Human Annotation The annotation task was conducted by four distinct individuals, all of whom are English-fluent PhD students. Two annotators were employed for IMDB and two for SNLI. The annotators were not given any information regarding the methods used to create each counterfactual, and the documents were presented in a random order to maintain source anonymity. The annotators were presented with the reference text and its corresponding gold label. Subsequently, for each method, they were asked to assess both the validity and the naturalness of the resulting counterfactuals using a 5-point Likert scale. We provided a guide page to calibrate the annotators' understating of validity and naturalness prior the annotation process. We presented hypothetical examples with different levels of validity and naturalness and provided the following instructions regarding both aspects: - "If every phrase in the text unequivocally suggests a counterfactual label, the example is deemed fully valid and should receive a top score of 5/5." - "If the counterfactual text aligns with the style, tone, and grammar of real-world examples, it's considered highly natural and deserves a score of 5/5." We measure inter-annotator agreement with a normalized and inverted Mean Absolute Difference (MAD), which computes a "soft" accuracy by averaging absolute difference ratings and normalizing them to a 0-1 range. We present the annotation results in Table 6. Our results show that humans agreed more on manual examples than on automatic approaches. On the other hand, for SNLI, | IMDB | SNLI | | | | | | |---------|--------|------|------|------|------|------| | Method | v | n | ro | v | n | ro | | Manual | 4.60 | 4.36 | 0.83 | 4.89 | 4.90 | 0.95 | | MiCE | 2.76 | 2.29 | 0.71 | 4.35 | 4.71 | 0.94 | | CREST | 4.06 | 3.44 | 0.76 | 4.89 | 4.89 | 0.96 | | Overall | 3.81 | 3.36 | 0.77 | 4.71 | 4.83 | 0.95 | | Setup | Data size | RotTom | SST-2 | Amazon | Yelp | |--------------------------------------------------------|----------------|------------|------------|------------|------------| | F | 100% | 76.5 ± 1.6 | 79.8 ± 1.6 | 86.0 ± 0.7 | 88.5 ± 0.7 | | With data augmentation: F + CH +8% 76.6 ± 1.5 | 80.7 ± 1.3 | 86.3 ± 1.0 | 89.1 ± 1.2 | | | | F + CS,V | +1% | 77.2 ± 1.1 | 80.5 ± 0.5 | 86.1 ± 0.2 | 88.8 ± 0.3 | | F + CS,V | +2% | 76.2 ± 1.2 | 80.8 ± 0.8 | 86.7 ± 0.5 | 89.6 ± 0.5 | | F + CS,V | +4% 77.7 ± 0.8 | 80.8 ± 0.7 | 87.0 ± 0.6 | 89.8 ± 0.6 | | | F + CS,V | +8% | 76.6 ± 2.2 | 80.2 ± 1.7 | 86.1 ± 0.9 | 88.2 ± 1.0 | | F + CS,V | +85% | 76.8 ± 0.9 | 79.3 ± 0.3 | 85.2 ± 0.9 | 88.0 ± 1.0 | | F + CS | +100% | 76.7 ± 1.0 | 80.6 ± 0.6 | 86.4 ± 0.6 | 89.1 ± 0.5 | | With agreement regularization: F & CS,V 85% 76.3 ± 1.4 | 80.2 ± 1.3 | 86.3 ± 0.7 | 88.9 ± 0.7 | | | | F & CS | 100% | 77.3 ± 2.3 | 81.1 ± 2.4 | 86.8 ± 0.8 | 89.3 ± 0.7 | annotators assigned similar scores across all methods. In terms of overall metrics, including validity, naturalness, and agreement, the scores were lower for IMDB than for SNLI, highlighting the difficulty associated with the generation of counterfactuals for long movie reviews. Annotation interface. Figure 6 shows a snapshot of the interface used for the annotation, which is publicly available at https://www.github.com/ mtreviso/TextRankerJS. ## E Counterfactual Data Augmentation Analysis Previous studies on counterfactual data augmentation have found that model performance highly depends on the number and diversity of augmented samples (Huang et al., 2020; Joshi and He, 2022). To account for this, we investigate the effect of adding increasingly larger portions of CREST counterfactuals for data augmentation on the IMDB dataset. Our findings are summarized in Table 7. Discussion. We find that incorporating humancrafted counterfactuals (F + CH) improves SPECTRA performance on all OOD datasets. On top of that, we note that using a small proportion ![14_image_0.png](14_image_0.png) (4% of the full IMDB) of valid CREST counterfactuals (F + CS,V ) through data augmentation also leads to improvements on all datasets and outweighs the benefits of manual counterfactuals. This finding confirms that, as found by PolyJuice (Wu et al., 2021), synthetic counterfactuals can improve model robustness. Conversely, as the number of augmented counterfactuals increases (85%), the performance on OOD datasets starts to decrease, which is also consistent with the findings of Huang et al. (2020). When augmenting the entire training set we obtain an increase of accuracy, suggesting that the counterfactual loss weight (α) was properly adjusted on the validation set. Finally, we observe that while applying CREST-Rationalization only on valid examples (F & CS,V ) degrades performance, applying CREST-Rationalization on all paired examples (F & CS) maintains a high accuracy on OOD datasets and concurrently leads to strong results on in-domain and contrast datasets (see Table 2). ## F Computing Infrastructure Our infrastructure consists of four machines with the specifications shown in Table 8. The machines were used interchangeably and all experiments were carried in a single GPU. | GPU | CPU | |-----------------------|------------------------| | 4 × Titan Xp - 12GB | 16 × AMD Ryzen - 128GB | | 4 × GTX 1080Ti - 12GB | 8 × Intel i7 - 128GB | | 3 × RTX 2080Ti - 12GB | 12 × AMD Ryzen - 128GB | | 3 × RTX 2080Ti - 12GB | 12 × AMD Ryzen - 128GB | Table 8: Computing infrastructure. ## G Examples Of Counterfactuals Table 9 shows examples of counterfactuals produced by MiCE and CREST with 30% masking. Sentiment Classification: Input: If you haven't seen this, it's terrible. It is pure trash. I saw this about 17 years ago, and I'm still screwed up from it. MiCE: If you haven't seen this, it's a great movie. I saw this about 17 years ago, and I'm still screwed up from it. CREST: If you haven't seen this movie, it's worth seeing. It's very funny. I saw it about 17 years ago, and I'm still screwed up from it. Input: Touching; Well directed autobiography of a talented young director/producer. A love story with Rabin's assassination in the background. Worth seeing ! MiCE: Watching abiography of a very young writer/producer. A great story of Rabin's assassination in the background! Worth seeing!! CREST: This is the worst film of a talented young director/producer. And Rabin's assassination in the background is even worse! Input: A solid, if unremarkable film. Matthau, as Einstein, was wonderful. My favorite part, and the only thing that would make me go out of my way to see this again, was the wonderful scene with the physicists playing badmitton, I loved the sweaters and the conversation while they waited for Robbins to retrieve the birdie. MiCE: This is an unremarkable, if unremarkable .hau, as steinstein, is the worst part of the movie, and the only thing that would make me go out of my way to see this again is the physicists /mitt ists and the wait wait till they waited for binbins to re-release. CREST: This is a very unremarkable and unwatchable film. The acting is unhau, the plot, and the acting. My favorite thing about this film, and the only thing that made me go out of my mind, was the ending with the physic nerves of Symitton, I watched the zombies and thought they waited for Robbins to retrieve the junkie. Input: I saw this film earlier today, and I was amazed at how accurate the dialog is for the main characters. It didn't feel like a film - it felt more like a documentary (the part I liked best). The leading ladies in this film seemed as real to me as any fifteen year-old girls I know. All in all, a very enjoyable film for those who enjoy independent films. MiCE: I saw this film earlier today, and I was amazed at how bad the film is for the sake of a film - it feels more like thanthe part I played in this film. To me - fifteen year-old s I don't know. All in all this is a bad film for those who like independent films : ``` CREST: I saw this movie earlier today, and I was surprised at how bad it is for the first time. It's not a good movie - it's just a bad movie (the only thing I can say about it). The acting is awful to me as any fifteen year-old as I can. All in all, the movie is a waste of time for me. Natural Language Inference: Prem: A large group of people walking in a busy city at night. Hyp: People are outside in a park. MiCE: People are walking in a city at night CREST: People walking in a city. Prem: Players from two opposing teams wearing colorful cleats struggle to gain control over a ball on an AstroTurf field. Hyp: The players are playing a sport. MiCE: The players are playing chess at home CREST: The players are sitting on a couch. Prem: A woman is in the middle of hitting a tennis ball. Hyp: A woman is playing tennis. MiCE: A woman is playing basketball at home CREST: A woman is playing basketball. Prem: Two boys with blond-hair, wearing striped shirts on a bed. Hyp: Children playing in the park. MiCE: Children are on the bed. CREST: Boys are on the bed. Prem: Bubbles surround a statue in the middle of a street. Hyp: There are bubbles around the statue. MiCE: There are bubbles surround the statue. CREST: Bubbles are in the ocean. Prem: A young girl is standing in a kitchen holding a green bib. Hyp: A boy is playing with a firetruck. MiCE: A child is in a fire place CREST: A girl is holding a bib. ``` Table 9: Examples of original inputs from the IMDB and SNLI datasets followed by synthetic counterfactuals produced by MiCE and CREST with 30% masking. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Final section (9) ✓ A2. Did you discuss any potential risks of your work? Final section (9) ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✓ A4. Have you used AI writing assistants when working on this paper? ChatGPT, mostly to rephrase some sentences by following the prompt "rewrite this sentence in a better, more fluent, way (keep tone)". ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 1 (Footnote). ✓ B1. Did you cite the creators of artifacts you used? Section 4.1 for datasets, and Appendix B for the model architecture / implementation. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A. ## C ✓ **Did You Run Computational Experiments?** Sections 4 And 6, And Appendix C. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? The computing infrastructure is in Appendix D. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Sections 4 and 6, and Appendix B. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sections 4 and 6, and Appendix C. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? For some metrics, yes (simulability in section 6). D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 4.3 and Appendix D ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix D ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Considering the simplicity of the study, we found this to be unnecessary. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Considering the simplicity of the study, we found this to be unnecessary. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Considering the simplicity of the study, we found this to be unnecessary. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Considering the simplicity of the study, we found this to be unnecessary.
wang-etal-2023-towards-unifying
Towards Unifying Multi-Lingual and Cross-Lingual Summarization
https://aclanthology.org/2023.acl-long.843
To adapt text summarization to the multilingual world, previous work proposes multi-lingual summarization (MLS) and cross-lingual summarization (CLS). However, these two tasks have been studied separately due to the different definitions, which limits the compatible and systematic research on both of them. In this paper, we aim to unify MLS and CLS into a more general setting, i.e., many-to-many summarization (M2MS), where a single model could process documents in any language and generate their summaries also in any language. As the first step towards M2MS, we conduct preliminary studies to show that M2MS can better transfer task knowledge across different languages than MLS and CLS. Furthermore, we propose Pisces, a pre-trained M2MS model that learns language modeling, cross-lingual ability and summarization ability via three-stage pre-training. Experimental results indicate that our Pisces significantly outperforms the state-of-the-art baselines, especially in the zero-shot directions, where there is no training data from the source-language documents to the target-language summaries.
## Towards Unifying Multi-Lingual And Cross-Lingual Summarization Jiaan Wang1∗ , Fandong Meng2, Duo Zheng3**, Yunlong Liang**2 Zhixu Li4† , Jianfeng Qu1†**And Jie Zhou**2 1School of Computer Science and Technology, Soochow University, Suzhou, China 2Pattern Recognition Center, WeChat AI, Tencent Inc, China 3Beijing University of Posts and Telecommunications 4Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China jawang.nlp@gmail.com, fandongmeng@tencent.com zhixuli@fudan.edu.cn, jfqu@suda.edu.cn ## Abstract To adapt text summarization to the multilingual world, previous work proposes multi-lingual summarization (MLS) and cross-lingual summarization (CLS). However, these two tasks have been studied separately due to the different definitions, which limits the compatible and systematic research on both of them. In this paper, we aim to unify MLS and CLS into a more general setting, *i.e.*, many-to-many summarization (M2MS), where a single model could process documents in any language and generate their summaries also in any language. As the first step towards M2MS, we conduct preliminary studies to show that M2MS can better transfer task knowledge across different languages than MLS and CLS. Furthermore, we propose PISCES, a pre-trained M2MS model that learns language modeling, cross-lingual ability and summarization ability via threestage pre-training. Experimental results indicate that our PISCES significantly outperforms the state-of-the-art baselines, especially in the zero-shot directions, where there is no training data from the source-language documents to the target-language summaries.1 ## 1 Introduction The world we live in is multi-lingual. With globalization, text resources in various languages flood the Internet, where global users can easily access their desired information. Under this background, the text summarization community presents multilingual summarization (MLS) and cross-lingual summarization (CLS), respectively. As shown in Figure 1, MLS aims at building a unified model to process documents in multiple languages and generate summaries in the corresponding language (Giannakopoulos et al., 2015; Cao et al., 2020b; Hasan et al., 2021b; Wang et al., 2021; Varab and Schluter, ∗Work was done when Jiaan Wang was interning at Pattern Recognition Center, WeChat AI, Tencent Inc, China. †Corresponding authors. 1https://hf.co/Krystalan/PISCES $\begin{array}{ccc}Y^{\,En}&Y^{\,De}&Y^{\,En}\\ \uparrow&&\uparrow\\ \hline&&\text{MLS Model}\\ \hline\end{array}$ (a) ![0_image_1.png](0_image_1.png) ![0_image_0.png](0_image_0.png) ℎ ℎ ℎ MLS Model CLS Model M2MS Model ℎ ℎ (a) (b) (c) Figure 1: Illustration of (a) multi-lingual summarization, (b) cross-lingual summarization and (c) many-to-many summarization. Xiand Y i denote the input document and output summary in language i, respectively. En: English; De: German; Zh: Chinese. 2021), while CLS generates a summary in the target language from the given document in a different source language (Leuski et al., 2003a; Wan et al., 2010; Wan, 2011; Yao et al., 2015; Zhu et al., 2019; Ladhak et al., 2020; Perez-Beltrachini and Lapata, 2021; Wang et al., 2022b,d,c, 2023). Despite the close relationship between MLS and CLS (*e.g.*, both tasks involve more than one language and require models to distill the key information from documents), previous work studies each task separately, hindering the systematic exploration for both of them. In this paper, we aim to unify MLS and CLS into a more general setting named *many-to-many summarization* (M2MS). As its name implies, the goal of M2MS is to build a single summarization model to process a document in any source language and generate the corresponding summary in any given target language. In this manner, one M2MS model could perform more directions than MLS and CLS2, thus reducing the used parameters. For example, one M2MS model involving n languages could replace one MLS model and n×(n−1) CLS models. To provide a deeper understanding of M2MS, we also conduct preliminary studies to systematically compare M2MS with MLS and CLS, respectively. In detail, following recent CLS work (Ladhak et al., 2020; Perez-Beltrachini and Lapata, 2021), we use 2We use "direction" to denote the summarization direction from the source to the target languages, e.g., English (documents) ⇒ Chinese (summaries). 15127 mBART-50 (Tang et al., 2021) as the summarization model, and train the model in the settings of MLS, CLS and M2MS, respectively. After comparing the model performances, we find that the model trained in M2MS setting can better transfer task knowledge across different languages and combine the advantages of those trained in MLS and CLS settings. Therefore, we argue that it is promising to unify MLS and CLS into M2MS. Furthermore, we propose PISCES3, a pre-trained M2MS model that learns language modeling, crosslingual ability and summarization ability via three pre-training stages: (1) *meta pre-training* learns the general language modeling knowledge from multi-lingual unlabeled corpora; (2) *cross-lingual* pre-training makes the model aware of the transformation between different languages based on parallel corpora; (3) *task-specific pre-training* utilizes M2MS objective to simultaneously improve the cross-lingual ability and the summarization abilities of the model. Considering the high-quality M2MS samples are non-trivial to collect, we leverage a simple strategy to construct pseudo M2MS samples from multi-lingual unlabeled corpora. During the three-stage pre-training, PISCES gradually shifts from learning language modeling to the abilities required by M2MS. Among them, the learned cross-lingual ability plays a key role in enhancing the knowledge transferability of the downstream task (*i.e.*, summarization) from high-resource languages to low/zero-resource languages. Lastly, the pre-trained PISCES could be simply fine-tuned on M2MS with input source-language documents and output target-language summaries. We evaluate PISCES on the WikiLingua (Ladhak et al., 2020) and CrossSum (Hasan et al., 2021a) datasets. Experimental results show that PISCES achieves promising results compared with the stateof-the-art baselines (*i.e.*, mBART-50 and mT5), especially in the zero-shot directions. Moreover, we find that PISCES is even able to generate summaries for documents whose language never occurs in the fine-tuning stage. Our contributions are concluded as follows: - To our knowledge, we are the first to unify MLS and CLS into a more general setting (M2MS). We also conduct preliminary studies to provide deeper analyses among MLS, CLS and M2MS. - We propose PISCES, a pre-trained M2MS model that learns language modeling, cross-lingual ability and summarization ability through a carefully designed three-stage pre-training. - We conduct extensive experiments and show that our PISCES achieves new state-of-the-art performance on the large-scale benchmark datasets. Besides, the effectiveness of PISCES in low/zeroresource languages is also demonstrated. ## 2 Related Work Multi-Lingual Summarization. Multi-lingual summarization (MLS) aims to process documents in multiple languages and generate their summaries in the corresponding language. Giannakopoulos et al. (2015) present MultiLing-2015 dataset. Later, this task receives increasing attention (Vanetik and Litvak, 2015; Litvak et al., 2016). Recently, largescale MLS datasets (Scialom et al., 2020; Varab and Schluter, 2021; Hasan et al., 2021b; Feng et al., 2022; Liang et al., 2022a) together with sophisticated methods (Cao et al., 2020b; Chi et al., 2020; Wang et al., 2021; Li et al., 2023) are proposed one after another. Considering the close relation between MLS and CLS, Cao et al. (2020b); Feng et al. (2022) also evaluate the MLS models on CLS to show their zero-shot CLS ability. Cross-Lingual Summarization. Given documents in one language, cross-lingual summarization (CLS) generates summaries in another language. Early work typically focuses on pipeline methods (Leuski et al., 2003b; Orasan and Chiorean ˘ , 2008; Wan et al., 2010; Wan, 2011; Yao et al., 2015), *i.e.*, translation and then summarization or summarization and then translation. Recently, with the availability of large-scale CLS datasets (Zhu et al., 2019; Ladhak et al., 2020; Perez-Beltrachini and Lapata, 2021; Wang et al., 2022b; Chen et al., 2022; Zheng et al., 2023), many researchers shift the research attention to end-to-end CLS models, including multi-task learning (Cao et al., 2020a; Bai et al., 2021; Liang et al., 2022b), knowledge distillation (Nguyen and Tuan, 2022), resourceenhanced (Zhu et al., 2020) and pre-training (Xu et al., 2020; Chi et al., 2021) approaches. Among them, most CLS work separately builds CLS models in each cross-lingual direction except for Hasan et al. (2021a), who jointly train mT5 (Xue et al., 2021) in multiple directions. Different from previous MLS and CLS, we unify them into a more general setting (M2MS) starting from the training stage. Besides, we are the first to | Trg Setting | En | Fr | Hi | Zh | Th | Tr | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------|---------------------------|------|------| | Src | ONE | 41.2 / 17.5 / 34.6 / 74.2 35.2 / 14.8 / 29.2 / 73.0 28.2 / 08.3 / 22.6 / 67.7 34.9 / 11.8 / 30.4 / 69.8 34.3 / 14.3 / 30.0 / 66.1 | NA | | | | | U-CLS 39.7 / 16.0 / 32.7 / 73.6 36.8 / 15.3 / 29.9 / 73.6 31.2 / 09.2 / 23.9 / 69.0 37.9 / 13.9 / 32.7 / 71.5 38.9 / 17.9 / 33.4 / 68.9 3.2 / 0.3 / 3.0 / 48.9 MLS 41.6 / 17.9 / 34.7 / 74.4 05.3 / 00.8 / 04.8 / 63.8 03.3 / 00.7 / 03.1 / 53.7 14.6 / 00.9 / 14.5 / 60.1 20.8 / 05.7 / 20.0 / 54.1 2.5 / 0.2 / 2.4 / 47.3 | | | | | | | | En | M2MS | 41.9 / 18.2 / 34.9 / 74.6 37.2 / 15.8 / 30.3 / 73.9 31.7 / 09.6 / 24.5 / 69.3 37.9 / 13.9 / 32.7 / 71.5 39.5 / 18.5 / 34.0 / 69.1 3.2 / 0.2 / 3.0 / 49.0 | | | | | | ONE | 35.6 / 13.6 / 29.8 / 72.1 37.8 / 17.4 / 31.2 / 73.9 | NA | 32.6 / 10.0 / 28.4 / 68.6 31.4 / 11.8 / 27.6 / 64.9 | NA | | | | U-CLS 37.5 / 14.4 / 30.7 / 72.9 37.6 / 16.1 / 30.5 / 74.0 28.2 / 07.6 / 22.0 / 68.1 36.7 / 12.8 / 31.3 / 70.9 37.3 / 16.2 / 32.1 / 68.1 3.3 / 0.3 / 3.1 / 49.4 MLS 08.8 / 02.2 / 07.6 / 64.3 39.5 / 18.2 / 32.5 / 74.9 02.1 / 00.4 / 01.9 / 53.3 13.5 / 01.0 / 13.2 / 57.5 18.5 / 03.3 / 17.9 / 54.5 2.1 / 0.1 / 2.1 / 46.8 | | | | | | | | Fr | M2MS | 38.2 / 15.0 / 31.7 / 73.4 39.2 / 17.9 / 32.0 / 74.7 28.7 / 07.9 / 22.3 / 68.1 36.9 / 12.8 / 31.6 / 70.9 37.9 / 16.6 / 32.6 / 68.5 3.1 / 0.2 / 3.0 / 49.2 | | | | | | ONE | 32.2 / 10.9 / 26.1 / 70.2 | NA | 32.8 / 11.5 / 25.8 / 69.6 | NA | NA | NA | | U-CLS 36.8 / 14.0 / 29.8 / 72.2 31.9 / 11.6 / 24.7 / 71.4 32.7 / 10.3 / 25.6 / 70.3 32.6 / 10.2 / 27.3 / 68.6 34.9 / 14.3 / 29.4 / 67.1 3.3 / 0.3 / 3.2 / 50.0 MLS 11.1 / 03.3 / 09.3 / 57.7 11.6 / 03.2 / 09.5 / 59.3 36.0 / 12.7 / 27.8 / 71.3 14.2 / 02.8 / 12.8 / 57.2 23.1 / 06.0 / 21.3 / 57.9 2.1 / 0.1 / 2.0 / 46.7 | | | | | | | | Hi | M2MS | 37.9 / 14.6 / 30.8 / 72.8 32.8 / 12.2 / 25.9 / 72.1 35.6 / 12.5 / 27.8 / 71.1 33.2 / 10.6 / 28.2 / 69.1 35.4 / 14.6 / 30.1 / 67.4 3.4 / 0.3 / 3.2 / 49.7 | | | | | | ONE | 34.6 / 11.8 / 28.4 / 71.4 31.5 / 11.4 / 25.4 / 71.0 | NA | 40.8 / 16.9 / 35.4 / 71.9 | NA | NA | | | U-CLS 37.7 / 14.1 / 30.8 / 72.8 35.4 / 14.1 / 28.4 / 73.0 25.8 / 06.1 / 20.0 / 66.4 39.6 / 15.1 / 34.2 / 72.2 36.6 / 15.3 / 31.0 / 67.3 3.3 / 0.2 / 3.1 / 49.8 MLS 10.4 / 03.0 / 08.6 / 61.7 24.9 / 07.3 / 19.7 / 68.0 20.4 / 04.4 / 16.0 / 62.4 42.8 / 17.9 / 37.0 / 73.1 30.3 / 09.3 / 26.4 / 63.5 2.8 / 0.2 / 2.6 / 48.4 | | | | | | | | Zh | M2MS | 39.2 / 15.1 / 32.0 / 73.4 36.0 / 14.5 / 29.0 / 73.3 27.0 / 06.6 / 20.8 / 66.9 41.7 / 17.0 / 35.9 / 72.7 36.8 / 15.3 / 31.4 / 67.6 3.4 / 0.2 / 3.2 / 49.6 | | | | | | ONE | 32.1 / 11.1 / 26.4 / 70.4 27.9 / 02.7 / 22.7 / 69.4 | NA | NA | 37.8 / 17.6 / 33.0 / 67.4 | NA | | | U-CLS 37.2 / 14.4 / 30.7 / 72.6 34.9 / 13.9 / 27.7 / 72.3 27.1 / 06.8 / 20.6 / 66.9 34.1 / 10.9 / 28.3 / 68.9 39.9 / 18.4 / 34.3 / 69.5 3.4 / 0.3 / 3.2 / 49.4 MLS 07.4 / 01.8 / 06.6 / 54.9 10.1 / 02.5 / 08.4 / 58.4 11.8 / 02.1 / 09.6 / 57.6 16.8 / 03.3 / 15.0 / 59.4 43.3 / 22.3 / 37.1 / 70.3 2.7 / 0.3 / 2.6 / 47.8 | | | | | | | | Th | M2MS | 38.5 / 15.4 / 31.9 / 73.4 35.6 / 14.2 / 28.3 / 72.9 27.8 / 07.3 / 21.4 / 67.4 34.6 / 11.3 / 29.0 / 69.4 42.2 / 20.8 / 36.2 / 70.1 3.3 / 0.3 / 3.1 / 49.3 | | | | | | ONE | NA | NA | NA | NA | NA | NA | | U-CLS 16.9 / 03.3 / 14.4 / 62.9 16.7 / 03.3 / 13.5 / 64.6 16.2 / 02.6 / 13.7 / 61.0 21.7 / 03.8 / 19.1 / 61.2 22.8 / 05.7 / 19.9 / 60.4 3.4 / 0.3 / 3.3 / 48.8 MLS 06.6 / 00.8 / 05.9 / 53.5 09.7 / 01.1 / 08.6 / 58.7 07.8 / 00.7 / 07.0 / 54.1 17.9 / 02.8 / 15.3 / 58.7 17.4 / 02.5 / 16.6 / 54.4 2.3 / 0.1 / 2.2 / 44.7 | | | | | | | | Tr | M2MS | 15.7 / 02.6 / 13.4 / 62.1 16.0 / 03.2 / 13.2 / 64.4 14.9 / 02.3 / 12.6 / 60.1 19.9 / 03.0 / 17.6 / 60.0 21.4 / 04.8 / 19.3 / 59.9 3.1 / 0.2 / 3.0 / 48.4 | | | | | systematically investigate the capabilities of models trained with MLS, CLS and M2MS settings. Pre-Trained Models for Summarization. Pretrained models have shown their superiority in summarization task, *e.g.*, BART (Lewis et al., 2020) and T5 (Raffel et al., 2020). To enhance the summarization ability during the pre-training stage, PE-GASUS (Zhang et al., 2020a) introduces the gap sentence generation (GSG) objective to enable the model to generate key sentences in an article from the remaining ones. Further, PRIMERA (Xiao et al., 2022) extends GSG from single-document to multidocument summarization. In dialogue scenarios, Wang et al. (2022b) present mDIALBART for crosslingual dialogue summarization. Among these pre-trained summarization models, PEGASUS and PRIMERA only focus on monolingual summarization. Though mDIALBART aims at CLS, the model is merely built for a single crosslingual direction (*i.e.*, English ⇒ German/Chinese) and a specific scenario (*i.e.*, dialogue). Our PISCES is the first multi-lingual pre-trained model for general summarization. ## 3 **Does Unifying All Directions In A Single** Model Help Each Other? As discussed previously, M2MS unifies all summarization directions in a single model. Therefore, we wonder can such a setting help the model better transfer task knowledge across different languages compared with the settings of MLS and CLS? To answer the question, we conduct preliminary studies to investigate the influence of different settings. ## 3.1 Setup Data. The preliminary studies are conducted on WikiLingua (Ladhak et al., 2020), one of the largest CLS datasets. We focus on six languages, *i.e.*, English (En), French (Fr), Hindi (Hi), Chinese (Zh), Thai (Th) and Turkish (Tr). Among them, Tr serves as a zero-resource language, whose documents and summaries only appear in the validation and test sets. More details are given in Section 5.1. Summarization Model. Following recent CLS literature (Ladhak et al., 2020; Perez-Beltrachini and Lapata, 2021), we use mBART-50 (Tang et al., 2021) as the summarization model, and train the model in the following four settings: - mBART (ONE): We separately train several models, each of which is built and evaluated in one single direction. When the direction is crosslingual (or monolingual), the corresonding model is a CLS (or monolingual summarization) model. - mBART (U-CLS): We train a unified model with all cross-lingual samples, and test the model in all directions. - mBART (MLS): We train one unified model with monolingual samples in all languages. Then, the trained model is evaluated in all directions. - mBART (M2MS): It is a new setting introduced by this work, where the model is both trained and evaluated in all directions. | En⇒Fr | En⇒Hi | En⇒Zh | En⇒Th | | |--------------|---------|---------|---------|------| | mBART (MLS) | 5.8 | 0.2 | 1.3 | 1.0 | | mBART (M2MS) | 99.9 | 99.4 | 95.4 | 99.9 | | Fr⇒Hi | Fr⇒Zh | Fr⇒Th | Th⇒En | | | mBART (MLS) | 5.3 | 5.6 | 9.4 | 8.2 | | mBART (M2MS) | 99.4 | 95.8 | 99.9 | 99.5 | Table 2: Correct language rate (%) of the summaries generated by mBART (MLS) and mBART (M2MS). ## 3.2 Analytic Results Table 1 shows the results in terms of ROUGE (Lin, 2004) and BERTSCORE (Zhang et al., 2020b). mBART (M2MS) **vs. mBART** (CLS). The results in all directions show that mBART (M2MS) outperforms mBART (CLS) in all metrics, illustrating that unifying all directions in a single model could transfer task knowledge across different languages. mBART (M2MS) **vs. mBART** (MLS). Comparing mBART (M2MS) and mBART (MLS), it is apparent to find that mBART (M2MS) significantly outperforms mBART (MLS) in cross-lingual directions (*e.g.*, 26.9 vs. 11.7 ROUGE-1 in average), while achieving competitive results in monolingual directions (*e.g.*, 33.9 vs. 34.2 ROUGE-1 in average). To give a deeper understanding of why mBART (MLS) performs poorly in cross-lingual directions, we analyze its generated summaries and find that most of them are not in the language we expected. Table 2 shows the rate of the generated summaries in the correct language.4 The languages of the generated summaries are detected by *fastlangid*5. Compared with mBART (M2MS), mBART (MLS) struggles to generate summaries in the target language. We conjecture this is because that mBART (MLS) is only trained with monolingual data from multiple languages without any cross-lingual signals, resulting in limited cross-lingual ability. Based on the above analyses, we argue that the summarization signals from cross-lingual directions could help mBART (M2MS) perform CLS and transfer the task knowledge to zero-shot directions, while mBART (MLS) does not own such abilities. mBART (M2MS) **vs. mBART** (U-CLS). The only difference between mBART (M2MS) and mBART (U-CLS) is that the training data of mBART (M2MS) contains all monolingual samples, while mBART (U-CLS) does not. We find that the performance gap between mBART (M2MS) and mBART (U-CLS) is extremely smaller than that between mBART (M2MS) and mBART (CLS) / mBART (MLS). In detail, mBART (M2MS) outperforms mBART (U-CLS) in most directions when the source and the target languages have been seen during the fine-tuning stage, *i.e.*, the source and the target languages are from {En, Fr, Hi, Zh, Th}. However, when the source or target language is unseen (*i.e.*, Tr), the performance of mBART (M2MS) is slightly worse than mBART (CLS). This is because the monolingual training data used in mBART (M2MS) makes the word embeddings of the unseen language6 drift away from those of other languages (see details in Appendix A). Additionally, the cross-lingual signal between the unseen language and other languages never occurs in the fine-tuning stage, making it difficult to summarize from or to the unseen language. ## 3.3 Preliminary Conclusion The preliminary studies comparing mBART trained in different settings indicate that (1) the multilingual model trained in M2MS setting can better transfer task knowledge across different languages than those trained in the settings of MLS, CLS and unified CLS. (2) Compared with unified CLS, M2MS helps the model achieve better transferability across visible languages, but sacrifices the transferability to unseen languages. Grounding the above analyses, we argue that it is valuable to unify previous MLS and CLS to M2MS. Meanwhile, *how to improve the transferability to* unseen languages becomes a keypoint in M2MS. ## 4 P**Isces** In this section, we propose PISCES, a pre-trained multi-lingual model for M2MS with the backbone of transformer (Vaswani et al., 2017). Figure 2 shows the overview of PISCES, which contains three pre-training stages. Specifically, the meta pre-training (§ 4.1) lets the pre-trained model learn general language modeling via monolingual denoising objective in multiple languages. Then, to improve the transferability across different languages, the cross-lingual pre-training (§ 4.2) adds noises to the source-language sentences, and encourages the model to translate them into parallel sentences in the target language. Note that the parallel sentences used in this stage might involve the languages which are not seen in downstream tasks, 6We use "unseen language" to indicate the language does not occur in the *fine-tuning* stage. <En> It's a nice [MASK] today Multi-lingual unlabeled corpora It's a nice day today <En> ![4_image_1.png](4_image_1.png) 我们今天前往郊区游玩 <Zh> (Chinese summary) ![4_image_0.png](4_image_0.png) PISCES PISCES <En> It's a nice day, <mask-sent> ... (English document) (b) Cross-lingual pre-training ![4_image_3.png](4_image_3.png) En Fr … Tr Multi-lingual unlabeled corpora (c) Task-specific pre-training (a) Meta pre-training ![4_image_2.png](4_image_2.png) and it is the key to improving the transferability to these languages. Finally, to narrow the gap between the pre-training and fine-tuning stages, the task-specific pre-training (§ 4.3) trains the model with pseudo M2MS samples, which are constructed from the multi-lingual unlabeled corpora via gap sentences selection and machine translation. During the three-stage pre-training process, the model gradually learns the ability of language modeling, then the cross-lingual ability, and finally the adaptation to the specific task. ## 4.1 Meta Pre-Training The goal of meta pre-training is to provide good initialization for the subsequent pre-training stages. Here, we directly utilize mBART-50 (Tang et al., 2021) as the meta pre-trained model. mBART-50 is a multi-lingual BART (Lewis et al., 2020) with the transformer encoder-decoder architecture. The model is pre-trained on largescale multi-lingual unlabeled corpora to learn the multi-lingual language modeling. Specifically, following BART, the denoising task is used as the pre-training objective, and there are two types of noise: (1) *text infilling* randomly masks text spans in text sequences, and (2) *sentence permutation* randomly shuffles sentences in documents. The model is required to comprehend the noisy text sequences and recover them. To indicate the input and output languages, the language tags (*e.g.*, <En> and <Zh>) are appended at the inputs of encoder and decoder sides, respectively. ## 4.2 Cross-Lingual Pre-Training Despite the effectiveness of mBART-50, the input and output sequences in its pre-training stage are always in the same language, resulting in the underexplored cross-lingual ability. However, such ability is indispensable for M2MS. Therefore, crosslingual pre-training is designed to improve the cross-lingual transferability. In detail, we propose a simple yet effective pretraining task, *i.e.*, cross-lingual denoising, which lets the model generate sentences in the target language based on their noisy parallel sentences in a different source language. The noise used in this stage is *text infilling*. In this way, the pre-trained model is required to not only understand the text in the source language but also learn the transformation between different languages. ## 4.3 Task-Specific Pre-Training Task-specific pre-training aims to narrow the gap between the pre-training and fune-tuning stages. We directly adopt M2MS as its pre-training task. Grounding the truth that high-quality M2MS samples are difficult to collect, we construct the pseudo samples from multi-lingual unlabeled corpora. In detail, for a source-language document D = {s src i} |D| i=1, where s src idenotes the i-th sentence in D. Following previous monolingual pre-trained summarization methods (Zhang et al., 2020a; Xiao et al., 2022), we calculate the importance of each sentence as S(s src i) = ROUGE-1(s src i, D/ssrc i), where D/ssrc iindicates the rest of the document after s src iis removed. The sentences with high importance are selected as the gap sentences S src ∗ = 15131 | Trg | En | Fr | Hi | Zh | Th | Tr | | |---------------|--------------|-------------------------------------------------------------------------------------------------------------------|---------------------|--------------------|-------------------------------------------------------|-----------------------------------|----------------| | En | # Samples | 124589 / 8351 / 8517 53232 / 5161 / 5258 5707 / 1538 / 2672 13462 / 2697 / 2713 9170 / 2883 / 2697 - / 267 / 2730 | | | | | | | # Avg. Tokens | 492.8 / 47.3 | 521.3 / 55.4 | 500.6 / 71.8 | 516.8 / 49.4 | 524.2 / 48.4 | 458.3 / 54.3 | | | Fr | # Samples | 53232 / 5161 / 5258 | 53232 / 5161 / 5258 | - / 1449 / 2337 | 10628 / 2605 / 2400 7281 / 2750 / 2386 - / 232 / 2391 | | | | # Avg. Tokens | 659.4 / 45.3 | 659.3 / 55.5 | 617.3 / 73.1 | 649.0 / 48.5 | 673.4 / 47.3 | 589.9 / 54.4 | | | Hi | # Samples | 5707 / 1538 / 2672 | - / 1449 / 2337 | 5707 / 1538 / 2672 | - / 1134 / 2000 | - / 1266 / 2146 | - / 180 / 2091 | | # Avg. Tokens | 682.1 / 46.2 | 668.3 / 58.2 | 684.3 / 72.3 | 637.9 / 50.5 | 626.1 / 48.7 | 627.4 / 53.0 | | | Zh | # Samples | 13462 / 2697 / 2713 | 10628 / 2605 / 2400 | - / 1134 / 2000 | 13462 / 2697 / 2713 | - / 2392 / 2218 | - / 90 / 2147 | | # Avg. Tokens | 428.4 / 46.4 | 432.9 / 58.1 | 388.7 / 73.6 | 429.1 / 49.2 | 371.1 / 49.8 | 373.2 / 55.5 | | | Th | # Samples | 9170 / 2883 / 2697 | 7281 / 2750 / 2386 | - / 1266 / 2146 | - / 2392 / 2218 | 9170 / 2883 / 2697 - / 191 / 2172 | | | # Avg. Tokens | 488.6 / 44.5 | 504.9 / 56.2 | 424.6 / 71.8 | 412.1 / 51.0 | 490.1 / 48.2 | 404.1 / 54.2 | | | Tr | # Samples | - / 267 / 2730 | - / 232 / 2391 | - / 180 / 2091 | - / 90 / 2147 | - / 191 / 2172 | - / 267 / 2730 | | # Avg. Tokens | 465.1 / 47.5 | 472.4 / 60.0 | 468.1 / 72.8 | 456.9 / 52.7 | 449.1 / 49.8 | 465.1 / 54.3 | | {s src gi} |S src ∗| i=1 (gi ∈ {1, 2*, ...,* |D|}), which are further translated to a different target language S trg ∗ = {s trg gi } |S trg ∗ | i=1 via Google Translation7. In this manner, the source-language document D paired with source/target-language gap sentences S src ∗/S trg ∗ could constitute a pseudo pre-training sample. Quality Controlling. Since machine translation results might contain flaws, we further employ *roundtrip translation* strategy as suggested by Zhu et al. (2019) and Feng et al. (2022). For each gap sentence s src giin D, the translated counterpart s trg giis translated back to the source language, which we denote as s src′ gi. If the ROUGE-1 score between s src gi and s src′ giis less than the pre-defined threshold λ, the corresponding pseudo sample will be discarded. Input Format. To help the model trade off between (1) generating new sentences instead of translating part of input sentences, and (2) learning the translation pattern8(Zhu et al., 2020), half of source-language gap sentences in D are randomly masked with a special token <mask-sent>. 9 ## 5 Experiments 5.1 Benchmark Datasets In order to evaluate M2MS models, two requirements should be met in datasets, *i.e.*, (1) involving multiple languages and summarization directions, and (2) having abundant samples in each direction. Thus, we choose WikiLingua (Ladhak et al., 2020) and CrossSum (Hasan et al., 2021a). The original WikiLingua dataset, which involves 18 languages, is designed for CLS task. The 18 languages constitute 306 (18×17) cross-lingual directions, each of which contains about 18k CLS samples in average. For each document, WikiLingua also contains its summary in the original language. Therefore, the dataset could be used to evaluate M2MS models. However, the original splitting is for CLS. Thus, we re-split WikiLingua with the special consideration for M2MS: for each document in the test (or validation) set of one direction, the document and its parallel documents10 are not allowed to appear in the training and validation (or test) sets of other directions. This rule reduces the likelihood that learning shortcuts. We also intentionally create several zero-shot directions. We focus on six languages in this work: English (En), Chinese (Zh), French (Fr), Hindi (Hi), Turkish (Tr) and Thai (Th). After re-splitting, the statistics are shown in Table 3. There are **9 highresource directions** each of which contains more than 10k training samples. The other **8 directions** with less than 10k training samples are considered as **low-resource directions**. The remaining 19 zero-shot directions have no training sample. According to whether both the source and target languages appear in the whole training set, we further divide them into **11 non-trivial and 8 conventional zero-shot directions**. Note that Tr never appears in the training set of any direction, thus, in other words, the non-trivial zero-shot directions involve Tr while the conventional counterparts do not. We call Tr an *unseen language*. Though there is no training data in a conventional zero-shot direction, both its source and target languages might 10For each document, WikiLingua usually contains its parallel documents in other languages. ![6_image_0.png](6_image_0.png) have training data with a pivot language, making it less challenging than the non-trivial ones. Taking the conventional zero-shot direction Hi⇒Zh as an example, the training data in Hi⇒En and En⇒Zh could bridge the gap between Hi and Zh. For statistics of the CrossSum dataset used in our experiments, please refer to Appendix C.1. ## 5.2 Experimental Setup Baselines. We use mBART-50 (Tang et al., 2021) and mT5 (Xue et al., 2021) as baselines, which have achieved state-of-the-art performances on many CLS/MLS datasets (Perez-Beltrachini and Lapata, 2021; Hasan et al., 2021a; Feng et al., 2022). Metrics. We adopt ROUGE-1/2/L (Lin, 2004) and BERTSCORE (Zhang et al., 2020b) in our experiments. The ROUGE scores measure the lexical overlap between the generated summaries and corresponding references, while the BERTSCORE measures the semantic similarity. These metrics are calculated by *multi-lingual rouge*11 and *bert-score*12 toolkits, respectively. The BERTSCORE is based on bert-base-multilingual-cased model. The statistical significance test (Koehn, 2004) is also employed for a fair comparison. Implementation Details. The implementation details of the pre-training objectives, pre-training corpora and fine-tuning hyper-parameters are given in Appendix B. ## 5.3 Quantitative Results Table 4 shows the results on WikiLingua in terms of average ROUGE score (RS) and BERTSCORE (BS). Full results on ROUGE-1/2/L are given in Appendix D. The experimental results on CrossSum also verify the superiority of PISCES, which are provided in Appendix C.2. PISCES **vs. Baselines.** Our PISCES outperforms mBART-50 and mT5 in all directions, indicating its superiority. Specifically, PISCES achieves an average increase of 7.9 RS and 5.4 BS over mBART-50 in non-trivial zero-shot directions when the target language is not Tr. Compared with mBART-50, the average improvement in conventional zero-shot directions is 2.2 RS / 1.3 BS, while the counterpart in low-resource directions is 1.4 RS / 0.8 BS. As for high-resource directions, PISCES outperforms mBART-50 by 0.7 RS and 0.3 BS in average. It is not difficult to find that the fewer resources in a direction, the greater the improvement brought by our PISCES. This finding also indicates the potentiality of our model when faced with the real-world scenario, since there are thousands of languages in the world and most directions are low-resource or zeroshot. Through the cross-lingual and task-specific pre-training stages, PISCES facilitates the transfer of task knowledge from high-resource directions to the low-resource and zero-shot ones. Non-Trivial Zero-Shot Direction. As shown in Table 4, we divide the non-trivial zero-shot directions into two categories (*i.e.*, Tr⇒Others and Any⇒Tr) according to whether Tr is the target language. We Fr⇒Hi Hi⇒Fr Hi⇒Zh Zh⇒Hi PISCES 21.4 / 69.1 26.1 / 72.9 26.1 / 70.4 20.3 / **68.5** w/o TS 20.7 / 68.6 25.2 / 72.8 25.1 / 69.9 19.5 / 67.9 w/o CL 20.6 / 68.8 25.2 / **72.9** 25.3 / 70.0 19.5 / 67.8 w/o TS & CL 19.6 / 68.1 23.6 / 72.1 24.0 / 69.1 18.1 / 66.9 Table 5: Results of ablation studies. Table 6: Human evaluation results. "IF", "CC" and "GM" denote informativeness, conciseness and grammaticality, respectively. discover that the results in Any⇒Tr directions13 are significantly worse than the Tr⇒Others counterparts. This finding suggests that generating summaries in *unseen languages* is more difficult than understanding documents in *unseen languages*. This is because the encoder could partly understand the *unseen languages* through the shared vocabulary and the similar syntax constituent with other languages. But for the decoder, we only change its language tag to expect it can generate summaries in unseen languages. This requires the decoder to *simultaneously* (1) capture the relationships between the unseen language tag and the unseen language tokens and (2) summarize documents. However, the pre-trained model only meets the requirement (1) in the pre-training stage14, while requirement (2) in the fine-tuning stage, making it hard to simultaneously meet both requirements, and consequently, cannot generate summaries in unseen languages. We reserve this challenge for future work. Ablations. We conduct ablation studies to investigate the effect of the cross-lingual and task-specific pre-training stages. We run the following ablations: - PISCES **w/o TS**. To demonstrate the effectiveness of the task-specific pre-training, we also pre-train a variant PISCES model which does not include the task-specific pre-training stage. - PISCES **w/o CL**. To measure the effectiveness of the cross-lingual pre-training, we remove this stage in the whole pre-training process, resulting in another variant PISCES. - PISCES **w/o TS & CL** removes both the crosslingual and task-specific pre-training stages, | WikiLingua | | | | | | | | | | |--------------|-------|-------|-------|------|------|------|------|------|------| | Model | En⇒Zh | Zh⇒En | En⇒En | | | | | | | | IF | CC | GM | IF | CC | GM | IF | CC | GM | | | mT5 | 2.93 | 3.12 | 3.06 | 2.98 | 3.29 | 3.03 | 3.16 | 3.84 | 3.52 | | mBART 3.09 | 3.38 | 3.14 | 3.15 | 3.53 | 3.26 | 3.27 | 3.96 | 3.71 | | ![7_image_2.png](7_image_2.png) ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) Ground Truth ![7_image_3.png](7_image_3.png) which is the same as mBART-50. As shown in Table 5, we conduct ablation studies in several conventional zero-shot directions (results in more directions are provided in Appendix E). In each case, the RS and BS are lower than vanilla PISCES. In addition, both PISCES w/o TS and PISCES w/o CL outperform PISCES w/o TS & CL. Thus, the effectiveness of both stages is proved. ## 5.4 Qualitative Results Human Evaluation. Following Zhu et al. (2020); Liang et al. (2022b), we conduct the human evaluation on 50 random samples extracted from WikiLingua (En⇒Zh, Zh⇒En and En⇒En, respectively). Three graduate students are invited to assess the generated summaries from three aspects: informativeness (IF), conciseness (CC) and grammaticality (GM). The scoring adopts a 5-point scale from 1 (worst) to 5 (best). Table 6 shows the average results. The IF, CC and GM scores of PISCES are significantly better than those of mT5 or mBART50, demonstrating the effectiveness of our model. Case Study. Table 7 shows an example Turkish document, the generated summary and the ground truth summary. Though the summary generated by PISCES contains a repeated sentence, it has good overlaps with the ground truth. But for mBART-50, the generated summary is not relevant to the core idea of the document. This observation indicates that, through the cross-lingual and task-specific pre-training, PISCES could better transfer the task knowledge from high-resource directions to zeroshot ones, and even has the ability to generate summaries for the documents whose language does not occur in the fine-tuning stage. Error Analysis. To further study how future research could advance M2MS, we take a closer look at the generation errors of PISCES and analyze them in Appendix F. ## 6 Conclusion In this paper, we unify MLS and CLS to M2MS. Through carefully-designed preliminary studies, we discuss that unifying MLS and CLS to M2MS is valuable. In addition, we propose PISCES, the first pre-trained M2MS model, which contains three pretraining stages to enable the model learn the multilingual language modeling, cross-lingual ability and summarization ability. Extensive experiments show its superiority compared with the state-ofthe-art baselines (mBART-50 and mT5). The case study further demonstrates that our model could even generate summaries for the documents whose language does not occur in the fine-tuning stage. ## Ethical Considerations In this section, we consider potential ethical issues of our model. In this paper, we propose PISCES which utilizes mBART-50 (Tang et al., 2021) as the meta pre-trained model and further suffers from the cross-lingual pre-training and task-specific pretraining stages. The pre-training samples are constructed from OPUS (Tiedemann and Thottingal, 2020) and mC4 (Xue et al., 2021) corpora. To construct the pseudo M2MS samples in the taskspecific pre-training stage, Google Translation is also adopted to translate gap sentences. Therefore, PISCES might involve the same biases and toxic behaviors exhibited by language models, pre-training corpora and Google Translation. ## Limitations While we show that PISCES outperforms mBART50 on WikiLingua (Ladhak et al., 2020), there are some limitations worth considering in future work: (1) PISCES still struggles to generate summaries in unseen languages (Section 5.3); (2) In this work, we focus on six languages in total, and future work could extend our method to more languages. ## Acknowledgements This work is supported by the National Natural Science Foundation of China (No.62072323, 62102276), Shanghai Science and Technology Innovation Action Plan (No. 22511104700), the Natural Science Foundation of Jiangsu Province (Grant No. BK20210705), the Natural Science Foundation of Educational Commission of Jiangsu Province, China (Grant No. 21KJD520005) and the Priority Academic Program Development of Jiangsu Higher Education Institutions. ## References Yu Bai, Heyan Huang, Kai Fan, Yang Gao, Zewen Chi, and Boxing Chen. 2021. Bridging the gap: Crosslingual summarization with compression rate. *ArXiv* preprint, abs/2110.07936. Yue Cao, Hui Liu, and Xiaojun Wan. 2020a. Jointly learning to align and summarize for neural crosslingual summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6220–6231, Online. Association for Computational Linguistics. Yue Cao, Xiaojun Wan, Jinge Yao, and Dian Yu. 2020b. Multisumm: Towards a unified model for multi-lingual abstractive summarization. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34(01):11–18. Yulong Chen, Ming Zhong, Xuefeng Bai, Naihao Deng, Jing Li, Xianchao Zhu, and Yue Zhang. 2022. The cross-lingual conversation summarization challenge. arXiv preprint arXiv:2205.00379. Zewen Chi, Li Dong, Shuming Ma, Shaohan Huang, Saksham Singhal, Xian-Ling Mao, Heyan Huang, Xia Song, and Furu Wei. 2021. mT6: Multilingual pretrained text-to-text transformer with translation pairs. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 1671–1683, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, XianLing Mao, and Heyan Huang. 2020. Cross-lingual natural language generation via pre-training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7570– 7577. AAAI Press. Xiachong Feng, Xiaocheng Feng, and Bing Qin. 2022. MSAMSum: Towards benchmarking multi-lingual dialogue summarization. In *Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering*, pages 1–12, Dublin, Ireland. Association for Computational Linguistics. George Giannakopoulos, Jeff Kubina, John Conroy, Josef Steinberger, Benoit Favre, Mijail Kabadjov, Udo Kruschwitz, and Massimo Poesio. 2015. MultiLing 2015: Multilingual summarization of single and multi-documents, on-line fora, and call-center conversations. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 270–274, Prague, Czech Republic. Association for Computational Linguistics. Tahmid Hasan, Abhik Bhattacharjee, Wasi Uddin Ahmad, Yuan-Fang Li, Yong-Bin Kang, and Rifat Shahriyar. 2021a. Crosssum: Beyond englishcentric cross-lingual abstractive text summarization for 1500+ language pairs. *ArXiv*, abs/2112.08804. Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Islam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, and Rifat Shahriyar. 2021b. XLsum: Large-scale multilingual abstractive summarization for 44 languages. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 4693–4703, Online. Association for Computational Linguistics. Yichong Huang, Xiachong Feng, Xiaocheng Feng, and Bing Qin. 2021. The factual inconsistency problem in abstractive text summarization: A survey. *arXiv* preprint arXiv:2104.14839. Timo Johner, Abhik Jana, and Chris Biemann. 2021. Error analysis of using BART for multi-document summarization: A study for English and German language. In *Proceedings of the 23rd Nordic Conference* on Computational Linguistics (NoDaLiDa), pages 391–397, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Faisal Ladhak, Esin Durmus, Claire Cardie, and Kathleen McKeown. 2020. WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4034–4048, Online. Association for Computational Linguistics. Anton Leuski, Chin-Yew Lin, Liang Zhou, Ulrich Germann, Franz Josef Och, and Eduard Hovy. 2003a. Cross-lingual c*st*rd: English access to hindi information. ACM Transactions on Asian Language Information Processing, 2(3):245–269. Anton Leuski, Chin-Yew Lin, Liang Zhou, Ulrich Germann, Franz Josef Och, and Eduard H. Hovy. 2003b. Cross-lingual c*st*rd: English access to hindi information. *ACM Trans. Asian Lang. Inf. Process.*, 2:245–269. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Qian Li, Shu Guo, Yang Luo, Cheng Ji, Lihong Wang, Jiawei Sheng, and Jianxin Li. 2023. Attributeconsistent knowledge graph representation learning for multi-modal entity alignment. *Proceedings of the* ACM Web Conference 2023. Yunlong Liang, Fandong Meng, Jinan Xu, Jiaan Wang, Yufeng Chen, and Jie Zhou. 2022a. Summaryoriented vision modeling for multimodal abstractive summarization. *arXiv preprint arXiv:2212.07672*. Yunlong Liang, Fandong Meng, Chulun Zhou, Jinan Xu, Yufeng Chen, Jinsong Su, and Jie Zhou. 2022b. A variational hierarchical model for neural crosslingual summarization. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2088– 2099, Dublin, Ireland. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Marina Litvak, Natalia Vanetik, Mark Last, and Elena Churkin. 2016. MUSEEC: A multilingual text summarization tool. In Proceedings of ACL-2016 System Demonstrations, pages 73–78, Berlin, Germany. Association for Computational Linguistics. Thong Nguyen and Luu Anh Tuan. 2022. Improving neural cross-lingual summarization via employing optimal transport distance for knowledge distillation. Proc. of AAAI. Constantin Orasan and Oana Andreea Chiorean. 2008. ˘ Evaluation of a cross-lingual Romanian-English multi-document summariser. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA). Laura Perez-Beltrachini and Mirella Lapata. 2021. Models and datasets for cross-lingual summarisation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9408–9423, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Carol Pfaff. 1979. Constraints on language mixing: Intrasentential code-switching and borrowing in spanish/english. *Language*, 55:291. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. MLSUM: The multilingual summarization corpus. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 8051–8067, Online. Association for Computational Linguistics. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2021. Multilingual translation from denoising pre-training. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3450–3466, Online. Association for Computational Linguistics. Jörg Tiedemann and Santhosh Thottingal. 2020. OPUSMT - building open translation services for the world. In *Proceedings of the 22nd Annual Conference of* the European Association for Machine Translation, pages 479–480, Lisboa, Portugal. European Association for Machine Translation. Natalia Vanetik and Marina Litvak. 2015. Multilingual summarization with polytope model. In SIGDIAL Conference. Daniel Varab and Natalie Schluter. 2021. MassiveSumm: a very large-scale, very multilingual, news summarisation dataset. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10150–10161, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In *Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies*, pages 1546–1555, Portland, Oregon, USA. Association for Computational Linguistics. Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In *Proceedings of the 48th Annual Meeting of the Association for* Computational Linguistics, pages 917–926, Uppsala, Sweden. Association for Computational Linguistics. Bin Wang, Chen Zhang, Yan Zhang, Yiming Chen, and Haizhou Li. 2022a. Analyzing and evaluating faithfulness in dialogue summarization. *arXiv preprint* arXiv:2210.11777. Danqing Wang, Jiaze Chen, Hao Zhou, Xipeng Qiu, and Lei Li. 2021. Contrastive aligned joint learning for multilingual summarization. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 2739–2750, Online. Association for Computational Linguistics. Jiaan Wang, Yunlong Liang, Fandong Meng, Beiqi Zou, Zhixu Li, Jianfeng Qu, and Jie Zhou. 2023. Zeroshot cross-lingual summarization via large language models. *arXiv preprint*. Jiaan Wang, Fandong Meng, Ziyao Lu, Duo Zheng, Zhixu Li, Jianfeng Qu, and Jie Zhou. 2022b. ClidSum: A benchmark dataset for cross-lingual dialogue summarization. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 7716–7729, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jiaan Wang, Fandong Meng, Tingyi Zhang, Yunlong Liang, Jiarong Xu, Zhixu Li, and Jie Zhou. 2022c. Understanding translationese in cross-lingual summarization. *arXiv preprint arXiv:2212.07220*. Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, and Jie Zhou. 2022d. A Survey on Cross-Lingual Summarization. *Transactions of the Association for Computational Linguistics*, 10:1304–1323. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5245–5263, Dublin, Ireland. Association for Computational Linguistics. Ruochen Xu, Chenguang Zhu, Yu Shi, Michael Zeng, and Xuedong Huang. 2020. Mixed-lingual pretraining for cross-lingual summarization. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 536–541, Suzhou, China. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Phrase-based compressive cross-language summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 118–127, Lisbon, Portugal. Association for Computational Linguistics. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020a. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328–11339. PMLR. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Shaohui Zheng, Zhixu Li, Jiaan Wang, Jianfeng Qu, An Liu, Lei Zhao, and Zhigang Chen. 2023. Longdocument cross-lingual summarization. In *Proceedings of the Sixteenth ACM International Conference* on Web Search and Data Mining, WSDM '23, page 1084–1092, New York, NY, USA. Association for Computing Machinery. Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3054– 3064, Hong Kong, China. Association for Computational Linguistics. Junnan Zhu, Yu Zhou, Jiajun Zhang, and Chengqing Zong. 2020. Attend, translate and summarize: An efficient method for neural cross-lingual summarization. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 1309–1321, Online. Association for Computational Linguistics. ## A Word Embeddings Of The Unseen Language And Other Languages To verify the word embeddings of the unseen language drift away from those of other languages af- ![11_image_0.png](11_image_0.png) (a) mBART (M2MS) Figure 3: Visualization of word embeddings from ![11_image_1.png](11_image_1.png) mBART (M2MS) and mBART (U-CLS). Tr is the unseen language. ter adding the monolingual training data, based on MUSE dictionary, we choose top frequent 1000 English words and the words with the same meaning in other five languages (*i.e.*, Fr, Hi, Zh, Th and Tr). Then, we calculate the embeddings of these words based on mBART (M2MS) and mBART (U-CLS), respectively. For the word that consists of multiple tokens, the word embedding is the average of embeddings of those tokens. As shown in Figure 3, we utilize Principal Component Analysis (PCA) to visualize the word embeddings from mBART (M2MS) and mBART (U-CLS). In the PCA space, we further calculate the central point of each language by averaging the word embeddings in the language. Then, we find the average distance between the central point of Tr and other languages is 0.426 / 0.407 for mBART (M2MS) / mBART (U-CLS). This distance in vanilla mBART-50 (Tang et al., 2021) is 0.398. Therefore, the monolingual training data used in mBART (M2MS) makes the word embeddings of the unseen language drift away from those of other languages. ## B Implementation Details Pre-Training Details. We use mBART-50 (Tang et al., 2021) as the meta pre-trained model, and futher pre-train it via cross-lingual and taskspecific pre-training stages. The implementation of mBART-50 is based on the Transformers (Wolf et al., 2020) library with default settings (12 encoder layers, 12 decoder layers and 1024 hidden states). In cross-lingual pre-training, we dynamically mask 0-15% tokens in the source-language sentences, and construct 20.6M samples from OPUS parallel corpora (Tiedemann and Thottingal, 2020). In task-specific pre-training, we construct 3.1M training samples from mC4 corpus (Xue et al., 2021). We set the total length of gap sentences to k% of the document length, and k is dynamically | Direction | MultiUN | CCMatrix | CCAligned | MultiCCAligned | XLEnt | Europarl | QED | TED | WMT | Sum | |-------------|-----------|------------|-------------|------------------|---------|------------|--------|--------|-------|----------| | En⇔Fr | - | - | - | - | - | 349291 | 152623 | 77188 | 4648 | 583750 | | En⇔Hi | - | 2959722 | - | - | 405366 | - | 1211 | 9039 | 568 | 3375906 | | En⇔Th | - | - | 1947729 | - | 246976 | - | 52140 | 30765 | - | 2277610 | | En⇔Tr | - | - | 2496997 | - | 761750 | - | 94212 | 72674 | 3819 | 3429452 | | En⇔Zh | - | - | - | - | 1258289 | - | - | 3158 | 3658 | 1265105 | | Fr⇔Hi | - | - | - | 619040 | 97082 | - | 660 | 8816 | - | 725598 | | Fr⇔Th | - | - | - | 737469 | 67292 | - | 34418 | 30024 | - | 869203 | | Fr⇔Tr | - | - | - | 1321431 | 183282 | - | 61412 | 69931 | - | 1636056 | | Fr⇔Zh | 1494829 | - | - | - | 211039 | - | 2041 | 3088 | - | 1710997 | | Hi⇔Th | - | - | - | 436284 | 65870 | - | 484 | 4526 | - | 507164 | | Hi⇔Tr | - | 1099853 | - | - | 111573 | - | 544 | 8384 | - | 1220354 | | Hi⇔Zh | - | 445148 | - | - | 97732 | - | 15 | 650 | - | 543545 | | Th⇔Tr | - | - | - | 617566 | 86156 | - | 40026 | 29602 | - | 773350 | | Th⇔Zh | - | - | - | - | 54637 | - | 2390 | 2169 | - | 59196 | | Tr⇔Zh | - | 1435286 | - | - | 169774 | - | 1885 | 3125 | - | 1610070 | | Total | 1494829 | 5940009 | 4444726 | 3731790 | 3816818 | 349291 | 444061 | 353139 | 12693 | 20587356 | | En⇔Fr | En⇔Hi | En⇔Th | En⇔Tr | En⇔Zh | Fr⇔Hi | Fr⇔Th | Fr⇔Tr | Fr⇔Zh | Hi⇔Th | Hi⇔Tr | |---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------| | 190916 | 190916 | 190916 | 190916 | 88636 | 188351 | 190916 | 190916 | 190916 | 158518 | 190578 | | Hi⇔Zh | Th⇔Tr | Th⇔Zh | Tr⇔Zh | En⇒En | Fr⇒Fr | Hi⇒Hi | Th⇒Th | Tr⇒Tr | Zh⇒Zh | Total | | 172039 | 190916 | 24160 | 190916 | 95458 | 95458 | 95458 | 95458 | 95458 | 95458 | 3113274 | selected from [5, 10, 15]. The pre-defined λ in the round-trip translation is 0.7. All experimental results listed in this paper are the average of 3 runs. Table 8 and Table 9 show the statistics of the constructed samples in the cross-lingual pre-training and task-specific pre-training stages, respectively. The cross-lingual pre-training and task-specific pretraining stages are conducted on 8 NVIDIA Tesla V100 GPUs with 32GB memory. In the crosslingual pre-training stage, we pre-train the model for 150K steps, with early stopping, 32 batch size, 3e-5 learning rate following Xiao et al. (2022) and 10K warmup steps. In the task-specific pre-training stage, we pre-train the model for 100K steps, with early stopping, 4 batch size, 3e-5 learning rate and 10K warmup steps. Fine-Tuning and Testing Details. In the finetuning stage, we fine-tune the PISCES model on 8 NVIDIA Tesla V100 GPUs (32G) with 4 batch size, 10 epochs, 2K warmup steps, 3e-5 learning rate, and set the maximum number of tokens for input sequences to 1024. To balance the high-resource and low-resource language data, following Xue et al. (2021), we sample the training examples according to the probability p(D) ∝ |D| α, where p(D) is the probability of sampling training examples from a give direction during fine-tuning and |D| is the number of original examples in the direction. We set the hyperparameter α to 0.5. To fine-tune mT5 baseline on M2MS, the language tags (*e.g.*, <En> and <Zh>) are appended at the inputs of both encoder and decoder sides. In the test process, we set the beam size and the maximum decoded length to 5 and 128, respectively. ## C Experiments On Crosssum C.1 Data Statistics. Table 10 lists the data statistics of the CrossSum dataset (Hasan et al., 2021a) used in our experiments. The data splitting mainly inherits from the original CrossSum except for zero-shot directions and monolingual directions: (1) If the number of samples in a direction (*e.g.*, Fr⇒Hi) is less than 1k, we will regard the direction as a zero-shot direction and evenly split its samples into validation and test sets. (2) Considering the number of samples in cross-lingual directions is hundred-level or thousand-level, we truncate the number of samples in each monolingual direction (*e.g.*, En⇒En) to 10k to make a balance. The corresponding splitting follows 8:1:1. If the number of samples in a monolingual direction (*e.g.*, Th⇒Th) is less than 10k, its splitting follows the original CrossSum. ## C.2 Experimental Results. Table 11 shows the experimental results on CrossSum. Our PISCES outperforms mBART-50 by 2.3 ROUGE-1, 2.0 ROUGE-2, 2.0 ROUGE-L and 1.3 BERTSCORE in the average of all directions, which | Trg | En | Fr | Hi | Zh | Th | Tr | | |---------------|---------------|--------------------|--------------------|--------------------|--------------------|------------------|--------------------| | En | # Samples | 8000 / 1000 / 1000 | 1513 / 188 / 188 | 3784 / 463 / 481 | 3981 / 497 / 497 | 816 / 102 / 102 | 4542 / 568 / 566 | | # Avg. Tokens | 638.7 / 30.6 | 1013.0 / 43.2 | 899.9 / 41.0 | 914.6 / 35.5 | 1058.3 / 51.3 | 880.8 / 37.6 | | | Fr | # Samples | 1513 / 188 / 188 | 8000 / 1000 / 1000 | - / 308 / 308 | - / 174 / 174 | - / 92 / 93 | - / 414 / 415 | | # Avg. Tokens | 1124.3 / 33.7 | 710.9 / 40.8 | 1048.5 / 40.7 | 1358.3 / 37.6 | 1501.7 / 47.9 | 1058.3 / 38.9 | | | Hi | # Samples | 3784 / 463 / 481 | - / 308 / 308 | 8000 / 1000 / 1000 | 1107 / 135 / 137 | - / 189 / 189 | 2956 / 369 / 369 | | # Avg. Tokens | 862.0 / 31.6 | 1106.5 / 39.1 | 775.4 / 40.2 | 804.3 / 33.6 | 1186.3 / 49.4 | 712.8 / 34.3 | | | Zh | # Samples | 3981 / 497 / 497 | - / 174 / 174 | 1107 / 135 / 137 | 8000 / 1000 / 1000 | - / 134 / 135 | 1209 / 151 / 151 | | # Avg. Tokens | 725.0 / 32.7 | 1082.0 / 41.9 | 690.7 / 41.9 | 768.0 / 40.4 | 1059.7 / 52.7 | 642.4 / 36.6 | | | Th | # Samples | 816 / 102 / 102 | - / 92 / 93 | - / 189 / 189 | - / 134 / 135 | 6616 / 826 / 826 | - / 238 / 239 | | # Avg. Tokens | 957.1 / 34.5 | 1095.2 / 40.6 | 985.3 / 42.0 | 1036.3 / 38.7 | 1055.5 / 62.1 | 912.2 / 39.7 | | | Tr | # Samples | 4542 / 568 / 566 | - / 414 / 415 | 2956 / 369 / 369 | 1209 / 151 / 151 | - / 238 / 239 | 8000 / 1000 / 1000 | | # Avg. Tokens | 619.4 / 31.8 | 775.2 / 41.2 | 579.3 / 39.0 | 591.5 / 34.4 | 762.7 / 53.2 | 704.9 / 40.2 | | Src Trg Model En Fr Hi Zh Th Tr Avg. EnmT5 (580M) 30.1 / 8.3 / 22.3 / 66.3 30.7 / 10.4 / 22.8 / 66.2 30.2 / 8.9 / 24.8 / 67.4 26.1 / 6.6 / 22.6 / 65.8 27.4 / 8.6 / 21.8 / 60.1 25.8 / 9.9 / 22.5 / 66.7 28.4 / 8.8 / 22.8 / 65.4 mBART (610M) 31.2 / 8.7 / 22.8 / 66.9 32.8 / 12.4 / 24.5 / 66.9 32.6 / 9.5 / 25.5 / 68.3 29.6 / 8.2 / 24.3 / 67.0 30.8 / 10.4 / 23.4 / 62.9 26.3 / 10.2 / 22.8 / 67.2 30.6 / 9.9 / 23.9 / 66.5 Pisces (610M) 32.0 / 9.1 / 23.7 / 67.4 33.6 / 13.4 / 25.6 / 67.6 33.4 / 10.5 / 26.4 / 68.9 30.5 / 8.6 / 24.9 / 67.5 31.3 / 11.7 / 24.2 / 64.1 27.1 / 10.9 / 23.5 / 67.5 31.3 / 10.7 / 24.7 / 67.2 FrmT5 (580M) 30.7 / 10.1 / 23.4 / 66.6 30.8 / 11.9 / 23.5 / 66.2 33.0 / 12.1 / 27.3 / 68.1 39.3 / 21.5 / 33.1 / 69.9 35.6 / 15.9 / 29.2 / 64.6 25.3 / 10.7 / 22.3 / 65.8 32.5 / 13.7 / 26.5 / 66.9 mBART (610M) 31.9 / 10.3 / 23.8 / 67.4 32.0 / 12.9 / 24.3 / 66.6 36.0 / 16.6 / 30.2 / 69.8 41.3 / 23.4 / 36.7 / 70.9 37.1 / 17.4 / 30.8 / 65.3 29.5 / 14.1 / 26.1 / 68.2 34.6 / 15.8 / 28.7 / 68.0 Pisces (610M) 33.5 / 11.7 / 25.8 / 68.2 32.7 / 13.4 / 25.0 / 67.1 39.7 / 19.5 / 33.9 / 71.5 43.8 / 25.7 / 38.7 / 73.0 42.8 / 25.3 / 35.7 / 69.2 33.9 / 18.9 / 30.4 / 70.1 37.7 / 19.1 / 31.6 / 69.9 HimT5 (580M) 29.7 / 9.3 / 23.2 / 67.2 29.6 / 10.8 / 23.3 / 66.1 32.0 / 11.3 / 25.7 / 67.4 28.6 / 8.3 / 24.0 / 66.3 29.8 / 11.0 / 23.8 / 62.6 22.0 / 7.4 / 19.8 / 65.5 28.6 / 9.7 / 23.3 / 65.9 mBART (610M) 31.5 / 9.9 / 24.0 / 67.7 32.5 / 13.3 / 25.5 / 67.4 32.9 / 11.8 / 26.0 / 67.7 29.4 / 8.9 / 24.6 / 66.9 33.7 / 15.1 / 27.6 / 65.0 22.6 / 7.8 / 19.7 / 65.7 30.4 / 11.1 / 24.6 / 66.7 Pisces (610M) 31.8 / 9.9 / 24.1 / 68.0 35.3 / 15.9 / 28.0 / 68.7 33.8 / 12.5 / 26.8 / 68.3 32.5 / 10.8 / 27.3 / 68.9 38.3 / 19.0 / 31.3 / 67.0 23.9 / 8.6 / 21.0 / 66.4 32.6 / 12.8 / 26.4 / 67.9 ZhmT5 (580M) 30.9 / 9.8 / 23.1 / 66.5 31.1 / 13.1 / 24.9 / 66.5 31.2 / 9.2 / 24.7 / 68.3 32.5 / 11.6 / 26.8 / 67.0 29.2 / 9.3 / 23.8 / 62.5 24.0 / 9.2 / 21.7 / 66.8 29.8 / 10.4 / 24.2 / 66.3 mBART (610M) 32.4 / 10.6 / 24.4 / 67.4 35.7 / 17.7 / 28.6 / 68.5 33.5 / 10.9 / 27.5 / 69.5 33.1 / 11.6 / 26.9 / 67.2 35.3 / 15.2 / 28.6 / 64.5 25.3 / 9.3 / 22.2 / 67.3 32.6 / 12.5 / 26.4 / 67.4 Pisces (610M) 33.4 / 11.0 / 25.5 / 68.2 39.1 / 22.5 / 32.5 / 70.7 34.6 / 11.4 / 27.9 / 70.2 34.2 / 12.3 / 27.8 / 67.6 37.7 / 17.4 / 31.1 / 65.6 27.8 / 11.1 / 24.3 / 68.0 34.5 / 14.3 / 28.2 / 68.4 ThmT5 (580M) 25.7 / 6.6 / 19.1 / 62.6 25.3 / 9.9 / 19.8 / 63.5 30.3 / 10.3 / 24.6 / 66.4 27.2 / 7.6 / 22.8 / 64.1 33.7 / 13.2 / 25.6 / 63.4 22.0 / 9.5 / 19.7 / 63.9 27.4 / 9.5 / 21.9 / 64.0 mBART (610M) 27.1 / 6.9 / 19.7 / 63.7 27.6 / 11.0 / 21.5 / 64.2 30.8 / 12.5 / 25.1 / 66.6 33.1 / 15.8 / 28.4 / 67.5 35.9 / 14.7 / 26.9 / 65.2 27.2 / 13.0 / 24.2 / 66.4 30.3 / 12.3 / 24.3 / 65.6 Pisces (610M) 29.5 / 9.1 / 21.7 / 65.6 38.0 / 19.6 / 30.4 / 69.4 35.8 / 15.6 / 29.1 / 69.2 37.1 / 18.5 / 32.3 / 69.2 36.4 / 15.2 / 27.3 / 65.5 30.2 / 15.4 / 26.7 / 68.2 34.5 / 15.6 / 27.9 / 67.8 TrmT5 (580M) 29.4 / 10.1 / 22.4 / 67.1 32.9 / 12.2 / 24.6 / 67.2 29.2 / 7.8 / 23.8 / 67.1 30.1 / 9.5 / 25.1 / 67.6 30.7 / 11.3 / 24.9 / 62.8 28.0 / 11.9 / 24.2 / 67.2 30.0 / 10.5 / 24.2 / 66.5 mBART (610M) 32.0 / 11.1 / 24.9 / 68.0 36.0 / 17.2 / 28.9 / 68.9 32.5 / 9.5 / 26.0 / 68.6 31.5 / 9.8 / 25.5 / 67.9 37.5 / 18.1 / 31.1 / 66.4 28.8 / 12.7 / 24.9 / 67.8 33.1 / 13.1 / 26.9 / 67.9 Pisces (610M) 33.3 / 11.5 / 25.3 / 68.6 38.3 / 18.9 / 30.8 / 70.2 33.2 / 10.2 / 26.6 / 69.2 32.2 / 10.1 / 26.0 / 68.7 40.9 / 22.0 / 34.3 / 67.7 30.8 / 14.0 / 26.5 / 68.5 34.8 / 14.4 / 28.2 / 68.8 Avg.mT5 (580M) 29.4 / 9.0 / 22.2 / 66.0 30.1 / 11.4 / 23.2 / 66.0 31.0 / 9.9 / 25.2 / 67.5 30.6 / 10.9 / 25.7 / 66.8 31.1 / 11.5 / 24.8 / 62.7 24.5 / 9.8 / 21.7 / 66.0 29.4 / 10.4 / 23.8 / 65.8 mBART (610M) 31.0 / 9.6 / 23.3 / 66.8 32.8 / 14.1 / 25.6 / 67.1 33.1 / 11.8 / 26.7 / 68.4 33.0 / 13.0 / 27.7 / 67.9 35.1 / 15.2 / 28.1 / 64.9 26.6 / 11.2 / 23.3 / 67.1 31.9 / 12.5 / 25.8 / 67.0 Pisces (610M) 32.2 / 10.4 / 24.3 / 67.7 36.2 / 17.3 / 28.7 / 69.0 35.1 / 13.3 / 28.4 / 69.5 35.1 / 14.3 / 29.5 / 69.1 37.9 / 18.4 / 30.7 / 66.5 29.0 / 13.2 / 25.4 / 68.1 34.2 / 14.5 / 27.8 / 68.3 Table 11: Experimental results on CrossSum (ROUGE-1 / ROUGE-2 / ROUGE-L / BERTSCORE). gray indicates the zero-shot directions. "Avg." denotes the average scores w.r.t each row, each column or all directions. verifies the effectiveness of PISCES. For the average results in all zero-shot directions, mBART50 achieves 33.8, 15.7, 28.1 and 67.1 in terms of ROUGE-1/2/L and BERTSCORE. The counterparts of PISCES are 37.9, 19.6, 31.8 and 69.3, showing its superiority in the zero-shot directions. ## D Full Results On Wikilingua Table 12 shows the experimental results in terms of ROUGE-1, ROUGE-2 and ROUGE-L, respectively. ## E Ablations In Conventional Zero-Shot Directions Table 13 shows the ablation results in all conventional zero-shot directions. ## F Error Analysis We first randomly select 100 summaries generated by PISCES on WikiLingua (En⇒Zh). After manually examining the generated summaries, we find the following major error types: - **Missing Information**: part of the information in the ground truth summary is not mentioned in the generated summary. This is the most frequent error type, and accounts for 39% of the generated summaries. - **Faithfulness**: the generated summary involves information that is inconsistent with (or not presented in) the source document. We find 32% of the summaries have this error. - **Redundancy**: the generated summary contains additional information beyond the ground truth summary. 17% of the generated summaries contain this error. - **Foreign Words**: the generated summary involves words in another language. 9% of the generated Chinese summaries involve some (typically one or two) words in another language. Redundancy and missing information are two major flaws caused by the limited summarization ability (Johner et al., 2021). Faithfulness error is another error type that has been noticed in the summarization research field recently (Huang et al., 2021). The neural generative summarization models are highly prone to generate factual inconsistency errors (Huang et al., 2021). Some studies (Kryscinski et al., 2020; Wang et al., 2022a) show that over | Trg | Model | En | Fr | Hi | Zh | Th | Tr | |--------|--------------------|--------------------|--------------------|--------------------|--------------------|-----------------|------| | mT5 | 40.9 / 17.7 / 34.2 | 36.0 / 15.0 / 29.4 | 30.1 / 8.9 / 23.5 | 37.0 / 13.4 / 32.1 | 38.6 / 17.8 / 33.4 | 3.3 / 0.2 / 3.0 | | | mBART | 41.9 / 18.2 / 34.9 | 37.2 / 15.8 / 30.3 | 31.7 / 9.6 / 24.5 | 37.9 / 13.9 / 32.7 | 39.5 / 18.5 / 34.0 | 3.2 / 0.2 / 3.0 | | | PISCES | 42.8 / 18.8 / 35.5 | 38.1 / 16.4 / 31.1 | 33.7 / 10.8 / 26.6 | 38.8 / 14.2 / 33.3 | 40.9 / 19.3 / 35.6 | 4.5 / 0.7 / 4.2 | | | mT5 | 37.0 / 14.3 / 31.2 | 38.6 / 18.3 / 32.6 | 27.0 / 7.4 / 22.1 | 35.6 / 12.4 / 31.3 | 36.4 / 15.9 / 32.1 | 3.1 / 0.2 / 2.9 | | | mBART | 38.2 / 15.0 / 31.7 | 39.2 / 17.9 / 32.0 | 28.7 / 7.9 / 22.3 | 36.9 / 12.8 / 31.6 | 37.9 / 16.6 / 32.6 | 3.1 / 0.2 / 3.0 | | | PISCES | 39.2 / 15.4 / 32.4 | 40.0 / 18.3 / 32.5 | 31.3 / 8.8 / 24.2 | 37.4 / 13.0 / 31.9 | 39.2 / 17.3 / 33.6 | 4.1 / 0.6 / 3.8 | | | mT5 | 36.9 / 14.2 / 30.3 | 31.9 / 11.6 / 25.6 | 34.9 / 11.9 / 27.2 | 32.1 / 9.6 / 27.5 | 34.0 / 14.1 / 29.2 | 3.2 / 0.3 / 3.0 | | | mBART | 37.9 / 14.6 / 30.8 | 32.8 / 12.2 / 25.9 | 35.6 / 12.5 / 27.8 | 33.2 / 10.6 / 28.2 | 35.4 / 14.6 / 30.1 | 3.4 / 0.3 / 3.2 | | | PISCES | 39.8 / 16.0 / 32.7 | 35.7 / 14.1 / 28.4 | 37.2 / 13.6 / 28.8 | 35.9 / 11.8 / 30.7 | 38.1 / 16.6 / 32.6 | 4.0 / 0.6 / 3.8 | | | mT5 | 38.3 / 14.3 / 31.5 | 34.5 / 13.8 / 28.3 | 26.0 / 6.2 / 20.3 | 41.1 / 16.5 / 35.5 | 36.1 / 14.7 / 30.8 | 3.3 / 0.3 / 3.2 | | | mBART | 39.2 / 15.1 / 32.0 | 36.0 / 14.5 / 29.0 | 27.0 / 6.6 / 20.8 | 41.7 / 17.0 / 35.9 | 36.8 / 15.3 / 31.4 | 3.4 / 0.2 / 3.2 | | | PISCES | 40.3 / 15.8 / 33.0 | 37.4 / 15.4 / 29.9 | 29.6 / 8.2 / 23.2 | 42.5 / 17.5 / 36.3 | 39.2 / 17.0 / 33.6 | 4.3 / 0.6 / 4.0 | | | mT5 | 37.6 / 15.0 / 30.7 | 34.1 / 13.9 / 27.8 | 26.2 / 6.8 / 20.5 | 34.0 / 11.0 / 28.7 | 41.1 / 20.3 / 36.0 | 3.3 / 0.3 / 3.2 | | | mBART | 38.5 / 15.4 / 31.9 | 35.6 / 14.2 / 28.3 | 27.8 / 7.3 / 21.4 | 34.6 / 11.3 / 29.0 | 42.2 / 20.8 / 36.2 | 3.3 / 0.3 / 3.1 | | | PISCES | 40.2 / 16.6 / 33.2 | 37.2 / 15.4 / 29.7 | 31.0 / 9.3 / 23.9 | 36.9 / 12.7 / 31.3 | 43.3 / 21.7 / 37.5 | 4.3 / 0.7 / 4.0 | | | mT5 | 14.2 / 2.2 / 12.1 | 14.9 / 2.9 / 12.2 | 11.2 / 1.4 / 10.8 | 19.2 / 2.6 / 15.9 | 20.0 / 4.2 / 17.9 | 3.0 / 0.2 / 2.8 | | | mBART | 15.7 / 2.6 / 13.4 | 16.0 / 3.2 / 13.2 | 14.9 / 2.3 / 12.6 | 19.9 / 3.0 / 17.6 | 21.4 / 4.8 / 19.3 | 3.1 / 0.2 / 3.0 | | | PISCES | 28.3 / 8.8 / 23.4 | 27.3 / 9.3 / 22.2 | 23.2 / 5.5 / 18.5 | 29.8 / 8.2 / 25.7 | 30.8 / 11.3 / 26.7 | 5.3 / 0.8 / 5.0 | | Fr⇒Hi Hi⇒Fr Hi⇒Zh Zh⇒Hi PISCES 21.4 / 69.1 26.1 / 72.9 26.1 / 70.4 20.3 / **68.5** w/o TS 20.7 / 68.6 25.2 / 72.8 25.1 / 69.9 19.5 / 67.9 w/o CL 20.6 / 68.8 25.2 / **72.9** 25.3 / 70.0 19.5 / 67.8 w/o TS & CL 19.6 / 68.1 23.6 / 72.1 24.0 / 69.1 18.1 / 66.9 Hi⇒Th Th⇒Hi Zh⇒Th Th⇒Zh PISCES 29.1 / 68.5 21.4 / 69.0 29.9 / 68.9 27.0 / **71.0** w/o TS 28.2 / 68.1 20.3 / 68.3 28.7 / 68.3 25.8 / 70.3 w/o CL 28.0 / 68.0 20.3 / 68.4 29.0 / 68.5 26.0 / 70.4 w/o TS & CL 26.7 / 67.4 18.8 / 67.4 27.8 / 67.6 25.0 / 69.4 30% of the summaries generated by neural models contain this error. We confirm that CLS also involves the faithfulness error. Future work could give deeper and more fine-grained analyses of this error type. The issue of foreign words could also refer to the code-switching phenomenon (Pfaff, 1979). Note that the generated foreign words are not limited in the source language. In several cases, the generated Chinese summaries of the given English documents even involve Thai words. We also find the semantics of these foreign words are typically coherent with their context. This error type might be caused by the cross-lingual pre-training (which bridges the representation gap of parallel words in different languages) in PISCES. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
liu-etal-2023-improving
On Improving Summarization Factual Consistency from Natural Language Feedback
https://aclanthology.org/2023.acl-long.844
Despite the recent progress in language generation models, their outputs may not always meet user expectations. In this work, we study whether informational feedback in natural language can be leveraged to improve generation quality and user preference alignment. To this end, we consider factual consistency in summarization, the quality that the summary should only contain information supported by the input documents, as the user-expected preference. We collect a high-quality dataset, DeFacto, containing human demonstrations and informational natural language feedback consisting of corrective instructions, edited summaries, and explanations with respect to the factual consistency of the summary. Using our dataset, we study three natural language generation tasks: (1) editing a summary by following the human feedback, (2) generating human feedback for editing the original summary, and (3) revising the initial summary to correct factual errors by generating both the human feedback and edited summary. We show that DeFacto can provide factually consistent human-edited summaries and further insights into summarization factual consistency thanks to its informational natural language feedback. We further demonstrate that fine-tuned language models can leverage our dataset to improve the summary factual consistency, while large language models lack the zero-shot learning ability in our proposed tasks that require controllable text generation.
# On Improving Summarization Factual Consistency From Natural Language Feedback Yixin Liu∗1, Budhaditya Deb2**, Milagro Teruel**2, Aaron Halfaker2, Dragomir Radev1**, Ahmed H. Awadallah**2 1Yale University, 2Microsoft Research {yixin.liu, dragomir.radev}@yale.edu, {Budha.Deb, hassanam}@microsoft.com ## Abstract ![0_Image_0.Png](0_Image_0.Png) Despite the recent progress in language generation models, their outputs may not always meet user expectations. In this work, we study whether informational feedback in natural language can be leveraged to improve generation quality and user preference alignment. To this end, we consider *factual consistency* in summarization, the quality that the summary should only contain information supported by the input documents, as the user-expected preference. We collect a high-quality dataset, **DeFacto**, containing human demonstrations and informational natural language feedback consisting of corrective instructions, edited summaries, and explanations with respect to the factual consistency of the summary. Using our dataset, we study three natural language generation tasks: (1) *editing a summary* by following the human feedback, (2) *generating human feedback* for editing the original summary, and (3) revising the initial summary to correct factual errors by generating both the human feedback and edited summary. We show that DeFacto can provide factually consistent human-edited summaries and further insights into summarization factual consistency thanks to its informational natural language feedback. We further demonstrate that fine-tuned language models can leverage our dataset to improve the summary factual consistency, while large language models lack the zero-shot learning ability in our proposed tasks that require controllable text generation. ## 1 Introduction While recent natural language generation (NLG) models (Radford et al., 2019; Lewis et al., 2020; Raffel et al., 2020; Brown et al., 2020) have made significant progress on the generation quality, they cannot always generate outputs that meet the user needs. For example, while state-of-the-art summarization systems can generate fluent and relevant ∗Most of the work was done while the first author was an intern at Microsoft Research. Figure 1: Three NLG tasks studied using our dataset. The *Editing* model aims to improve the initial systemgenerated summary given human feedback. The *Critic* model aims to predict human feedback according to a user-required quality. The *Editor* model aims to automatically correct factual errors by predicting both the human feedback and edited summary. summaries, recent work (Goyal and Durrett, 2021; Tang et al., 2022) have shown that they still make errors on fine-grained qualities such as *factual consistency*. 1 These errors can lead to serious risks to the intended users and make it difficult for them to trust the systems for their decision-making. Such failures in satisfying the user needs have an *intrinsic* reason - the large benchmark datasets that are used to train NLG models are usually not collected according to pre-defined user needs, which results in a discrepancy between **model** behaviors and **user expectations**. For example, XSum (Narayan et al., 2018), one of the most commonly used summarization datasets, contains a large portion of reference summaries with *hallucinations*. 2 As a result, summarization models trained on XSum dataset generate many non-factual 1Following Goyal and Durrett (2021), we define factual consistency as the summary quality that *all the information of* the summary can be supported by the source document. 2Maynez et al. (2020) reports that around 76.9% reference summaries on the XSum dataset contains hallucinated contents that are not supported by the source documents. 15144 ![1_image_0.png](1_image_0.png) 2. Categorize Errors 3. Give Explanation 4. Provide Evidence 5. Write Corrective Instructions 6. Correct Summary contents, more than models trained on datasets such as CNN/DailyMail (Hermann et al., 2015) dataset (Goyal and Durrett, 2021). Unfortunately, it can be prohibitively expensive to collect new, large-enough datasets to train NLG models according to user needs, as they can be diverse, personal, and ever-changing over time. Instead of aligning an existing NLG model to a specific user need, we explore adjusting model outputs according to the user needs through **human** demonstrations and feedback. Specifically, we investigate three scenarios (Fig. 1): (1) an *Editing* model that aligns initial system outputs to human demonstrations based on the user feedback; (2) a Critic model that predicts user feedback of initial system outputs according to the user requirements; (3) an *Editor* model that automatically aligns the initial system outputs to user needs by predicting both the user feedback and edited summary. We choose *factual consistency* of systemgenerated summaries as the *user-required quality* to study the aforementioned application scenarios. To this end, we collect a high-quality, informational dataset containing human demonstrations and feedback. Specifically, the annotators are presented with initial system-generated summaries and asked to make changes to the summaries to make them factually consistent if they find errors in them. Apart from the **human-edited, factually consistent summaries**, the annotators are also required to provide **instructions** on how to change the initial summaries (i.e., if they find errors in them) and explanation on why the initial summaries are factually consistent or not. An example of our dataset is shown in Fig. 2. Using the collected dataset, we show that (1) the *Editing* model can effectively leverage human feedback to adjust the initial sys- Atomic Content Units (ACUs) ACU Writing tem outputs towards human demonstrations; (2) the Critic model can learn to generate meaningful feedback that can be used by the *Editing* model; (3) the Editor model can automatically correct factuality errors without explicit human intervention. Moreover, we find that the *Editor* model achieves better performance than the baseline model that only generates the edited summary, which indicates that natural language feedback can be beneficial for training models for the corresponding task. Our contributions can be briefly summarized as: (1) we collect **DeFacto**, 3a *high-quality dataset* containing human Demonstrations and Feedback for improving factual consistency of text summarization; (2) we conduct comprehensive analyses on the collected dataset, which provides further insights about factual consistency in text summarization, such as the relation between the type of factual errors and the type of editing operations; (3) we provide strong baseline models for the proposed three NLG tasks - summary editing (*Editing* model), feedback generation (*Critic* model), and automatic factuality error correction with feedback prediction (*Editor* model), which illustrates methods of leveraging natural language feedback for aligning model outputs with user expectations. (4) we present two case studies with large language models (LLMs) such as GPT-3.5 (Ouyang et al., 2022b), showing that LLMs still lack the *controllable* text generation ability in our proposed tasks. Chelsea weren't awarded a penalty. The clash occurred inside the box. David Ospina clashed with Oscar. Oscar is Brazilian. David Ospina clattered Oscar. Oscar was taken off at half time. David Ospina plays for Arsenal. Didier Drogba replaced Oscar. David Ospina is a goalkeeper. ## 2 The Defacto **Dataset** Our dataset, DEFACTO, contains human demonstrations and feedback w.r.t. the factual consistency of system-generated summaries. We choose **XSum** 3We make the **DeFacto** dataset publicly available at https: //github.com/microsoft/DeFacto. dataset as the target dataset to conduct the data collection because it is the most commonly studied dataset for summarization factual consistency. For the system-generated summaries, we select **PEGASUS** (Zhang et al., 2020), a top-performing summarization model to generate summaries on both the validation and test set of the XSum dataset. ## 2.1 Annotation Process Our annotation process follows the following steps: (1) **Detect errors**: The annotator is required to evaluate a summary given the source document and decide if the summary is factually consistent. (2) **Categorize errors**: If the annotator decides the summary *is not* factually consistent, they are required to **categorize the factual errors** in the summary as either intrinsic or *extrinsic*. 4 We note that both error detection and categorization are defined at the summary level. (3) **Give explanation**: The annotator is required to provide a natural language explanation on why the summary is factually consistent or not. (4) **Provide evidence**: The annotator is required to select a sentence from the source document as evidence to support their claims described in (3). (5) **Write corrective instruction**: The annotator is required to **provide instructions** of how to correct the original summary if they think it is not factually consistent. To enforce uniformity and reduce the noise in the instructions, we provide six templates for the annotators corresponding to different operations: Remove, Add, Replace, Modify, *Rewrite*, and *Others*. The annotators need to fill in the templates to generate the instructions. The details of the templates are in Appendix A.1. (6) **Correct summary**: Following the instruction in (5), the annotator is required to **edit the initial** summary to make it *factually consistent* with minimal, necessary modifications. We provide annotated examples in Appendix A.2. ## 2.2 Data Collection We conduct our data collection on Amazon Mechanical Turk5(MTurk) platform. The MTurk annotators need to pass a qualification test to be able to accept our assignments. The qualification test ![2_image_0.png](2_image_0.png) includes three actual annotation tasks, and we manually checked the correctness of the answers of the annotators and assigned them scores accordingly. For the actual tasks, we collected one annotation for each example (i.e., a document-summary pair), and collected around 1000 examples on the test set and 1500 examples on the validation set. To estimate the inter-annotator agreement, we additionally collect two more annotations for 100 examples on the test set. We require the annotators to be located in the United States. Depending on the difficulty of the assignments, the annotators are compensated with 1.2 - 2.0 US dollars per assignment accordingly based on a $12/hour pay rate. To check the inter-annotator agreement on steps (1) *Detect Errors* and (2) *Categorize Errors* in §2.1, we calculated the Krippendorff's alpha (Krippendorff, 2011), and found that the agreement score is 0.5552, 0.1899, 0.5260 for if the summary contains *extrinsic* factual errors, *intrinsic* factual errors and any factual errors respectively. For humanwritten explanation, *instructions*, and *edited summary* in step (3), (5), (6) in §2.1, we calculated the ROUGE (Lin, 2004) score among the answers provided by different annotators, and found the average ROUGE-1 F-score to be 30.52, 50.96, 71.77, respectively. Lastly, for the evidence in step (4), we consider two sentences as equivalent if the ROUGE1 score between them is above 90. We found the match rate among different annotators to be 0.4403. ## 3 Defacto **Analysis** With the collected annotations, we further split the data collected on the validation set of XSum dataset into a training set and a validation set for the following experiments. We perform data analyses with different aspects of the collected dataset. The basic dataset statistics are in Tab. 1. Out of all the examples, **71.1%** of them contain at least one factual error, **58.8%** of them contain *extrinsic errors*, 22.0% of them contain *intrinsic errors*, and **9.63%** of them contain *both* types of errors. | DAE | QAFactEval | | |---------------|--------------|-------| | Reference | 0.6176 | 1.549 | | System-output | 0.6904 | 1.826 | | Human-edited | 0.8975 | 2.540 | | R1 | R2 | RL | | |-----------------|-------|-------|-------| | Ref. v.s. Sys. | 48.01 | 25.54 | 40.45 | | Ref. v.s. Human | 40.30 | 18.22 | 33.86 | | Sys. v.s. Human | 75.79 | 66.17 | 74.89 | ## 3.1 Edited Summary For the edited summaries written by the annotators, we evaluate (1) their factual consistency; (2) their textual similarity with either the reference summaries or the initial outputs; (3) other aspects of their intrinsic quality (Grusky et al., 2018; Bommasani and Cardie, 2020). Factual Consistency To evaluate the factual consistency, we use two automatic metrics, DAE (Goyal and Durrett, 2020) and QAFactEval (Fabbri et al., 2022a), which achieve strong performance on the XSum dataset (Tang et al., 2022).6 The results are in Tab. 2, showing that the human-edited summaries are more factually consistent than both the reference summaries and initial system outputs. Text Similarity For textual similarity, we compare human-edited summaries against both the reference summaries and initial system outputs in Tab. 3. We note two observations: (1) There is a high-degree similarity between the initial system outputs and human-edited summaries, indicating that the annotators only made small changes to the initial outputs. (2) Compared with the initial system outputs, the human-edited summaries have lower similarity with the reference summaries, which suggests that the reference summaries and initial system outputs may share similar factual errors, leading to higher textual similarity. ![3_image_0.png](3_image_0.png) | Coverage | Novelty | Compression | | |------------|-----------|---------------|-------| | Ref. | 0.633 | 0.851 | 14.82 | | Sys. | 0.699 | 0.788 | 17.84 | | Human | 0.787 | 0.703 | 20.61 | Intrinsic Evaluation We evaluate two *intrinsic* summary qualities: the *compression rate* (Grusky et al., 2018) and the *abstractiveness* (Bommasani and Cardie, 2020). In particular, *compression rate* measures the length difference between the input text and the summary. And to evaluate *abstractiveness* we use two features, (1) Extractive Fragment Coverage (Grusky et al., 2018), which measures the extent to which the summary can be "copied" from the input text, (2) Novelty, which measures the ratio of words in the summary that are not in the input text.7 The statistics in Tab. 4 suggest that the human-edited summaries are less abstractive than the initial system outputs and reference summaries. This finding is coherent with Xiao et al. (2022) which found that there exists a tradeoff between faithfulness and abstractiveness. However, we note that the decrease of abstractiveness can result from removing non-factual information from the summary, which is the most common operation for correct *extrinsic* errors, as we will show next. ## 3.2 Instructions The annotators need to provide instructions on how to make changes to the initial system outputs to correct factual errors. We find that the editing can take more than one instruction and the average number of instructions is 1.52. We show the distribution of the number of instructions in Appendix C.2. As for the distribution of instruction types (Fig. 3), we found that *removing information* and *replacing* information to be the most frequent operations. Interestingly, fine-grained analysis in Fig. 3 shows that *extrinsic* errors are more likely to be corrected by the *replacing operation* while *intrinsic* errors can require more diverse types of operations. ## 4 Summary Editing For Improving Factual Consistency With DEFACTO, we propose a new NLG task: editing the initial summary based on human feedback. ## 4.1 Methods We formulate the summary editing task as a sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) problem. Specifically, a Seq2Seq model g learns a mapping from an input sequence X to a target output sequence Y : Y ← g(X). For this specific task, the input X has three components: input document, initial system-generated summary and *human feedback*, while the target output is the *human-edited summary* (Fig. 1). The human feedback consists of the *instructions* and *explanation*. To concatenate the different components of the input sequence, a short "*prompt*" is appended at the beginning of each component, then the entire input sequence becomes: "Article: *input document*; Candidate: *initial system-generated summary*; Instruction: *human instructions*; Explanation: *human* explanation". While recent work (Sanh et al., 2022; Bach et al., 2022) has shown that prompt design can affect the model performance, for simplicity we use simple text snippets for the baseline models. We instantiate the Seq2Seq model using a family of pre-trained Encoder-Decoder models, T5 (Raffel et al., 2020) and T0 (Sanh et al., 2022), which are widely used for transfer learning and few-shot learning where the data is scarce. To achieve better performance, the model is fine-tuned on the training set of DEFACTO using Maximum Likelihood Estimation (MLE) under the training paradigm of teacher forcing (Williams and Zipser, 1989). We note that we only used the subset of data in which | R1 | R2 | RL | DAE | QFE | | |---------|-------|-------|-------|-------|-------| | Sys. | 75.98 | 66.32 | 75.05 | 0.704 | 1.837 | | Human. | 100 | 100 | 100 | 0.905 | 2.550 | | D+S | 77.04 | 67.96 | 76.03 | 0.835 | 2.248 | | S+I | 87.48 | 81.72 | 86.16 | 0.857 | 2.289 | | D+S+I | 88.74 | 83.16 | 87.48 | 0.904 | 2.470 | | D+S+E | 81.83 | 74.10 | 80.36 | 0.899 | 2.437 | | D+S+I+E | 89.22 | 83.64 | 87.92 | 0.911 | 2.465 | the initial system output contains factual errors. ## 4.2 Experiments Implementation Details To initialize the *Editing* model, we use T5-3B and two variants of T0 models, T0-3B and T0pp.8 We compare the model performance with different variants of input (e.g., with or without the human-written explanation). To evaluate the quality of the model output, we focus on two aspects: *textual similarity* with the human-edited summary, as evaluated by ROUGE (Lin, 2004), and *factual consistency* with the input document, as evaluated by DAE (Goyal and Durrett, 2020) and QAFactEval (QFE) (Fabbri et al., 2022a). The checkpoints are selected based on their performance on the validation set. Experimental Results Tab. 5 shows the performance of fine-tuned T0pp with different input variants. We note the following observations: (1) Compared with the initial system-generated summaries, the *Editing* model is able to generate summaries more similar to the human-edited summaries and more factually consistent with the input document. (2) Both the human-written instructions and explanation can provide meaningful guidance to the Editing model, and the model with both of them as input (the **D+S+I+E** variant) achieves the best performance. (3) Without the input document, the Editing model (the S+I variant) can still improve 8T5-3B (https://huggingface.co/t5-3b), T0-3B (https://huggingface.co/bigscience/T0_3B), and T0pp (https://huggingface.co/bigscience/T0pp) have around 3, 3, and 11 billion parameters respectively. | T0pp | T0-3B | T5-3B | | | | | | | | | | | |---------|---------|---------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | R1 | R2 | DAE | QFE | R1 | R2 | DAE | QFE | R1 | R2 | DAE | QFE | | | D+S | 77.04 | 67.96 | 0.835 | 2.248 | 76.10 | 66.66 | 0.821 | 2.168 | 75.99 | 66.35 | 0.784 | 2.063 | | S+I | 87.48 | 81.72 | 0.857 | 2.289 | 87.30 | 81.00 | 0.852 | 2.263 | 87.59 | 81.50 | 0.844 | 2.237 | | D+S+I | 88.74 | 83.16 | 0.904 | 2.470 | 88.36 | 82.16 | 0.894 | 2.489 | 86.42 | 80.56 | 0.876 | 2.411 | | D+S+E | 81.83 | 74.10 | 0.899 | 2.437 | 79.85 | 71.41 | 0.902 | 2.510 | 79.09 | 71.20 | 0.877 | 2.373 | | D+S+I+E | 89.22 | 83.64 | 0.911 | 2.465 | 88.69 | 82.44 | 0.899 | 2.477 | 87.03 | 80.77 | 0.865 | 2.375 | the initial system-generated summaries by following the instructions. However, taking the input document as part of the input helps the model (the D+S+I variant) to achieve better performance, especially for better factual consistency. In Tab. 6, we compare the performance of T0pp, T0-3B and T5-3B with different kinds of inputs. We found that (1) the findings on T0pp (Tab. 5) are generalizable to T0-3B and T5-3B with few exceptions. (2) T0pp outperforms T0-3B across different input variants according to different automatic metrics except for the QAFactEval metric. (3) T0-3B generally outperforms T5-3B, likely thanks to the pre-training of T0 which is designed for performing zero-shot learning with instructions. Human Evaluation We conduct a human evaluation on the quality of model-edited summaries. We ask the annotators two questions: (1) Are the generated summaries more factually consistent than the original summaries (yes/no); (2) Do the generated summaries follow the instructions (yes/partly/no). We randomly sampled 100 examples from the test set, and have each generated summary annotated by three MTurk annotators. The generated summaries are from the trained checkpoint of T0pp with the input containing input document, initial systemgenerated summaries, and human-written instructions. Under major voting (with ties ignored), we found that 97% of model-edited summaries are more factually consistent than the original system outputs, and 91% of them follow the provided human-written instructions. ## 4.3 Case Study Of Llm Summary Editing As a case study, we evaluate the zero-shot learning ability of GPT-3.59for summary editing. We apply it to two settings, (1) editing without instructions and (2) editing by following instructions, in a zero9OpenAI's gpt-3.5-turbo-0301: https://platform. openai.com/docs/models/gpt-3-5. shot learning manner.10 The results in Tab. 7 show that (1) GPT-3.5 is able to leverage the editing instructions; (2) Compared with the fine-tuned model (T0pp), GPT-3.5 can generate edited summaries with higher factual consistency but it is worse at maintaining the content similarity with the original summary, which suggests that it still struggles with controllable text generation. | Model | Input | R1 | R2 | DAE | QFE | |---------|---------|-------|-------|-------|-------| | Sys. | - | 75.98 | 66.32 | 0.704 | 1.837 | | Human. | - | 100 | 100 | 0.905 | 2.550 | | T0pp | D+S | 77.04 | 67.96 | 0.835 | 2.248 | | T0pp | D+S+I | 88.74 | 83.16 | 0.904 | 2.470 | | GPT-3.5 | D+S | 36.75 | 21.98 | 0.892 | 2.351 | | GPT-3.5 | D+S+I | 72.22 | 60.53 | 0.910 | 2.651 | ## 5 Generating Feedback For Improving Factual Consistency We investigate if it is possible to train a model to generate feedback from a given document and summary pair to correct factual errors, and we name the subsequent model as a *Critic* model. ## 5.1 Methods Similarly to §4.1, we formulate the *Critic* model as a Seq2Seq model. The input sequence is a concatenation of the *input document* and the initial system-generated summary while the target output is the *human-written instructions* (Fig. 1).11 We use 10The prompts we used can be found in Appendix D.2. 11While it is also possible to require the *Critic* model to generate the explanation, we choose human-written instructions as the target output as it works better at helping the *Editing* model to improve the initial outputs (§4.2). Method Critic R1 R2 DAE QFE Sys. - 75.98 66.32 0.704 1.837 Human. - 100 100 0.905 2.550 D+S - 77.04 67.96 0.835 2.248 D+S+I - 88.74 83.16 0.904 2.470 D+S+I∗ T0pp 75.10 65.15 0.859 2.296 D+S+I∗ T0-3B 73.01 62.15 0.859 2.278 T0 as the startpoint to fine-tune the *Critic* model with MLE training on the subset of DEFACTO in which the initial summary contains factual errors. ## 5.2 Experiments Experimental Results Tab. 8 shows the textual similarity between the instructions generated by the *Critic* model and the human-written instructions. To have a more intuitive understanding of the model performance, in Tab. 9 we evaluate the performance of the *Editing* model with the instructions generated by the *Critic* model. We found that (1) While the generated instructions cannot work as well as the human-written instructions, they are helpful to the *Editing* model to improve the factual consistency of the initial system-generated summaries. (2) Compared with the *Editing* model that only takes the input document and initial summary as input (D+S), the *Editing* model (**D+S+I***) that also uses the generated instructions achieves better performance with respect to factual consistency, but its outputs have lower textual similarity with the human-edited summaries. It indicates that the *Critic* model can generate useful instructions. Meanwhile, the lower textual similarity may result from the fact that there can be more than one way to correct the factual errors,12 and the *Critic* model can generate instructions for a way of correction different from the human-edited summary. Human Evaluation To further evaluate the quality of model-generated instructions, we ask human annotators two questions: (1) Are the generated instructions equivalent to human-written instructions (yes/partly/no); (2) Are the generated instructions useful for correcting the factual errors (yes/partly/no). Similar to §4.2, we randomly sampled 100 examples from the test set, and have each generated instruction annotated by three MTurk annotators. For the first question, we found that the annotators think 24% of the generated instructions are *exactly* equivalent to the human-written instructions while 45% of them are *partly* equivalent. For the second question, we found that 39% of the generated instructions are useful for correcting factual errors while 31% of them are partly useful. As a result, we found that it is easier for the *Critic* model to generate useful instructions than generating instructions that are equivalent to human-written instructions. We hypothesize this is because there can be more than one way to edit the initial summary therefore the human-written instructions represent only one acceptable solution. | Method | Critic | R1 | R2 | DAE | QFE | |----------|----------|-------|-------|-------|-------| | Sys. | - | 75.98 | 66.32 | 0.704 | 1.837 | | Human. | - | 100 | 100 | 0.905 | 2.550 | | D+S+I∗ | T0pp | 75.10 | 65.15 | 0.859 | 2.296 | | D+S+I∗ | GPT-3.5 | 60.48 | 48.18 | 0.868 | 2.566 | | D+S+I∗ | GPT-4 | 63.60 | 51.15 | 0.860 | 2.604 | ## 5.3 Case Study Of Llm Critic As a case study, we evaluate the zero-shot learning ability of GPT-3.5 and GPT-413 for instruction generation. The results in Tab. 10 show that, compared with fine-tuned models, instructions generated by both GPT-3.5 and GPT-4 lead the editing model to generate summaries that are more factual but less similar to the original summaries. This finding shows a similar trend as in §4.3, that LLMs in a zero-shot learning setting lack the ability of con- | Rouge1 | Rouge2 | RougeL | | |----------|----------|----------|-------| | T0pp | 52.55 | 37.41 | 51.00 | | T0-3B | 51.70 | 36.56 | 50.33 | Method R1 R2 DAE QFE Sys. 75.98 66.32 0.704 1.837 Human. 100 100 0.905 2.550 T0pp Editing 77.04 67.96 0.835 2.248 EditorI 78.01 **69.01** 0.804 2.108 EditorE **78.46** 68.70 **0.867 2.309** T0-3B Editing 76.10 66.66 0.821 2.168 EditorI **77.40 68.29** 0.808 2.112 EditorE 77.27 67.92 **0.838 2.241** T5-3B Editing 75.99 66.35 0.784 2.063 EditorI **77.06 67.86 0.804** 2.106 EditorE 76.82 67.42 0.796 **2.114** trollable text generation. For example, GPT-3.5 responded with "No editing instructions needed" 23.9% of the time, despite being directly instructed to edit a factually inconsistent summary.14 ## 6 Summary Editor With Feedback Generation And Editing We define the third NLG task as to predict both the human feedback and the *edited summary* given the input document and initial system-generated summary. We name this model the *Editor* model because it needs to both evaluate the initial summary and make edits according to its own assessments. ## 6.1 Correcting Known Factual Errors Similar to §4.1 and §5.1, we fine-tuned the pretrained T0 and T5 models for our experiments. The two parts of the target output, the human feedback, and the edited summary are indicated by textual tags as specified in §4.1. We investigate two specific scenarios: (1) generating both the *instructions* and the edited summary; (2) generating both the explanation and the edited summary. We present the experimental results in Tab. 11. Compared with the *Editing* model that takes only the input document and the initial system-generated summary as the input, the *Editor* models have better performance in textual similarity, and the one that generates the explanations also achieves higher 14Prompts and more details are in Appendix D.3. System R1 R2 RL DAE QFE Pegasus 47.35 24.61 39.59 0.763 2.029 Human 41.94 19.49 34.97 0.905 2.550 CCGS 45.11 21.06 36.60 0.760 1.847 CLIFF 46.40 23.38 38.38 0.780 2.068 ReDRESS 43.50 19.77 35.28 0.830 2.065 FactPegasus 38.95 15.99 31.68 **0.882** 1.941 CompEdit 42.69 19.06 34.73 0.850 2.113 Editor 45.14 22.27 37.89 0.833 **2.250** factual consistency. The results suggest that learning to predict related information of a target generation task can be beneficial to the performance of language generation models, echoing the recent findings in chain-of-thought prompting (Wei et al., 2022; Huang et al., 2022; Jung et al., 2022). ## 6.2 Detecting And Correcting Factual Errors While in the previous experiments the models are trained to edit the initial system outputs with known factual errors, the *Editor* model can also be used on *arbitrary* system outputs where it is required to edit the initial output *only* when it identifies factual errors in it. To this end, we use the entire DEFACTO in this experiment with the following modifications to the target output: (1) the target summary is set to the original system output when it contains no factual errors, and to the human-edited summary otherwise; (2) only explanations are used as part of the target output because it is always available. We fine-tune T0-3B in this experiment and compare its results with several recently introduced summarization systems that are specifically trained to improve the summary factual consistency: (1) CCGS (Chen et al., 2021), (2) CLIFF (Cao and Wang, 2021), (3) ReDRESS (Adams et al., 2022), (4) FactPegasus (Wan and Bansal, 2022), (5) CompEdit (Fabbri et al., 2022b). More details about these systems can be found in Appendix D.1. The results in Tab. 12 show that the *Editor* can achieve competitive performance compared with the baseline systems and yield a balanced performance between the *content similarity* with the reference summary and the *factuality consistency*. Since the *Editor* model is trained on much fewer data than the other systems, its strong performance indicates the effectiveness of utilizing human demonstrations and feedback for improving factual consistency. ## 7 Related Work Factual Consistency in Text Summarization Factual consistency is an important quality of text summarization systems (Kryscinski et al., 2020; Maynez et al., 2020; Fabbri et al., 2021). Related work has proposed various methods of improving the factual consistency of summaries by (1) training abstractive summarization models with factuality metrics (Goyal and Durrett, 2021; Cao et al., 2022), (2) introducing new training objectives and multi-task learning for model training (Cao and Wang, 2021; Zhu et al., 2021; Aralikatte et al., 2021; Zhang et al., 2022; Xu and Zhao, 2022), (3) post-editing or re-ranking the initially generated summaries to improve the factual consistency (Cao et al., 2020; Chen et al., 2021; Balachandran et al., 2022; Fabbri et al., 2022b), (4) designing factualityaware pre-training (Wan and Bansal, 2022). To facilitate the evaluation of summarization models and automatic factuality metrics that evaluate the factual consistency of summaries, various benchmark datasets have been collected by the related work (Kryscinski et al., 2020; Wang et al., 2020; Huang et al., 2020; Fabbri et al., 2021; Goyal and Durrett, 2021; Pagnoni et al., 2021). In these benchmarks, system-generated summaries are evaluated by human annotators with either numerical quality scores, binary labels, or binary labels with fine-grained error taxonomies. In contrast, our dataset contains more detailed human feedback with *natural language descriptions* and provides error-free, human-edited summaries. Neural Text Editing Neural text editing models (Malmi et al., 2022) are suitable for application scenarios where there is a significant textual overlap between the input and output sequences (Awasthi et al., 2019; Malmi et al., 2019; Stahlberg and Kumar, 2020; Mallinson et al., 2020; Reid and Zhong, 2021; Mallinson et al., 2022), such as grammatical error correction, text simplification, and text style transfer. Instead of autoregressive generation, text editing can also be achieved by predicting and performing edit operations (Stahlberg and Kumar, 2020; Mallinson et al., 2020) or through non-autoregressive text generation (Gu et al., 2019; Agrawal and Carpuat, 2022). Unlike most of the related work, we propose a text editing task that requires the editing models to follow the editing instructions. Faltings et al. (2021) introduces a similar dataset as ours containing single-sentence edits and the associated natural language commands crawled from Wikipedia. However, our dataset is different from theirs as we define a specific target quality, summary factual consistency, for the text edits and instructions. Improving Neural Models through Human Feedback Leveraging human feedback to improve neural models has become a recent research focus. InstructGPT3 (Ouyang et al., 2022a) use human feedback to improve initial predictions from a GPT3 model for better user preference alignments. Madaan et al. (2021) propose the interactive MERCURIE system, where users interactively correct the explanations generated by a reasoning system. In Xu et al. (2022), a generic chatbot is continuously trained using various forms of human feedback including natural language comments. Schick et al. (2022) propose a collaborative language model, PEER, which imitates a draft writing process and interactively refines a language generation task through human feedback. For text summarization, prior works (Stiennon et al., 2020; Wu et al., 2021; Nguyen et al., 2022; Scheurer et al., 2022) have studied training summarization models through human feedback in the form of numerical scores of summary quality and thus different from natural language feedback used in our work. ## 8 Conclusions Using summary factual consistency as a target quality, we study improving text generation with human demonstrations and feedback. We demonstrate the usages of human feedback in three proposed NLG tasks using the collected dataset, DEFACTO, and show that human feedback can be used to improve summary factual consistency. We believe that our proposed tasks can be extended to other important text qualities beyond factual consistency, and utilizing natural language feedback for improving text generation can be a promising path for future work. ## Acknowledgements We thank the anonymous reviewers for their valuable feedback and helpful suggestions. ## Limitations The annotation task we proposed in this work, i.e., detecting factual errors in summaries and providing human demonstrations and feedback for correcting the identified errors, can be complicated and timeconsuming. During our recruiting phase for MTurk annotators, we found that the ratio of annotators who were qualified after finishing the qualification test was relatively low. Therefore, it can be difficult to scale up the annotated dataset given the time and budget limitations. As a result, our dataset is of a relatively small scale and we only used one summarization dataset (XSum) and one base summarization model (Pegasus). In this work, we view summary factual consistency as an example of user-expected quality to study leveraging natural language feedback for aligning system outputs with user preferences. However, user preferences can be diverse and personal and some user-expected output quality will be less well-defined and objective than summary factual consistency, which further increases the difficulty and ambiguity of data annotation and model evaluation. Therefore, it can be challenging to directly apply the methods we proposed in this work to such subjective quality aspects, and we leave it for future work to explore generalizing our methods to more diverse user expectations and preferences. ## References Griffin Adams, Han-Chin Shing, Qing Sun, Christopher Winestock, Kathleen McKeown, and Noémie Elhadad. 2022. Learning to revise references for faithful summarization. arXiv preprint arXiv:2204.10290. Sweta Agrawal and Marine Carpuat. 2022. An imitation learning curriculum for text editing with nonautoregressive models. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7550– 7563, Dublin, Ireland. Association for Computational Linguistics. Rahul Aralikatte, Shashi Narayan, Joshua Maynez, Sascha Rothe, and Ryan McDonald. 2021. Focus attention: Promoting faithfulness and diversity in summarization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6078–6095, Online. Association for Computational Linguistics. Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Par- allel iterative edit models for local sequence transduction. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4260–4270, Hong Kong, China. Association for Computational Linguistics. Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-david, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Fries, Maged Alshaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-jian Jiang, and Alexander Rush. 2022. PromptSource: An integrated development environment and repository for natural language prompts. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics: System Demonstrations, pages 93–104, Dublin, Ireland. Association for Computational Linguistics. Vidhisha Balachandran, Hannaneh Hajishirzi, William W. Cohen, and Yulia Tsvetkov. 2022. Correcting diverse factual errors in abstractive summarization via post-editing and language model infilling. Rishi Bommasani and Claire Cardie. 2020. Intrinsic evaluation of summarization datasets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8075–8096, Online. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Meng Cao, Yue Dong, and Jackie Cheung. 2022. Hallucinated but factual! inspecting the factuality of hallucinations in abstractive summarization. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 3340–3354, Dublin, Ireland. Association for Computational Linguistics. Meng Cao, Yue Dong, Jiapeng Wu, and Jackie Chi Kit Cheung. 2020. Factual error correction for abstractive summarization models. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6251–6258, Online. Association for Computational Linguistics. Shuyang Cao and Lu Wang. 2021. CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6633–6649, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Sihao Chen, Fan Zhang, Kazoo Sone, and Dan Roth. 2021. Improving faithfulness in abstractive summarization with contrast candidate generation and selection. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5935–5941, Online. Association for Computational Linguistics. Alexander Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022a. QAFactEval: Improved QAbased factual consistency evaluation for summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2587–2601, Seattle, United States. Association for Computational Linguistics. Alexander R. Fabbri, Prafulla Kumar Choubey, Jesse Vig, Chien-Sheng Wu, and Caiming Xiong. 2022b. Improving factual consistency in summarization with compression-based post-editing. *ArXiv*, abs/2211.06196. Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. *Transactions of the Association for* Computational Linguistics, 9:391–409. Felix Faltings, Michel Galley, Gerold Hintz, Chris Brockett, Chris Quirk, Jianfeng Gao, and Bill Dolan. 2021. Text editing by command. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5259–5274, Online. Association for Computational Linguistics. Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3592–3603, Online. Association for Computational Linguistics. Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1449–1462, Online. Association for Computational Linguistics. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In *Proceedings of the* 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719, New Orleans, Louisiana. Association for Computational Linguistics. Jiatao Gu, Changhan Wang, and Jake Zhao Junbo. 2019. Levenshtein Transformer. Curran Associates Inc., Red Hook, NY, USA. Karl Moritz Hermann, Tomáš Kociský, Edward Grefen- ˇ stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 1693–1701, Cambridge, MA, USA. MIT Press. Dandan Huang, Leyang Cui, Sen Yang, Guangsheng Bao, Kun Wang, Jun Xie, and Yue Zhang. 2020. What have we achieved on text summarization? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 446–469, Online. Association for Computational Linguistics. Fan Huang, Haewoon Kwak, and Jisun An. 2022. Chain of explanation: New prompting method to generate higher quality natural language explanation for implicit hate speech. *ArXiv*, abs/2209.04889. Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, and Yejin Choi. 2022. Maieutic prompting: Logically consistent reasoning with recursive explanations. ArXiv, abs/2205.11822. Klaus Krippendorff. 2011. Computing krippendorff's alpha-reliability. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Yiming Yang, Peter Clark, Keisuke Sakaguchi, and Eduard H. Hovy. 2021. Improving neural model performance through natural language feedback on their explanations. *CoRR*, abs/2104.08765. Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2022. Edit5: Semi-autoregressive text-editing with t5 warm-start. Jonathan Mallinson, Aliaksei Severyn, Eric Malmi, and Guillermo Garrido. 2020. FELIX: Flexible text editing through tagging and insertion. In *Findings of the* Association for Computational Linguistics: EMNLP 2020, pages 1244–1255, Online. Association for Computational Linguistics. Eric Malmi, Yue Dong, Jonathan Mallinson, Aleksandr Chuklin, Jakub Adamek, Daniil Mirylenka, Felix Stahlberg, Sebastian Krause, Shankar Kumar, and Aliaksei Severyn. 2022. Text generation with textediting models. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts*, pages 1–7, Seattle, United States. Association for Computational Linguistics. Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5054–5065, Hong Kong, China. Association for Computational Linguistics. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. Duy-Hung Nguyen, Nguyen Viet Dung Nghiem, BaoSinh Nguyen, Dung Tien Tien Le, Shahab Sabahi, Minh-Tien Nguyen, and Hung Le. 2022. Make the most of prior data: A solution for interactive text summarization with preference feedback. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1919–1930, Seattle, United States. Association for Computational Linguistics. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022a. Training language models to follow instructions with human feedback. *ArXiv*, abs/2203.02155. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022b. Training language models to follow instructions with human feedback. In *Advances in Neural Information* Processing Systems. Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 4812–4829, Online. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Machel Reid and Victor Zhong. 2021. LEWIS: Levenshtein editing for unsupervised text style transfer. In *Findings of the Association for Computational* Linguistics: ACL-IJCNLP 2021, pages 3932–3944, Online. Association for Computational Linguistics. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations. Jérémy Scheurer, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. 2022. Training language models with language feedback. Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio Petroni, Patrick Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, and Sebastian Riedel. 2022. Peer: A collaborative language model. Felix Stahlberg and Shankar Kumar. 2020. Seq2Edits: Sequence transduction using span-level edit operations. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 5147–5159, Online. Association for Computational Linguistics. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. *Advances* in Neural Information Processing Systems, 33:3008– 3021. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, page 3104–3112, Cambridge, MA, USA. MIT Press. Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yahvuz, Wojciech Kryscinski, Justin F. Rousseau, and Greg Durrett. 2022. Understanding factual errors in summarization: Errors, summarizers, datasets, error detectors. ArXiv, abs/2205.12854. David Wan and Mohit Bansal. 2022. FactPEGASUS: Factuality-aware pre-training and fine-tuning for abstractive summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1010–1028, Seattle, United States. Association for Computational Linguistics. Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008–5020, Online. Association for Computational Linguistics. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *ArXiv*, abs/2201.11903. Ronald J. Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. *Neural Comput.*, 1(2):270–280. Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. 2021. Recursively summarizing books with human feedback. Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, and Pengfei Liu. 2022. DataLab: A platform for data analysis and intervention. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 182–195, Dublin, Ireland. Association for Computational Linguistics. Jing Xu, Megan Ung, Mojtaba Komeili, Kushal Arora, Y-Lan Boureau, and Jason Weston. 2022. Learning new skills after deployment: Improving open-domain internet-driven dialogue with human feedback. Wang Xu and Tiejun Zhao. 2022. Jointly learning guidance induction and faithful summary generation via conditional variational autoencoders. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2340–2350, Seattle, United States. Association for Computational Linguistics. Haopeng Zhang, Semih Yavuz, Wojciech Kryscinski, Kazuma Hashimoto, and Yingbo Zhou. 2022. Improving the faithfulness of abstractive summarization via entity coverage control. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 528–535, Seattle, United States. Association for Computational Linguistics. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org. Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. 2021. Enhancing factual consistency of abstractive summarization. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 718–733, Online. Association for Computational Linguistics. ## A Data Collection Details We used the XSum dataset for our data collection. It is released under the Apache 2 license and contains news articles written in English. ## A.1 Instruction Templates We provide six templates for the annotators corresponding to different operations: Remove, Add, Replace, Modify, Rewrite, *Others*: (1) *Remove* the information about __ from the summary. (2) Add the information about __ to the summary. (3) *Replace* the information about __ *with* the information about __. (4) *Modify* the information about __ in the summary. (5) *Rewrite* the summary *entirely* by __. (6) *Other* instructions: __. We note that sometimes it takes more than one instruction to edit the original summary. ## A.2 Annotated Examples We provide the following annotated examples. Example 1 Original Summary: A Wirral biscuit factory is to close with the loss of 342 jobs. Explanation: Location is in Moreton (Morton?), not Wirral, and 342 families will be affected but that technically doesn't translate to 342 jobs. Instruction: Replace the information about Wirral with the information about Moreton. Replace the information about loss of 342 jobs with the information about affect 342 families. Edited Summary: A Moreton biscuit factory is to close, affecting 342 families. Example 2 Original Summary: Two teenage girls have appeared at Teesside Crown Court accused of murdering a woman in Middlesbrough. Explanation: The Teesside Crown Court was not mentioned by name, only the Youth court. The woman was found in Stephen Street and not Midlesbrough. Instruction: Replace the information about Middlesbrough with the information about Stephen Street. Replace the information about Teesside Crown Court with the information about Teesside Youth Court. Edited Summary: Two teenage girls have appeared at Teesside Youth Court accused of murdering a woman in Stephen Street. Example 3 Original Summary: Michael O'Halloran believes St Johnstone manager Tommy Wright will get the best out of him following his release by Rangers. Explanation: the first name info in summary is not found in the source text. St Johnstone info is also not mentioned in the source. Instruction: Remove the information about the first names of both people from the summary. Remove the information about Wright being the St Johnstone manager from the summary. Edited Summary: O'Halloran believes Wright will get the best out of him following his release by Rangers. ## Example 4 Original Summary: Aberdeen's Royal Concert Hall is to sell off hundreds of items of memorabilia as part of building work. Explanation: The source text doesn't state the name of the concert hall. Instruction: Replace the information about Aberdeen's Royal Concert Hall with the information about Aberdeen Performing Arts. Edited Summary: Aberdeen Performing Arts is to sell off hundreds of items of memorabilia as part of building work. ## Example 5 Original Summary: Lancashire County Council's decision to stop composting waste has been criticised as "catastrophic". Explanation: The original summary strongly implies that the decision to stop composting was "catastrophic" but the original text strongly implies that the composting program itself was a catastrophic failure versus stopping the program. Instruction: Replace the information about the stoppage of the composting program being catastrophic with the information about how the composting program was a catastrophic failure. Edited Summary: Lancashire County Council's decision to stop composting waste shows the program has been a "catastrophic" failure. Example 6 Original Summary: Alex Goode and Ollie Devoto have been called up to England's Six Nations training squad. Explanation: The source text does not mention the name of England's training squad. Instruction: Remove the information about the name of England's training squad from the summary. Edited Summary: Alex Goode and Ollie Devoto have been called up to England's training squad. ## B Factuality Metrics Setting We use two factuality metrics, DAE (Goyal and Durrett, 2020) and QAFactEval (Fabbri et al., 2022a), in our experiments. For DAE, we transfer its *dependency-level* predictions into *summarylevel* scores. Specifically, following the notation of Goyal and Durrett (2021), we use d(S) to denote the dependency-parse of a summary S. For each arc a in d(S), the DAE metric predicts a label ya representing if this dependency is entailed by the input document (ya = 1 means entailment.) Then, we define a summary-level factuality score fS using the predictions: $$f_{S}={\frac{\sum_{a}y_{a}}{|d(S)|}}.$$ $$(1)$$ . (1) ![14_image_0.png](14_image_0.png) For QAFactEval, we directly use its prediction scores since they are summary-level scores. ## C Defacto **Analysis** C.1 Intrinsic Summary Evaluation In §3.1, we use three metrics to evaluate the summary's intrinsic quality. (1) Compression rate (Grusky et al., 2018) is defined as the ratio of the number of words in the input document D and in the summary S: $$\mathrm{COMPRESSION}(D,S)={\frac{|D|}{|S|}}.$$ . (2) (2) Extractive Fragment Coverage (Grusky et al., 2018) is defined with *extractive fragments* (Grusky et al., 2018), F(*D, S*), which is a set of word sequences shared between the input document D and the summary S. Then, the Extractive Fragment Coverage is defined as $$\mathrm{COVERAGE}(D,S)={\frac{1}{|S|}}\sum_{f\in F(D,S)}|f|.$$ $$(3)$$ |f|. (3) (3) We define a word-level novelty between the input document D and the summary S as $$\mathrm{NOVELTY}=1-{\frac{|D\cap S|}{|S|}}.\qquad\qquad(4)$$ $\mathbf{r}$uctio ## C.2 Human-Written Instructions We find that it can take more than one instruction to perform the summary editing. Fig.4 shows the distribution of the number of instructions. ## D Experimental Details We use T5-3B and two variants of T0 models, T03B and T0pp in our experiments.15 For the 3B 15T5-3B (https://huggingface.co/t5-3b), T0-3B (https://huggingface.co/bigscience/T0_3B), and T0pp (https://huggingface.co/bigscience/T0pp) have around 3, 3, and 11 billion parameters respectively. models, it takes one 40GB GPU to train the model, and the training time is around 8 hours. For the 11B models, it takes eight 32GB GPUs to train the model, and the training time is around 20 hours. All the experiments converged in 50 epochs. ## D.1 Baseline Summarization Systems In §6.2, we compare the performance of *Editor* model with the following summarization systems: (1) CCGS (Chen et al., 2021), which is based on contrastive candidate generation and selection. (2) CLIFF (Cao and Wang, 2021), which is trained with contrasting learning and synthetically generated contrastive examples. (3) ReDRESS (Adams et al., 2022), which is a summary post-editor that learns to remove factual errors through contrastive learning. (4) FactPegasus (Wan and Bansal, 2022), which is pre-trained and fine-tuned with factual-consistencyaware training objectives. (5) CompEdit (Fabbri et al., 2022b), which is a compression-based post-editing model that removes the non-factual entities from the original summary by performing summary compression. $$2^{\circ}$$ ## D.2 **Setting Of Llm Case Study For Summary** Editing We use GPT-3.5 for the summary editing experiment with LLMs. To ensure stable results, we set the sampling temperature to 0. The prompt for summary editing *without* instructions is as follows: You will be given an article and a summary of the article, which is not factually consistent with the article. That is, the summary contains information that is not supported by the article. Your task is to edit the summary to make it factually consistent with the article. The correction should preserve most of the summary and only adapt it. Please only make the necessary changes to the summary. However, if you find all the information in the summary is not correct, please write a new summary of the entire article instead. The edit operations are: 1. Remove Information, 2. Add Information, 3. Replace Information, 4. Modify Information, 5. Rewrite Summary 6. Others. The summary should contain only one sentence. Please keep the style of the summary unchanged, and the length of the summary should be similar before and after your edits. Article: {{Article}} Summary: {{Summary}} Please edit the summary accordingly: The prompt for summary editing *with* instructions is as follows: You will be given an article and a summary of the article, which is not factually consistent with the article. That is, the summary contains information that is not supported by the article. You will be given instructions about how to edit the summary to make it factually consistent with the article. Your task is to follow the instructions and edit the summary accordingly. The correction should preserve most of the summary and only adapt it. Please only make necessary changes to the summary, and keep the summary length close to the original length. The summary should contain only one sentence. Article: {{Article}} Summary: {{Summary}} Instructions: {{Instruction}} Please edit the summary accordingly: ## D.3 Setting Of Llm Case Study For Instruction Generation The prompt we used for instruction generation is as follows: You will be given an article and a summary of the article, which is not factually consistent with the article. That is, the summary contains information that is not supported by the article. Your task is to generate instructions for editing the summary to make it factually consistent with the article. The correction should preserve most of the summary and only adapt it. Please only make the necessary changes to the summary. The edit operations are: 1. Remove Information, 2. Add Information, 3. Replace Information, 4. Modify Information, 5. Rewrite Summary 6. Others. Please note that "Remove Information" and "Replace Information" should be the majority of the operations. "Add Information" comes next. The summary should contain only one sentence. Please keep the style of the summary unchanged, and the length of the summary should be similar before and after the editing. ## Example: Summary: A coalition of US civil rights groups has called on Facebook to do more to protect black people from racist abuse on the social network, saying the site is "racially biased". The summary is not factually consistent because the source text does not contain the information called on Facebook to do more to protect black people, which is stated in the summary. Instructions: Replace the information about the claim of the coalition with information about better moderation on the platform. More instruction examples (summaries omitted): - "Instruction 1" - "Instruction 2" - ... Input: Article: {{Article}} Summary: {{Summary}} Please generate the editing instructions for the summary to make it factually consistent with the article: As discussed in §5.3, LLMs still lack the ability of *controllable* instruction generation. For example, GPT-3.5 responded with "No editing instructions needed" 23.9% of the time, despite being directly instructed to edit a factually inconsistent summary. Conversely, GPT-4 only made such mistakes in 1.8% of examples. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 9 ✓ A2. Did you discuss any potential risks of your work? Section 9 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2-6. ✓ B1. Did you cite the creators of artifacts you used? Section 1 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Yes. We described the license of XSum dataset in the Appendix. We will add descriptions of the license for the dataset we create upon acceptance. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No, due to time constraints, we will specify the data release license upon acceptance. Our use case is research-based and consistent with the underlying licenses. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We use standard benchmark datasets that have been widely used so we expect the risk of their containing offensive or personal information is relatively low. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? In Appendix A. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 2. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Section 4, 5, 6. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 and Appendix D ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Section 4, 5, 6, Appendix D, and we provide the hyperparameter settings in the training scripts, which is included in the supplementary material. ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We only report the result one time since we only report the baseline model results for our collected dataset. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix B. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 2. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix A and supplementary materials. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 2. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? The annotators had signed consent before they accepted the annotation asks. Due to anonymity concerns, we will release the details of the consent upon acceptance. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? We use standard benchmark datasets that have been widely used so we expect the risk of their containing offensive or personal information is relatively low. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 2.
mendelsohn-etal-2023-dogwhistles
From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models
https://aclanthology.org/2023.acl-long.845
Dogwhistles are coded expressions that simultaneously convey one meaning to a broad audience and a second, often hateful or provocative, meaning to a narrow in-group; they are deployed to evade both political repercussions and algorithmic content moderation. For example, the word {``}cosmopolitan{''} in a sentence such as {``}we need to end the cosmopolitan experiment{''} can mean {``}worldly{''} to many but also secretly mean {``}Jewish{''} to a select few. We present the first large-scale computational investigation of dogwhistles. We develop a typology of dogwhistles, curate the largest-to-date glossary of over 300 dogwhistles with rich contextual information and examples, and analyze their usage in historical U.S. politicians{'} speeches. We then assess whether a large language model (GPT-3) can identify dogwhistles and their meanings, and find that GPT-3{'}s performance varies widely across types of dogwhistles and targeted groups. Finally, we show that harmful content containing dogwhistles avoids toxicity detection, highlighting online risks presented by such coded language. This work sheds light on the theoretical and applied importance of dogwhistles in both NLP and computational social science, and provides resources to facilitate future research in modeling dogwhistles and mitigating their online harms.
## From Dogwhistles To Bullhorns: Unveiling Coded Rhetoric With Language Models Julia Mendelsohn♢ ∗ Ronan Le Bras♣ Yejin Choi♠♣ **Maarten Sap**♡♣ ♢University of Michigan School of Information ♣Allen Institute for AI ♠Paul G. Allen School of Computer Science & Engineering, University of Washington ♡Language Technologies Institute, Carnegie Mellon University \# juliame@umich.edu dogwhistles.allen.ai ## Abstract Warning: content in this paper may be upsetting or offensive to some readers. Dogwhistles are coded expressions that simultaneously convey one meaning to a broad audience and a second one, often hateful or provocative, to a narrow in-group; they are deployed to evade both political repercussions and algorithmic content moderation. For example, in the sentence "we need to end the cosmopolitan experiment," the word "*cosmopolitan*" likely means "*worldly*" to many, but secretly means "*Jewish*" to a select few. We present the first large-scale computational investigation of dogwhistles. We develop a typology of dogwhistles, curate the largest-to-date glossary of over 300 dogwhistles with rich contextual information and examples, and analyze their usage in historical U.S. politicians' speeches. We then assess whether a large language model (GPT3) can identify dogwhistles and their meanings, and find that GPT-3's performance varies widely across types of dogwhistles and targeted groups. Finally, we show that harmful content containing dogwhistles avoids toxicity detection, highlighting online risks of such coded language. This work sheds light on the theoretical and applied importance of dogwhistles in both NLP and computational social science, and provides resources for future research in modeling dogwhistles and mitigating their online harms. ## 1 Introduction The cosmopolitan elite *look down on the common affections that once bound this nation together: things like place and national feeling and* religious faith. . . The cosmopolitan agenda has driven both Left and Right. . . It's time we ended the cosmopolitan experiment and recovered the promise of the republic. –Josh Hawley (R-MO), 2019 ∗Work done while interning at the Allen Institute for AI. Figure 1: Schematic of how dogwhistles work, based ![0_image_0.png](0_image_0.png) on Henderson and McCready (2018) with the example of *cosmopolitan*. First, a speaker simultaneously communicates the dogwhistle message and their persona (identity). The in-group recovers both the message content and speaker persona, enabling them to arrive at the coded meaning (e.g. *Jewish*). The out-group only recognizes the message's content and thus interprets it literally. This literal meaning also provides the speaker with plausible deniability; if confronted, the speaker can claim that they solely intended the literal meaning. We have got this tailspin of culture, in our inner cities *in particular, of men not working and* just generations of men not even thinking about working or learning to value the culture of work. –Paul Ryan (R-WI), 2014 Cosmopolitan and *inner city* are examples of dogwhistles, expressions that "send one message to an out-group and a second (often taboo, controversial, or inflammatory) message to an in-group" (Henderson and McCready, 2018). Many listeners would believe that Hawley is simply criticizing well-traveled or worldly people, but others recognize it as an attack on the Jewish people. Similarly, many assume that Ryan is discussing issues within a geographic location, but others hear a pernicious stereotype of Black men as lazy. Crucially, Hawley and Ryan can avoid alienating the out-group by maintaining *plausible deniability*: they never explicitly say "Jewish" or "Black", so they can reject accusations of racism (Haney-López, 2014). Because dogwhistles can bolster support for par15162 ticular policies or politicians among the in-group while avoiding social or political backlash from the out-group, they are a powerful mechanism of political influence (Mendelberg, 2001; Goodin and Saward, 2005). For example, racist dogwhistles such as *states' rights* and *law and order* were part of the post-Civil Rights Republican Southern Strategy to appeal to white Southerners, a historically Democratic bloc (Haney-López, 2014). Despite polarization and technology that enables message targeting to different audiences, dogwhistles are still widely used by politicians (Haney-López, 2014; Tilley et al., 2020) and civilians in online conversations (Bhat and Klein, 2020; Åkerlund, 2021). Beyond political science, research on dogwhistles is urgent and essential for NLP, but they remain a challenge to study. Dogwhistles are actively and intentionally deployed to evade automated content moderation, especially hate speech detection systems (Magu et al., 2017). They may also have harmful unseen impacts in other NLP systems by infiltrating data used for pretraining language models. However, researchers face many difficulties. First, unless they are a part of the in-group, researchers may be completely unaware of a dogwhistle's existence. Second, dogwhistles' meanings cannot be determined by form alone, unlike most overt hateful or toxic language. Rather, their interpretation relies on complex interplay of different factors (context, personae, content, audience identities, etc.; Khoo, 2017; Henderson and McCready, 2018, 2019; Lee and Kosse, 2020), as illustrated in Figure 1. Third, since their power is derived from the differences between in-group and out-group interpretations, dogwhistles continuously evolve in order to avoid being noticed by the out-group. We establish foundations for large-scale computational study of dogwhistles by developing theory, providing resources, and empirically analyzing dogwhistles in several NLP systems. Prior work largely focuses on underlying mechanisms or political effects of dogwhistle communication (Albertson, 2015; Henderson and McCready, 2018) and typically considers a very small number of dogwhistles (often just one). To aid larger-scale efforts, we first create a new taxonomy that highlights both the systematicity and wide variation in kinds of dogwhistles (§2.1). This taxonomy characterizes dogwhistles based on their covert meanings, style and register, and the personae signaled by their users. We then compile a glossary of 340 dogwhistles, each of which is labeled with our taxonomy, rich contextual information, explanations, and realworld examples with source links (§2.2-2.3). As this glossary is the first of its kind, we highlight its value with a case study of racial dogwhistles in historical U.S. Congressional Speeches (§3). We then apply our taxonomy and glossary to investigate how dogwhistles interact with existing NLP systems (§4). Specifically, we evaluate the ability of large language models (i.e. GPT-3) to retrieve potential dogwhistles and identify their covert meanings. We find that GPT-3 has a limited capacity to recognize dogwhistles, and performance varies widely based on taxonomic features and prompt constructions; for example, GPT-3 is much worse at recognizing transphobic dogwhistles than racist ones. Finally, we show that hateful messages with standard group labels (e.g. *Jewish*) replaced with dogwhistles (e.g. *cosmopolitan*) are consistently rated as far less toxic by a commercially deployed toxicity detection system (Perspective API), and such vulnerabilities can exacerbate online harms against marginalized groups (§5). This work highlights the significance of dogwhistles for NLP and computational social science, and offers resources for further research in recognizing dogwhistles and reducing their harmful impacts. Our glossary, code, results, GPT3 outputs, and a form for adding new dogwhistles to our glossary are all available at: https: //dogwhistles.allen.ai. ## 2 Curating A Dogwhistle Glossary 2.1 Taxonomy Based on prior work and our own investigations, we craft a new taxonomy (Figure 2). We categorize dogwhistles by register, type, and persona. Register We label all dogwhistles as either part of a formal/offline or **informal/online** register. Formal/offline dogwhistles originated in offline contexts or are likely to appear in statements by mainstream political elites (e.g. *family values*). The informal/online register includes dogwhistles that originated on the internet and are unlikely to be used in political speech (e.g. *cuckservative*). Type I Henderson and McCready (2018) distinguish dogwhistles into two types: **Type I** dogwhistles covertly signal the speaker's persona but do not alter the implicatures of the message itself, while **Type II** dogwhistles additionally alter the ![2_image_0.png](2_image_0.png) message's implied meaning. We extend this typology to highlight the wide variety of dogwhistles, which has important consequences for building a theory of dogwhistles as well as future computational modeling. We identify three subcategories of "only persona-signaling" (Type I) dogwhistles: symbols (including emojis, abbreviations, and imagery), **self-referential** terms for members of the in-group, and dogwhistles that require specialized knowledge from a **shared in-group culture**. Type II Dogwhistles with an "added message meaning" (Type II) tend to fall into two subcategories: they name a concept or serve as a substitute for a target group label. We further divide concepts into **policies** (titles for initiatives with covert implications, such as *law and order*), **values** that the in-group purports to uphold, expressions whose covert meanings are grounded in in-group **humor**, and **other concepts**, which are often coded names for entities that are not group labels (e.g. the New World Order conspiracy theory is antisemitic but does not name or describe Jewish people). Dogwhistles serve as target group labels in three ways. Many are stereotype-based, whose interpretations rely on pre-existing associations between the dogwhistle and target group; we separate these into **stereotype-based target group labels**, which directly name the target group (e.g. *cosmopolitan*), while **stereotype-based descriptors** are less direct but still refer to the target group (e.g. *innercity*). Others have an **arbitrary or phonetic** relationship to the group label; these are commonly used to evade content moderation, such as "Operation Google" terms invented by white supremacists on 4chan to replace various slurs (Magu et al., 2017; Bhat and Klein, 2020). The final subcategory, **Bogeyman**, includes names of people or institutions taken to represent the target group (e.g. George Soros↔*Jewish*, or Willie Horton↔*Black*). Persona **Persona** refers to the in-group identity signalled by the dogwhistle. Figure 2 lists some personae, but this is an open class with many potential in-groups. There is considerable overlap in membership of listed in-groups (e.g. white supremacists are often antisemitic), so we label persona based directly on explanations from sources referenced in our glossary (as described in 2.2). Drawing upon third-wave sociolinguistics, personae are not static labels or stereotypes; rather, people actively construct and communicate personae through linguistic resources, such as dogwhistles (Eckert, 2008). ## 2.2 Gathering Dogwhistles We draw from academic literature, media coverage, blogs, and community-sourced wikis about dogwhistles, implicit appeals, and coded language. Since academic literature tends to focus on a small set of examples, we expanded our search to media coverage that identifies dogwhistles in recent political campaigns and speeches (e.g. Burack, 2020) or attempts to expose code words in hateful online communities (e.g. Caffier, 2017). During our search, we found several community-sourced wikis that provided numerous examples of dogwhistles, particularly the RationalWiki "Alt-right glossary", "TERF glossary", and "Code word" pages.1 ## 2.3 Glossary Contents Our glossary contains 340 English-language dogwhistles and over 1,000 surface forms (morphological variants and closely-related terms), mostly from the U.S. context. Each dogwhistle is labeled 1rationalwiki.org/wiki/{Alt-right_ glossary,TERF_glossary,Code_word} | meaning | Trans people threaten cis women's rights Many anti-transgender people [claim that] women's "sex-based rights" are somehow being threatened, removed, weakened, eroded, or erased by transgender rights. . . "Sex-based rights", by the plain English meaning of those words, cannot exist in a country that has equality law. . . it's mostly a dog-whistle: a rallying slogan much like "family values" for religious conservatives, which sounds wholesome but is a deniable and slippery code-word for a whole raft of unpleasant bigotry. | |-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Source | Medium post by David Allsopp When so-called leftists like @lloyd_rm demand that we give up our hard-won sex-based rights, they align themselves squarely with men's rights activists. To both groups, female trauma is white noise, an irrelevance, or else exaggerated or invented. | | Context | Tweet by J.K. Rowling on June 28, 2020 | ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) with its register, type, and signaled persona, an explanation from a linked source, and at least one example with linguistic, speaker, situational, and temporal context included, as well as a link to the example text. Table 1 shows one glossary entry for the transphobic dogwhistle *sex-based rights*. Antisemitic, transphobic, and racist (mostly antiBlack but sometimes generally against people of color) dogwhistles are the most common, with over 70 entries for each persona. The glossary includes dogwhistles with other personae, such as homophobic, anti-Latinx, Islamophobic, anti-vax, and religious. See Table A.1 in the Appendix for glossary statistics across register, type, and persona. Because dogwhistles continuously evolve, we intend for this resource to be a living glossary and invite the public to submit new entries or examples. ## 3 Case Study: Racial Dogwhistles In Historical U.S. Congressional Speeches We showcase the usefulness of our glossary, with a diachronic case study of racial dogwhistles in politicians' speeches from the U.S. Congressional Record (Gentzkow et al., 2019; Card et al., 2022) to analyze the frequency of speeches containing racist dogwhistles from 1920-2020. For this case study, we simply identify glossary terms based on regular expressions and do not distinguish between covert and literal meanings of the same expressions. We also measure how ideologies of speakers using dogwhistles changed over time using DW-NOMINATE (Poole and Rosenthal, 1985), a scaling procedure that places politicians on a two dimensional map based on roll call voting records, such that ideologically similar politicians are located near each other (Carroll et al., 2009; Lewis et al., 2023). We consider the first dimension of DW-NOMINATE, which corresponds to a liberal-conservative axis.2 As shown in Figure 3, dogwhistle use began to increase during the Civil Rights Era, following the 1954 *Brown vs. Board of Education* Supreme Court decision mandating racial integration of public schools. This aligns with qualitative accounts of the Republican Southern Strategy: because explicit racism was no longer acceptable, politicians turned to dogwhistles to make the same appeals implicitly, particularly aiming to gain the support of white voters in the Southern United States (Mendelberg, 2001). Their frequency continued to increase from the 1970s through the 1990s, paralleling HaneyLópez (2014)'s account of dogwhistles during the Nixon, Reagan, Bush Sr., and Clinton presidencies. Since the 1990s, the frequency of racial dogwhistles has fluctuated but remained high. Like Haney-López (2014), we qualitatively observe that the dogwhistles invoked post-9/11 have shifted towards being more Islamophobic and anti-Latinx rather than exclusively anti-Black. We caution that this case study and Figure 3 do not make novel claims; rather, our goal is to show that even a naive application of our glossary illustrates qualitatively well-established historical patterns in U.S. politics. Figure 4 shows how the average ideologies of speakers who use particular dogwhistles (*property* rights, thug, welfare reform, *hardworking Americans*, and *Willie Horton*) have shifted over time, and reveals interesting insights into the evolution and lifecycle of dogwhistles. Most racial dogwhistles in the U.S. Congressional Speeches have become increasingly associated with more conservative speakers over time. However, the inflection point when speaker ideologies shift varies across dogwhistles, suggesting that they emerged as dogwhistles at different points. For example, *property* rights became increasingly associated with more conservative speakers since the 1960s, while the average ideology of speakers using *welfare reform* ![4_image_1.png](4_image_1.png) ![4_image_0.png](4_image_0.png) ## Did Not Change Until The 1990S. Willie Horton presents an interesting example. In his 1988 presidential campaign, George Bush ran a television advertisement featuring Willie Horton, a Black man convicted of rape and murder while on prison furlough (Mendelberg, 2001). The ad was so powerful among white voters that it propelled Bush to victory, but shortly afterwards was explicitly called out as racist (Haney-López, 2014). We see this pattern in Figure 4: in 1988, *Willie Horton* was associated with extremely conservative speakers, but quickly became more liberal, and *Willie Horton* no longer functioned as a successful dogwhistle. ## 4 Recognition Of Dogwhistles In Gpt-3 We conduct two experiments to assess if a large language model, GPT-3 (Brown et al., 2020), can recognize dogwhistles. First, we interrogate whether GPT-3 is able to **identify covert meanings** of dogwhistles from our glossary, an ability that would be instrumental in understanding the breadth of online bigotry. Second, we measure GPT-3's ability to **surface** dogwhistles, motivated by the fact that dogwhistles are often intentionally obscured from researchers which makes it impossible to ensure that a manual search is complete or comprehensive. Since GPT-3 is trained on large portions of internet data, it may be able to reveal a more comprehensive and diverse set of dogwhistles than manual efforts. Finally, we present a small manual analysis of **in-context dogwhistle recognition** using our glossary examples and GPT-4 as a proof of concept ## For Future Work. 4.1 Identifying Covert Meanings To gauge which types of in-group language GPT-3 might have seen during training, we examine if it is able to identify dogwhistles' covert meanings. Experimental setup We create prompts that vary in (1) which of the five definitions of dogwhistles is provided, if any, and (2) if the prompt includes a secret cue. For each input term, we construct 12 prompts (6 definitions including none, and each with or without a secret cue). Consider the example below, with a definition, secret cue, dogwhistle and GPT-3's response . Prompts with no secret clue simply exclude the word *secretly*. We generate responses for each dogwhistle in the glossary and consider multiple forms for some, such as actual emojis as well as descriptions. We thus test 480 variants with 12 prompts for each, leading to 5,760 generations.3 A dogwhistle is the use of coded or suggestive language in political messaging to garner support from a particular group without provoking opposition. For example, "cosmopolitan" secretly means "Jewish to many anti-Semitic people . 3We use the text-davinci-002 GPT-3 model with max_tokens=256, temperature=0.0 (more deterministic outputs) and num_outputs=1. No Secret Cue With Secret Cue No Definition .085 [±.025] .296 [±.021] With Definition .437 [±.010] .543 [±.010] Table 2: Proportion of dogwhistles with covert meanings identified by GPT-3 when prompted with a dogwhistle definition, a secret cue, both, or neither. Confidence intervals are shown in brackets. Results The first author manually evaluated whether or not each of the 5,760 GPT-3 generations contains the covert meaning for each dogwhistle. 80.3% of dogwhistles had their covert meanings identified in at least one generation. Overall, 56.0% generations contained the correct covert meaning for dogwhistles that are part of the formal/offline register, but just 29.4% for dogwhistles in the informal/online register. We refer readers to Appendix A.2 (Figure A.2) for more details about registerbased variation and examples of dogwhistles for which GPT-3 performed particularly well or poorly. The specific prompt form strongly impacts GPT3's ability to generate covert meanings (Table 2). Without a definition or secret cue, covert meanings are identified in just 8.5% of generations. Including both a definition and secret cue improves GPT-3's performance over 5-fold, with dogwhistles' covert meanings identified in 54.3% of generations. We observe wide variation in GPT-3's ability to identify covert meanings across personae. Among the most represented personae in our glossary (at least 100 generations for each), GPT-3 has the lowest recognition of transphobic dogwhistles, the highest recognition of homophobic and Islamophobic dogwhistles, with antisemitic, white supremacist, and racist dogwhistles in the middle (Appendix Table A.3). There is also variation in performance by dogwhistle type and the specific definition provided; we refer the reader to Appendix A.2 and Figure A.3 for more details. ## 4.2 Surfacing Dogwhistles In addition to evaluating if GPT-3 can identify dogwhistles' covert meanings, we assess GPT-3's ability to surface dogwhistles in text generation. Experimental setup We construct a series of prompts that begin with one of five definitions of dogwhistles from prior work (Table A.2). The definition is followed by a question or request for examples (see Appendix A.1 for more prompting details). In the following example, the definition is marked in blue, the request in purple, and GPT-3's response is highlighted in yellow . A dogwhistle is the use of coded or suggestive language in political messaging to garner support ![5_image_0.png](5_image_0.png) Evaluation We use our glossary as a proxy to measure precision and recall of GPT-3's ability to surface dogwhistles because an exhaustive groundtruth set of dogwhistles does not exist. We calculate recall as the proportion of dogwhistles in our glossary that were also surfaced at least once by GPT-3. For precision, the authors manually inspect candidates appearing in at least 4% of GPT3 text generations for generic, *white supremacist*, racist, antisemitic, *Islamophobic*, and *transphobic* prompt types. Because our glossary is not exhaustive, this method yields conservative estimates (see Appendix A.1 for more evaluation details). Precision Results We find that GPT-3 does have the ability to surface dogwhistles when prompted to do so, but caution that such results are imperfect and require manual verification. The most common errors involve explicit mentions of groups in stereotypes or conspiracy theories (*Jews are behind the* 9/11 attacks) or phrases that may accompany dogwhistles but are not dogwhistles themselves (I'm not racist but...). Precision in dogwhistle surfacing varies across prompt types; while the average precision over all six prompt types is 66.8%, scores range from just 50% for transphobic dogwhistle prompts to 91.3% for generic prompts (Figure A.1). Recall Results GPT-3 surfaced 153 of 340 dogwhistles in our glossary (45%). We observe significant differences by register: GPT-3 surfaced 69.4% of *formal/offline* dogwhistles but just 12.9% of *informal/online* dogwhistles. Despite its ability to generate emojis and other symbols, GPT-3 did not surface any symbols or emojis from our glossary except for the antisemitic triple parentheses "*((()))*". Figure 5 shows GPT-3 surfacing recall results by both register and in-group personae. We show results for the five most-frequent personae represented in our glossary. Recall of dogwhistles in the ![6_image_0.png](6_image_0.png) informal/online register is low across the board. For the formal/offline register, recall is considerably higher although it varies widely across personae. As with precision, GPT-3 has the lowest performance for transphobic dogwhistles, surfacing just 44.8% of formal/offline transphobic dogwhistles. For formal/offline antisemitic dogwhistles, recall is considerably higher but far from perfect at 71.7%. GPT-3 has 80.3% and 83.3% recall of racist and white supremacist dogwhistles, respectively, and full 100% recall of Islamophobic dogwhistles. ## 4.3 Identifying Dogwhistles In Context Our experiments show that LLMs have some ability to identify covert meanings of a given dogwhistle specified in the prompt (Section 4.1) and generate dogwhistles (Section 4.2). We now consider a natural follow-up question: can LLMs correctly identify dogwhistles in real-world texts? Fully addressing this question would require a larger dataset including a variety of linguistic contexts per dogwhistle, documents containing no dogwhistles, and documents with dogwhistle expressions that do not carry the dogwhistle meaning. We leave such a rigorous analysis for future work. For now, we present a small manual analysis with ChatGPT/GPT-4 (OpenAI, 2023), using several real-world examples from our glossary as a proof of concept for in-context dogwhistle recognition. We prompt GPT-4 with the instructions below. Appendix Table A.5 shows the exact glossary example and GPT-4 output text for the dogwhistles *cosmopolitan (elite)* (antisemitic), *inner-city* (racist), \#IStandWithJKRowling, (transphobic), and did you see Kyle? (white supremacist). A dogwhistle is the use of coded or suggestive language in political messaging to garner support from a particular group without provoking opposition. Identify the dogwhistle in the following message: [GLOSSARY EXAMPLE]. Using the examples presented in Section 1, GPT4 correctly identifies the dogwhistles *cosmopolitan* (elite) and *inner city*, and generates a correct explanation for their covert meanings. However, the model does not work as well for the other examples. For the example containing *\#IStandWithJKRowling*, GPT-4 correctly identifies that text covertly signals transphobia through support of JK Rowling, but does not select this hashtag as the dogwhistle. On the other hand, GPT-4 correctly identifies the dogwhistle in a tweet from JK Rowling, and correctly relates this symbol to the women's suffrage movement, but does not capture the appropriation of this symbol to covertly communicate transphobia. Finally, GPT-4 misses both the dogwhistle and the precise covert meaning for did you see Kyle? ("see Kyle" sounds similar to the Nazi slogan "Sieg Heil"); while the model still ultimately identifies covert white supremacy, it generates a false explanation connecting the glossary example to this persona. ## 5 Dogwhistles And Toxicity Detection Beyond evaluating language models' ability to recognize dogwhistles, we seek to understand how dogwhistles affect the decisions that NLP systems make, and how this has downstream implications for content moderation and online safety. We begin to address this with a study of how dogwhistles are handled by a widely-deployed toxic language detection system, Google/Jigsaw's Perspective API.4 Perspective API scores a text between 0 and 1 for a range of attributes (e.g. toxicity, identity attack, profanity), representing the estimated probability that a reader would perceive the text to contain that attribute. Perspective API's models are multilingual BERT-based models distilled into singlelanguage convolutional neural networks for faster inference, and are trained on annotated data from online forums. We refer readers to the Perspective API Model Cards for more details.5 4https://perspectiveapi.com/ 5https://developers.perspectiveapi. com/s/about-the-api-model-cards ![7_image_0.png](7_image_0.png) Experimental setup We consider 237 hateful sentence templates from HateCheck (Röttger et al., 2021), a test suite for bias in hate speech detection, that contain placeholders for identity terms (group referents) in either adjectival, singular nominal, or plural nominal forms. We fill filled with a standard group label, a slur, or a dogwhistle in the corresponding grammatical form requested by the template. For this experiment, we consider racist (mostly anti-Black), antisemitic, and transphobic terms, as these personae are the most common in our glossary (see Tables A.7 and A.8 for a sample of sentence templates and group label terms, respectively). We feed our resulting 7,665 sentences to Perspective API to get scores for toxicity, *severe* toxicity, and *identity attack*. Results Hateful sentences are rated as less toxic, less severely toxic, and less identity-attacking when dogwhistles are used instead of standard group labels or slurs (Figure 6). This pattern holds for all three personae (Appendix Figure A.4). Interestingly, mean toxicity scores for slurs are lower than for standard group labels, especially for antisemitic slurs. We observe relatively wide variation in Perspective API's ratings depending on the specific choice of slur. For example, sentences containing the *N-word* are almost always rated as more toxic than the same sentences containing *Black* or Black people. Lower toxicity ratings for other slurs, such as the highly derogatory antisemitic *K-word*6 may be because, similar to dogwhistles, Perspective API does not recognize that these terms refer 6https://ajc.org/translatehate/kike to identity groups. However, deeper analysis of slurs is outside the scope of the current work. ## 6 Discussion & Conclusion We lay the groundwork for NLP and computational social science research on dogwhistles by developing a new taxonomy and glossary with rich contextual information and examples. We demonstrate our glossary's utility in a case study of historical U.S. Congressional speeches, where our quantitative analysis aligns closely with historical accounts. We further use our glossary to show that GPT-3 has some, but limited, ability to retrieve dogwhistles and recognize their covert meanings. Finally, we verify that dogwhistles readily evade PerspectiveAPI's toxicity detection. We now turn to several implications of this work, highlighting potential future directions across disciplines. Dogwhistles and toxic language Dogwhistles are closely related to other forms of subtle biases studied in NLP, such as implicit hate speech and symbols (Magu et al., 2017; Magu and Luo, 2018; ElSherief et al., 2018, 2021; Qian et al., 2019; Caselli et al., 2020; Menini et al., 2021; Arviv et al., 2021; Botelho et al., 2021; Wiegand et al., 2021a,b; Hartvigsen et al., 2022), microaggressions (Breitfeller et al., 2019), dehumanization (Mendelsohn et al., 2020), propaganda (Da San Martino et al., 2020), condescension (Pérez-Almendros et al., 2020), and stereotypes (Nangia et al., 2020; Sap et al., 2020; Nadeem et al., 2021). However, dogwhistles are distinct from toxic language in several important ways. First, although often implicitly abusive, they are not exclusively hateful; for example, *wonder-working power* covertly signals the speaker's Evangelical Christian identity (Albertson, 2015). Second, dogwhistles are characterized by dual meanings, wherein different sub-audiences interpret the exact same message differently (Henderson and McCready, 2018). Third, dogwhistles' true meanings are intentionally hidden from the out-group (Saul, 2018). Nevertheless, because dogwhistles are often deployed specifically to avoid hate speech detection and other content moderation tools, NLP researchers should consider how dogwhistles highlight a vulnerability in extant language technologies, which ultimately puts people's safety and well-being at risk. We show that hateful speech using dogwhistles evades toxicity detection, and is one way that NLP systems (unintentionally) perpetuate harms against marginalized groups. This finding is not surprising, as prior work shows that toxicity detection often fails on subtle language (Han and Tsvetkov, 2020; Hartvigsen et al., 2022), but underscores the need for toxicity and hate speech detection models to be able to flag hateful dogwhistles. One potential approach to improve such models could be to train them to recognize dogwhistles in naturallyoccurring in-group contexts (starting with modeling contextual factors; Zhou et al., 2023). More broadly, content moderation pipelines should take context into account and consider mechanisms to identify when a dogwhistle has potentially negative consequences. Beyond toxicity detection, future work ought to consider the impact of dogwhistles in a broader range of NLP tasks, such as bias mitigation or story generation. How do LLMs know about dogwhistles? Our findings regarding GPT-3's ability to surface and identify dogwhistles' covert meanings are probably driven by the contents of the training data. GPT-3's training data likely includes right-wing extremist content, as has been shown with its predecessor GPT-2 (Gehman et al., 2020), which may result in high performance for dogwhistles from these in-groups. Or perhaps the model is simply memorizing articles or social media posts that explicitly call out certain expressions as dogwhistles. Future work could evaluate if large language models can learn dogwhistles' covert meanings from incontext usage alone by experimentally controlling for whether or not these terms are explicitly exposed as dogwhistles in the training data. Moreover, we find that GPT-3's performance varies widely across target groups. Transphobic dogwhistles are notably difficult for GPT-3 to surface and identify. Perhaps this is because the model is trained on fewer data from transphobic communities compared to other in-groups considered in this work. Furthermore, transphobic dogwhistles may be less frequent in the training data because many have emerged relatively recently. Another reason may be formatting: transphobic dogwhistles are often emoji-based and appear in social media screen names and profile bios rather than in posts themselves. We hope that future work will investigate the links between language models' knowledge of dogwhistles and training data. Potential of LLMs for dogwhistle research Beyond the risks presented by current NLP technologies, we wish to highlight the potential benefits of using NLP to advance dogwhistle research. Even though LLMs' performance is likely due to vast training data, and even then, their outputs require manual verification, our experiments with GPT-3 demonstrate that LLMs have some ability to surface dogwhistles and explain their covert meanings. This is particularly valuable as dogwhistles are intentionally hidden from out-group members, and out-group researchers may have no other way to access this information. There is thus a unique opportunity for LLMs to assist dogwhistle research, and political content analysis more broadly. Bridging large-scale analysis and mathematical models Our work builds foundations for largescale computational analysis of dogwhistles in realworld political discourse. We diverge from prior quantitative dogwhistle research, which focuses on mathematically modeling the process underlying dogwhistle communication using probabilistic, game-theoretic, deep learning, and networkbased approaches on simulation data (Smaldino et al., 2018; Dénigot and Burnett, 2020; Henderson and McCready, 2020; Breitholtz and Cooper, 2021; Smaldino and Turner, 2021; Xu et al., 2021; Hertzberg et al., 2022; van der Does et al., 2022). We are optimistic about future research synthesizing these two strands of work to address many of the challenges presented by dogwhistles. For example, future work could use our resources along with these mathematical models to develop systems that can automatically detect dogwhistle usages, emergence of new dogwhistles, or decline of older terms as dogwhistles due to out-group awareness. Implications for social science research Understanding dogwhistles at scale has vast implications across disciplines, so we develop resources useful for both NLP and social science researchers. We provide the most comprehensive-to-date glossary of dogwhistles and demonstrate through our case study how this resource can be used to analyze political speeches and other corpora, such as social media posts and newspaper articles. Dogwhistles have mostly been studied using primarily qualitative methods (Moshin, 2018; Åkerlund, 2021) and experiments (Albertson, 2015; Wetts and Willer, 2019; Thompson and Busby, 2021), and we hope that by facilitating quantitative content analysis, our resources can add to dogwhistle researchers' methodological repertoires. ## 7 Limitations This work represents an initial push to bring dogwhistles to the forefront of NLP and computational social science research, and as such, has many limitations. Our glossary is the most comprehensive resource to date (to the best of our knowledge) but aims to document a moving target, as dogwhistles continuously emerge or fall out of use due to outgroup awareness. We aim to make this resource a "living glossary" and encourage others to submit new entries or examples. We further encourage future research to develop models to automatically detect the emergence of new dogwhistles. Another major limitation in this work is that we identify as out-group members for nearly all dogwhistles in the glossary and have an adversarial relationship with many of the communities studied (e.g. white supremacists). Although our work would ideally be validated by members of the ingroups, they have very little incentive to share this information, as that would damage the dogwhistle's utility as a tool for covert in-group communication. This work, like most prior work, is limited in that we operationalize dogwhistles as a static binary; we assume each term either does or does not have a dogwhistle interpretation and is categorically included or excluded from our glossary and analyses. In reality, dogwhistles are far more complicated constructs. For example, Lee and Kosse (2020) characterize dogwhistles along two dimensions: the size of their in-group and the degree to which their usage is conventionalized. Other axes of variation may include the level of out-group awareness, and the social and political risks of backlash to the communicator if the dogwhistle interpretation is exposed. It is even possible that audience members who hear a dogwhistle further recirculate it even if they themselves do not recognize the covert meaning (Saul, 2018). We hope future work will consider multifaceted and continuous measures of "dogwhistleness" that account for such nuances. Finally, the current work is limited in the scope of dogwhistles considered: they are all in English with the vast majority coming from the U.S. political and cultural contexts. However, dogwhistles are prominent across cultures (Pal et al., 2018; Åkerlund, 2021) and we hope that future work will consider other languages and cultures, especially involving researchers who have high awareness of or expertise in non-U.S political environments. ## 8 Ethical Implications We caution readers about several potential ethical risks of this work. First is the risk of readers misusing or misunderstanding our glossary. We emphasize that dogwhistles are extremely contextdependent, and most terms in the glossary have benign literal meanings that may be more common than the covert dogwhistle meanings. For example, many entities from the financial sector have been used as antisemitic dogwhistles (e.g. the Federal Reserve, *bankers*) but their primary usage has no antisemitic connotations. Relatedly, some glossary entries include terms that originate from the target group but were appropriated by the dogwhistles' in-group. Examples include the appropriation of goy (a Yiddish word for non-Jewish people) as an antisemitic in-group signal, and *baby mama* (originally from African American English) as a racist dogwhistle. As with hate speech detection (Sap et al., 2019), there is a risk of social bias in dogwhistle detection. As we have discussed throughout this work, dogwhistle researchers face a challenge with no exhaustive ground truth and an unknown search space. We anticipate our glossary being a helpful resource for this reason, but because we also lack such exhaustive ground truth, there are bound to be biases in the representation of dogwhistles in our glossary. The current version of the glossary may exclude groups and thus lead to worse performance in dogwhistle detection, toxic language detection, and other downstream NLP tasks. Our glossary also includes real-world examples of how each dogwhistle is used. This presents a privacy risk, which we mitigate by prioritizing examples from public figures or examples from anonymous social media accounts whenever possible. We do not release personal information of any speaker who is not a well-known public figure. Finally, we do not pursue any computational modeling or prediction of dogwhistle usages in this work, but see it as a natural direction for future work. However, we caution researchers to consider dual-use issues in doing so. Many people use coded language in order to avoid censorship from authoritarian regimes (Yang, 2016) and marginalized groups may also use coded language for their own safety (Queen, 2007). When building computational models, we urge researchers to mitigate this dual-use risk as much as possible. ## Acknowledgements We thank Ceren Budak, Yulia Tsvetkov, and audiences at Text as Data 2022 (TADA) and New Ways of Analyzing Variation 50 (NWAV) for their helpful feedback on an earlier version of this work. We also thank the anonymous reviewers for their comments and suggestions. J.M. gratefully acknowledges support from the Google PhD Fellowship. ## References Mathilda Åkerlund. 2021. Dog whistling far-right code words: the case of 'culture enricher'on the swedish web. *Information, Communication & Society*, pages 1–18. Bethany L Albertson. 2015. Dog-whistle politics: Multivocal communication and religious appeals. *Political Behavior*, 37(1):3–26. Eyal Arviv, Simo Hanouna, and Oren Tsur. 2021. It's a thin line between love and hate: Using the echo in modeling dynamics of racist online communities. Proceedings of the International AAAI Conference on Web and Social Media, 15(1):61–70. David A Bateman and John Lapinski. 2016. Ideal points and american political development: Beyond dwnominate. *Studies in American Political Development*, 30(2):147–171. Prashanth Bhat and Ofra Klein. 2020. Covert hate speech: White nationalists and dog whistle communication on twitter. In *Twitter, the public sphere,* and the chaos of online deliberation, pages 151–172. Springer. Austin Botelho, Scott Hale, and Bertie Vidgen. 2021. Deciphering implicit hate: Evaluating automated detection algorithms for multimodal hate. In *Findings* of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1896–1907. Luke Breitfeller, Emily Ahn, David Jurgens, and Yulia Tsvetkov. 2019. Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts. In *Proceedings of the 2019 conference* on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pages 1664– 1674. Ellen Breitholtz and Robin Cooper. 2021. Dogwhistles as inferences in interaction. In *Proceedings of* the Reasoning and Interaction Conference (ReInAct 2021), pages 40–46. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Emily Burack. 2020. A list of antisemitic dogwhistles used by donald trump. *Hey Alma*. Justin Caffier. 2017. Get to know the memes of the alt-right and never miss a dog-whistle again. *Vice*. Dallas Card, Serina Chang, Chris Becker, Julia Mendelsohn, Rob Voigt, Leah Boustan, Ran Abramitzky, and Dan Jurafsky. 2022. Computational analysis of 140 years of us political speeches reveals more positive but increasingly polarized framing of immigration. Proceedings of the National Academy of Sciences, 119(31):e2120510119. Royce Carroll, Jeffrey B Lewis, James Lo, Keith T Poole, and Howard Rosenthal. 2009. Measuring bias and uncertainty in dw-nominate ideal point estimates via the parametric bootstrap. *Political analysis*, 17(3):261–275. Tommaso Caselli, Valerio Basile, Jelena Mitrovic, Inga ´ Kartoziya, and Michael Granitzer. 2020. I feel offended, don't be abusive! implicit/explicit messages in offensive and abusive language. In *Proceedings of* the 12th language resources and evaluation conference, pages 6193–6202. Giovanni Da San Martino, Alberto Barrón-Cedeño, Henning Wachsmuth, Rostislav Petrov, and Preslav Nakov. 2020. SemEval-2020 task 11: Detection of propaganda techniques in news articles. In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 1377–1414, Barcelona (online). International Committee for Computational Linguistics. Quentin Dénigot and Heather Burnett. 2020. Dogwhistles as identity-based interpretative variation. In Proceedings of the Probability and Meaning Conference (PaM 2020). Penelope Eckert. 2008. Variation and the indexical field 1. *Journal of sociolinguistics*, 12(4):453–476. Mai ElSherief, Vivek Kulkarni, Dana Nguyen, William Yang Wang, and Elizabeth Belding. 2018. Hate lingo: A target-based linguistic analysis of hate speech in social media. In *Proceedings of the International AAAI Conference on Web and Social Media*, volume 12. Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. *arXiv* preprint arXiv:2109.05322. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 3356–3369. Matthew Gentzkow, Jesse M Shapiro, and Matt Taddy. 2019. Measuring group differences in highdimensional choices: method and application to congressional speech. *Econometrica*, 87(4):1307–1340. Robert E Goodin and Michael Saward. 2005. Dog whistles and democratic mandates. *The Political Quarterly*, 76(4):471–476. Xiaochuang Han and Yulia Tsvetkov. 2020. Fortifying toxic speech detectors against veiled toxicity. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7732–7739, Online. Association for Computational Linguistics. Ian Haney-López. 2014. *Dog whistle politics: How* coded racial appeals have reinvented racism and wrecked the middle class. Oxford University Press. Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3309–3326. Robert Henderson and Elin McCready. 2018. How dogwhistles work. *New Frontiers in Artificial Intelligence*, pages 231–240. Robert Henderson and Elin McCready. 2019. Dogwhistles, trust and ideology. In *Proceedings of the 22nd* Amsterdam Colloquium, pages 152–160. Robert Henderson and Elin McCready. 2020. Towards functional, agent-based models of dogwhistle communication. In Proceedings of the Probability and Meaning Conference (PaM 2020), pages 73–77. Niclas Hertzberg, Robin Cooper, Elina Lindgren, Björn Rönnerstrand, Gregor Rettenegger, Ellen Breitholtz, and Asad Sayeed. 2022. Distributional properties of political dogwhistle representations in swedish bert. In Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), pages 170–175. Justin Khoo. 2017. Code words in political discourse. Philosophical Topics, 45(2):33–64. Rebecca Lee and Maureen Kosse. 2020. The social domain of understanding: Ethnographically-informed frame semantics of dog whistles. High Desert Linguistics Society 14. Jeffrey B Lewis, Keith Poole, Howard Rosenthal, Adam Boche, Aaron Rudkin, and Luke Sonnet. 2023. Voteview: Congressional roll-call votes database. https://voteview. com/. Rijul Magu, Kshitij Joshi, and Jiebo Luo. 2017. Detecting the hate code on social media. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11, pages 608–611. Rijul Magu and Jiebo Luo. 2018. Determining code words in euphemistic hate speech using word embedding networks. In *Proceedings of the 2nd workshop* on abusive language online (ALW2), pages 93–100. Tali Mendelberg. 2001. The Race Card: Campaign Strategy, Implicit Messages, and the Norm of Equality. Princeton University Press. Julia Mendelsohn, Yulia Tsvetkov, and Dan Jurafsky. 2020. A framework for the computational linguistic analysis of dehumanization. *Frontiers in artificial* intelligence, 3:55. Stefano Menini, Alessio Palmero Aprosio, and Sara Tonelli. 2021. Abuse is contextual, what about nlp? the role of context in abusive language annotation and detection. *arXiv preprint arXiv:2103.14916*. Jamie Moshin. 2018. Hello darkness: Antisemitism and rhetorical silence in the" trump era". *Journal of* Contemporary Rhetoric, 8. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. Stereoset: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967. OpenAI. 2023. Gpt-4 technical report. *arXiv*. Joyojeet Pal, Dinsha Mistree, and Tanya Madhani. 2018. A friendly neighborhood hindu. In *CeDEM Asia 2018: Proceedings of the International* Conference for E-Democracy and Open Government; Japan 2018, pages 97–121. Edition DonauUniversität Krems. Carla Pérez-Almendros, Luis Espinosa Anke, and Steven Schockaert. 2020. Don't patronize me! an annotated dataset with patronizing and condescending language towards vulnerable communities. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5891–5902. Keith T Poole and Howard Rosenthal. 1985. A spatial model for legislative roll call analysis. *American* journal of political science, pages 357–384. Jing Qian, Mai ElSherief, Elizabeth Belding, and William Yang Wang. 2019. Learning to decipher hate symbols. *arXiv preprint arXiv:1904.02418*. Robin Queen. 2007. Sociolinguistic horizons: Language and sexuality. *Language and Linguistics Compass*, 1(4):314–330. Paul Röttger, Bertie Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet Pierrehumbert. 2021. Hatecheck: Functional tests for hate speech detection models. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 41–58. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 5477–5490. Jennifer Saul. 2018. Dogwhistles, political manipulation, and philosophy of language. In Daniel Fogal, Daniel W. Harris, and Matt Moss, editors, New work on speech acts, volume 360, page 84. Oxford University Press Oxford. Paul E Smaldino, Thomas J Flamson, and Richard McElreath. 2018. The evolution of covert signaling. *Scientific reports*, 8(1):1–10. Paul E Smaldino and Matthew A Turner. 2021. Covert signaling is an adaptive communication strategy in diverse populations. *Psychological review*. Andrew Ifedapo Thompson and Ethan C Busby. 2021. Defending the dog whistle: The role of justifications in racial messaging. *Political Behavior*, pages 1–22. Brian P Tilley et al. 2020. "i am the law and order candidate": A content analysis of donald trump's race-baiting dog whistles in the 2016 presidential campaign. *Psychology*, 11(12):1941. Tamara van der Does, Mirta Galesic, Zackary Okun Dunivin, and Paul E Smaldino. 2022. Strategic identity signaling in heterogeneous networks. Proceedings of the National Academy of Sciences, 119(10):e2117898119. Rachel Wetts and Robb Willer. 2019. Who is called by the dog whistle? experimental evidence that racial resentment and political ideology condition responses to racially encoded messages. *Socius*, 5:2378023119866268. Michael Wiegand, Maja Geulig, and Josef Ruppenhofer. 2021a. Implicitly abusive comparisons–a new dataset and linguistic analysis. In *Proceedings of the 16th* Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 358–368. Michael Wiegand, Josef Ruppenhofer, and Elisabeth Eder. 2021b. Implicitly abusive language–what does it actually look like and why are we not getting there? In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 576–587. Association for Computational Linguistics. Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian McAuley, and Furu Wei. 2021. Blow the dog whistle: A chinese dataset for cant understanding with common sense and world knowledge. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2139–2145. Fan Yang. 2016. Rethinking china's internet censorship: The practice of recoding and the politics of visibility. New Media & Society, 18(7):1364–1381. Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D. Hwang, Swabha Swayamdipta, and Maarten Sap. 2023. Cobra frames: Contextual reasoning about effects and harms of offensive statements. In *Findings of ACL*. ## A Appendix | Category | Count | | |-------------------------------------|---------------------------------|-----| | formal/offline | 193 | | | Register | informal/online | 147 | | stereotype-based target group label | 64 | | | concept (policy) | 41 | | | concept (values) | 37 | | | persona signal (symbol) | 35 | | | stereotype-based descriptor | 34 | | | persona signal (self-referential) | 32 | | | concept (other) | 29 | | | arbitrary target group label | 23 | | | persona signal (shared culture) | 18 | | | humor/mockery/sarcasm | 11 | | | representative (Bogeyman) | 10 | | | phonetic-based target group label | 4 | | | Type | persona signal (in-group label) | 2 | | racist | 76 | | | transphobic | 73 | | | antisemitic | 73 | | | white supremacist | 48 | | | Islamophobic | 16 | | | conservative | 8 | | | anti-liberal | 7 | | | anti-Latino | 6 | | | homophobic | 6 | | | anti-vax | 5 | | | religious | 4 | | | climate change denier | 4 | | | anti-Asian | 3 | | | anti-LGBTQ | 3 | | | liberal | 3 | | | xenophobic | 2 | | | anti-GMO | 2 | | | Persona | misogynistic | 1 | Table A.1: Distribution of glossary entries across all registers, types, and personae. ## A.1 Details For Dogwhistle Surfacing We create 51 total request formulations that ask for generic examples of dogwhistles (n=17), dogwhistles that target specific social groups (n=25), and dogwhistles that are used by certain personae/ingroups (n=9). For each prompt, we also consider three spelling variations of "dogwhistle": dogwhistle, *dog-whistle*, and *dog whistle*. Exact prompt text can be found in our project repository. To encourage GPT-3 to generate a list, we conclude all prompts with a newline token followed by "1.". All prompts were provided to a GPT-3 Instruct model (text-davinci-002) with de- ![13_image_0.png](13_image_0.png) fault hyperparameters except for max_tokens=256, temperature=0.7, and num_outputs=5 (5 generations per prompt). The resulting texts are strings that take the form of an enumerated list. To aggregate and compare surfaced dogwhistles across each text completion, we post-process by: splitting by newline characters, removing enumeration and other punctuation, converting all outputs to lowercase, lemmatizing each surfaced term with SpaCy, and removing definite articles that precede generated dogwhistles. We then aggregate over all generations to determine how often each dogwhistle is surfaced for each in-group. In calculating precision of dogwhistle surfacing, we mark each of the 154 candidate terms as true positives if they appear in the glossary. Some surfaced dogwhistles were marked as "correct" if they were closely related to a dogwhistle entry in our glossary, even if the exact term did not appear. Examples include national security, *identity politics*, the swamp, *tax relief*, and *patriot*. However, this is still a conservative estimate because our glossary is not exhaustive. GPT-3 surfaces a number of terms that potentially have dogwhistle usages but were not covered by our glossary, and thus not included in our precision estimates. Examples of these terms include names of Muslim political organizations (Hezbollah, Hamas, *Muslim Brotherhood*) and *Second Amendment rights*. Figure A.1 shows variation in precision of dogwhistle surfacing across prompt types (in-groups and generic prompting). ## A.2 Details For Identifying Covert Meaning Variation across registers We identify variation in GPT-3's ability to identify dogwhistles' covert meanings based on prompt features, dog- | Source | Definition | |-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Albertson (2015) | A dogwhistle is an expression that has different meanings to different audiences. A dogwhistle is a term that sends one message to an outgroup while | | Henderson and McCready (2018) | at the same time sending a second (often taboo, controversial, or inflammatory) message to an ingroup. A dogwhistle is a word or phrase that means one thing to the public | | Bhat and Klein (2020) | at large, but that carry an additional, implicit meaning only recognized by a specific subset of the audience. | | Merriam-Webster | A dogwhistle is a coded message communicated through words or phrases commonly understood by a particular group of people, but not by others. | | Wikipedia | A dogwhistle is the use of coded or suggestive language in political messaging to garner support from a particular group without provoking opposition. | Table A.2: Definitions of dogwhistles and their sources used for prompting GPT-3. Below are links for the Merriam-Webster and Wikipedia sources: https://www.merriam-webster.com/words-at-play/dog-whistle-political-meaning https://en.wikipedia.org/wiki/Dog_whistle_(politics) ![14_image_0.png](14_image_0.png) whistle register, and the interaction between the two. Figure A.2 shows that including the definition in prompts consistently improves GPT-3's covert meaning identification for both formal and informal dogwhistles. However, including the secret cue has minimal effect for informal dogwhistles, and only leads to substantial improvement for identifying formal dogwhistles' covert meanings. Variation across personae There is significant variation in GPT-3's performance across personae, as can be seen in Table A.3. Variation across dogwhistle types GPT-3's performance varies widely across dogwhistle types in our taxonomy (§2.1; Fig. 2). GPT-3 has the lowest performance for humor-based and arbitrary target group label dogwhistles, and the highest performance for representatives (Bogeymen), phoneticbased target group labels, and policies (Table A.4). | Persona | Proportion | 95% CI | |-----------------------|--------------|----------| | homophobic | 0.737 | 0.069 | | Islamophobic | 0.654 | 0.060 | | climate change denier | 0.595 | 0.106 | | anti-Asian | 0.583 | 0.126 | | conservative | 0.563 | 0.100 | | anti-Latino | 0.560 | 0.107 | | racist | 0.532 | 0.029 | | anti-vax | 0.490 | 0.101 | | antisemitic | 0.462 | 0.029 | | white supremacist | 0.448 | 0.029 | | anti-liberal | 0.292 | 0.106 | | transphobic | 0.229 | 0.024 | ![14_image_1.png](14_image_1.png) Table A.3: Proportion of dogwhistles with covert meanings identified by GPT-3 across personae with at least 50 GPT-3 generations. GPT-3 has the lowest recognition of transphobic dogwhistles and the highest recognition of homophobic and Islamophobic dogwhistles. Table A.4: Proportion of dogwhistles with covert meanings identified by GPT-3 by the dogwhistle type. | Dogwhistle Type | Proportion | 95% CI | |-----------------------------------|--------------|----------| | concept (humor) | 0.244 | 0.063 | | arbitrary group label | 0.261 | 0.046 | | stereotype-based descriptor | 0.311 | 0.060 | | persona signal (symbol) | 0.331 | 0.032 | | persona signal (self-referential) | 0.444 | 0.046 | | persona signal (shared culture) | 0.448 | 0.054 | | concept (values) | 0.475 | 0.026 | | stereotype-based group label | 0.497 | 0.031 | | concept (policy) | 0.519 | 0.036 | | phonetic-based group label | 0.533 | 0.127 | | representative (Bogeyman) | 0.618 | 0.063 | | Glossary Example | GPT-4 Response | Term | Meaning | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|-----------| | The dogwhistle in this message is the term "cosmopolitan elite," which is often used to target a specific group of people, such as urban, educated, and often liberal individuals, without explicitly naming them. It can also be interpreted as an anti-Semitic or anti-immigrant message, as "cosmopolitan" has historically been used to criticize Jewish people and immigrants for their perceived lack of loyalty to their country. | | | | | The cosmopolitan elite look down on the common affections that once bound this nation together: things like place and national feeling and religious faith. The cosmopolitan agenda has driven both Left and Right. It's time we ended the cosmopolitan experiment and recovered the promise of the republic. | ✔ | ✔ | | | The dogwhistle in this message is the reference to "inner cities" and "generations of men not working," which can be interpreted as coded language targeting African Americans or other minority groups living in urban areas, perpetuating stereotypes about laziness and unemployment. | | | | | We have got this tailspin of culture, in our inner cities in particular, of men not working and just generations of men not even thinking about working or learning the value and the culture or work. | ✔ | ✔ | | | #IStandWithJKRowling because she wrote a series of books that made readers out of millions of children and adolescents. And she has every right to express her opinion on cultural matters. And because she's right about them. | The dogwhistle in this message is "she's right about them," which indirectly supports J.K. Rowling's controversial opinions on gender | ✘ | ✔ | | and transgender issues without explicitly mentioning them. The dogwhistle in this message is the use of the colors | , | | | | And together, we will win | which are associated with the women's suffrage movement. This suggests that the message is aimed at garnering support from | ✔ | ✘ | | women or feminists without explicitly stating it. The dogwhistle in this message is "Did you see Kyle? He's about this tall.." which could be a reference to the white supremacist | | | | | Did you see Kyle? He's about this tall.. | symbol of the "K" hand sign, where the fingers form the letters "W" and "P" for "white power." The name "Kyle" is sometimes associated with white supremacist groups. | ✘ | ✘ | Table A.5: Manual proof-of-concept analysis for using GPT-4 to identify dogwhistles in-context. The columns on the right indicate whether GPT-4 correctly identifies the dogwhistle term and its covert meaning, respectively. ![15_image_0.png](15_image_0.png) Figure A.3: Proportion of GPT-3 generations that correctly identify dogwhistles' covert meanings across prompted dogwhistle definitions and secret cues. | Definition Source | Mean | 95% CI | |-------------------------------|--------|----------| | None Provided | 0.191 | 0.025 | | Merriam-Webster | 0.438 | 0.031 | | Albertson (2015) | 0.449 | 0.031 | | Bhat and Klein (2020) | 0.513 | 0.032 | | Henderson and McCready (2018) | 0.515 | 0.032 | | Wikipedia | 0.534 | 0.032 | Table A.6: Proportion of GPT-3 generations that correctly identify dogwhistles' covert meanings for each dogwhistle definition provided in prompting. Variation across dogwhistle definitions Only 19.1% of GPT-3 generations include the correct covert meaning when prompted with no dogwhistle definition. Prompting GPT-3 with any of the five dogwhistle definitions greatly improved performance over no definition provided, but the extent varied, with the Merriam-Webster definition yielding the lowest improvement (43.8%) and Wikipedia yielding the highest (54.3%) (Table A.6). The boost in performance by adding the secret cue depends on the specific definition used; the secret cue has a bigger effect when using the Merriam-Webster and Albertson (2015) definitions (Figure A.3). Where does GPT-3 perform poorly? Most unrecognized dogwhistles are part of the informal register, especially symbols (e.g. the transphobic spiderweb or *cherry* emojis). Other unrecognized dogwhistles include "Operation Google" terms (e.g. Skype, *Yahoo*), more recent terms (e.g. Let's Go Brandon), and several antisemitic and transphobic dogwhistles whose covert meanings are especially context-dependent (e.g. *adult human female*, XX, (Wikipedia) early life, *fellow white people*). Unrecognized formal dogwhistles tend to be extremely subtle and nuanced (e.g. *Dred Scott* as a conservative anti-abortion dogwhistle) or are highlyconventionalized phrases that may be far more commonly used without the covert implicature (e.g. the antisemitic dogwhistle *poisoning the well*). ![16_image_0.png](16_image_0.png) Where does GPT-3 perform well? GPT-3 readily identifies Islamophobic dogwhistles (e.g. radical Islam, *Barack Hussein Obama*), many antisemitic conspiracy theories (e.g. Khazars, Kalergi Plan, *Israel Lobby*), some racist dogwhistles whose covert meanings are more widely discussed among the out-group (e.g. inner-city, ghetto, thug, war on drugs, *black-on-black crime*), some alt-right memes (e.g. *Pepe the Frog*), and conservative political rhetoric (e.g. balance the budget, *judicial* activism, *job creators*). ## A.3 Details For Toxicity Detection Table A.7 shows a sample of the 237 hateful sentence templates used from HateCheck (Röttger et al., 2021). We used the subset of HateCheck templates labeled as hateful and containing a placeholder for an identity term in either adjectival, singular nominal, or plural nominal form. We filled in these placeholders with either a standard group label, dogwhistle, or slur for three personae: antisemitic, racist, and transphobic. Specific terms used can be found in Table A.8. For all personae and toxicity measures provided by Perspective API, dogwhistles are rated as less toxic than slurs and standard group labels (Figure A.4. Interestingly, except for the N-word, Perspective seems to not recognize the extreme offensiveness of most slurs, and thus toxicity scores for hateful sentences containing slurs are similar to or lower than scores for the same hateful sentences containing standard group labels. ## Template So sick of being around [IDENTITY_P] like you. [IDENTITY_P] are disgusting. You [IDENTITY_P] are all dishonest. [IDENTITY_P] are human garbage. Your [IDENTITY_A] lives are worthless. Never trust a single word said by a [IDENTITY_S]. I will cut their [IDENTITY_A] throats. I would love to see those [IDENTITY_P] suffer. All [IDENTITY_P] should fuck off forever. Of course that dumbass was a [IDENTITY_S]. [IDENTITY_P] think they're special. I hate them. Isn't it typical of a [IDENTITY_S] to be that stupid? Table A.7: Sample of template hateful sentences from HateCheck (n=237). We create inputs for Perspective API by replacing identity term placeholders with a standard group label, dogwhistle, or slur in either adjectival (IDENTITY_A), singular nominal (IDENTITY_S), or plural nominal form (IDENTITY_P). | Racist | Antisemitic | Transphobic | |------------------|----------------------|----------------| | African-American | Jewish | transgender | | Black | Jew | trans | | inner-city | cosmopolitan | AGP | | welfare queen | cosmopolitan elite | autogynephile | | thug | coastal elite | AAP | | dindu | globalist | autoandrophile | | gangbanger | international banker | clownfish | | ghetto | cultural Marxist | troon | | urban | Hollywood elite | TIF | | superpredator | Khazar | TIM | | n****r | k**e | t****y | | c**n | h**b | s*****e | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✓ A2. Did you discuss any potential risks of your work? Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Sections 0 and 1 ✓ A4. Have you used AI writing assistants when working on this paper? Yes, we used ChatGPT to rephrase a single sentence from the abstract to repeat the same point at the end of the introduction. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2 ✓ B1. Did you cite the creators of artifacts you used? Congressional records data (Section 3), Glossary sources (Section 2), GPT-3 (Section 4), and Hate speech templates + models (Section 5) ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Our glossary and taxonomy will be open and available to the public (Section 2) ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Yes, in Sections 2, 5, and 8 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Data released is anonymized with no identifying information ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 2 and Section 8 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 2 ## C ✓ **Did You Run Computational Experiments?** Section 4 C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. We did not train our own model, and just used GPT-3 with the OpenAI API The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Yes, section 4 and Appendix A.1 and A.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Yes, Sections 3-5, with additional statistics in A.1-A.3 in the appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Yes, Section 4 and Appendix A.1-A.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
riemenschneider-frank-2023-exploring
Exploring Large Language Models for Classical Philology
https://aclanthology.org/2023.acl-long.846
Recent advances in NLP have led to the creation of powerful language models for many languages including Ancient Greek and Latin. While prior work on Classical languages unanimously uses BERT, in this work we create four language models for Ancient Greek that vary along two dimensions to study their versatility for tasks of interest for Classical languages: we explore (i) encoder-only and encoder-decoder architectures using RoBERTa and T5 as strong model types, and create for each of them (ii) a monolingual Ancient Greek and a multilingual instance that includes Latin and English. We evaluate all models on morphological and syntactic tasks, including lemmatization, which demonstrates the added value of T5{'}s decoding abilities. We further define two probing tasks to investigate the knowledge acquired by models pre-trained on Classical texts. Our experiments provide the first benchmarking analysis of existing models of Ancient Greek. Results show that our models provide significant improvements over the SoTA. The systematic analysis of model types can inform future research in designing language models for Classical languages, including the development of novel generative tasks. We make all our models available as community resources, along with a large curated pre-training corpus for Ancient Greek, to support the creation of a larger, comparable model zoo for Classical Philology.
# Exploring Large Language Models For Classical Philology Frederick Riemenschneider Dept. of Computational Linguistics Heidelberg University 69120 Heidelberg riemenschneider@cl.uni-heidelberg.de Anette Frank Dept. of Computational Linguistics Heidelberg University 69120 Heidelberg frank@cl.uni-heidelberg.de ## Abstract Recent advances in NLP have led to the creation of powerful language models for many languages including Ancient Greek and Latin. While prior work on Classical languages unanimously uses BERT, in this work we create four language models for Ancient Greek that vary along two dimensions to study their versatility for tasks of interest for Classical languages: we explore (i) encoder-only and encoder-decoder architectures using ROBERTA and T5 as strong model types, and create for each of them (ii) a monolingual Ancient Greek and a multilingual instance that includes Latin and English. We evaluate all models on morphological and syntactic tasks, including lemmatization, which demonstrates the added value of T5's decoding abilities. We further define two probing tasks to investigate the knowledge acquired by models pre-trained on Classical texts. Our experiments provide the first benchmarking analysis of existing models of Ancient Greek. Results show that our models provide significant improvements over the SoTA. The systematic analysis of model types can inform future research in designing language models for Classical languages, including the development of novel generative tasks. We make all our models available as community resources, along with a large curated pre-training corpus for Ancient Greek, to support the creation of a larger, comparable model zoo for Classical Philology. Our models and resources are available at https://github.com/Heidelberg-NLP/ ancient-language-models. ## 1 Introduction Since the beginning of the creation of the Index Thomisticus in 1946 (Busa, 1980) and the publication of the Concordance to Livy (Packard, 1968), Classical Philology has been revitalized by the "digital revolution" (Berti, 2019). Today, numerous efforts have been undertaken to make Classical texts digitally available, annotate, and automatically process them. E.g., the Classical Language Toolkit (CLTK, Johnson et al., 2021) offers various tools to process pre-modern languages, in particular Latin and pre-modern Greek.1 Recently, we see a surge of the first pre-trained contextualized language models (PLMs) for Classical languages: Latin BERT has been proposed by Bamman and Burns (2020), Ancient Greek (AG) BERT by Singh et al. (2021). Lately, a second AG BERT has been proposed by Yamshchikov et al. (2022). However, both AG BERT models have been pre-trained on a comparatively small pretraining dataset. Moreover, they have been initialized from Modern Greek BERT (Koutsikakis et al., 2020), which limits them to the modern Greek alphabet, ignoring the diacritics of Ancient Greek. Although numerous richly annotated treebanks are available for Latin and AG, systems have, by now, not been evaluated on a shared benchmark. Given that two popular treebanks for AG have been integrated into Universal Dependencies (de Marneffe et al., 2021), it is surprising that researchers working on AG do not compare to benchmarking results of, e.g., Straka (2018). Hence, a thorough assessment of the performance of the existing models is necessary in order to compare and evaluate their effectiveness for this underexplored language. While BERT models are known to achieve high performance on a wide range of tasks, encoderdecoder models or multilingual models may often be a better choice, depending on the task. In this work, we explore a variety of language models for Classics in general and Ancient Greek in particular: We introduce GRεTA, GRεBERTA, PHILBERTA, and PHILTA, four PLMs for Classics. GRεBERTA and GRεTA are ROBERTA (Liu et al., 2019) and T5 (Raffel et al., 2020) models trained on Ancient Greek texts, respectively. PHILBERTA and 15181 PHILTA are their trilingual counterparts pre-trained on Greek as well as Latin and English data. We explore the advantages of (i) the two model architectures in (ii) mono- and multilingual pre– training for the mid-resource language Ancient Greek on a variety of morphological, syntactic, and semantic tasks, helping to answer questions, such as: When to choose one architecture over the other? or: *How does multilinguality affect a* language model? Moreover, we publish the first wide-ranging benchmark results to compare our models for AG and Latin to the relevant prior work, establishing new SoTA results for both languages. In summary, we aim to unify and push forward the current research landscape at the intersection of Classics and NLP with the following contributions: (i) We introduce four pre-trained language models for Classics: GRε(BERT|T)A and PHIL(BERT|T)A. To our knowledge, we are the first to develop encoder-decoder models for Classics, and multilingual models tailored to both Latin and Greek. (ii) We evaluate the already existing and our proposed models on several tasks, making many of them comparable for the first time. Furthermore, we outperform the existing Ancient Greek BERT models by a notable margin. (iii) Our evaluation sheds light on the differences between encoders like ROBERTA and encoders of encoder-decoder models like T5 as well as on the influence of multilinguality on the mid-resource language Ancient Greek. By offering novel model types for AG, we aim to inspire new research and application tasks. (iv) We develop and publish a large-scale, highquality pre-training corpus for AG as a contribution to the community. ## 2 Related Work Pre-training Data for Ancient Greek. Pretrained language models require large amounts of unlabeled pre-training data. Ancient Greek and Latin being historical languages, the number of available texts is inherently limited, which makes the creation of a high-quality pre-training corpus even more important. To circumvent this problem, Singh et al. (2021) and Yamshchikov et al. (2022) pre-trained their AG BERT model from a Modern Greek BERT (Koutsikakis et al., 2020). But this approach has two weaknesses: First, there is an important cultural gap between modern and ancient texts that we do not want to introduce into our models. A Modern Greek BERT is familiar with contemporary concepts like cell phones or communism, which are unknown to antiquity, while we intend to use PLMs as a "window" to ancient cultures. Also the style of modern internet documents is fundamentally different from the transmitted ancient texts. Second, and more importantly, continuing pre-training of the Modern Greek BERT prevents us from adapting its tokenizer. AG, however, uses more diacritics, which host important information. By contrast, in our work, we build a tokenizer from scratch that is optimized for Ancient Greek. In order to boost the data needed to train "pure" models of Ancient Greek, we put special effort into the curation of a large, but high-quality pre-training corpus for AG, leveraging previously unused textual sources. Finally, we evaluate the effect of using additional multilingual pre-training data. Evaluating Models for Ancient Languages. Morphological and syntactic tasks, such as PoS tagging, dependency parsing, and lemmatization, have always been of interest to researchers of Latin and Ancient Greek. The standard tool for AG morphological analysis is Morpheus (Crane, 1991), a rulebased system, that has also been integrated into many more recent approaches. PoS Tagging has also been performed by various language-agnostic systems trained on AG data (Celano et al., 2016), but their success depends heavily on the chosen dataset: a winning system on one dataset (Celano et al., 2016) achieves the worst results on another (Keersmaekers, 2019). More recently, the CLTK (Johnson et al., 2021) provides a variety of taggers for many tasks. Surprisingly, although numerous richly annotated treebanks are available, systems have, by now, not been evaluated on a common benchmark.2 E.g., Singh et al. (2021) test their proposed AG BERT on random splits from three popular treebanks, which we cannot compare against. The second AG BERT (Yamshchikov et al., 2022) has only been evaluated on authorship attribution. As for lemmatization, Vatri and McGillivray (2020) provide an evaluation of three different lemmatizers. However, one of the evaluated candidates was partly trained on test data, which may have influenced its performance. It is noteworthy that, 2Cf. also Johnson et al. (2021): "The CLTK lacks formal evaluations of its models' accuracies. [...] Unfortunately, [outside] benchmarks do not yet exist for pre-modern languages." ![2_image_0.png](2_image_0.png) despite the integration of two popular treebanks for AG into Universal Dependencies (UD, de Marneffe et al., 2021), many groups working on AG systems have not compared their models against the results of models benchmarked on UD data, such as Straka (2018). We remedy these issues by evaluating our systems and existing AG BERT models on the two authoritative treebanks covered by UD. The tasks we consider - dependency parsing, lemmatization, coarse, universal (UPoS) PoS tagging and fine-grained, language-specific (XPoS) tagging – are visualized in Figure 1. For Latin, the issue does not arise thanks to the EvaLatin 2022 campaign (Sprugnoli et al., 2022), which has enabled direct comparison of models and has engendered strong models for Latin. Yet, despite the impressive results achieved in EvaLatin, our trilingual models outperform the existing systems on PoS tagging and lemmatization. Language Model Architectures. Language models can be categorized into three classes: encoder-only, decoder-only, and encoder-decoder models. Encoder-only models such as BERT (Devlin et al., 2019) and ROBERTA (Liu et al., 2019) are best suited for tasks that aim to analyze complete sequences by sequence or token classification. Encoder-decoder models, on the other hand, are typically employed for conditional generation tasks, such as machine translation. Currently, all three models for ancient languages are BERT and thus encoder-only architectures. We argue that an encoder-decoder model, such as T5 (Raffel et al., 2020), is a useful addition to this encoder-only landscape. First, it enlarges the space of possible NLP tasks for AG, enabling us, e.g., to cast lemmatization as a sequence-tosequence task and to explore machine translation for ancient languages. Second, it allows us to compare the encoder of an encoder-only model with that of an encoder-decoder architecture, as they are both trained on the same data with a similar pretraining objective. Finally, commonly used multilingual encoder-decoder models like MT5 (Xue et al., 2021) and BYT5 (Xue et al., 2022) are not pre-trained on Ancient Greek texts. As we aim for optimally trained encoder-only models, we chose ROBERTA over BERT: its dynamic masking strategy exploits the pre-training data better, and it has been shown that BERT's NSP objective can be detrimental (Liu et al., 2019). ## 3 Pre-Trained Language Models For Ancient Greek And Latin 3.1 GRε(BERT|T)A and PHIL**(BERT**|T)A GRεBERTA and PHILBERTA are ROBERTAbase-, GRεTA and PHILTA are T5base-sized models. Both models are pre-trained using a masked language modeling (MLM) objective. Specifically, in the case of ROBERTA, wordpieces are masked during the pre-training process, while for T5, spans are masked. Although it has been shown that multilingual pre-training can lead to gains for low-resource languages through cross-lingual transfer, it remains an open question when exactly it is preferable to use a multilingual instead of a monolingual model (Doddapaneni et al., 2021). To explore the implications of multilinguality for AG language models, we test different capabilities and possible interferences by comparing the different model types. ## 3.2 Plm Fine-Tuning For Downstream Tasks3 PoS Tagging. PoS tagging for Ancient Greek typically aims for a complete morphological analysis: 3The following descriptions remain neutral to different PLM types by referring to basic transformer components. Where necessary, we will distinguish specific PLM types. Next to the word class, the model has to predict eight fine-grained morphological attributes.4 We frame this sequence labeling task as a multi-task classification problem applied to each token, with nine different classification heads per token on top of one shared encoder: We denote a sequence of tokens S of length n as S = w1, w2*, . . . , w*n and refer to the contextualized embedding of each token as ei = Encoder(w1:n, i). As Byte Pair Encoding (Sennrich et al., 2016) splits words into subword units, we represent each token using its first subword embedding in the encoder. Each of the nine attributes is predicted using a feed-forward network applied to the last encoding layer, followed by a softmax function. The total loss is calculated as: $${\mathcal{L}}_{\mathrm{total}}=\sum_{m=0}^{8}{\frac{1}{9}}{\mathcal{L}}_{m}$$ We use this approach for the Perseus XPoS dataset. For the other, less-detailed tagsets, we employ a single classification head. Dependency Parsing. We follow Zhang et al. (2017) who cast dependency parsing as head selection. The model predicts a unique head for each token considered as a dependent. Since the model makes independent predictions, the obtained dependency graph can (in a few cases) be unconnected and is then completed by the Chu-LiuEdmonds algorithm (Chu and Liu, 1965; Edmonds, 1967) for building non-projective trees - given that AG allows free word order. While Zhang et al.'s (2017) DENSE parser was based on a bi-directional LSTM, we define the model on top of the final hidden states of the transformer encoders. Following Zhang et al. (2017), we add an artificial ROOT token w0 and calculate the probability of the word wj ∈ {w0, w1*, . . . , w*N } being the head of the word wi ∈ {w1, w2*, . . . , w*n} in S as: ${\mathcal{P}_{\text{head}}(w_j|w_i,S)=\frac{\exp(f(\mathbf{e}_j,\mathbf{e}_i))}{\sum_{k=0}^N\exp(f(\mathbf{e}_k,\mathbf{e}_i))}}$ here ${f}$ predicts the score of an edge ${(w_i,w_i)}$ as where f predicts the score of an edge (wj , wi) as follows: $$f(\mathbf{e}_{j},\mathbf{e}_{i})=\mathbf{v}^{\mathsf{T}}\cdot\operatorname{tanh}(\mathbf{U}\cdot\mathbf{e}_{j}+\mathbf{W}\cdot\mathbf{e}_{i})$$ Here, v is a weight vector and U, W weight matrices. Dependency labels are predicted in a similar 4Person, number, tense, mood, voice, gender, case, and degree. Usually, many of these attributes are left empty. E.g., only adjectives have the attribute "degree". fashion: Let g be a single hidden-layer rectifier network that takes as input the concatenation [ei; ej ]. The probability for the label l is then computed as: $$p_{\text{label}}(l|w_j,w_i,S)=\frac{\exp(g(\mathbf{e}_j,l,\mathbf{e}_i))}{\sum_{l'\in L}\exp(g(\mathbf{e}_j,l',\mathbf{e}_i))}$$ While Zhang et al. (2017) use the representations. of their trained DENSE model as input for the label classifier, we resort to the pre-trained embeddings. Lemmatization. Current systems for lemmatization of AG, such as UDPIPE (Straka, 2018) or GLEM (Bary et al., 2017), are rule-based or use a classifier to predict editing rules that modify a token's pre- and suffixes. However, these complex scripts are not well-suited for a language like AG, which has many irregular forms that involve modifications of the word stem. An alternative approach is to utilize an encoder-decoder model that receives the inflected form, the PoS tag, and (optionally) additional information such as morphological features, as demonstrated for different languages by Schmid (2019) or Wróbel and Nowak (2022). Yet, these earlier encoder-decoder-based lemmatization models are purely word-based and rely on pre-computed PoS tags or morphological features in a pipeline setting. By contrast, we propose a novel T5-based lemmatization model that is (i) contextualized, so that relevant morphological indicators can be inferred by the model on the fly from the token's surrounding context. (ii) The model works *end-to-end*: it receives the surface form of the word to be lemmatized in its full sentence context and predicts its lemma without receiving or predicting PoS tags or morphological features.5 We mark the t(arget) token to be lemmatized in its context using delimiter tokens <t_tok_beg> and <t_tok_end>. For instance, for the input sentence ξύνοιδα <t_tok_beg> ἐμαυτῷ <t_tok_- end> οὐδὲν ἐπισταμένῳ with the marked inflected t(arget) token ἐμαυτῷ, we expect as output the lemma ἐμαυτοῦ. We also experiment with providing, in addition, the target word as a sequence of individual characters, delimited by an additional separator token <t_tok_sep>: ξύνοιδα <t_tok_- beg> ἐμαυτῷ <t_tok_sep> ἐ μ α υ τ ῷ <t_- tok_end> οὐδὲν ἐπισταμένῳ. ## Semantic And World Knowledge Probing Tasks. So far, we considered only morphological and syn-5However, multi-task learning for joint morphological analysis and lemmatization is an interesting option that we did not pursue here. tactic tasks. However, to evaluate the models more comprehensively, it is necessary to also test their semantic and world knowledge. Since such benchmarks do not exist for AG or Latin, we create two small datasets to evaluate these aspects. Inspired by Talmor et al. (2020), we test whether the language models can **distinguish synonyms** from antonyms. For this task, we input a sentence, e.g., τὸ χρήσιμον καὶ τὸ ἀγαθόν: <mask> ὁμοῖά ἐστιν ("the useful and the good: they are <mask> similar"), and the model has to predict either οὐχ ("not") or πάντως ("very"). Talmor et al. (2020) cast a similar task for English as a zero-shot MLM prediction problem using BERT and ROBERTA. However, with our prompt, the models always predict οὐχ ("not"), regardless of the provided word pairs. Experiments with variations of the prompt have led to similar difficulties. Hence, we evaluate this task in a few-shot setting, fine-tuning the MLM-head on 10 to 50 shots of synonyms and antonyms each, to prepare them for the task. Similarly, we construct a dataset of **family relationships** between (mythical) heroes and gods. Here, the model is given a phrase, such as Τηλέμαχος ὁ τοῦ <mask> παῖς ("Telemachus, son of <mask>"), and has to predict the correct entity (in this case, Odysseus). For this task, we test the models in a zero-shot setting. However, this task cannot be solved by most encoder-only models, as the masked names typically consist of more than a single wordpiece. Thus, for this task, we evaluate only GRεTA and PHILTA, which can predict full entity names. By comparing the mono- and multilingual variants, we assess the models' acquired world knowledge as well as potential effects that may be induced by multilingual training: Given that Greek and Roman mythology share many of these gods, yet by different names, the multilingual model may be able to acquire additional knowledge from the Latin pre-training data, to solve the task formulated in Ancient Greek. We describe both datasets in Appendix B.2. ## 3.3 Acquisition Of Pre-Training Data Ancient Greek. To cover a wide range of dialects, topics, and time periods of Ancient Greek, we make use of four different data sources: (the Greek part of) the Open Greek & Latin Project,6 the CLARIN corpus Greek Medieval Texts,7the Patrologia Graeca,8and the Internet Archive (IA).9 While the first three sources contain born-digital textual data, the IA online library provides books in the public domain along with their OCR transcriptions. However, we found the partition of texts labeled as Ancient Greek in the IA to be incomplete and noisy: only a small fraction of the books containing AG text was labeled as such, and only few of them were transcribed with OCR settings supporting Greek characters. We hence extracted a novel data partition from the IA that was then fully reOCRed by the Internet Archive to ensure correct transcription. To select a large number of high-quality texts, we applied a complex retrieve and filter procedure, focusing not only on (i) text quality, but also on (ii) collecting purely Ancient Greek texts, avoiding inclusion of texts in different languages, such as Latin, English, or German that can co-occur in the same book, and on (iii) filtering duplicates. Latin and English. Acquiring pre-training data for Latin was facilitated by the Corpus Corporum project,10 a meta-repository of Latin corpora that offers a comprehensive representation of the Latin language. All this data was kindly offered to us. For English, we collect pre-training data from various sources, aiming for texts that are related to antiquity, by being focused on the same topics that we find in ancient texts - as opposed to modern themes. To this end, we utilize English translations of Latin and Ancient Greek texts as pre-training data. Furthermore, we ensure that the amount of English data is of similar size as the ancient texts, to prevent the models from being overwhelmed by a large number of English texts. Statistics of pre-training data in Table 1. More details on corpus creation and statistics in Appendix C. ## 3.4 Pre-Training Process Even though our filtering of the IA corpus resulted in high-quality texts, the corpus is necessarily noisier than the born-digital texts. We therefore start pre-training on the IA data, and continue with the born-digital texts. Our tokenizers and the multilingual variants are trained on the born-digital texts only. For further pre-training details, see Appendix A. | Language | Dataset | Number of Tokens | |----------------------|-----------------|--------------------| | Open Greek & Latin | 30.0 million | | | Greek Medieval Texts | 3.3 million | | | Patrologia Graeca | 28.5 million | | | Internet Archive | 123.3 million | | | Overall | 185.1 million | | | Latin | Corpus Corporum | 167.5 million | | English | Overall | 212.8 million | | Ancient Greek | | | ## 4 Experiments We run the experiments outlined in Section 3.2 to provide insight into the performances achieved by different model types and in relation to prior SoTA. ## 4.1 Datasets Ancient Greek. For the PoS tagging, dependency parsing, and lemmatization tasks, we evaluate the PLMs for AG on the data provided by the Perseus and the PROIEL datasets, which are both integrated into Universal Dependencies 2.10 (de Marneffe et al., 2021). To probe our models for semantic and world knowledge (see Section 3.2), we use our newly constructed datasets, described in Appendix B.2. Latin. For Latin, we resort to the treebank used in EvaLatin 2022 (Sprugnoli et al., 2022), which covers three tasks: PoS tagging, lemmatization, and feature identification. Since no data for dependency parsing is provided, we restrict our evaluation to PoS tagging and lemmatization. In EvaLatin, instead of constructing test data by drawing samples from the initial data set, the test data exhibits different degrees of distribution differences in relation to the training data. For each task, three test sets are provided: The *Classical* set belongs to the same genre and time period as the training data, but comes from an author not included in the training data. The *Cross-genre* data includes two works that belong to different genres, yet being written during roughly the same time period. The *Crosstime* test set is based on text written in the 15th century, which is significantly later than the texts of the training data. In Table 2, we summarize the diverse tasks under consideration with their corresponding metrics, the used evaluation datasets, the model architectures, and the pre-trained language models that are applicable to the respective task. Further details, including dataset statistics, are provided in Appendix B.1. ## 4.2 Models And Baselines Ancient Greek. To showcase the capabilities of a recent system tailored to AG, we report the results of the taggers provided by the Classical Language Toolkit (Johnson et al., 2021).11 As a baseline, we use the currently best-performing system, UDPIPE (Straka et al., 2019), a transformer-based multitask architecture that utilizes multilingual BERT, trainable word embeddings, and character embeddings.12 In addition, to directly assess the benefits of using our monolingual model, we replace this multilingual BERT with our GRεBERTA model. For PoS tagging and dependency parsing, we further compare to both prior encoder models trained on AG texts. We use the PoS tagger and DENSE (Section 3.2) to evaluate both AG BERT models as well as our GRεBERTA and PHILBERTA models. We apply the same approach to GRETA's encoder (GRETA-ENC) to investigate its behavior. For lemmatization, we compare the performance of CLTK and UDPIPE with that of our full-fledged T5 models. To predict a lemma during inference, we use beam search with a width of 20. Latin. For Latin, we report the results of both teams that participated in the EvaLatin 2022 competition: Team KRAKÓW (Wróbel and Nowak, 2022) utilizes the XLM-ROBERTAlarge (Conneau et al., 2020) model for PoS tagging, team KULEUVEN (Mercelis and Keersmaekers, 2022) employs an ELECTRA model. For lemmatization, Wróbel and Nowak (2022) use BYT5small (Xue et al., 2022), a multilingual encoder-decoder model similar to MT5 (Xue et al., 2021) that operates on UTF-8 bytes instead of subwords. Mercelis and Keersmaekers (2022) implement a cascaded approach that resembles the Greek lemmatizer GLEM (Bary et al., 2017): If a rule-based lookup PoS Tagging Dependency Parsing **Lemmatization** ![6_image_0.png](6_image_0.png) UPoS XPoS Unlabeled Labeled Task Description PoS tagging with universally applicable, coarse ![6_image_1.png](6_image_1.png) Metric Accuracy Accuracy UAS LAS Accuracy Datasets Perseus ✓ ![6_image_3.png](6_image_3.png) Model Architecture Encoder + Classification Encoder + DENSE Encoder + DENSE Encoder-decoder PLM Instances (GRε|PHIL)BERTA, ![6_image_2.png](6_image_2.png) returns multiple lemmata, the system tries to disambiguate between these possibilities by means of the predicted PoS tag. To further clarify any remaining ambiguities, a classifier is trained to select the correct lemma from the available options. ## 5 Results Ancient Greek. We present the results for PoS tagging and **dependency parsing** for Ancient Greek on the Perseus dataset in Table 3. The PROIEL dataset seems to be easier to solve, as all models achieve performances that are much closer to each other (see Appendix D). Since the overall trends are consistent across both datasets, we focus our discussion on the results on the Perseus dataset. As seen in Table 3, the CLTK performs clearly below all other models on both tasks. While the CLTK is not directly comparable to the other models (see fn. 11), the evaluation still provides a perspective on the capabilities of the *de facto* only available framework for processing AG text. UDPIPE provides a strong baseline, which AG BERT (Yamshchikov et al., 2022) is unable to consistently outperform. By contrast, all other PLMs show clear gains over UDPIPE. The monolingual, encoder-only GRεBERTA model consistently performs best on all tasks. Interestingly, the performance of GRETA-ENC on PoS tagging is slightly worse than that of PHILBERTA, while it achieves better results for dependency parsing. This trend has also been observed in initial experiments. We elaborate on the behavior of GRεTAENC and PHILBERTA in Section 6. Results for **Lemmatization** are shown in Table 4. Here, augmenting UDPIPE with GRεBERTA's pretrained embeddings does not lead to better scores. We attribute this to the tokenization process and refer to our discussion in Section 6. GRεTA, on the ![6_image_4.png](6_image_4.png) other hand, demonstrates strong encoder-decoder capabilities and significantly outperforms UDPIPE. Providing GRεTA with the individidual characters of the target word leads to a small gain. The results of the **Synonym/antonym disambiguation task** are visualized in Figure 2. Again, GRεBERTA and PHILBERTA demonstrate higher scores compared to the AG BERT models. We observe the same for GRεTA and PHILTA (cf. Figure 4 in Appendix D). Our monolingual models and their multilingual counterparts perform almost equally, especially taking into account the overlapping standard deviation bands. We see a minimal trend for PHILTA to gain over GRεTA in Figure 4, but our small datasets do not allow drawing firm conclusions on their relative performance. Finally, we report zero-shot results for the **Family relationship task** in Table 5. As the T5-based models have been pre-trained to predict multiple masked spans at once, they tend to predict, for each sample, more than a single entity. We interpret the output as a ranked list and report recall@k, evalu- | Model | Accuracy | |-------------------|--------------| | CLTK | 76.10 | | UDPIPE (official) | 86.70 | | UDPIPE (ours) | 84.50 (0.09) | | UDPIPE + GRεBERTA | 85.56 (0.06) | | PHILTA | 90.02 (0.02) | | PHILTA + Chars | 90.66 (0.01) | | GRεTA | 90.80 (0.10) | | GRεTA + Chars | 91.14 (0.10) | | Model | PoS Tagging | Dependency Parsing | | | |------------------------------------|---------------|----------------------|--------------|--------------| | UPoS | XPoS | UAS | LAS | | | CLTK | 68.83 | 47.21 | 59.21 | 43.24 | | UDPIPE (official) | 92.88 | 85.60 | 80.32 | 74.53 | | UDPIPE (ours) | 92.36 (0.09) | 84.72 (0.06) | 78.74 (0.04) | 73.14 (0.06) | | UDPIPE + GRεBERTA | 95.74 (0.06) | 90.95 (0.07) | 86.30 (0.14) | 82.15 (0.14) | | AG BERT (Singh et al., 2021) | 94.92 (0.18) | 88.27 (0.27) | 84.03 (0.12) | 78.80 (0.37) | | AG BERT (Yamshchikov et al., 2022) | 92.50 (0.03) | 84.56 (0.13) | 80.34 (0.11) | 74.22 (0.21) | | GRεTA-ENC | 94.44 (0.14) | 89.03 (0.13) | 87.32 (0.04) | 83.06 (0.07) | | PHILBERTA | 95.60 (0.21) | 90.41 (0.18) | 86.99 (0.06) | 82.69 (0.06) | | GRεBERTA | 95.83 (0.10) | 91.09 (0.02) | 88.20 (0.11) | 83.98 (0.21) | Model k = 1 k = 5 k = 10 k > 10 ![7_image_0.png](7_image_0.png) GRεTA 4.39 9.65 10.53 10.96 PHILTA 3.07 8.33 11.40 11.84 Table 5: Zero-shot family relationships task (recall@k). ating whether the correct entity is contained in the first 1, 5, 10, and >10 predictions, restricting the maximum sequence length to 50 wordpieces. Latin. The PoS tagging and lemmatization scores on EvaLatin 2022 are reported in Table 6. While the performance scores of all models are rather close to each other, our trilingual models consistently outperform the EvaLatin 2022 participant systems across all three subtasks. PHILBERTA reaches even higher scores than KRAKÓW-OPEN on PoS tagging, which leverages additional annotated data. On lemmatization, PHILTA similarly outperforms KRAKÓW-CLOSED on the Classical, Cross-genre, and Cross-time subtasks by 2.25, 1.78, and 0.23 percentage points, respectively, but does not outperform KRAKÓW-OPEN on the Crossgenre and the Cross-time subtask. | Model | Classical | Cross-genre | Cross-time | |-----------------------|---------------------|---------------|--------------| | KRAKÓW-OPEN | 97.99 | 96.06 | 92.97 | | UPoS | KRAKÓW-CLOSED 97.61 | 94.62 | 92.70 | | KU-LEUVEN | 96.33 | 92.31 | 92.11 | | PHILBERTA | 98.23 (0.06) | 96.59 (0.15) | 93.25 (0.12) | | Lemmatiz. KRAKÓW-OPEN | 97.26 | 96.45 | 92.15 | | KRAKÓW-CLOSED 95.08 | 91.62 | 91.68 | | | KU-LEUVEN | 85.44 | 86.48 | 84.60 | | PHILTA + Chars | 97.33 (0.04) | 93.40 (0.13) | 91.91 (0.04) | ## 6 Analyses And Discussion Training Behavior of GRεTA-ENC. While ![7_image_1.png](7_image_1.png) GRETA-ENC and GRεBERTA are of similar size (Table 7) and pre-trained with comparable objectives, GRεTA-ENC performs slightly worse than GRεBERTA. One reason may be that in a T5 model, some important information is distributed across encoder and decoder. This raises the question of whether encoders in encoder-decoder models are trained suboptimally, and whether improvements may be obtained by combining separately pre-trained encoders and decoders, or by pretraining the encoder before adding the decoder. Another reason may be that the encoder is not accustomed to using its classification head. Here again, it may be advantageous to pre-train the encoder before extending it to encoder-decoder pre-training. In Figure 3 we compare the PoS tagging validation accuracy of GRεTA-ENC to that of a randomly initialized T5 encoder (same size). GRεTA-ENC performs much worse than the randomly initialized model after one epoch, reaching only approximately 6%. However, while the randomly initialized model stagnates, GRεTA-ENC outperforms the randomly initialized model after two epochs, significantly improving its performance thereafter. ![8_image_0.png](8_image_0.png) By contrast, GRεBERTA reaches a high validation accuracy already after one epoch. We see the same trend with different random seeds and for dependency parsing, but it is most apparent in Perseus XPoS tagging. Lemmatization as a Character-based Task. As seen in Table 4, augmenting UDPIPE with GRεBERTA does not lead to significant improvement for lemmatization. This we attribute to the tokenization process. GRεBERTA uses wordpieces, which contain little information about individual characters. We hypothesize that UDPIPE ignores the GRεBERTA embeddings for lemmatization and instead relies on its own additional character embeddings. Accordingly, explicitly providing GRεTA with the individual characters of the inflected word form leads to a slight increase in performance. This explanation can also shed light on the success of the *UTF-8 bytes-based* BYT5 model for lemmatization in Latin. This model was chosen by Wróbel and Nowak (2022), after initial experiments with the *wordpiece-based* MT5 (Xue et al., 2021) had underperformed. Future work on (AG) lemmatization could therefore investigate whether Byte Pair Encoding-based models can be augmented with character embeddings as additional input. Effect of Multilinguality. Table 3 shows that PHILBERTA consistently performs slightly worse compared to monolingual GRεBERTA on morphological and syntactic tasks. We attribute this to the curse of multilinguality (Conneau et al., 2020): the capacity of the trilingual models is split between three languages. Still, both models achieve strong results on AG and Latin tasks and can be especially useful in tasks that require multilingual knowledge, as in MT or glossing tasks. Our small-sized knowledge probing tasks show very similar performance for both model types. While the size of our data does not allow for firm conclusions, this is in line with Kassner et al. (2021), who find no improved knowledge representation in multilingual PLMs. ## 7 Conclusion We introduce four strong language models for Classical Philology, including the first encoder-decoder PLMs for Ancient Greek and Latin. We rigorously benchmark our models and prior work on various tasks, demonstrating strong improvement over the SoTA. We showcase the versatility of encoderdecoder models, (i) by offering a novel end-to-end contextualized lemmatization model for AG and Latin, with a greatly simplified architecture that clearly outperforms prior work; (ii) while MLM in encoder-only models is restricted to single-token predictions, our T5-based models exhibit great flexibility for formulating probing tasks, which help exploring what models learn from pre-training data. Considering the two investigated model dimensions, our work (iii) sheds light on differences between the encoders of T5 vs. ROBERTA, where the former tends to exhibit slower learning curves; (iv) our monolingual models outperform the multilingual ones in monolingual morphological and syntactic tasks, without clear trends on small-scale semantic and knowledge probing tasks. ## Limitations While we aim for a comprehensive analysis of existing methods (such as lemmatization) and model types for Ancient Greek and other Classical languages, there are limits to exhaustively exploring the full space of variations and rigorously evaluating their impact on model performance. For example, we could not comprehensively evaluate the effects of (i) the pre-training corpora, as we did not re-train a BERT model for Ancient Greek, to pin down the exact difference between prior BERT models (which were trained on smaller data before) and our own models, which are based on inherently stronger model types; similarly, we did not induce Latin ROBERTA and T5 models, to confirm the differences between mono- and multilingual models for language-specific Latin tasks. (ii) In a similar vein, we did not compare different model sizes. However, we studied prior work and scaling laws and believe that the base model is appropriate for the size of our training data. Further factors of this type concern (iii) hyperparameter settings and (iv) other factors in isolation. Not only do we miss sufficient computational resources to perform such manifold ablations and comparative assessments, we also considered the carbon footprint that such experiments cause and which does not stand up to the insights that could possibly be gained from more experiments. For these reasons, we focused on two selected dimensions of variants that we believe to be valuable for a community interested in Classical languages: (i) We tried to answer questions as to when multilingual models can be profitably used, and (ii) aimed to showcase various potential advantages of encoder-decoder models, which by now have not been considered in studies on Classical languages. Another clear limitation lies in the size of the demonstrated semantic and knowledge probing tasks. (i) They are of small size, and we cannot, therefore, draw firm conclusions as to, e.g., the effect of multilinguality. Also, the synonym/antonym disambiguation task is presumably the most difficult one. As a counter-balance, we used a more tangible task for knowledge probing, by choosing family relationships, which we expect to be frequently found in the pre-training corpora. (ii) A further limitation we find for the knowledge probing tasks resides in the size of our trained models and the underlying pretraining data. This limitation could be one that is not easy to overcome. But we still encourage the community to create similar probing task datasets. Future work may find appropriate ways of data augmentation, or transfer learning methods that are applicable to historical languages so that further progress and insight will be possible. ## Ethics Statement It is a computationally demanding task to pre-train large language models. However, transfer learning opens the possibility to fine-tune our pre-trained models, which showed strong performances, in a reasonable amount of time. The texts utilized for pre-training the models may well exhibit biases related to ancient perspectives of the world. We do not view this as an issue, as the proposed language models for historical languages are intended for academic use and do not have practical, everyday applications. ## Acknowledgments We are deeply indebted to the Internet Archive team for their continuous support by creating new OCR transcriptions of the misclassified Greek books, and to our anonymous reviewers for their comments, which have helped to significantly improve the paper. We thank Nina Stahl and Thomas KuhnTreichel for their help in creating our semantic and knowledge probing tasks, as well as Jan Ctibor and Philipp Roelli for providing us with the invaluable Corpus Corporum data. Finally, we acknowledge and thank for crucial support from the Google TPU Research Cloud program, for granting us access to their TPUs. ## References David Bamman and Patrick J Burns. 2020. Latin BERT: A contextual language model for classical philology. arXiv preprint arXiv:2009.10053. Corien Bary, Peter Berck, and Iris Hendrickx. 2017. A Memory-Based Lemmatizer for Ancient Greek. In Proceedings of the 2nd International Conference on Digital Access to Textual Cultural Heritage, DATeCH2017, page 91–95, New York, NY, USA. Association for Computing Machinery. M. Berti. 2019. Digital Classical Philology: Ancient Greek and Latin in the Digital Revolution. Age of access? Grundfragen der Informationsgesellschaft. De Gruyter. Thorsten Brants. 2000. TnT - a statistical part-ofspeech tagger. In Sixth Applied Natural Language Processing Conference, pages 224–231, Seattle, Washington, USA. Association for Computational Linguistics. R. Busa. 1980. The Annals of Humanities Computing: The Index Thomisticus. *Computers and the Humanities*, 14(2):83–90. Giuseppe G. A. Celano, Gregory Crane, and Saeed Majidi. 2016. Part of Speech Tagging for Ancient Greek. Open Linguistics, 2(1). Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On shortest arborescence of a directed graph. *Scientia Sinica*, 14(10):1396. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Gregory Crane. 1991. Generating and Parsing Classical Greek. *Literary and Linguistic Computing*, 6(4):243– 245. Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. *Computational Linguistics*, 47(2):255–308. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Sumanth Doddapaneni, Gowtham Ramesh, Mitesh M Khapra, Anoop Kunchukuttan, and Pratyush Kumar. 2021. A primer on pretrained multilingual language models. *arXiv preprint arXiv:2107.00676*. Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards B, 71(4):233–240. Kyle P. Johnson, Patrick J. Burns, John Stewart, Todd Cook, Clément Besnier, and William J. B. Mattingly. 2021. The Classical Language Toolkit: An NLP framework for pre-modern languages. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 20–29, Online. Association for Computational Linguistics. Nora Kassner, Philipp Dufter, and Hinrich Schütze. 2021. Multilingual LAMA: Investigating knowledge in multilingual pretrained language models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3250–3258, Online. Association for Computational Linguistics. Alek Keersmaekers. 2019. Creating a richly annotated corpus of papyrological Greek: The possibilities of natural language processing approaches to a highly inflected historical language. Digital Scholarship in the Humanities, 35(1):67–82. Manfred Kern, Alfred Ebenbauer, and Silvia KrämerSeifert. 2003. *Lexikon der antiken Gestalten in den* deutschen Texten des Mittelalters. Walter de Gruyter. John Koutsikakis, Ilias Chalkidis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2020. GREEKBERT: The Greeks Visiting Sesame Street. In 11th Hellenic Conference on Artificial Intelligence, SETN 2020, page 110–117, New York, NY, USA. Association for Computing Machinery. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*. Wouter Mercelis and Alek Keersmaekers. 2022. An ELECTRA model for Latin token tagging tasks. In Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages, pages 189–192, Marseille, France. European Language Resources Association. D. W. Packard. 1968. *A Concordance to Livy*. A Concordance to Livy. Harvard University Press. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Helmut Schmid. 2019. Deep Learning-Based Morphological Taggers and Lemmatizers for Annotating Historical Texts. In *Proceedings of the 3rd International* Conference on Digital Access to Textual Cultural Heritage, DATeCH2019, page 133–137, New York, NY, USA. Association for Computing Machinery. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Pranaydeep Singh, Gorik Rutten, and Els Lefever. 2021. A pilot study for BERT language modelling and morphological analysis for ancient and medieval Greek. In *Proceedings of the 5th Joint SIGHUM Workshop* on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 128–137, Punta Cana, Dominican Republic (online). Association for Computational Linguistics. Rachele Sprugnoli, Marco Passarotti, Flavio Massimiliano Cecchini, Margherita Fantoli, and Giovanni Moretti. 2022. Overview of the EvaLatin 2022 evaluation campaign. In Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages, pages 183–188, Marseille, France. European Language Resources Association. Milan Straka. 2018. UDPipe 2.0 prototype at CoNLL 2018 UD shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197–207, Brussels, Belgium. Association for Computational Linguistics. Milan Straka, Jana Straková, and Jan Hajic. 2019. Eval- ˇ uating contextualized embeddings on 54 languages in pos tagging, lemmatization and dependency parsing. arXiv preprint arXiv:1908.07448. Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics-on what language model pre-training captures. *Transactions of the Association for Computational Linguistics*, 8:743–758. Alessandro Vatri and Barbara McGillivray. 2020. Lemmatization for Ancient Greek: An experimental assessment of the state of the art. Journal of Greek Linguistics, 20(2):179 - 196. Krzysztof Wróbel and Krzysztof Nowak. 2022. Transformer-based part-of-speech tagging and lemmatization for Latin. In *Proceedings of the* Second Workshop on Language Technologies for Historical and Ancient Languages, pages 193–197, Marseille, France. European Language Resources Association. Linting Xue, Aditya Barua, Noah Constant, Rami AlRfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2022. ByT5: Towards a token-free future with pre-trained byte-to-byte models. *Transactions of the Association for Computational Linguistics*, 10:291–306. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Ivan Yamshchikov, Alexey Tikhonov, Yorgos Pantis, Charlotte Schubert, and Jürgen Jost. 2022. BERT in Plutarch's Shadows. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 6071–6080, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 665–676, Valencia, Spain. Association for Computational Linguistics. Hyperparameter GRεBERTA PHILBERTA GRεTA PHILTA ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png) ![12_image_2.png](12_image_2.png) Adam ϵ 1 · 10−8 1 · 10−8 1 · 10−8 1 · 10−8 Adam β1 0.9 0.9 0.9 0.9 Adam β2 0.999 0.999 0.999 0.999 Attention Dropout 0.1 0.1 0.1 0.1 Attention Heads 12 12 12 12 Batch Size 128 256 512 512 dff - — 2048 2048 dkv - — 64 64 dmodel - — 768 768 Hidden Dropout 0.1 0.1 0.1 0.1 Hidden Size 768 768 - — Learning Rate (LR) 5 · 10−5 5 · 10−5 5 · 10−3 5 · 10−3 LR Scheduler linear linear linear linear Nb. of Layers 12 12 2 · 12 2 · 12 Nb. of Parameters 126 mill. 135 mill. 223 mill. 247 mill. Train Epochs 50, 100 0, 100 50, 100 0, 100 Warmup Steps 0 0 10000 10000 Weight Decay 0 0 0.01 0.01 ## A Training Details A.1 Pre-Training Details We pre-train the monolingual models for 50 epochs on the Internet Archive corpus and continue pretraining for 100 epochs on the born-digital texts, the trilingual models were trained for 100 epochs on the born-digital texts. The tokenizers were trained on the born-digital data only. GRεBERTA and PHILBERTA were trained on an NVIDIA A100- PCIE-40GB, GRεTA and PHILTA on a Google TPU v2-8. Training took between 3 and 7 days. Further details in Table 7. ## A.2 Fine-Tuning Details We train every Greek model for 50 epochs on an NVIDIA GeForce GTX 1080 Ti, evaluating the model after every epoch on the validation set and using early stopping with a stopping patience of 5. As the EvaLatin dataset does not provide a validation set, we use 2% of the training data as the validation set. Furthermore, since the EvaLatin dataset is larger than the Greek datasets, we set the maximum number of training epochs to 20 for the Latin models. Depending on the treebank and the task, training the models took approximately 1 hour (PoS tagging), 5–7 hours (dependency parsing), and 1–3 days (lemmatization). Further details in Table 8. We did not experiment with different hyperparameter settings, as our main goal was to provide comparable and wide-ranging benchmarking results. | Hyperparameter Adam ϵ | 1 · 10−8 | |-------------------------|------------| | Adam β1 | 0.9 | | Adam β2 | 0.999 | | Batch Size | 32 | | Early Stopping Patience | 5 | | Learning Rate | 1 · 10−4 | | Learning Rate Scheduler | linear | | Random Seeds | 42, 1, 2 | | Train Epochs | 50 | | Weight Decay | 1 · 10−5 | ![12_image_3.png](12_image_3.png) ![12_image_4.png](12_image_4.png) Table 8: Fine-tuning hyperparameters. Perseus PROIEL **EvaLatin** Sentences (train) 11 476 15 014 15 785 Sentences (dev) 1137 1019 - Sentences (test) 1306 1047 1960 Sentences (total) 13 919 17 080 17 745 Tokens (train) 159 895 187 033 316 573 Tokens (dev) 22 135 13 652 — Tokens (test) 20 959 13 314 45 544 Tokens (total) 202 989 213 999 362 117 Lemmata 13 413 9348 10 357 Forms 41 304 32 591 54 133 UPoS Tags 14 14 16 XPoS Tags 847 27 - Dependency Relations 25 33 — ## B Downstream Task Details B.1 Universal Dependencies And Evalatin 2022 For PoS tagging, UD provides universal PoS tags (UPoS) and language-specific PoS tags (XPoS). UPoS consists of 17 tags used for all languages covered by UD.13 XPoS tags, on the other hand, can follow a dataset-specific annotation scheme. While the XPoS tags of the PROIEL dataset are similar to the UPoS tags, the Perseus dataset aims for a complete morphological analysis (cf. Section 3.2). See Table 9 for further details and Table 2 for an overview. In line with common convention, we report the accuracy for both PoS tag sets. For dependency parsing, we report the unlabeled attachment score (UAS) and the labeled attachment score (LAS). The UAS indicates the percentage of tokens that have been assigned the correct head, whereas for the LAS, both the predicted head and the dependency label have to be correct. All results are obtained from the official evaluation script.14 ## B.2 Semantic And World Knowledge Semantic Knowledge. We asked a graduate student and a doctoral candidate in the field of Classics to gather synonym and antonym pairs. Such word pairs can be nouns and substantivized adjectives or substantivized infinitives. We then utilized a predefined template to generate sentences that incorporate the collected pairs. As this template does not ensure grammaticality, the annotators manually edited the sentences. Subsequently, the sentences were independently reviewed by both annotators, deduplicated, and then verified by a professor of Ancient Greek. All three annotators participated on a voluntary basis and were not compensated for their contributions. One of the annotators is also a co-author of this paper. 141 synonym and 146 antonym pairs were collected. While we publish all 287 examples, we drop 5 randomly selected antonym pairs in our experiments to ensure that the number of synonym and antonym pairs is equal. We train all language models for 10 epochs using a batch size of 4 and report the averaged, cross-validated results. World Knowledge. This dataset was compiled by one of the previous annotators who is not a co-author of this paper. The annotator gathered 228 examples with 11 different relations by reading through Hesiod's *Theogony* and by drawing inspiration from Kern et al. (2003), a lexicon that contains comprehensive information about mythical figures. ## C Acquisition Of Pre-Training Data C.1 Ancient Greek Pre-Training Data Open Greek & Latin Project.15 The Open Greek & Latin Project is an umbrella project covering various subprojects that aim toward the development of open-access corpus linguistic resources for Latin and Classical Greek. Two of them, the Perseus Digital Library and the First Thousand Years of Greek project, contain Ancient Greek texts, mostly covering texts that are typically associated with classical antiquity, such as Homer, Plato, Herodotus, Euripides, and Plutarch. Already in this corpus, we find a wide variety of dialects and language stages. The Open Greek & Latin Project contains approximately 30 million tokens. Greek Medieval Texts.16 The Greek Medieval Texts corpus offered by CLARIN covers writings from the fourth to the sixteenth century AD. It contains religious, poetical-literary and politicalhistorical texts as well as hymns and epigrams. Strictly speaking (and as the name suggests) the corpus contains texts of late antiquity, and in particular, Medieval Greek. We argue, however, that Ancient Greek and Medieval Greek, although different language stages, are strongly connected to each other and that our language models benefit from seeing more diverse data during pre-training. This corpus contains about 3.3 million tokens and is licensed under the CC BY-NC 4.0 license. Patrologia Graeca.17 The Patrologia Graeca is a large collection of important Christian texts written in Greek, dating from the first until the fifteenth century AD. Since not all texts are machine-readable and available, we are restricted to those out of copyright texts that are made accessible (around 28.5 million tokens). Internet Archive.18 The Internet Archive is an online library that provides texts obtained from public domain books via OCR. We found out that only a small fraction of the books containing Ancient Greek text was labeled as such. Moreover, we discovered that even less books were transcribed with OCR settings that allowed Greek characters. As a result, many high-quality scans of Ancient Greek texts were transcribed into incomprehensible sequences of non-Greek characters. For example, the verse ὦ γύναι ἦ μάλα τοῦτο ἔπος νημερτὲς ἔειπες19 is transcribed as & yvvai, ff pdXa tovto Sttoˆ vrjpepTeˆ e=C/.7r=C9∗. Even though transcriptions of this nature may seem useless at first glance, they are nevertheless helpful in identifying documents that have been incorrectly treated as non-Greek texts, for many common words are relatively consistently transcribed. τοῦτο ("this"), for example, is often transcribed into tovto. By searching for all books that contain the word tovto, we can identify potential Greek texts. This approach allows us to avoid the computationally intensive task of applying Greek OCR to every book in the Internet Archive, and instead focus our efforts on a more targeted search. All candidates are then filtered more aggressively: If 16https://inventory.clarin.gr/corpus/890. 17http://patristica.net/graeca/. 18https://archive.org/. 19Hom. Il. 3.204. a candidate contains the five (presumably) Greek stopwords tovto (τοῦτο), kal (καί), tov (τόν), to (τό), and yap (γάρ) more than ten times, the candidate is considered to contain Greek text. We argue that this method effectively minimizes false positives while retaining a high recall: Since Greek stopwords like τοῦτο ("this") and καί ("and") should be present often enough in every book with a significant amount of text, our approach should correctly classify them as Greek. Non-Greek texts, on the other hand, should hardly contain all five stopwords more than ten times. This procedure yields 25378 books, on which the Internet Archive applies OCR with Ancient Greek as a target language. While our method reliably detects Greek texts, it does not ensure a high scan (and therefore also text) quality. In order to use solely high-quality data, we keep only lines in which more than 90% of tokens are also present in the born-digital vocabulary. A similar approach is used by Bamman and Burns (2020), who use Latin texts from the Internet Archive as pre-training data for Latin BERT. They "retain only those books where at least 40% of tokens are present in a vocabulary derived from born-digital texts". We argue that it is more sensible to include or disregard individual lines instead of whole books: Almost every Greek text contains a Latin or English introduction, and many books are equipped with a translation. Thus, our method not only ensures high-quality data but also removes non-Greek text parts. Finally, to ensure that our dataset does not contain any significant duplications, we remove all instances of repeated text exceeding 300 characters. After this aggressive filtering, we have approximately 123.3 million tokens left. To demonstrate its quality, we show 40 samples randomly drawn from the dataset in Table 10. ## C.2 English Pre-Training Data By collecting English translations of ancient texts, we focus on texts that are strongly connected to antiquity. We gather these texts from various sources: The Perseus Digital Library20 and the Internet Classics Archive21 provide born-digital open-access translations of Classical Greek and Latin texts. Similarly, the Documenta Catholica Omnia database22 contains a large amount of primarily catholic texts in many languages, of which we select the English 20http://www.perseus.tufts.edu/hopper/. 21http://classics.mit.edu/. 22http://www.documentacatholicaomnia.eu/. τῆς Μασίστεω γυναικός, ἐούσης καὶ ταύτης ἐνθαῦτα. ὡς πίστις ὑμῶν· φοβηθέντες δὲ ἐθαύμαζον, λέγοντες πρὸς ἀλλήλους, ὑποληπτέον' ἡ γὰρ πέψις τοῖς μὲν ἐν τῷ ἄνθει μᾶλλον ἀνέπαυσαν γὰρ τ. ἐμὸν πνεῦμα κ. τὸ ὑμῶν εἰ Σατανᾶς ἀνέστη ἐφ᾿ ἑαυτόν πρόσωπον ναοῦ Κυρίου, μετὰ τὸ ἀποικίσαι Ναβουχοδονόσορ ἐκείνοις δὲ ὄντος ἀεὶ τοῦ ἐπιχειρεῖν καὶ ἐφ ἡμῖν εἶναι δεῖ τὸ προαμύνασθαι. ἑξακοσίους ὁπλίτας εἰς τὴν πόλιν ἄ ἄγει. ἐν τῷ στρατεύματι ἔχοντι τοῦ Γερμανικοῦ συναγορεύειν μέλλοντος,' νοοῦν εἴη, ὅτι ἄλλου τὴν νόησιν λαμβάνον οὐ τὸ ἐὰν δὲ μὴ τούτοις δύνῃ χρῆσθαι, μου- ἐφ᾿ ὑμᾶς' ὑμεῖς.δὲ καθίσατε ἐν τῇ πόλει Υ ῾Ιερουσαλὴμ καὶ νοητῆς τελειότητος. μένον οὐκ ἐπίστευσαν. τίον ἀράτω ᾿Ιησοῦς διδόντα ὑετὸν ἐπὶ τὴν γῆν, ἀποστέλλοντα ὕδωρ ταρασσέσθω ὑμῶν ἡ καρδία, μηδὲ δευλιάτω. ἠκούσατε ὅτι ἐγὼ τὴν ξημίην ἐπέθηκαν. Ζυώδεκα δέ μοι δοκέουσι πόλιας ποιήἐστι' τὰ δὲ γενητὰ, ἔξωθεν ὄντα, πρόσκειται, ὡς τῇ ὁ δὲ Κλεομένης τὸν ἱρέα ἐκέλευε τοὺς εἵλωτας ἀπὸ τοῦ ἅπαξ ἀλλὰ πολλάκις. ἐλθόντος. καὶ αὖθις ἔδοξε τούτου χάριν καὶ κερματίξῃς αὐτό, ἐκεῖνοι πολλαπλασιοῦσιν, εὐλαβούμενοι καὶ προλάμψαν τὸ ἐραστὸν αὐτοῦ καὶ τὸν κρυπτόμενον πεντακοσίας, οὺς πάντας ἡ τοῦ δεσπότου χάρις καὶ φιλανθρωπία διατρέφει. ταύτης ἰδίᾳ προετρέπετο τὸν Σικόπαν κοινωνῆσαι οὐδὲ παναρμονίου ἡμῖν δεήσει ἐν ταῖς ὠδαῖς τε καὶ σημεῖα τοῦ τοῦτον συχοφαντεῖν ἐγχαλοῦντ᾿ ἀφορμήν. συμπεριλαμβανομένων καὶ ψυχῆς καὶ τῶν ἐν πλὴν ἐξ ὠκυβόλων εἴ ποτε τόξων σφι ἄρτισις περὶ τὸ σῶμα ἐστί. μὴ πέσῃς κατέναντι ἐνεδρεύοντος ο Εἰς τοῦτο ἐσυνέργησαν οἱ πρῶτοι τῆς γενεᾶς τῆς, χωρίων ἢ οἰκιῶν ὑπῆρχον, πωλοῦντες ἔφερον τὰς τιμὰς ᾧ δὲ περὶ ἑκάστην μεθοδον¨) φιλοσοφοῦντι καὶ μὴ' τῶν τῆς. παιδὸς γάμων, Ζεὺς διαλύσας ἐπέτρεψεν ὑμῶν. πόλεις αἱ πρὸς νότον συνεκλείσθησαν, καὶ οὐκ ἦν ὁ ἀνοίγων: ἀπῳκίσθη Ιουδας, πειρασμούς. Περὶ ταύτης ἡ Γραφὴ (ά. Κορ, ἔπεσεν ἐπὶ πρόσωπον αὐτοῦ προσευχόμενος ζητεῖ' οἷδεν. γὰρ ὁ-πατὴριὑμῶν ὁ οὐράνιος Table 10: 40 randomly drawn lines of the Internet Archive pre-training dataset. partition for our use. Finally, we utilize Lexundria,23 Loebulus,24 and the Project Gutenberg to add (often proofread) scans of books in the public domain. While Lexundria and Loebulus are restricted to providing translations of Latin and Greek texts, the Project Gutenberg offers a more diverse range of literature. Therefore, we use only books from Project Gutenberg that are tagged with the keyword "Latin". We report detailed statistics in Table 11. | Ancient Greek English | |-------------------------| | Language | Dataset | Number of Tokens | |---------------------------|-----------------|--------------------| | Open Greek & Latin | 30.0 million | | | Greek Medieval Texts | 3.3 million | | | Patrologia Graeca | 28.5 million | | | Internet Archive | 123.3 million | | | Overall | 185.1 million | | | Latin | Corpus Corporum | 167.5 million | | Perseus | 10.8 million | | | Classics Archive | 4.9 million | | | Lexundria | 2.8 million | | | Loebulus | 14.0 million | | | Project Gutenberg | 28.7 million | | | Documenta Catholica Omnia | 151.7 million | | | Overall | 212.8 million | | ## D Further Results Model **Accuracy** CLTK 96.51 UDPIPE (official) 94.71 UDPIPE (ours) 93.87 (0.05) UDPIPE + GRεBERTA 94.17 (0.05) GRεTA 97.40 (0.02) GRεTA + Chars 97.48 (0.02) ![15_image_0.png](15_image_0.png) | Model | PoS Tagging | Dependency Parsing | | | |------------------------------------|---------------|----------------------|--------------|--------------| | UPoS | XPoS | UAS | LAS | | | CLTK | 97.10 | 97.47 | 76.81 | 73.39 | | UDPIPE (official) | 97.77 | 98.05 | 86.05 | 82.14 | | UDPIPE (ours) | 97.99 (0.05) | 97.68 (0.06) | 85.64 (0.17) | 81.70 (0.25) | | UDPIPE + GRεBERTA | 98.56 (0.02) | 98.70 (0.03) | 89.75 (0.16) | 86.59 (0.15) | | AG BERT (Singh et al., 2021) | 97.98 (0.02) | 98.14 (0.05) | 88.50 (0.09) | 84.72 (0.18) | | AG BERT (Yamshchikov et al., 2022) | 97.19 (0.06) | 97.42 (0.08) | 86.61 (0.21) | 82.12 (0.15) | | GRεTA-ENC | 98.16 (0.02) | 98.31 (0.03) | 89.93 (0.08) | 86.48 (0.08) | | PHILBERTA | 98.15 (0.16) | 98.45 (0.05) | 90.32 (0.13) | 86.43 (0.61) | | GRεBERTA | 98.60 (0.03) | 98.70 (0.04) | 90.28 (0.03) | 86.84 (0.12) | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? We discuss general limitations in a section a dedicated section titled "Limitations". Furthermore, we acknowledge that the experiments on our small probing datasets do not allow to draw firm conclusions in Section 5 and Section 6. A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? The paper's claims are summarized in the Abstract and explicitly listed at the end of the Introduction (Section 1). ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We use pre-trained Language Models for Ancient Greek, introducing them in Section 1, elaborating on them in Section 2 as Related Work, and using them in our experiments in Section 5 and Section 6. Furthermore, we pre-train Language Models, elaborating on them in Section 3 and using them in our experiments (Section 5 and 6) as well. Finally, we create a pre-training corpus for Ancient Greek, described in Section 3 and in Section C. The downstream task datasets that we use are introduced in Section 4 and in Section B. ✓ B1. Did you cite the creators of artifacts you used? We cite the creators of the datasets we used in Sections 1 and 2. We cite relevant prior work and language models that we compare to in Sections 1 and 2. We elaborate on the datasets we use in Section 4.1 and specify the version. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The licenses for the data are discussed in Section C. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We specify the intended use for our pre-training dataset in Section 1 and the intended use for our language models in Section 1 and 2. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Given that we create our dataset utilizing open-domain texts from antiquity, we do not consider anonymization to be a significant concern. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Our pre-training corpus for Ancient Greek is described in Section 3 and in Section C. Our language models are documented in Section 3 and in Section A. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. For the Universal Dependencies and EvaLatin datasets that we used, we report the statistics in Appendix B. We report the creation of the pre-training and probing corpora and their statistics in Appendix B and C. ## C ✓ **Did You Run Computational Experiments?** We elaborate on our experiments in Section 4. Pre-training and fine-tuning details can be found in Appendix A. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We report pre-training and fine-tuning details in Appendix A. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We use the official evaluation scripts for our dataset, mentioned in Section B. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** The Collection Of Our Probing Task Data Is Described In Section B. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? They were informed orally in a brief introductory session. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? We report details about how the annotators were paid and how they were recruited in Appendix B. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Details about consent are reported in Appendix B. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? We report the characteristics of the annotator population in Appendix B.
tu-etal-2023-layoutmask
{L}ayout{M}ask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding
https://aclanthology.org/2023.acl-long.847
Visually-rich Document Understanding (VrDU) has attracted much research attention over the past years. Pre-trained models on a large number of document images with transformer-based backbones have led to significant performance gains in this field. The major challenge is how to fusion the different modalities (text, layout, and image) of the documents in a unified model with different pre-training tasks. This paper focuses on improving text-layout interactions and proposes a novel multi-modal pre-training model, LayoutMask. LayoutMask uses local 1D position, instead of global 1D position, as layout input and has two pre-training objectives: (1) Masked Language Modeling: predicting masked tokens with two novel masking strategies; (2) Masked Position Modeling: predicting masked 2D positions to improve layout representation learning. LayoutMask can enhance the interactions between text and layout modalities in a unified model and produce adaptive and robust multi-modal representations for downstream tasks. Experimental results show that our proposed method can achieve state-of-the-art results on a wide variety of VrDU problems, including form understanding, receipt understanding, and document image classification.
# Layoutmask: Enhance Text-Layout Interaction In Multi-Modal Pre-Training For Document Understanding ## Yi Tu, Ya Guo, Huan Chen, Jinyang Tang Ant Group, China {qianyi.ty,guoya.gy,chenhuan.chen,jinyang.tjy}@antgroup.com ## Abstract ![0_Image_0.Png](0_Image_0.Png) Visually-rich Document Understanding (VrDU) has attracted much research attention over the past years. Pre-trained models on a large number of document images with transformer-based backbones have led to significant performance gains in this field. The major challenge is how to fusion the different modalities (text, layout, and image) of the documents in a unified model with different pre-training tasks. This paper focuses on improving text-layout interactions and proposes a novel multi-modal pre-training model, LayoutMask. LayoutMask uses local 1D position, instead of global 1D position, as layout input and has two pre-training objectives: (1) Masked Language Modeling: predicting masked tokens with two novel masking strategies; (2) Masked Position Modeling: predicting masked 2D positions to improve layout representation learning. LayoutMask can enhance the interactions between text and layout modalities in a unified model and produce adaptive and robust multimodal representations for downstream tasks. Experimental results show that our proposed method can achieve state-of-the-art results on a wide variety of VrDU problems, including form understanding, receipt understanding, and document image classification. ## 1 Introduction Visually-rich Document Understanding (VrDU) is an important research area that aims to understand various types of documents (e.g., forms, receipts, and posters), and it has attracted much attention from both academia and industry. In recent years, pre-training techniques (Devlin et al., 2019; Zhang et al., 2019) have been introduced into this area and self-supervised pre-training multi-modal models have demonstrated great successes in various VrDU tasks (Xu et al., 2020, 2021; Hong et al., 2022; Li et al., 2021a). Figure 1: A receipt image from SROIE dataset and the global/local 1D positions of tokens based on global/insegment reading orders. Local 1D positions restart with "1" for each individual segment. **Blue Arrow:** When using global 1D position, the reading order is explicitly implied by the ascending numbers, so the word after "Qty" is "Price". **Red Arrows:** When using local 1D position, the successor of "Qty" is not directly given and can have more possible choices, so their semantic relations and 2D positions will be considered during pre-training. However, existing document pre-training models suffer from reading order issues. Following the idea of BERT (Devlin et al., 2019), these methods (Xu et al., 2020, 2021; Hong et al., 2022) usually adopt ascending numbers (e.g., 0, 1, 2,.., 511) to represent the global reading order of tokens in the document. Then, these numbers are encoded into 1D position embeddings to provide explicit reading order supervision during pre-training, which are called "global 1D position". While such global 1D positions are widely used in NLP models for textual data, it is not a good choice for document data. Firstly, plain texts always have definite and linear reading orders, but the reading order of a document may not be unique or even linear, which 15200 cannot be simply encoded with monotonically increasing numbers. Secondly, the global reading order of a document is usually obtained by ordering detected text segments from OCR tools with empirical rules, so it heavily relies on stable and consistent OCR results, affecting the generalization ability in real-world applications. Moreover, the empirical rules to obtain reading orders (e.g., "topdown and left-right") may not be able to handle documents with complex layouts, thus providing inaccurate supervision. Some previous studies have attempted to solve the above reading order issues. LayoutReader (Wang et al., 2021) proposes a sequence-tosequence framework for reading order detection with supervised reading order annotations. XYLayoutLM (Gu et al., 2022) utilizes an augmented XY Cut algorithm to generate different proper reading orders during pre-training to increase generalization ability. ERNIE-Layout (Peng et al., 2022) rearranges the order of input tokens in serialization modules and adopts a reading order prediction task in pre-training. While these studies propose databased or rule-based solutions to provide explicit reading order supervision, we believe that the selfsupervised pre-training process on a large number of documents without using extra supervision is sufficient to help the model to learn reading order knowledge, and such knowledge can be implicitly encoded into the pre-trained model with better adaptiveness and robustness to various document layouts. We proposed a novel multi-modal pre-training model, **LayoutMask**, to achieve this goal. LayoutMask only uses text and layout information as model input and aims to enhance text-layout interactions and layout representation learning during pre-training. It differs from previous studies in three aspects: choice of 1D position, masking strategy, and pre-training objective. Instead of global 1D position, LayoutMask proposes to use the in-segment token orders as 1D position, which is referred to as "**local 1D position**" (See illustration in Figure 1). As local 1D position does not provide cross-segment orders, LayoutMask is supposed to infer global reading order by jointly using 1D position, 2D position, and semantic information, thus bringing in-depth text-layout interactions. To further promote such interactions, we equip the commonly used pre-training objective, Masked Language Modeling (MLM), with two novel masking strategies, **Whole Word Masking** and **Layout-Aware Masking**, and design an auxiliary pre-training objective, **Masked Position** Modeling, to predict masked 2D positions during pre-training. With the above designs, we increase the difficulty of pre-training objectives and force the model to focus more on layout information to obtain reading order clues in various document layouts in self-supervised learning, thus producing more adaptive and robust text-layout representations for document understanding tasks. Experimental results show that our proposed method can bring significant improvements to VrDU tasks and achieve SOTA performance with only text and layout modalities, indicating that previous studies have not fully explored the potential power of layout information and text-layout interactions. The contributions of this paper are summarized as follows: 1. We propose LayoutMask, a novel multi-modal pre-training model focusing on text-layout modality, to generate adaptive and robust multi-modal representations for VrDU tasks. 2. In LayoutMask, we use local 1D position instead of global 1D position to promote reading order learning. We leverage Whole Word Masking and Layout-Aware Masking in the MLM task and design a new pre-training objective, Masked Position Modeling, to enhance text-layout interactions. 3. Our method can produce useful multi-modal representations for documents and significantly outperforms many SOTA methods in multiple VrDU tasks. ## 2 Related Work The early studies in VrDU area usually use unimodal models or multi-modal models with shallow fusion (Yang et al., 2016, 2017; Katti et al., 2018; Sarkhel and Nandi, 2019). In recent years, pretraining techniques in NLP (Devlin et al., 2019; Zhang et al., 2019; Bao et al., 2020) and CV (Bao et al., 2021; Li et al., 2022) have become more and more popular, and they have been introduced into this area. Inspired by BERT (Devlin et al., 2019), LayoutLM (Xu et al., 2020) first improved the masked language modeling task by using the 2D coordinates of each token as layout embeddings, which can jointly model interactions between text and layout information and benefits document understanding tasks. Following this idea, LayoutLMv2 (Xu et al., 2021) propose to concatenate image patches with textual tokens to enhance text-image interactions, and LayoutLMv3 (Huang et al., 2022) proposed to learn cross-modal alignment with unified text and image masking. While the above methods focus on text-image interactions, some other studies have realized the importance of layout information. StructuralLM (Li et al., 2021a) utilizes segment-level layout features to provide word-segment relations. DocFormer (Appalaraju et al., 2021) combines text, vision, and spatial features with a novel multimodal self-attention layer and shares learned spatial embeddings across modalities. LiLT (Wang et al., 2022) proposes a language-independent layout transformer where the text and layout information are separately embedded. ERNIE-Layout (Peng et al., 2022) adopts a reading order prediction task in pre-training and rearranges the token sequence with the layout knowledge. ## 3 Methodology LayoutMask is a multi-modal transformer that can encode text and layout information of documents and produce multi-modal representations. The pipeline of LayoutMask can be seen in Figure 2. LayoutMask uses the transformer model with a spatial-aware self-attention mechanism proposed in LayoutLmv2 (Xu et al., 2021) as the backbone and follows its preprocessing settings for text and layout embeddings. In Section 3.1, we will discuss the different choices of layout information in LayoutMask. In Section 3.2, we will introduce the pre-training tasks and masking strategies used in LayoutMask. ## 3.1 Selection Of Layout Information For VrDU tasks, there are two types of commonly used layout information: 1D position and 2D position. We list the 1D and 2D positions used in previous studies in Table 1. 1D Position: As we discussed in Section 1, using global 1D position will bring read order issues and could damage the adaptiveness and robustness of pre-trained models. Different from some previous models that leverage global 1D position as model input, we propose to use local 1D position in LayoutMask. Local 1D position only encodes the token orders within each segment and always | Method | Position | | |---------------------------------|------------|----------| | 1D | 2D | | | LayoutLM (Xu et al., 2020) | Global | Word | | StructuralLM (Li et al., 2021a) | Global | Segment | | LayoutLMv2 (Xu et al., 2021) | Global | Word | | BROS (Hong et al., 2022) | Global | Segment† | | LiLT (Wang et al., 2022) | Global | Segment | | LayoutLMv3 (Huang et al., 2022) | Global | Segment | | LayoutMask(Ours) | Local | Segment | restarts with 1 for each individual segment. Illustrations of the global and local 1D positions can be seen in Figure 1 and Figure 2. Compared with global 1D position, the major difference of using local 1D position is the lack of cross-segment orders, so the global reading order has to be inferred with other layout and semantic clues. Besides, the in-segment orders implied by local 1D position are more reliable and trustworthy than cross-segment orders when meeting complex document layouts. 2D Position: The 2D position is represented as a 4-digit vector like [x1, y1, x2, y2], where [x1, y1] and [x2, y2] are the normalized coordinates of the top-left and bottom-right corners of a text box. There are two commonly used types of 2D positions: word-level 2D position (Word-2D) and segment-level 2D position (Segment-2D). For Word-2D, tokens of the same word will have the same word-level boxes as their 2D position. While for Segment-2D, the segment coordinates are shared by tokens within each segment. In our model, we choose local 1D position and segment-level 2D position as our model input, where local 1D position can provide in-segment orders, and segment-level 2D position can provide cross-segment reading order clues, so the pretrained model can learn the correct global reading order by jointly using 1D and 2D positions. We will compare the experimental results using different 1D & 2D position combinations in Section 4.3.1 and provide detailed discussions. ## 3.2 Pre-Training Objectives 3.2.1 Masked Language Modeling The Masked Language Modeling task is the most essential and commonly used pre-training task in multi-modal pre-training. In this task, we randomly mask some tokens with a given probability Pmlm (e.g., 15%) and recover these tokens during pre- ![3_image_0.png](3_image_0.png) ## Training. For each document, we use M to denote the number of masked tokens. yi and ¯yi represent the ground truth and prediction of the i-th masked token. Then the loss of this task is the average cross entropy loss of all masked tokens: $$\mathcal{L}_{\mathrm{mlm}}=-\frac{1}{M}\sum_{i=1}^{M}\mathrm{CE}(\mathrm{y}_{i},\bar{\mathrm{y}}_{i}).\tag{1}$$ In preliminary experiments, we find that the naive MLM task is not optimal for multi-modal pretraining. Thus we propose to adopt two novel strategies, Whole Word Masking (WWM) and LayoutAware Masking (LAM), to enhance this task. Whole Word Masking: The WWM strategy was first proposed for Chinese language models to increase the task difficulty (Cui et al., 2021). Following this strategy, we set masks at word-level instead of token-level, which is much more challenging. When using WWM, the semantic relations between masked and unmasked tokens of the same words are eliminated, so the model has to find more context to predict masked words, which can promote text-layout interactions. Layout-Aware Masking: As we use Local-1D and Segment-2D as model input, the global reading order should be obtained by jointly using 1D and 2D positions, where Local-1D provides in-segment orders and segment-2D provides cross-segment clues. We find that the cross-segment orders are harder to be learned, so we propose Layout-Aware Masking (LAM) strategy to address this issue. Unlike naive masking strategy where each token has an equal masking probability Pmlm, in LAM strategy, the first and last word of each segment has a higher probability (i.e., 3 × Pmlm) to be masked. In order to predict such masked words, the model has to pay more attention to finding their contexts in the preceding or succeeding segment, thus promoting learning cross-segment orders. ## 3.2.2 Masked Position Modeling To further promote the representation learning of layout information in the MLM task, we design an auxiliary task, Masked Position Modeling (MPM), which has a symmetric pre-training objective: recovering randomly masked 2D positions during pre-training (See illustration in Figure 2). Inspired by WWM, we also apply the MPM task at wordlevel instead of token-level. For each pre-training document, we randomly choose some unduplicated words with a given probability Pmpm. Then, for each selected word, we mask their 2D positions with the following two steps: Box Split: We first split the selected word out of its segment so the original segment box becomes 2 or 3 segment pieces (depending on if the word is at the start/end or in the middle). The selected word | Method | #Parameters | Modality | FUNSD(F1↑) | CORD(F1↑) | SROIE(F1↑) | |------------------------------------------|---------------|------------|--------------|-------------|--------------| | BERTBase (Devlin et al., 2019) | 110M | T | 60.26 | 89.68 | 90.99 | | RoBERTABase (Liu et al., 2019) | 125M | T | 66.48 | 93.54 | - | | UniLMv2Base (Bao et al., 2020) | 125M | T | 68.90 | 90.92 | 94.59 | | BROSBase (Hong et al., 2022) | 110M | T+L | 83.05 | 95.73 | 95.48 | | LiLTBase (Wang et al., 2022) | - | T+L | 88.41 | 96.07 | - | | LayoutLMBase (Xu et al., 2020) | 160M | T+L+I | 79.27 | - | 94.38 | | LayoutLMv2Base (Xu et al., 2021) | 200M | T+L+I | 82.76 | 94.95 | 96.25 | | TILTBase (Powalski et al., 2021) | 230M | T+L+I | - | 95.11 | 97.65† | | DocFormerBase (Appalaraju et al., 2021) | 183M | T+L+I | 83.34 | 96.33 | - | | LayoutLMv3Base (Huang et al., 2022) | 133M | T+L+I | 90.29 | 96.56 | - | | LayoutMaskBase (Ours) | 182M | T+L | 92.91±0.34 | 96.99±0.30 | 96.87±0.19 | | BERTLarge (Devlin et al., 2019) | 340M | T | 65.63 | 90.25 | 92.00 | | RoBERTALarge (Liu et al., 2019) | 355M | T | 70.72 | 93.80 | - | | UniLMv2Large (Bao et al., 2020) | 355M | T | 72.57 | 92.05 | 94.88 | | LayoutLMLarge (Xu et al., 2020) | 343M | T+L | 77.89 | - | 95.24 | | BROSLarge (Hong et al., 2022) | 340M | T+L | 84.52 | 97.40 | - | | LayoutLMv2Large (Xu et al., 2021) | 426M | T+L+I | 84.2 | 96.01 | 97.81 | | TILTLarge (Powalski et al., 2021) | 780M | T+L+I | - | 96.33 | 98.10† | | DocFormerLarge (Appalaraju et al., 2021) | 536M | T+L+I | 84.55 | 96.99 | - | | LayoutLMv3Large (Huang et al., 2022) | 368M | T+L+I | 92.08 | 97.46 | - | | ERNIE-LayoutLarge (Peng et al., 2022) | - | T+L+I | 93.12 | 97.21 | 97.55 | | LayoutMaskLarge (Ours) | 404M | T+L | 93.20±0.29 | 97.19±0.20 | 97.27±0.32 | becomes a one-word segment piece with just itself. Then we update the local 1D positions (restarting with 1) and segment 2D positions for each new segment piece. With the above operations, we can eliminate the local reading order clues implied by original 1D and 2D positions, so the model has to focus on semantical clues and new 2D positions. Box Masking: For each selected word, we mask its 2D position with pseudo boxes: [0, 0, 0, n] where n ∈ [0, 1, 2*, ...*] is a random number. Notice that segment 2D position is shared among tokens in the same segment, so the pseudo boxes will act as identifiers to distinguish identical tokens from different masked boxes, thus avoiding ambiguity. During pre-training, our model is supposed to predict the masked 2D positions with GIoU loss (Rezatofighi et al., 2019): $${\cal L}_{\rm mpm}=-\frac{1}{N}\sum_{i=1}^{N}(\frac{|{\rm B}_{i}\cap\bar{\rm B}_{i}|}{|{\rm B}_{i}\cup\bar{\rm B}_{i}|}-\frac{|{\rm C}_{i}\backslash({\rm B}_{i}\cup\bar{\rm B}_{i})|}{|{\rm C}_{i}|}).\tag{2}$$ Here, i ∈ [1, 2*, ..., N*] is the index of N masked 2D positions. Biis the ground truth box normalized to [0,1], and B¯i denotes the predicted 2D position. Ciis the smallest convex shapes that covers Bi and B¯i. Lmpm is the average GIoU loss of N masked 2D positions. The MPM task is very similar to the cloze test, where a group of randomly selected words is supposed to be refilled at the right positions in the original document. To predict the masked 2D positions of selected words, the model has to find the context for each word based on semantic relations and then infer with 2D position clues from a spatial perspective. The joint learning process with both semantic and spatial inference can promote text-layout interactions and help the model to learn better layout representations. With the above two pre-training objectives, the model is pre-trained with the following loss: $${\mathcal{L}}_{\mathrm{total}}={\mathcal{L}}_{\mathrm{mlm}}+\lambda{\mathcal{L}}_{\mathrm{mpm}},\qquad\qquad(3)$$ where λ is a hyper-parameter that controls the balance of the two pre-training objectives. ## 4 Experiments 4.1 Pre-Training Settings LayoutMask is pre-trained with IIT-CDIP Test Collection (Lewis et al., 2006). It contains about 42 million scanned document pages, and we only use 10 million pages. We use a public OCR engine, PaddleOCR1to obtain the OCR results. We train LayoutMask with two parameter sizes. LayoutMaskBase has 12 layers with 16 heads, and the hidden size is 768. LayoutMaskLarge has 24 layers with 16 heads where the hidden size is 1024. LayoutMaskBase and LayoutMaskLarge are 1https://github.com/PaddlePaddle/PaddleOCR | Method | Modality | Accuracy↑ | | |-------------------------------------|------------|-------------|-------| | Base | Large | | | | VGG-16 (Afzal et al., 2017) | I | 90.97 | | | Ensemble (Das et al., 2018) | I | 93.07 | | | LadderNet (Sarkhel and Nandi, 2019) | I | 92.77 | | | BERT (Devlin et al., 2019) | T | 89.81 | 89.92 | | RoBERTA (Liu et al., 2019) | T | 90.06 | 90.11 | | UniLMv2 (Bao et al., 2020) | T+L | 90.06 | 90.20 | | LayoutLM (Xu et al., 2020) | T+L | 91.78 | 91.90 | | StructuralLM (Li et al., 2021a) | T+L | - | 96.08 | | SelfDoc (Li et al., 2021b) | T+L+I | 92.81 | - | | TITL (Powalski et al., 2021) | T+L+I | 95.25 | 95.52 | | LayoutLMv2 (Xu et al., 2021) | T+L+I | 95.25 | 95.64 | | DocFormer (Appalaraju et al., 2021) | T+L+I | 96.17 | 95.50 | | LiLT (Wang et al., 2022) | T+L+I | 95.68 | - | | LayoutLMv3 (Huang et al., 2022) | T+L+I | 95.44 | 95.93 | | ERNIE-Layout (Peng et al., 2022) | T+L+I | - | 96.27 | | LayoutMask (Ours) | T+L | 93.26 | 93.80 | initialized with pre-trained XLM-RoBERTa models (Conneau et al., 2020). For hyper-parameters, we have Pmlm=25% and Pmpm=15% (See ablation study in Section A of the Appendix). The weight of MPM loss λ is set to be 1. ## 4.2 Comparison With The State-Of-The-Art In this section, we compare LayoutMask with SOTA models on two VrDU tasks: form & receipt understanding and document image classification. ## 4.2.1 Form And Receipt Understanding In this task, we conduct entity extraction task on three document understanding datasets: FUNSD (Jaume et al., 2019), CORD (Park et al., 2019), and SROIE (Huang et al., 2019). The FUNSD dataset is a form understanding dataset, which contains 199 documents (149 for training and 50 for test) and 9707 semantic entities. The CORD dataset is a receipt understanding dataset, and it contains 1000 receipts (800 for training, 100 for validation, and 100 for test) with 30 semantic labels in 4 categories. The SROIE dataset is another receipt understanding dataset with four types of entities, containing 626 receipts for training and 347 receipts for test. For evaluation, we adopt the word-level F1 score as the evaluation metric for FUNSD and CORD and use the entity-level F1 score for SROIE. Since these datasets are quite small, in order to provide stable and reliable results, we repeat our experiments ten times for each test and report the average F1 scores and standard errors as the final results. The results of previous methods and LayoutMask on these datasets are listed in Table 2. We have categorized them by the modalities used in pre-training: "T" for text, "L" for layout, and "I" for image. Notice that LayoutMask is a "T+L" model that does not use image modality. For the base version, LayoutMaskBase outperforms other methods, including "T+L+I" models, on all three datasets (FUNSD+2.62%, CORD+0.43%, SROIE+0.62%). For the large version, LayoutMaskLarge ranks first on FUNSD and has comparable results on CORD and SROIE. These results show that LayoutMask has competitive performance with SOTA methods, demonstrating the effectiveness of our proposed modules. Since LayoutMask only uses text and layout information, we believe that the potential power of layout information has not been fully explored in previous studies. 4.2.2 Document Image Classification In the document image classification task, we aim to classify document images in RVL-CDIP dataset (Harley et al., 2015). This dataset is a subset of the IIT-CDIP collection with 400,000 labeled document images (320,000 for train, 40,000 for validation, and 40,000 for test) in 16 categories. We use PaddleOCR to extract text and layout information as model input. We compare different methods with the overall classification accuracies on RVLCDIP, and the results are in Table 3. It is observed that LayoutMask has beaten all unimodality models ("I" and "T"). For "T+L" models, LayoutMaskBase outperforms other base models with a margin of 1.48%, while LayoutMaskLarge takes the second place in large models. Compared with "T+L+I" models where image modality is utilized, LayoutMask falls behind due to the lack of visual features from image modality. We have found that the image modality plays an important role in this task because RVL-CDIP images contain many elements that cannot be recognized by OCR engines (e.g., figures, table lines, and handwritten texts) and have orientation issues (See examples in Figure 5 of the Appendix). So the lack of image modality will bring difficulties that cannot be solved with only text and layout information. ## 4.3 Ablation Study On Layoutmask 4.3.1 Comparison Of Layout Information We first compare the performance of LayoutMask using different layout information. To make a fair comparison, we use LayoutMask with only the MLM task and the WWM strategy during pretraining. For each test, LayoutMask is pre-trained Table 4: The average F1 scores (%) with different 1D position and 2D position combinations. The best results are denoted in boldface. | Position Settings | Datasets | | | | |---------------------|------------|-------------|------------|-------------| | 1D | 2D | FUNSD (F1↑) | CORD (F1↑) | SROIE (F1↑) | | Global | Word | 82.17±0.45 | 95.95±0.43 | 96.02±0.34 | | Global | Segment | 91.61±0.42 | 96.69±0.24 | 96.20±0.26 | | Local | Word | 91.65±0.36 | 95.86±0.22 | 96.54±0.23 | | Local | Segment | 92.30±0.24 | 96.68±0.12 | 96.56±0.21 | ![6_image_0.png](6_image_0.png) | # | Position Settings | Swap | SROIE (F1↑) | | | | | |----------------|---------------------|------------|---------------|------------|------------|------------|------------| | 1D & 2D | Probability | Address | Company | Date | Total | Overall | | | 1 | Local+Segment | - | 96.69±0.37 | 95.88±0.28 | 99.66±0.13 | 94.02±0.49 | 96.56±0.21 | | 2 | - | 96.54±0.51 | 95.84±0.59 | 99.69±0.26 | 92.73±0.57 | 96.20±0.26 | | | 3 | 10 | 91.73±2.00 | 95.22±0.61 | 99.65±0.34 | 91.87±1.33 | 94.62±0.69 | | | Global+Segment | | | | | | | | | 4 | 20 | 90.03±3.77 | 94.93±0.53 | 99.60±0.32 | 91.67±1.35 | 94.06±1.02 | | | 5 | 30 | 88.12±4.59 | 94.88±0.82 | 99.55±0.28 | 91.19±1.38 | 93.44±1.14 | | and fine-tuned with a specific 1D and 2D position combination. The results are listed in Table 4. Performance of 1D Position: For 1D position, Local-1D outperforms Global-1D on both FUNSD (+9.48%/+0.69% with Word-2D/Segment-2D) and SROIE (+0.52%/+0.36%) and falls a little behind on CORD (-0.07%/-0.01%). To understand the benefits of using Local1D, we provide entity-level F1 score on SROIE dataset in Table 5 (\#1 for Local+Segment and \#2 for Global+Segment). It is obvious that the performance gap between Local+Segment and Global+Segment mainly comes from entity "Total" (from 94.02% to 92.72%), while other entities have similar F1 scores. We illustrate two example images of SROIE and their entities annotations in Figure 3. The right image, which contains entity "Total", has both vertical layout (first two lines) and horizontal layout and has multiple misleading numbers with the same content as ground truth (i.e., "193.00"). So it is hard to recognize the entity "Total" by using the ordinary reading order implied by Global-1D. Therefore, using Local-1D can perform better since it is more adaptive to such cases. Performance of 2D Position: For 2D position, using segment-level 2D position brings better results on all three datasets, regardless of the 1D position types. An important reason is that the segment information is highly indicative of recognizing entities. For example, every entity in FUNSD and CORD exactly shares the same segment. Therefore, although Word-2D contains more layout details, it will break the alignments between 2D positions and entities, thus bringing performance drops. A typical result of such phenomenon2can be seen on FUNSD, where replacing Global+Segment to Global+Word will result in a significant decrease of 9.44%. Robustness Comparison: Besides performance superiority, another important reason to choose the local 1D position is its robustness to layout disturbance. In real-world cases, a typical layout disturbance is "Segment Swap", where segments in the same line are indexed with wrong orders due to document rotation or OCR issues. In such scenarios, the incorrect cross-segment order will lead to incorrect global 1D positions and can be harmful to model inference. Fortunately, the local 1D position is naturally immune to such disturbance since it does not rely on cross-segment orders, making it more robust than global 1D position. To quantify such differences in robustness, we demonstrate how the segment swap will influence the performance of using global 1D position by simulating it on test datasets. For each test document, we randomly choose some lines with a given probability Pswap and then swap the segments in it. We conduct experiments on LayoutMaskBase (MLM+WWM) in Global+Segment setting with different Pswap (i.e., 10%, 20%, and 30%) and the results are reported in Table 5 (\#3-5). During our experiments, we have found that the segment swap does not bring significant perfor- #Pre-training Setting **Datasets** ![7_image_0.png](7_image_0.png) MLM WWM LAM MPM FUNSD (F1↑) CORD (F1↑) SROIE (F1↑**) RVL-CDIP (ACC**↑) 1√89.73±0.50 96.32±0.15 95.76±0.34 92.17 2√ √ 92.30±0.24 96.68±0.12 96.56±0.21 92.89 3*√ √ √* 92.66±0.26 96.89±0.24 96.64±0.22 93.03 4*√ √ √* 92.77±0.30 96.84±0.17 96.66±0.32 93.11 5*√ √ √ √* 92.91±0.34 96.99±0.30 96.87±0.19 93.26 ![7_image_2.png](7_image_2.png) mance changes on FUNSD and CORD datasets (so these results are not listed due to the limited space). A possible reason is that FUNSD and CORD do not contain cross-segment entities, so the segment swap can not break the order of words in each entity. Evidence for this explanation is that the SROIE dataset is significantly affected by segment swap, and its cross-segment entities ("Address" and "Company") have obvious performance drops. In SROIE, the majority of "Address" entities and a few "Company" entities are printed in multiple lines (See examples in Figure 3), so the segment swap can change the in-entity orders of entity words. The results show that the "Address" entity has the largest drop among all entities (-4.81%, -6.51%, and -8.42% for Pswap=10%, 20%, 30%). Besides, the "Total" entity has the second largest decrease (-0.86%, -1.06%, and -1.54%). As aforementioned, the "Total" entities are usually surrounded by complex layouts and misleading numbers, so the segment swap will bring extra difficulties in recognizing the correct entities. The above performance decreases of using global 1D position prove the superiority of using local 1D position since it is not affected by such layout disturbance and can have more robust performance in real-world scenarios. ## 4.3.2 Effectiveness Of Proposed Methods ![7_Image_1.Png](7_Image_1.Png) In Table 6, we provide results using different pretraining tasks and masking strategies to demonstrate the effectiveness of our proposed modules. Comparing \#1 and \#2 in Table 6, we observe that WWM brings significant performance improvements on all datasets. The reason is that it increases the difficulty of the MLM task, so we can obtain a stronger language model. We also find that LAM can also brings consistent improvements on all dataset because LAM can force the model to learn better representations for layout information, which is beneficial to downstream tasks. Comparing \#2 to \#4 and \#3 to \#5, it is observed that the MPM task also brings considerable improvements on all datasets. MPM works as an auxiliary task to help the MLM task and can increase the pre-training difficulty, contributing to learning better and more robust layout representations. Moreover, the full-version LayoutMask (\#5) outperforms the naive version (\#1) by a large margin (FUNSD+3.18%, CORD+0.67%, SROIE+1.11%, and RVL-CDIP+1.09%), demonstrating the effectiveness of our proposed modules when working together. To better illustrate the effectiveness of our model design, we list category-level accuracy improvements on RVL-CDIP dataset and provide detailed discussions in Section B of the Appendix. ## 5 Conclusion In this paper, we propose LayoutMask, a novel multi-modal pre-training model, to solve the reading order issues in VrDU tasks. LayoutMask adopts local 1D position as layout input and can generate adaptive and robust multi-modal representations. In LayoutMask, we equip the MLM task with two masking strategies and design a novel pretraining objective, Masked Position Modeling, to enhance the text-layout interactions and layout representation learning. With only using text and layout modalities, our method can achieve excellent results and significantly outperforms many SOTA methods in VrDU tasks. ## Limitations Our method has the following limitations: Datasets: In multi-modal pre-training, we rely on downstream datasets to evaluate the performance of pre-trained models. The commonly used entity extraction datasets are relatively small and lack diversity, so the proposed method may not generalize well to real word scenarios. Lack of Image Modality: In LayoutMask, we focus on text-layout interactions, leaving the image modality unexplored. However, documents in the real world contain many elements that can not be described by text and layout modalities, like figures and lines, so incorporating image modality is important in building a universal multi-modal pre-training model for document understanding. ## References Muhammad Zeshan Afzal, Andreas Kölsch, Sheraz Ahmed, and Marcus Liwicki. 2017. Cutting the error by half: Investigation of very deep cnn and advanced training strategies for document image classification. In *2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR)*, volume 1, pages 883–888. IEEE. Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R Manmatha. 2021. Docformer: End-to-end transformer for document understanding. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pages 993–1003. Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. 2021. Beit: Bert pre-training of image transformers. In *International Conference on Learning Representations*. Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, et al. 2020. Unilmv2: Pseudomasked language models for unified language model pre-training. In *International Conference on Machine Learning*, pages 642–652. PMLR. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:3504–3514. Arindam Das, Saikat Roy, Ujjwal Bhattacharya, and Swapan K Parui. 2018. Document image classification with intra-domain transfer learning and stacked generalization of deep convolutional neural networks. In *2018 24th international conference on pattern* recognition (ICPR), pages 3180–3185. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*, pages 4171– 4186. Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu, and Liqing Zhang. 2022. Xylayoutlm: Towards layout-aware multimodal networks for visually-rich document understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4583– 4592. Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In 2015 13th International Conference on Document Analysis and Recognition (ICDAR), pages 991–995. IEEE. Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. 2022. Bros: A pre-trained language model focusing on text and layout for better key information extraction from documents. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 36, pages 10767– 10775. Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. Layoutlmv3: Pre-training for document AI with unified text and image masking. In MM '22: The 30th ACM International Conference on Multimedia, Lisboa, Portugal, October 10 - 14, 2022, pages 4083–4091. ACM. Zheng Huang, Kai Chen, Jianhua He, Xiang Bai, Dimosthenis Karatzas, Shijian Lu, and CV Jawahar. 2019. Icdar2019 competition on scanned receipt ocr and information extraction. In *2019 International* Conference on Document Analysis and Recognition (ICDAR), pages 1516–1520. IEEE. Guillaume Jaume, Hazim Kemal Ekenel, and JeanPhilippe Thiran. 2019. Funsd: A dataset for form understanding in noisy scanned documents. In *2019* International Conference on Document Analysis and Recognition Workshops (ICDARW), volume 2, pages 1–6. IEEE. Anoop R Katti, Christian Reisswig, Cordula Guder, Sebastian Brarda, Steffen Bickel, Johannes Höhne, and Jean Baptiste Faddoul. 2018. Chargrid: Towards understanding 2d documents. In *EMNLP*. David Lewis, Gady Agam, Shlomo Argamon, Ophir Frieder, David Grossman, and Jefferson Heard. 2006. Building a test collection for complex document information processing. In *Proceedings of the 29th* annual international ACM SIGIR conference on Research and development in information retrieval, pages 665–666. Chenliang Li, Bin Bi, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, and Luo Si. 2021a. Structurallm: Structural pre-training for form understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6309– 6318. Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, and Furu Wei. 2022. Dit: Self-supervised pre-training for document image transformer. In MM '22: The 30th ACM International Conference on Multimedia, Lisboa, Portugal, October 10 - 14, 2022, pages 3530–3539. ACM. Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. 2021b. Selfdoc: Self-supervised document representation learning. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5652–5660. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. 2019. Cord: a consolidated receipt dataset for post-ocr parsing. In *Workshop on Document Intelligence at* NeurIPS 2019. Qiming Peng, Yinxu Pan, Wenjin Wang, Bin Luo, Zhenyu Zhang, Zhengjie Huang, Teng Hu, Weichong Yin, Yongfeng Chen, Yin Zhang, et al. 2022. Ernielayout: Layout knowledge enhanced pre-training for visually-rich document understanding. arXiv preprint arXiv:2210.06155. Rafał Powalski, Łukasz Borchmann, Dawid Jurkiewicz, Tomasz Dwojak, Michał Pietruszka, and Gabriela Pałka. 2021. Going full-tilt boogie on document understanding with text-image-layout transformer. In International Conference on Document Analysis and Recognition, pages 732–747. Springer. Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. 2019. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 658–666. Ritesh Sarkhel and Arnab Nandi. 2019. Deterministic routing between layout abstractions for multi-scale classification of visually rich documents. In *28th* International Joint Conference on Artificial Intelligence (IJCAI), 2019. Jiapeng Wang, Lianwen Jin, and Kai Ding. 2022. Lilt: A simple yet effective language-independent layout transformer for structured document understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7747–7757. Zilong Wang, Yiheng Xu, Lei Cui, Jingbo Shang, and Furu Wei. 2021. Layoutreader: Pre-training of text and layout for reading order detection. In EMNLP (1). Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, et al. 2021. Layoutlmv2: Multi-modal pre-training for visually-rich document understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2579–2591. Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. Layoutlm: Pre-training of text and layout for document image understanding. In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data* Mining, pages 1192–1200. Xiao Yang, Ersin Yumer, Paul Asente, Mike Kraley, Daniel Kifer, and C Lee Giles. 2017. Learning to extract semantic structure from documents using multimodal fully convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5315–5324. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In *Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies*, pages 1480– 1489. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441– 1451. ![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png) ## A Ablation Study Of Masking Probabilities We compare LayoutMask using different Pmlm and Pmpm, and the results are in Figure 4. We first find the best Pmlm without using the MPM task, and the optimal value is 25%. Then we fix such optimal Pmlm to find the best Pmpm, which is 15% as the results show. ## B Ablation Study On Rvl-Cdip To further understand the effectiveness of our model design, we list the detailed classification results on RVL-CDIP dataset with the naive version and the full version in Table 7. It is observed that the major performance improvements come from three categories: presentation (+3.36%), ad- | Category | Model Settings | Diff. (%) | | |-----------------|------------------|-------------|------| | Naive | Full | | | | letter | 90.30 | 90.86 | 0.56 | | form | 85.71 | 86.77 | 1.07 | | email | 98.17 | 98.33 | 0.15 | | handwritten | 93.96 | 94.26 | 0.30 | | advertisement | 88.47 | 91.40 | 2.93 | | sci-report | 87.87 | 89.38 | 1.51 | | sci-publication | 93.08 | 93.73 | 0.65 | | specification | 95.91 | 96.56 | 0.64 | | file folder | 91.29 | 92.71 | 1.42 | | news article | 90.09 | 92.44 | 2.35 | | budget | 94.01 | 94.96 | 0.95 | | invoice | 94.02 | 94.54 | 0.52 | | presentation | 86.14 | 89.50 | 3.36 | | questionnaire | 92.44 | 92.88 | 0.44 | | resume | 98.31 | 98.70 | 0.39 | | memo | 94.93 | 95.12 | 0.19 | | Overall | 92.17 | 93.26 | 1.09 | vertisement (+2.93%), and news article (+2.35%). We find these categories have more diverse layouts (See examples in Figure 5), so classifying these documents requires a better understanding of the document structure, which also indicates the effectiveness of our methods in helping layout understanding. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? in Section Limitations ✓ A2. Did you discuss any potential risks of your work? in Section 4 ✓ A3. Do the abstract and introduction summarize the paper's main claims? in Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section 4 ✓ B1. Did you cite the creators of artifacts you used? in Section 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We use datasets in public domain and licensed for research purposes. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Our use of dataset is consistent with their intended use. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We use datasets in public domain and licensed for research purposes. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? In Section 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In Section 4 ## C ✓ **Did You Run Computational Experiments?** In Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
hu-etal-2023-hearing
Hearing Lips in Noise: Universal Viseme-Phoneme Mapping and Transfer for Robust Audio-Visual Speech Recognition
https://aclanthology.org/2023.acl-long.848
Audio-visual speech recognition (AVSR) provides a promising solution to ameliorate the noise-robustness of audio-only speech recognition with visual information. However, most existing efforts still focus on audio modality to improve robustness considering its dominance in AVSR task, with noise adaptation techniques such as front-end denoise processing. Though effective, these methods are usually faced with two practical challenges: 1) lack of sufficient labeled noisy audio-visual training data in some real-world scenarios and 2) less optimal model generality to unseen testing noises. In this work, we investigate the noise-invariant visual modality to strengthen robustness of AVSR, which can adapt to any testing noises while without dependence on noisy training data, a.k.a., unsupervised noise adaptation. Inspired by human perception mechanism, we propose a universal viseme-phoneme mapping (UniVPM) approach to implement modality transfer, which can restore clean audio from visual signals to enable speech recognition under any noisy conditions. Extensive experiments on public benchmarks LRS3 and LRS2 show that our approach achieves the state-of-the-art under various noisy as well as clean conditions. In addition, we also outperform previous state-of-the-arts on visual speech recognition task.
# Hearing Lips In Noise: Universal Viseme-Phoneme Mapping And Transfer For Robust Audio-Visual Speech Recognition Yuchen Hu1, Ruizhe Li2, Chen Chen1, Chengwei Qin1, Qiushi Zhu3**, Eng Siong Chng**1 1Nanyang Technological University, Singapore 2University of Aberdeen, UK 3University of Science and Technology of China, China {yuchen005@e., chen1436@e., chengwei003@e., aseschng@}ntu.edu.sg, ruizhe.li@abdn.ac.uk, qszhu@mail.ustc.edu.cn ## Abstract Audio-visual speech recognition (AVSR) provides a promising solution to ameliorate the noise-robustness of audio-only speech recognition with visual information. However, most existing efforts still focus on audio modality to improve robustness considering its dominance in AVSR task, with noise adaptation techniques such as front-end denoise processing. Though effective, these methods are usually faced with two practical challenges: 1) lack of sufficient labeled noisy audio-visual training data in some real-world scenarios and 2) less optimal model generality to unseen testing noises. In this work, we investigate the noiseinvariant visual modality to strengthen robustness of AVSR, which can adapt to any testing noises while without dependence on noisy training data, *a.k.a.*, unsupervised noise adaptation. Inspired by human perception mechanism, we propose a universal viseme-phoneme mapping (UniVPM) approach to implement modality transfer, which can restore clean audio from visual signals to enable speech recognition under any noisy conditions. Extensive experiments on public benchmarks LRS3 and LRS2 show that our approach achieves the state-of-the-art under various noisy as well as clean conditions. In addition, we also outperform previous stateof-the-arts on visual speech recognition task1. ## 1 Introduction The world surrounding us involves multiple modalities, including vision, audio, text, etc., which complement each other and jointly comprise human perception (Baltrušaitis et al., 2018; Zhu et al., 2021b). Audio-visual speech recognition (AVSR) leverages both audio and visual modalities to understand human speech, which provides a promising solution to ameliorate the noise-robustness of audio-only speech recognition with noise-invariant lip movement information (Sumby and Pollack, 1954). ![0_image_0.png](0_image_0.png) However, most existing efforts still focus on audio modality to improve noise-robustness considering its dominance in AVSR, where audio modality contains much richer information to represent speech content than visual modality (Sataloff, 1992; Ren et al., 2021). Current mainstream approaches introduce noise adaptation techniques to improve robustness2, inspired by robust speech recognition (Wang et al., 2020). Most of them leverage noise-corrupted training data to strengthen robustness (Afouras et al., 2018a; Ma et al., 2021b; Song et al., 2022), and recent works extend it to selfsupervised learning scheme (Shi et al., 2022b; Hsu and Shi, 2022). Based on that, latest works introduce speech enhancement as front-end to denoise before recognition (Xu et al., 2020; Hong et al., 2022). Despite the effectiveness, these methods are usually faced with two practical challenges. First, they require abundant labeled noisy audio-visual data for network training, which is not always available in some real-world scenarios (Lin et al., 2021; Chen et al., 2022). Second, the well-trained model may not adapt to new-coming noise scenes in practical applications2, resulting in less optimal model 2Experimental analysis are in §A.1 and §4.2. 1Code is available at https://github.com/YUCHE N005/UniVPM. 15213 generality (Meng et al., 2017). Therefore, our research idea in this paper is leveraging visual modality to develop a general noise-robust AVSR system while without dependence on noisy training data. We may gain some inspirations from human perception mechanism of noisy audio-visual speech. Neuroscience studies (Nath and Beauchamp, 2011) find that human brain will unconsciously rely more on the lip movement to understand speech under noisy conditions (*a.k.a.*, McGurk Effect, McGurk and MacDonald, 1976). During this process, instead of directly recognizing lip movement, human brain will first transfer it to speech signal in auditory cortex for further understanding (Bourguignon et al., 2020; Mégevand et al., 2020). With prior knowledge of lip-audio mapping, human brain can restore informative clean audio from lip movement under any noisy conditions to aid in speech understanding (Bernstein et al., 2004; Aller et al., 2022). Motivated by above observations, we propose a universal viseme-phoneme3 mapping approach (UniVPM) to implement modality transfer, which can restore clean audio from lip movement to enable speech recognition under any noisy conditions. We first build two universal memory banks to model all the visemes and phonemes via online balanced clustering. Based on that, an adversarial mutual information estimator is proposed to construct strong viseme-phoneme mapping, which enables final lip-to-audio modality transfer via retrieval. As a result, our system can adapt well to any testing noises while without noisy training data. Empirical results show the effectiveness of our approach. Our contributions are summarized as: - We present UniVPM, a general noise-robust AVSR approach investigated on visual modality, which can adapt to any testing noises while without dependence on noisy training data, *a.k.a.*, unsupervised noise adaptation. - We build two universal banks to model all the visemes and phonemes via online balanced clustering, followed by an adversarial mutual information estimator to construct strong mapping between them, which enables modality transfer to restore clean audio from lip movement for speech recognition under any noises. tensive experiments also show its superiority on visual speech recognition (VSR) task. ## 2 Related Work Audio-Visual Speech Recognition. AVSR provides a promising solution to noise-robust speech recognition with the noise-invariant visual modality (Afouras et al., 2018a). However, most existing efforts still focus on audio modality to improve robustness considering its dominance in AVSR task (Sataloff, 1992; Ren et al., 2021). Mainstream approaches introduce noise adaptation techniques to strengthen robustness, where most of them leverage noise-corrupted data to improve network training (Afouras et al., 2018a; Ma et al., 2021b; Pan et al., 2022; Shi et al., 2022b; Hsu and Shi, 2022), and recent works further introduce speech enhancement as front-end to denoise before recognition (Xu et al., 2020; Hong et al., 2022). Despite the effectiveness, these methods require abundant labeled noisy audio-visual training data that is not always available in some real scenarios, and they may not adapt to the new-coming noise scenes in practical applications. In this work, we investigate the visual modality to develop a general noise-robust AVSR approach while without dependence on noisy training data, *a.k.a.*, unsupervised noise adaptation. Memory Network. Memory network (Weston et al., 2014) presents a long-term memory component that can be read from and written in with inference capability. Miller et al. (2016) introduces key-value memory structure where key memory is used to address a query and the retrieved output is obtained from value memory using the address. Since this scheme can remember selected information, it is effective for augmenting features in many tasks, including video prediction (Lee et al., 2021), cross-modal retrieval (Song et al., 2018; Chen et al., 2020a), lip reading (Kim et al., 2021a, 2022) and talking face generation (Park et al., 2022). Despite the advances, the memory network is prone to overfitting when handling imbalanced distributed data, a.k.a., long tail4(Liu et al., 2019), which may fail to model the minority classes well. In this work, we propose to build two memory banks via online balanced clustering to model all the visemes and phonemes equally, *i.e.*, universal. Viseme-Phoneme Mapping. Viseme-phoneme mapping is important to many visual-audio learning tasks, including speech recognition (Chan et al., 4Phoneme distribution in English is a long tail, see §A.4. ![2_image_0.png](2_image_0.png) 2022), lip reading (Ren et al., 2021) and lip-tospeech synthesis (Prajwal et al., 2020). Among them, cross-modal distillation is a popular technique to transfer knowledge from viseme to phoneme (Afouras et al., 2020; Zhao et al., 2020; Ren et al., 2021). Other works design specific neural networks to learn their mapping (Qu et al., 2019; Kim et al., 2021b). Recent studies introduce selfsupervised learning to capture correlations between visemes and phonemes (Qu et al., 2021; Ma et al., 2021a). Though effective, these methods are often challenged by the ambiguity of homophenes (Bear and Harvey, 2017) where one lip shape can produce different sounds. To this end, we propose an adversarial mutual information estimator to construct strict viseme-phoneme mapping with the strong distinguishing ability of adversarial learning. ## 3 Methodology 3.1 Overview The overall framework of proposed UniVPM is illustrated in Fig. 2. During training, we first send the input video and clean audio streams into two front-ends for processing, which generates modality sequences fv, fa ∈ R T ×D, where T is number of frames and D is embedding dimension. These frames are sent into two memory banks to model all the visemes and phonemes, using an online balanced clustering algorithm where each cluster center represents a specific viseme or phoneme. Then, we propose an adversarial mutual information estimator to construct strong mapping between corresponding visemes and phonemes. Based on that, we finally implement modality transfer via retrieval to restore clean audio from visual signals, which enables speech recognition under any testing noises. ## 3.2 Online Balanced Clustering Clustering is a widely used knowledge discovery technique to partition a set of data points into homogeneous groups, which has a variety of applications such as data mining (Fayyad et al., 1996). Among them, K-Means algorithm (MacQueen, 1967) is the most well-known and popular one. However, it cannot be directly applied for our viseme and phoneme clustering due to imbalanced data distribution (see §A.4). This may challenge K-Means clustering according to uniform effect (Xiong et al., 2006). As shown in Fig. 3 (a), most cluster centers gather in the majority data class (*i.e.*, over-fitting), leaving the minority class not well modeled. To this end, we propose an Online Balanced Clustering algorithm in Alg. 1 to model all the visemes and phonemes equally from input frames. Algorithm 1 Online Balanced Clustering. Require: Streaming data D, number of clusters N, maximum cluster size Smax. 1: Initialize an empty memory bank B and a list of empty cluster banks {B1, B2*, ...,* BN }. 2: **while** len(B) ≤ N do 3: Receive new batch data d from D 4: Append all frame samples in d to bank B 5: **end while** 6: Initialize a list of cluster centers {c1, c2*, ..., c*N } from B using K-MEANS++ Algorithm (2006) 7: for batch data d ∈ D do 8: Append all frame samples in d to bank B 9: {B1*, ...,* BN } = RE-ALLOCATE(B, {c1*, ..., c*N }) 10: {c1*, ..., c*N } = RENEW-CENTERS({B1*, ...,* BN }) 11: Calculate average cluster size Savg = len(B)/N 12: Threshold cluster size Sthr = min(Savg, Smax) 13: for i = 1, 2*, ..., N* do 14: if len(Bi) > Sthr **then** ▷ UNDERSAMPLING 15: Maintain the Sthr-nearest samples to ci in Bi 16: Update B accordingly 17: **else** ▷ OVERSAMPLING 18: Set a random weight α ∈ (0, 1) 19: Find the nearest sample d*near* to ci in Bi 20: dnew = d*near* · α + ci · (1 − α) ![3_image_1.png](3_image_1.png) First, we set the number of clusters N to 40, following the amount of English phonemes (Phy, 2022). Then, we set a maximum cluster size Smax (*i.e.*, number of samples in each cluster) to control the total memory. We also initialize an empty bank B as an overall cache, as well as a list of empty banks {B1, B2*, ...,* BN } to cache each cluster. The proposed algorithm is executed in three steps, center initialization, K-Means clustering and re-sampling. First, we collect the first few batches of data frames into B to initialize N dispersed cluster centers {c1, c2*, ..., c*N }, using K-Means++ algorithm (Arthur and Vassilvitskii, 2006). Second, we add the current batch data to bank B and employ vanilla K-Means algorithm to re-allocate each sample in the bank to the nearest cluster center, after that the new cluster centers would be updated. Finally, we propose a re-sampling strategy to balance the size of different clusters as well as control the total memory of bank B, by setting a threshold cluster size Sthr (line 12 in Alg. 1). For those clusters with more than Sthr samples (*i.e.*, majority cluster), we perform undersampling by only maintaining the Sthr nearest samples to cluster center. In contrast, for the minority clusters with less samples than threshold, we propose oversampling to interpolate a new sample between center and the ![3_image_0.png](3_image_0.png) nearest sample with a random weight, inspired by SMOTE algorithm (Chawla et al., 2002). In this way, as illustrated in Fig. 3 (b), the resulted clusters would be balanced-sized and separated to better represent each of the visemes and phonemes. ## 3.3 Adversarial Mutual Information Estimator After clustering visemes and phonemes in banks, we propose an Adversarial Mutual Information Estimator (AMIE) to construct strong mapping between them. Mutual Information (MI) is a commonly used measure to explore the coherence between two distributions, which is, however, historically difficult to estimate. Recently, Belghazi et al. (2018) propose a Mutual Information Neural Estimation (MINE) approach to approximate MI lower bound with neural network. Based on that, we propose an adversarial learning approach to maximize the MI between visemes and phonemes, in order to construct strict mapping between them and thus alleviate the ambiguity of homophenes. ## 3.3.1 Preliminary Theory Of Mine Mutual information measures the mutual dependency between two probability distributions, $$I(X,Y)=\sum_{x\in X}\sum_{y\in Y}p(x,y)\log\frac{p(x,y)}{p(x)p(y)},\tag{1}$$ where $p(x,y)$ is the joint probability distribution. $$\mathrm{(1)}$$ of X and Y , and p(x) and p(y) are the marginals. Therefore, the mutual information can be written in terms of Kullback-Leibler (KL-) divergence: $$I(X,Y)=D_{K L}(p(x,y)\parallel p(x)p(y)),$$ where $D_{K L}$ is defined as: $$\left(2\right)$$ $$D_{K L}(p\parallel q)=\sum_{x\in X}p(x)\log{\frac{p(x)}{q(x)}},$$ $$(3)$$ , (3) Furthermore, the KL-divergence admits the Donsker-Varadhan (DV) representation (Donsker and Varadhan, 1983; Belghazi et al., 2018): $$D_{KL}(p\parallel q)=\sup_{T:\Omega\to\mathbb{R}}\mathbb{E}_{p}[T]-\log(\mathbb{E}_{q}[e^{T}]),\tag{4}$$ where the supremum is taken over all functions $T$ on Ω ⊂ R dto guarantee two finite expectations. Therefore, we have the MI lower bound: $$I(X,Y)\geq I_{\Theta}(X,Y),$$ I(X, Y ) ≥ IΘ(*X, Y* ), (5) where IΘ is the neural information measure, $$I_{\Theta}(X,Y)=\sup_{\theta\in\Theta}\mathbb{E}_{p(x,y)}[T_{\theta}(x,y)]\tag{6}$$ $$-\log(\mathbb{E}_{p(x)p(y)}[e^{T_{\theta}(x,y)}]),$$ and Tθ denotes a trainable neural network. ## 3.3.2 Proposed Amie Based on MINE, we propose an Adversarial Mutual Information Estimator to explore and maximize the mutual information between clustered visemes and phonemes. As illustrated in Fig. 2 and 4, given a visual sequence fv, we send each frame of it into viseme bank to find the nearest cluster center cv, which forms the viseme sequence sv ∈ R T ×D. Similarly, we obtain a phoneme sequence sa to represent audio features fa. The neural network Tθ then feeds {sv, sa} to output a scalar for MI estimation, where Tθ is a 3-layer classifier with output as a 1-dimensional scalar. Furthermore, since we do not concern the accurate value of MI when maximizing it, we employ Jensen-Shannon (JS) representation (Hjelm et al., 2018) to approximate KL-divergence in Eq. 4, which has been proved with more stable neural network optimization. Therefore, the mutual information between clustered visemes and phonemes is estimated as: $$I_{\Theta}^{JS}(s_{v},s_{a})=\sup_{\theta\in\Theta}\mathbb{E}_{p(s_{v},s_{a})}[-\text{sp}(-T_{\theta}(s_{v},s_{a}))]$$ $$-\mathbb{E}_{p(s_{v})p(s_{a})}[\text{sp}(T_{\theta}(s_{v},\tilde{s}_{a}))],\tag{7}$$ where $\tilde{s}_{a}$ is the shuffle-ordered version of $s_{a}$ that subjects to the marginal distributions of phonemes, and sp(z) = log(1 + e z) is the softplus function. As stated in Belghazi et al. (2018), the neural network Tθ can be used to estimate MI between generated data (sv, sa in our case) by directly trained on them. However, this will suffer a lot from the poor quality of generated data at early training stage. One feasible scheme (Zhu et al., 2021a) is to train Viseme Bank (a) AMIE Phoneme Bank Phoneme Bank address weighted sum (b) AMIE Tθ on real data (fv, fa in our case) and then estimate MI on generated data, but this suffers from the ambiguity of homophenes (see Fig. 8). To this end, we propose AMIE with adversarial learning to estimate and maximize the MI between corresponding visemes and phonemes, which can construct strict viseme-phoneme mapping without ambiguity. Inspired by GAN (Goodfellow et al., 2014), we design the AMIE as discriminator and the visemephoneme banks as generator. Based on that, the adversarial loss is defined as: $$\begin{split}\mathcal{L}_{GAN}&=\mathcal{L}_{D}+\mathcal{L}_{G}\\ &=I_{\Theta}^{JS}(f_{v},f_{a})+[-I_{\Theta}^{JS}(s_{v},s_{a})],\end{split}\tag{8}$$ Our framework employs an adversarial learning strategy for optimization, where D and G play a two-player minimax game as detailed in Alg. 2. As a result, the estimated MI between corresponding visemes and phonemes would be maximized to construct mapping relationships. The strong distinguishing ability of adversarial learning enables strict viseme-phoneme mapping to overcome the ambiguity of homophenes, as shown in Fig. 5. ## 3.4 Modality Transfer With constructed viseme-phoneme mapping, we can finally implement modality transfer to restore clean audio from lips. As shown in Fig. 4, given the visual sequence fv and clustered phoneme centers {c 1a, c2a*, ..., c*N a }, we calculate an addressing score Ai,j to indicate the probability that the i-th visual frame corresponds to the j-th phoneme cluster: $${\mathcal{A}}^{i,j}={\frac{\exp(\langle f_{v}^{i},c_{a}^{j}\rangle/\tau)}{\sum_{k=1}^{N}\exp(\langle f_{v}^{i},c_{a}^{k}\rangle/\tau)}},\qquad\qquad(9)$$ where ⟨·, · ⟩ denotes cosine similarity, τ is temperature weight. The restored clean audio frames are: $$\hat{f}_{a}^{i}=\sum_{j=1}^{N}{\mathcal{A}}^{i\cdot j}\cdot c_{a}^{j},\qquad\qquad(10)$$ | Method | PT | FT | Babble, SNR (dB) = | Speech, SNR (dB) = | Music + Natural, SNR (dB) = | Clean | | | | | | | | | | | | | | | | |--------------------|-----------------------|------------|----------------------|----------------------|-------------------------------|-----------|------|---------|-----|-----------|------|-----------------|------|-----|-----|------|------|------|-----|-----|-----| | Type | Type | -10 | -5 | 0 | 5 | 10 | avg | -10 | -5 | 0 | 5 | 10 | avg | -10 | -5 | 0 | 5 | 10 | avg | ∞ | | | RNN-T (2019) | - | Clean | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 4.5 | | Hyb-AVSR (2021b) | - | Noisy | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 2.3 | | TM-seq2seq (2018a) | - | Noisy | - | - | 42.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 7.2 | | EG-seq2seq (2020) | - | Noisy 38.6 | 31.1 | 25.5 | 24.3 | 20.7 | 28.0 | - | - | - | - | - | - | - | - | - | - | - | - | 6.8 | | | u-HuBERT (2022) | Noisy | Noisy | - | - | 4.1 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 1.2 | | Clean Clean 72.6 | 30.9 | 9.8 | 2.9 | 2.1 | 23.7 93.4 | 71.6 | 22.1 | 6.1 | 2.7 | 39.2 24.1 | 10.9 | 3.6 | 2.4 | 1.9 | 8.6 | 1.42 | | | | | | | AV-HuBERT (2022b) | Noisy 30.0 | 15.2 | 5.9 | 2.7 | 1.9 | 11.1 15.9 | 7.5 | 3.9 | 2.4 | 1.9 | 6.3 | 12.1 | 5.9 | 3.1 | 2.2 | 1.8 | 5.0 | 1.40 | | | | | Noisy Clean 39.4 | 14.5 | 5.2 | 2.7 | 2.0 | 12.8 18.8 | 5.1 | 3.1 | 2.3 | 1.9 | 6.2 | 11.4 | 5.0 | 2.8 | 2.2 | 1.8 | 4.6 | 1.54 | | | | | | Noisy 28.4 | 13.4 | 5.0 | 2.6 | 1.9 | 10.3 11.4 | 4.6 | 2.9 | 2.2 | 1.8 | 4.6 | 9.7 | 4.7 | 2.5 | 1.9 | 1.8 | 4.1 | 1.40 | | | | | | UniVPM (ours) | Clean Clean 37.5 17.1 | 6.9 | 2.6 | 1.9 | 13.2 20.4 | 9.6 | 4.9 | 3.6 2.3 | 8.2 | 14.2 | 6.8 | 3.1 2.1 1.8 5.6 | 1.31 | | | | | | | | | | Noisy 28.1 | 13.8 | 5.1 | 2.2 | 1.7 | 10.2 14.5 | 6.7 | 3.3 | 2.1 | 1.7 | 5.7 | 10.7 | 5.2 | 2.7 | 1.9 | 1.6 | 4.4 | 1.22 | | | | | | Noisy Clean 32.6 | 12.6 | 4.4 | 2.3 | 1.7 | 10.7 17.0 | 4.4 | 2.7 | 2.1 | 1.6 | 5.6 | 10.1 | 4.3 | 2.4 | 1.9 | 1.6 | 4.1 | 1.25 | | | | | | Noisy 26.8 | 12.1 | 4.0 | 2.1 | 1.6 | 9.3 | 10.4 | 4.1 | 2.5 | 2.0 | 1.6 | 4.1 | 8.7 | 4.1 | 2.1 | 1.7 | 1.5 | 3.6 | 1.18 | | | | Table 1: WER (%) of proposed UniVPM and prior works on LRS3 benchmark. "PT Type" / "FT Type" denote pre-training / finetuning data type. "SNR" is signal-to-noise ratio. All the noisy data contains MUSAN (2015) noise. Method PT FT Babble, SNR (dB) = Speech, SNR (dB) = Music + Natural, SNR (dB) = Clean Type Type -10 -5 0 5 10 avg -10 -5 0 5 10 avg -10 -5 0 5 10 avg ∞ TM-seq2seq (2018a) - Noisy - - - - - - - - - - - - - - - - - - 8.5 Hyb-RNN (2018) - Noisy - - - - - - - - - - - - - - - - - - 7.0 LF-MMI TDNN (2020) - Clean - - - - - - - - - - - - - - - - - - 5.9 Hyb-AVSR (2021b) - Noisy - - - - - - - - - - - - - - - - - - 3.7 MoCo+w2v2 (2022) - Noisy - - - - - - - - - - - - - - - - - - 2.6 AV-HuBERT (2022b)Clean Clean 65.2 33.6 10.9 5.6 3.8 23.8 88.2 57.8 20.6 7.5 4.0 35.6 27.3 13.3 6.7 4.0 3.4 10.9 2.57 Noisy 33.2 16.3 7.6 4.6 3.7 13.1 14.9 9.5 6.2 4.5 3.8 7.8 13.9 9.0 4.9 3.9 3.2 7.0 2.38 Noisy Clean 36.9 18.6 8.1 4.8 3.5 14.4 24.6 9.7 4.8 3.6 3.4 9.2 15.2 8.4 5.1 3.8 3.1 7.1 2.44 Noisy 32.7 14.9 6.4 4.5 3.4 12.4 9.0 5.9 3.9 3.5 3.0 5.1 12.5 6.0 4.4 3.5 3.0 5.9 2.33 UniVPM (ours)Clean Clean 38.3 19.0 9.2 5.0 3.5 15.0 21.1 12.2 7.8 5.4 3.9 10.1 16.3 10.4 5.6 3.6 3.2 7.8 2.30 Noisy 30.4 14.4 6.6 4.1 3.4 11.8 12.4 8.3 5.5 4.2 3.6 6.8 12.4 7.9 4.3 3.4 3.0 6.2 2.17 Noisy Clean 33.7 16.2 6.7 4.2 3.2 12.8 19.8 7.6 4.0 3.2 3.1 7.5 13.4 7.3 4.5 3.4 2.9 6.3 2.24 Noisy 30.1 13.7 5.7 4.1 3.2 11.4 7.5 5.1 3.4 3.1 2.8 4.4 10.9 5.0 3.8 3.1 2.8 5.1 **2.16** To supervise the quality of restored audio ˆfa = { ˆf ia} T i=1, we first employ AMIE to maximize the MI between ˆfa and fv, where Eq. 8 is rewritten as: $$\mathcal{L}_{GAN}=I_{\Theta}^{JS}(f_{v},f_{a})+[-I_{\Theta}^{JS}(s_{v},s_{a})-I_{\Theta}^{JS}(f_{v},\hat{f}_{a})],\tag{11}$$ along with a reconstruction loss $\mathcal{L}_{rec}=\|\hat{f}_{a}-f_{a}\|_{2}$. to enable restoration of high-quality clean audio. ## 3.5 Optimization The UniVPM is optimized in an end-to-end manner (see Alg. 2), with the final training objective as: $$\mathcal{L}=\mathcal{L}_{ASR}+\lambda_{GAN}\cdot\mathcal{L}_{GAN}+\lambda_{rec}\cdot\mathcal{L}_{rec}+\lambda_{var}\cdot\mathcal{L}_{var},\tag{12}$$ where LASR denotes the downstream speech recognition loss. Lvar is a variance regularization term to disperse the clustered viseme and phoneme centers, which aims to ease their mapping construction. λGAN , λrec and λvar are weighting parameters. ## 4 Experiments 4.1 Experimental Setup Datasets. Our experiments are conducted on two large-scale public datasets, LRS3 (Afouras et al., 2018b) and LRS2 (Chung et al., 2017). LRS3 dataset collects 433 hours of transcribed English videos from TED & TEDx talks. LRS2 contains 224 hours of video speech from BBC programs. Configurations and Baselines. The proposed UniVPM is implemented based on AV-HuBERT with similar configurations, which are detailed in §B.3. We also select some mainstream AVSR approaches as baselines for comparison, *e.g.*, u-HuBERT (Hsu and Shi, 2022), and details are presented in §B.7. ## 4.2 Main Results Audio-Visual Speech Recognition. Table 1 compares the AVSR performance of our UniVPM with prior methods on LRS3 benchmark. Under clean training data, the proposed UniVPM (in purple shades) significantly outperforms AV-HuBERT baseline, and it achieves comparable performance to the AV-HuBERT trained on noisy data, where the restored clean audio plays the key role and implements our original motivation of unsupervised noise adaptation. Based on that, available noisy training data further improves the robustness5, where our best results achieve new state-of-5Noisy training pipeline of UniVPM is shown in Fig. 9. | Method | PT | FT | Meeting, SNR (dB) = | Cafe, SNR (dB) = | Resto, SNR (dB) = | Station, SNR (dB) = | | | | | | | | | | | | | | | | |----------------------------------------------------------------------------------------------------------|------------------|------|-----------------------|--------------------|--------------------------------|-----------------------|------------------|----------|-------------|----------|------|-----------|-----------|---------|-----|-----|-----|-----|-----|----|-----| | Type | Type | -10 | -5 | 0 | 5 | avg | -10 | -5 | 0 | 5 | avg | -10 | -5 | 0 | 5 | avg | -10 | -5 | 0 | 5 | avg | | Finetuned on DEMAND Noise | | | | | | | | | | | | | | | | | | | | | | | Clean Clean 33.2 | 11.7 | 4.3 | 3.1 | 13.1 26.0 | 8.5 | 2.9 | 2.0 | 9.9 63.5 | 30.4 | 11.0 | 3.9 | 27.2 20.1 | 7.0 | 4.7 | 2.5 | 8.6 | | | | | | | AV-HuBERT (2022b) | Noisy 10.6 | 5.2 | 2.9 | 2.5 | 5.3 | 10.1 | 4.3 | 2.3 | 1.8 | 4.6 27.8 | 14.4 | 4.9 | 2.6 | 12.4 | 7.6 | 4.5 | 2.9 | 2.0 | 4.3 | | | | Noisy Clean 17.7 | 7.1 | 4.0 | 2.9 | 7.9 | 16.0 | 5.8 | 2.7 | 1.9 | 6.6 49.5 | 19.5 | 6.2 | 3.1 | 19.6 11.8 | 5.9 | 3.7 | 2.2 | 5.9 | | | | | | Noisy 10.2 | 4.8 | 2.7 | 2.4 | 5.0 | 9.4 | 4.0 | 2.2 | 1.8 | 4.4 23.5 | 13.2 | 4.4 | 2.4 10.9 | 7.2 | 4.3 2.9 | 1.8 | 4.1 | | | | | | | Finetuned on MUSAN Noise | | | | | | | | | | | | | | | | | | | | | | | Clean Clean 33.2 | 11.7 | 4.3 | 3.1 | 13.1 26.0 | 8.5 | 2.9 | 2.0 | 9.9 63.5 | 30.4 | 11.0 | 3.9 | 27.2 20.1 | 7.0 | 4.7 | 2.5 | 8.6 | | | | | | | Noisy 13.9 | 6.3 | 3.3 | 2.8 | 6.6 | 13.6 | 5.1 | 2.6 | 1.9 | 5.8 36.1 | 17.5 | 5.7 | 2.9 | 15.6 | 9.9 | 5.3 | 3.5 | 2.1 | 5.2 | | | | | AV-HuBERT (2022b) Noisy Clean 17.7 | 7.1 | 4.0 | 2.9 | 7.9 | 16.0 | 5.8 | 2.7 | 1.9 | 6.6 49.5 | 19.5 | 6.2 | 3.1 | 19.6 11.8 | 5.9 | 3.7 | 2.2 | 5.9 | | | | | | Noisy 13.2 | 5.5 | 3.2 | 2.7 | 6.2 | 12.4 | 4.8 | 2.3 | 1.8 | 5.3 33.7 | 16.1 | 5.1 | 2.6 | 14.4 | 9.8 | 5.1 | 3.5 | 1.9 | 5.1 | | | | | UniVPM (ours) | Clean Clean 12.8 | 5.3 | 3.1 2.7 | 6.0 | 12.1 4.9 2.3 1.7 5.3 32.8 15.8 | 5.0 | 2.8 14.1 | 9.5 | 5.0 3.6 2.1 | 5.1 | | | | | | | | | | | | | Noisy 10.0 | 4.7 | 2.7 | 2.4 | 5.0 | 9.6 | 4.0 | 2.2 1.6 4.4 24.9 | 13.3 | 4.7 | 2.6 | 11.4 | 7.0 | 4.3 2.9 | 1.8 | 4.0 | | | | | | | | Noisy Clean 11.9 | 5.1 | 3.0 | 2.6 | 5.7 | 10.8 | 4.6 | 2.2 | 1.7 | 4.8 27.4 | 14.8 | 4.9 | 2.6 | 12.4 | 8.3 | 4.7 | 3.2 | 1.8 | 4.5 | | | | | Noisy | 9.7 | 4.6 | 2.6 | 2.3 | 4.8 | 9.0 | 3.8 | 2.1 | 1.6 | 4.1 22.6 | 12.9 | 4.3 | 2.4 | 10.6 | 6.9 | 4.3 | 2.8 | 1.7 | 3.9 | | | | Table 3: WER (%) on unseen testing noises with LRS3 benchmark. Testing noises "Meeting", "Cafe", "Resto" | | | | | | | | | | | | | | | | | | | | | | Method Finetune Unlabeled Labeled WER (%) Mode Data (hrs) Data (hrs) TM-seq2seq (2018a) AV - 1,519 58.9 EG-seq2seq (2020) AV - 590 57.8 Hyb-AVSR (2021b) AV - 590 43.3 RNN-T (2019) AV - 31,000 33.6 Distill-PT (2022) V 1,021 438 31.5 u-HuBERT (2022) AV 2,211 433 27.2 AV-HuBERT (2022a)AV 1,759 433 34.7 V 1,759 433 28.6 + Self-Training V 1,759 433 26.9 UniVPM (ours) AV 1,759 433 **26.7** Table 4: WER (%) results of visual speech recognition (VSR) on LRS3 benchmark. "Finetune Mode" denotes the input modality during finetuning stage. | Method | B | S | M | N | Clean VSR | | |---------------------------------------------|------|------|------|-----|-------------|------| | AV-HuBERT (2022b) | 23.7 | 39.2 | 10.7 | 6.4 | 1.42 | 34.7 | | Effectiveness of Online Balanced Clustering | | | | | | | | Memory Network (2022) | 20.6 | 29.5 | 9.2 | 6.1 | 1.39 | 32.0 | | Online Clustering | 19.3 | 22.9 | 8.7 | 5.9 | 1.37 | 31.2 | | Online Balanced Clustering | 13.2 | 8.2 | 6.1 | 5.1 | 1.31 | 26.7 | | Effectiveness of AMIE | | | | | | | | None | 22.3 | 35.4 | 10.4 | 6.0 | 1.39 | 31.8 | | Contrastive Learning | 21.5 | 29.2 | 9.7 | 5.8 | 1.37 | 30.1 | | MINE (2018) | 18.6 | 20.1 | 8.3 | 5.5 | 1.34 | 28.8 | | AMIE w/o Adv. Learning | 17.0 | 17.9 | 7.7 | 5.4 | 1.33 | 28.2 | | AMIE | 13.2 | 8.2 | 6.1 | 5.1 | 1.31 | 26.7 | | Analysis of Adversarial Learning | | | | | | | | LGAN w/ I(sv, sa) | 15.4 | 14.6 | 7.2 | 5.3 | 1.32 | 27.8 | | LGAN w/ I(fv, ˆfa) | 17.7 | 22.0 | 8.8 | 5.6 | 1.36 | 29.2 | | LGAN w/ I(sv, sa) + I(fv, ˆfa) | 13.2 | 8.2 | 6.1 | 5.1 | 1.31 | 26.7 | | Analysis of Regularization | | | | | | | | None | 17.2 | 20.4 | 8.0 | 5.7 | 1.36 | 30.3 | | UniVPM w/ Lrec | 14.3 | 11.5 | 6.5 | 5.3 | 1.33 | 27.4 | | UniVPM w/ Lvar | 15.6 | 14.6 | 7.2 | 5.4 | 1.33 | 28.5 | | UniVPM w/ Lrec + Lvar | 13.2 | 8.2 | 6.1 | 5.1 | 1.31 | 26.7 | the-art in various noisy as well as clean conditions. Furthermore, we can also observe similar results on LRS2 dataset as shown in Table 2. Table 3 further compares the performance of UniVPM with AV-HuBERT on unseen testing noises, which are sampled from DEMAND (Thiemann et al., 2013) dataset. First, when AV-HuBERT is finetuned and tested both on DEMAND noise, good WER performance can be achieved. However, if it is finetuned on MUSAN noise and then tested on unseen DEMAND noise, the performance would degrade a lot. In comparison, our UniVPM finetuned on clean data (purple shades) achieves significant improvement and surpasses the AV-HuBERT finetuned on MUSAN noise, which further verifies the strong generality of our model. Furthermore, when finetuned on MUSAN noise, our UniVPM even outperforms the AV-HuBERT finetuned on in-domain DEMAND noise, which highlights the superiority of our approach on unseen test noises. Visual Speech Recognition. To further verify the effectiveness of UniVPM, we evaluate its VSR performance by discarding the input audio modality during inference, as shown in Table 4. In this case, with restored clean audio from lip movements, the proposed UniVPM significantly outperforms AVHuBERT baseline (34.7%→26.7%). Although the visual-only training and self-training strategies improve AV-HuBERT's results, our UniVPM still defines new state-of-the-art on LRS3 benchmark. ## 4.3 Ablation Study Table 5 presents the ablation study of components in UniVPM. The four parts of ablation are independent, *i.e.*, each study is conducted where other three components are kept same as full UniVPM. Effect of Online Balanced Clustering. In UniVPM, our online clustering baseline outperforms the memory network with learnable embeddings, indicating the superiority of clustering technique ![7_image_0.png](7_image_0.png) in representing visemes and phonemes. Based on that, our proposed online balanced clustering achieves significant improvement by modeling all the visemes and phonemes equally without overfitting, which is further shown in Fig. 5. Effect of AMIE. As presented in Table 5, AMIE plays the key role in the promising performance of UniVPM by constructing strong viseme-phoneme mapping. As a comparison, the contrastive learning baseline only provides limited improvement, and MINE performs better by maximizing the estimated MI between visemes and phonemes. Based on that, our proposed AMIE introduces JS representation to stabilize system optimization, which improves performance but still suffers from the ambiguity of homophenes. To this end, our adversarial learning approach achieves further improvement by constructing strict viseme-phoneme mapping without ambiguity, as shown in Fig. 8. Analysis of Adversarial Learning. As illustrated in Eq. 11, there are two key components in adversarial learning, i.e., I(sv, sa) that constructs visemephoneme mapping and I(fv, ˆfa) that supervises the quality of restored clean audio. Results in Table 5 indicate that viseme-phoneme mapping is the most important, and the supervision on restored clean audio also improves the AVSR performance. Analysis of Regularization. According to Eq. 12, Lrec and Lvar are two auxiliary terms for regularization, where the former supervises the quality of restored audio, and the latter disperses clustered viseme and phoneme centers to ease their mapping construction. Both of them are proved with positive contributions to the gains of performance. Visualizations. Fig. 5 presents t-SNE visualization and confusion matrixes to further verify the effectiveness of UniVPM. First, the online clustering baseline generates gathered viseme and phoneme centers due to over-fitting, where only several majority phonemes are modeled as shown in (g). Our proposed online balanced clustering alleviates such over-fitting issue and generates separated phoneme centers, which can cover most of the real phonemes as illustrated in (h). However, we can still observe gathered viseme centers due to homophenes, and the ambiguity of viseme-phoneme mapping is also shown in (k). To this end, our proposed AMIE effectively alleviates the ambiguity of homophenes thanks to the strong distinguishing ability of adversarial learning, which constructs strict visemephoneme mapping in (l). Meanwhile, we also observe dispersed viseme centers in (c), which can distinguish the same visemes that correspond to different phonemes. In addition, real phonemes are also better modeled by clustering as shown in (i). Evaluation of Modality Transfer. Table 6 further reports phoneme match accuracy to evaluate the quality of restored clean audio. We observe that online clustering baseline can hardly restore cor- | Method | VSR | Phoneme | |------------------------------|----------------|-----------| | WER (%) | Match Acc. (%) | | | AV-HuBERT (2022a) | 34.7 | - | | + Online Clustering | 33.5 | 14.2 | | + Online Balanced Clustering | 31.8 | 31.0 | | + AMIE (UniVPM) | 26.7 | 67.5 | Table 6: Evaluation of restored clean audio in terms of phoneme match accuracy on LRS3 test set. It is calculated with predicted phonemes for restored audio and real clean audio by pre-trained model (Phy, 2022). rect phonemes, and the proposed online balanced clustering improves the accuracy but still limited by the ambiguity of homophenes. Furthermore, our proposed AMIE significantly improves the quality of restored clean audio with strict viseme-phoneme mapping, which also yields better VSR result. ## 5 Conclusion In this paper, we propose UniVPM, a general robust AVSR approach motivated from visual modality via unsupervised noise adaptation. UniVPM constructs universal viseme-phoneme mapping to implement modality transfer, which can restore clean audio from visual signals to enable speech recognition under any noises. Experiments on public benchmarks show that UniVPM achieves state-of-the-art under various noisy as well as clean conditions. Further analysis also verifies its effectiveness on VSR task. ## Limitations We state two points of limitations and future work in this section. First, the UniVPM combines both restored clean audio and original input audio for downstream speech recognition, while without any trade-off to weight them. For example, under extremely noisy conditions the restored clean audio plays a more important role, while in less noisy scenarios the original audio may provide more valid information. Some weighting strategies to select the most effective audio information could benefit the downstream speech recognition. Second, the proposed clustering and viseme-phoneme mapping are actually unsupervised schemes, so that it could be promising to extend our UniVPM to the popular self-supervised learning framework, in order to make full use of the abundant unlabeled data. ## Ethics Statement All the data used in this paper are publicly available and are used under the following licenses: the Creative Commons BY-NC-ND 4.0 License and Creative Commons Attribution 4.0 International License, the TED Terms of Use, the YouTube's Terms of Service, and the BBC's Terms of Use. The data is collected from TED and BBC and contain thousands of speakers from a wide range of races. To protect the anonymity, only mouth area of speaker is visualized wherever used in the paper. ## Acknowledgements This research is supported by KLASS Engineering & Solutions Pte Ltd and the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No.: AISG2100E-2023-103). The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg). ## References Triantafyllos Afouras, Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. 2018a. Deep audio-visual speech recognition. IEEE transactions on pattern analysis and machine intelligence. Triantafyllos Afouras, Joon Son Chung, and Andrew Zisserman. 2018b. Lrs3-ted: a large-scale dataset for visual speech recognition. arXiv preprint arXiv:1809.00496. Triantafyllos Afouras, Joon Son Chung, and Andrew Zisserman. 2020. Asr is all you need: Cross-modal distillation for lip reading. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2143–2147. IEEE. Máté Aller, Heidi Solberg Økland, Lucy J MacGregor, Helen Blank, and Matthew H Davis. 2022. Differential auditory and visual phase-locking are observed during audio-visual benefit and silent lip-reading for speech perception. *Journal of Neuroscience*, 42(31):6108–6120. David Arthur and Sergei Vassilvitskii. 2006. kmeans++: The advantages of careful seeding. Technical report, Stanford. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449–12460. Tadas Baltrušaitis, Chaitanya Ahuja, and Louis-Philippe Morency. 2018. Multimodal machine learning: A survey and taxonomy. IEEE transactions on pattern analysis and machine intelligence, 41(2):423–443. Helen L Bear and Richard Harvey. 2017. Phoneme-toviseme mappings: the good, the bad, and the ugly. Speech Communication, 95:40–67. Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. 2018. Mutual information neural estimation. In International conference on machine learning, pages 531–540. PMLR. Lynne E Bernstein, Edward T Auer Jr, and Sumiko Takayanagi. 2004. Auditory speech detection in noise enhanced by lipreading. *Speech Communication*, 44(1-4):5–18. Mathieu Bourguignon, Martijn Baart, Efthymia C Kapnoula, and Nicola Molinaro. 2020. Lip-reading enables the brain to synthesize auditory features of unknown silent speech. *Journal of Neuroscience*, 40(5):1053–1065. David M Chan, Shalini Ghosh, Debmalya Chakrabarty, and Björn Hoffmeister. 2022. Multi-modal pretraining for automated speech recognition. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 246–250. IEEE. Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. Smote: synthetic minority over-sampling technique. *Journal of artificial intelligence research*, 16:321–357. Chen Chen, Nana Hou, Yuchen Hu, Shashank Shirol, and Eng Siong Chng. 2022. Noise-robust speech recognition with 10 minutes unparalleled in-domain data. In *ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing* (ICASSP), pages 4298–4302. IEEE. Hui Chen, Guiguang Ding, Xudong Liu, Zijia Lin, Ji Liu, and Jungong Han. 2020a. Imram: Iterative matching with recurrent attention memory for crossmodal image-text retrieval. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12655–12663. Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. 2020b. Improved baselines with momentum contrastive learning. *arXiv preprint arXiv:2003.04297*. Yang Chen, Weiran Wang, I-Fan Chen, and Chao Wang. 2020c. Data techniques for online end-to-end speech recognition. *arXiv preprint arXiv:2001.09221*. Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. 2017. Lip reading sentences in the wild. In *2017 IEEE conference on computer* vision and pattern recognition (CVPR), pages 3444– 3453. IEEE. Monroe D Donsker and SR Srinivasa Varadhan. 1983. Asymptotic evaluation of certain markov process expectations for large time. iv. Communications on Pure and Applied Mathematics, 36(2):183–212. Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing transformer depth on demand with structured dropout. In *International Conference on Learning Representations*. Usama M Fayyad, Gregory Piatetsky-Shapiro, Padhraic Smyth, and Ramasamy Uthurusamy. 1996. Advances in knowledge discovery and data mining. American Association for Artificial Intelligence. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In *NeurIPS*. Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In *Proceedings of the* 23rd international conference on Machine learning, pages 369–376. Anmol Gulati, James Qin, Chiu Chung-Cheng, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang. 2020. Conformer: Convolution-augmented transformer for speech recognition. In *Interspeech*, pages 5036–5040. R Devon Hjelm, Alex Fedorov, Samuel LavoieMarchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. 2018. Learning deep representations by mutual information estimation and maximization. In *International Conference on Learning Representations*. Joanna Hong, Minsu Kim, and Yong Man Ro. 2022. Visual context-driven audio feature enhancement for robust end-to-end audio-visual speech recognition. In 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022, pages 2838–2842. International Speech Communication Association. Wei-Ning Hsu and Bowen Shi. 2022. u-hubert: Unified mixed-modal speech pretraining and zero-shot transfer to unlabeled modality. In *Advances in Neural* Information Processing Systems. Minsu Kim, Joanna Hong, Se Jin Park, and Yong Man Ro. 2021a. Multi-modality associative bridging through memory: Speech sound recollected from face video. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pages 296–306. Minsu Kim, Joanna Hong, and Yong Man Ro. 2021b. Lip to speech synthesis with visual context attentional gan. *Advances in Neural Information Processing* Systems, 34:2758–2770. Minsu Kim, Jeong Hun Yeo, and Yong Man Ro. 2022. Distinguishing homophenes using multi-head visualaudio memory for lip reading. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, volume 22. Davis E King. 2009. Dlib-ml: A machine learning toolkit. *The Journal of Machine Learning Research*, 10:1755–1758. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75. Sangmin Lee, Hak Gu Kim, Dae Hwi Choi, HyungIl Kim, and Yong Man Ro. 2021. Video prediction recalling long-term motion context via memory alignment learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 3054–3063. Hsin-Yi Lin, Huan-Hsin Tseng, Xugang Lu, and Yu Tsao. 2021. Unsupervised noise adaptive speech enhancement by discriminator-constrained optimal transport. *Advances in Neural Information Processing Systems*, 34:19935–19946. Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. 2019. Large-scale long-tailed recognition in an open world. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pages 2537–2546. Yanhua Long, Yijie Li, Hone Ye, and Hongwei Mao. 2017. Domain adaptation of lattice-free mmi based tdnn models for speech recognition. International Journal of Speech Technology, 20(1):171–178. Pingchuan Ma, Rodrigo Mira, Stavros Petridis, Björn W Schuller, and Maja Pantic. 2021a. Lira: Learning visual speech representations from audio through selfsupervision. *arXiv preprint arXiv:2106.09171*. Pingchuan Ma, Stavros Petridis, and Maja Pantic. 2021b. End-to-end audio-visual speech recognition with conformers. In *ICASSP 2021-2021 IEEE International* Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7613–7617. IEEE. Pingchuan Ma, Stavros Petridis, and Maja Pantic. 2022. Visual speech recognition for multiple languages in the wild. *arXiv preprint arXiv:2202.13084*. J MacQueen. 1967. Classification and analysis of multivariate observations. In *5th Berkeley Symp. Math.* Statist. Probability, pages 281–297. Takaki Makino, Hank Liao, Yannis Assael, Brendan Shillingford, Basilio Garcia, Otavio Braga, and Olivier Siohan. 2019. Recurrent neural network transducer for audio-visual speech recognition. In 2019 IEEE automatic speech recognition and understanding workshop (ASRU), pages 905–912. IEEE. Harry McGurk and John MacDonald. 1976. Hearing lips and seeing voices. *Nature*, 264(5588):746–748. Pierre Mégevand, Manuel R Mercier, David M Groppe, Elana Zion Golumbic, Nima Mesgarani, Michael S Beauchamp, Charles E Schroeder, and Ashesh D Mehta. 2020. Crossmodal phase reset and evoked responses provide complementary mechanisms for the influence of visual speech in auditory cortex. *Journal* of Neuroscience, 40(44):8530–8542. Zhong Meng, Zhuo Chen, Vadim Mazalov, Jinyu Li, and Yifan Gong. 2017. Unsupervised adaptation with domain separation networks for robust speech recognition. In *2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)*, pages 214–221. IEEE. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. *arXiv preprint arXiv:1606.03126*. Audrey R Nath and Michael S Beauchamp. 2011. Dynamic changes in superior temporal sulcus connectivity during perception of noisy audiovisual speech. Journal of Neuroscience, 31(5):1704–1714. Xichen Pan, Peiyu Chen, Yichen Gong, Helong Zhou, Xinbing Wang, and Zhouhan Lin. 2022. Leveraging unimodal self-supervised learning for multimodal audio-visual speech recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4491–4503, Dublin, Ireland. Association for Computational Linguistics. Se Jin Park, Minsu Kim, Joanna Hong, Jeongsoo Choi, and Yong Man Ro. 2022. Synctalkface: Talking face generation with precise lip-syncing via audio-lip memory. In *36th AAAI Conference on Artificial Intelligence (AAAI 22)*. Association for the Advancement of Artificial Intelligence. Stavros Petridis, Themos Stafylakis, Pingchuan Ma, Georgios Tzimiropoulos, and Maja Pantic. 2018. Audio-visual speech recognition with a hybrid ctc/attention architecture. In *2018 IEEE Spoken Language* Technology Workshop (SLT), pages 513–520. IEEE. Vitou Phy. 2022. Automatic Phoneme Recognition on TIMIT Dataset with Wav2Vec 2.0. If you use this model, please cite it using these metadata. KR Prajwal, Rudrabha Mukhopadhyay, Vinay P Namboodiri, and CV Jawahar. 2020. Learning individual speaking styles for accurate lip to speech synthesis. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 13796–13805. Leyuan Qu, Cornelius Weber, and Stefan Wermter. 2019. Lipsound: Neural mel-spectrogram reconstruction for lip reading. In *INTERSPEECH*, pages 2768– 2772. Leyuan Qu, Cornelius Weber, and Stefan Wermter. 2021. Lipsound2: Self-supervised pre-training for lip-tospeech reconstruction and lip reading. *arXiv preprint* arXiv:2112.04748. Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. 2019. Rapid learning or feature reuse? towards understanding the effectiveness of maml. In International Conference on Learning Representations. Sucheng Ren, Yong Du, Jianming Lv, Guoqiang Han, and Shengfeng He. 2021. Learning from the master: Distilling cross-modal advanced knowledge for lip reading. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 13325–13333. Robert T Sataloff. 1992. The human voice. *Scientific* American, 267(6):108–115. Bowen Shi, Wei-Ning Hsu, Kushal Lakhotia, and Abdelrahman Mohamed. 2022a. Learning audio-visual speech representation by masked multimodal cluster prediction. In International Conference on Learning Representations. Bowen Shi, Wei-Ning Hsu, and Abdelrahman Mohamed. 2022b. Robust self-supervised audio-visual speech recognition. arXiv preprint arXiv:2201.01763. David Snyder, Guoguo Chen, and Daniel Povey. 2015. Musan: A music, speech, and noise corpus. arXiv preprint arXiv:1510.08484. Ge Song, Dong Wang, and Xiaoyang Tan. 2018. Deep memory network for cross-modal retrieval. *IEEE* Transactions on Multimedia, 21(5):1261–1275. Qiya Song, Bin Sun, and Shutao Li. 2022. Multimodal sparse transformer network for audio-visual speech recognition. IEEE Transactions on Neural Networks and Learning Systems. William H Sumby and Irwin Pollack. 1954. Visual contribution to speech intelligibility in noise. *The journal of the acoustical society of america*, 26(2):212– 215. Joachim Thiemann, Nobutaka Ito, and Emmanuel Vincent. 2013. The diverse environments multi-channel acoustic noise database (demand): A database of multichannel environmental noise recordings. In Proceedings of Meetings on Acoustics ICA2013, volume 19, page 035081. Acoustical Society of America. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Zhong-Qiu Wang, Peidong Wang, and DeLiang Wang. 2020. Complex spectral mapping for single-and multi-channel speech enhancement and robust asr. IEEE/ACM transactions on audio, speech, and language processing, 28:1778–1787. Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R Hershey, and Tomoki Hayashi. 2017. Hybrid ctc/attention architecture for end-to-end speech recognition. *IEEE Journal of Selected Topics in Signal Processing*, 11(8):1240–1253. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. *arXiv preprint arXiv:1410.3916*. Hui Xiong, Junjie Wu, and Jian Chen. 2006. K-means clustering versus validation measures: a data distribution perspective. In *Proceedings of the 12th ACM* SIGKDD international conference on Knowledge discovery and data mining, pages 779–784. Bo Xu, Cheng Lu, Yandong Guo, and Jacob Wang. 2020. Discriminative multi-modality speech recognition. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 14433–14442. Jianwei Yu, Shi-Xiong Zhang, Jian Wu, Shahram Ghorbani, Bo Wu, Shiyin Kang, Shansong Liu, Xunying Liu, Helen Meng, and Dong Yu. 2020. Audio-visual recognition of overlapped speech for the lrs2 dataset. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6984–6988. IEEE. Jisi Zhang, Catalin Zorila, Rama Doddipatla, and Jon Barker. 2022. On monoaural speech enhancement for automatic recognition of real noisy speech using mixture invariant training. *arXiv preprint* arXiv:2205.01751. Pengfei Zhang, Jianru Xue, Cuiling Lan, Wenjun Zeng, Zhanning Gao, and Nanning Zheng. 2019. Eleatt-rnn: Adding attentiveness to neurons in recurrent neural networks. *IEEE Transactions on Image Processing*, 29:1061–1073. Ya Zhao, Rui Xu, Xinchao Wang, Peng Hou, Haihong Tang, and Mingli Song. 2020. Hearing lips: Improving lip reading by distilling speech recognizers. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6917–6924. Hao Zhu, Huaibo Huang, Yi Li, Aihua Zheng, and Ran He. 2021a. Arbitrary talking face generation via attentional audio-visual coherence learning. In *Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial* Intelligence, pages 2362–2368. Hao Zhu, Man-Di Luo, Rui Wang, Ai-Hua Zheng, and Ran He. 2021b. Deep audio-visual learning: A survey. *International Journal of Automation and Computing*, 18(3):351–376. Mode PT FT Babble, SNR (dB) = Speech, SNR (dB) = Music + Natural, SNR (dB) = Clean ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png) Type Type -10 -5 0 5 10 avg -10 -5 0 5 10 avg -10 -5 0 5 10 avg ∞ ![12_image_2.png](12_image_2.png) ![12_image_3.png](12_image_3.png) ![12_image_4.png](12_image_4.png) Clean Clean 99.3 89.6 43.9 11.0 3.7 49.5 102.5 93.8 63.5 24.1 10.7 58.9 58.6 35.9 13.9 5.4 2.6 23.3 1.55 Noisy 98.2 65.6 17.0 5.3 2.7 37.8 94.3 73.8 46.3 22.9 9.7 49.4 43.4 18.0 6.5 3.2 2.1 14.6 1.50 Noisy Clean 98.3 77.6 23.0 7.3 2.9 41.8 87.3 62.9 41.0 22.2 8.9 44.5 43.4 19.3 7.1 3.4 2.5 15.1 1.62 Noisy 97.5 62.3 15.7 5.1 2.6 36.6 81.7 56.2 37.3 19.0 8.3 40.5 38.7 15.1 5.7 3.1 2.3 13.0 1.60 Clean Clean 72.6 30.9 9.8 2.9 2.1 23.7 93.4 71.6 22.1 6.1 2.7 39.2 24.1 10.9 3.6 2.4 1.9 8.6 1.42 Noisy 30.0 15.2 5.9 2.7 1.9 11.1 15.9 7.5 3.9 2.4 1.9 6.3 12.1 5.9 3.1 2.2 1.8 5.0 1.40 Noisy Clean 39.4 14.5 5.2 2.7 2.0 12.8 18.8 5.1 3.1 2.3 1.9 6.2 11.4 5.0 2.8 2.2 1.8 4.6 1.54 Noisy 28.4 13.4 5.0 2.6 1.9 10.3 11.4 4.6 2.9 2.2 1.8 4.6 9.7 4.7 2.5 1.9 1.8 4.1 1.40 ## A Supplementary Experimental Analysis A.1 **Analysis Of The Noise-Robustness Of Avsr** Table 7 presents the performance of AV-HuBERT to analyze the noise-robustness of AVSR system. First, as the original motivation of AVSR, the visual modality significantly improves the audioonly speech recognition performance under various noisy as well as clean testing conditions, especially the low-SNR environments. However, most existing efforts still focus on audio modality to improve robustness considering its dominance in AVSR task. The reason is the inherent information insufficiency of visual modality to represent speech content. Mainstream approaches introduce noise adaptation techniques to strengthen robustness, where most of them leverage noise-corrupted data to improve network training (Afouras et al., 2018a; Ma et al., 2021b; Pan et al., 2022; Shi et al., 2022b; Hsu and Shi, 2022). As shown in Table 7, available noisy training data significantly improves the AVSR performance in different testing conditions. However, this strategy is usually faced with two practical challenges. First, it requires abundant labeled noisy audio-visual training data, which is not always available in some real-world scenarios (Meng et al., 2017; Long et al., 2017; Lin et al., 2021; Chen et al., 2022). For instance, in scenarios like theatre, it is valuable to develop a AVSR system but costly to obtain sufficient training data. Second, as it is impossible to cover all the realworld noises in training data, when some unseen noises appear in practical testing scenarios, the well-trained model may not perform well as shown in Table 3, resulting in less optimal model generality (Meng et al., 2017). Above two challenges motivate this work. With unsupervised noise adaptation investigated on visual modality, our proposed UniVPM improves the AVSR performance under clean training data to a comparable level to the state-of-the-art AV-HuBERT trained on noisy data ![12_image_5.png](12_image_5.png) in various noisy as well as clean testing conditions, as shown in Table 1, 2, and 3. Moreover, available noisy training data can further improve the robustness of UniVPM and yield new state-of-the-arts on both LRS3 and LRS2 benchmarks. ## A.2 Analysis Of Limited In-Domain Noisy Audio-Visual Data According to §1 and §A.1, the first challenge of audio modality-based robust AVSR is the limited indomain noisy audio-visual data, which leads to domain mismatch between training and testing (Meng et al., 2017; Long et al., 2017; Lin et al., 2021; Chen et al., 2020c, 2022). Actually there are two methods of obtaining such data, *i.e.*, collection and simulation. First, we can collect and transcribe amounts of noisy audio-visual data under real-world scenarios, but that is extremely time-consuming and laborious, and to our best knowledge there is currently no such public dataset. Second, as there is sufficient clean transcribed audio-visual data (Afouras et al., 2018b; Chung et al., 2017), we can collect indomain noise to simulate noisy audio-visual data. However, this data augmentation method can only partially alleviate but not resolve the domain mismatch problem (Zhang et al., 2022). What is worse, the in-domain noise data is also not always available in all the real-world scenarios (Meng et al., 2017; Long et al., 2017; Chen et al., 2020c, 2022). As presented in Table 1, in case of no available in-domain noise, our UniVPM achieves comparable performance to previous state-of-the-art trained on in-domain noise. When in-domain noise is available, our UniVPM directly outperforms previous state-of-the-art, which breaks out the limit of data augmentation and moves one step forward to the real noisy data training setting (*i.e.*, oracle). In addition, Table 3 further investigates the cases with outof-domain training noise, where our UniVPM even Figure 6: Phoneme distributions in LRS3 and LRS2 ![13_image_0.png](13_image_0.png) datasets. Pre-trained phoneme recognition model (Phy, 2022) is used for statistics, where speech is recognized into 44 phonemes, with 39 of them visualized in figures and another 5 special phonemes eliminated (*i.e.*, '|', '[UNK]', '[PAD]', '<s>', '</s>'). surpasses previous state-of-the-art trained on indomain noise. As a result, our proposed approach effectively alleviates the limitation of in-domain noisy data in audio modality-based robust AVSR. ## A.3 **Analysis Of Univpm From Meta-Learning** Perspective The main idea of our proposed UniVPM can also be explained from meta-learning perspective (Raghu et al., 2019), *i.e.*, learn how to learn. In AVSR task, considering the inherent information sufficiency of visual modality to represent speech content (Sataloff, 1992; Ren et al., 2021), the key factor of its robustness is still the informative audio modality. However, audio is usually interfered by background noise during practical inference. Therefore, the key of improving robustness is to gain sufficient knowledge from clean audio in training stage, and metalearning exactly tells AVSR how to learn from the clean audio. Motivated by this idea, we leverage clean audio-visual data to train the core modules of UniVPM, *i.e.*, viseme and phoneme banks, where video serves as "prompt" and clean audio serves as "meta". In particular, our UniVPM learns the mapping between visemes and phonemes, which then enables modality transfer to restore clean audio against testing noises. Here the viseme-phoneme mapping defines *how to learn from clean audio*. Therefore, we only need video "prompt" during inference to access the clean audio "meta", which enables UniVPM to adapt to any testing noises. Figure 7: Phoneme distributions without 'h\#'. ![13_image_1.png](13_image_1.png) ![13_image_2.png](13_image_2.png) ## A.4 Analysis Of Phoneme Distribution In Lrs3 And Lrs2 Datasets Fig. 6 presents the phoneme distribution in LRS3 and LRS2 datasets. We can observe that in both datasets, the phoneme obeys a long-tail distribution (Liu et al., 2019) with head classes including 'h\#', 'ih', 'n', 'l', 's', 'ah', etc. For better visualization, Fig. 7 removes the dominant phoneme 'h\#' and also presents a long-tail distribution. Therefore, the neural network trained on these data is prone to over-fitting to head phoneme classes, resulting in less satisfactory performance on tail classes. LRS3 and LRS2 are both large-scale English reading speech datasets recorded with thousands of speakers from a wide range of races, so that they can be roughly representative of the phoneme distribution of English language. ## B Experimental Details B.1 Datasets LRS36(Afouras et al., 2018b) is currently the largest public sentence-level lip reading dataset, 6https://www.robots.ox.ac.uk/~vgg/dat a/lip_reading/lrs3.html which contains over 400 hours of English video extracted from TED and TEDx talks on YouTube. The training data is divided into two parts: pretrain (403 hours) and trainval (30 hours), and both of them are transcribed at sentence level. The pretrain part differs from trainval in that the duration of its video clips are at a much wider range. Since there is no official development set provided, we randomly select 1,200 samples from trainval as validation set (∼ 1 hour) for early stopping and hyper-parameter tuning. In addition, it provides a standard test set (0.9 hours) for evaluation. LRS27(Chung et al., 2017) is a large-scale publicly available labeled audio-visual (A-V) datasets, which consists of 224 hours of video clips from BBC programs. The training data is divided into three parts: pretrain (195 hours), train (28 hours) and val (0.6 hours), which are all transcribed at sentence level. An official test set (0.5 hours) is provided for evaluation use. The dataset is very challenging as there are large variations in head pose, lighting conditions and genres. ## B.2 Data Preprocessing The data preprocessing for above two datasets follows the LRS3 preprocessing steps in prior work (Shi et al., 2022a). For the audio stream, we extract the 26-dimensional log filter-bank feature at a stride of 10 ms from input raw waveform. For the video clips, we detect the 68 facial keypoints using dlib toolkit (King, 2009) and align the image frame to a reference face frame via affine transformation. Then, we convert the image frame to gray-scale and crop a 96×96 region-of-interest (ROI) centered on the detected mouth. During training, we randomly crop a 88×88 region from the whole ROI and flip it horizontally with a probability of 0.5. At inference time, the 88×88 ROI is center cropped without horizontal flipping. To synchronize these two modalities, we stack each 4 neighboring acoustic frames to match the image frames that are sampled at 25Hz. ## B.3 Model Configurations Front-ends. We adopt the modified ResNet-18 from prior work (Shi et al., 2022a) as visual frontend, where the first convolutional layer is replaced by a 3D convolutional layer with kernel size of 5×7×7. The visual feature is flattened into an 1D 7https://www.robots.ox.ac.uk/~vgg/dat a/lip_reading/lrs2.html vector by spatial average pooling in the end. For audio front-end, we use one linear projection layer followed by layer normalization (Ba et al., 2016). UniVPM. The viseme and phoneme banks contain N = 40 clusters, following the amount of English phonemes (Phy, 2022), *i.e.*, 39 regular phonemes and one special phoneme '[PAD]' that indicates silence. It is worth mentioning that the actual amount of visemes is less than phonemes due to homophene phenomenon, *i.e.*, one-to-many lipaudio mapping (Bear and Harvey, 2017), but in this work we set same number of clusters to construct a strict one-to-one viseme-phoneme mapping, as shown in Fig. 5 and Fig. 8. The cluster capacity Smax in Alg. 1 is set to 20, and the temperature τ in Eq. 9 is set to 0.1. Speech Recognition. The downstream speech recognition model follows AV-HuBERT (Shi et al., 2022b) with 24 Transformer (Vaswani et al., 2017) encoder layers and 9 decoder layers, where the embedding dimension/feed-forward dimension/attention heads in each Transformer layer are set to 1024/4096/16 respectively. We use a dropout of p = 0.1 after the self-attention block within each Transformer layer, and each Transformer layer is dropped (Fan et al., 2019) at a rate of 0.1. The total number of parameters in our UniVPM and AV-HuBERT baseline are 478M and 476M. ## B.4 Data Augmentation Following prior work (Shi et al., 2022b), we use many noise categories for data augmentation to simulate noisy training data. We select the noise categories of "babble", "music" and "natural" from MUSAN noise dataset (Snyder et al., 2015), and extract some "speech" noise samples from LRS3 dataset. For experiments on unseen testing noises (see Table 3), we also select the noise categories of "Meeting", "Cafe", "Resto" and "Station" from DEMAND noise dataset (Thiemann et al., 2013). All categories are divided into training, validation and test partitions. During training process, we randomly select one noise category and sample a noise clip from its training partition. Then, we randomly mix the sampled noise with input clean audio, at signal-to-noise ratio (SNR) of 0dB with a probability of 0.25. At inference time, we evaluate our model on clean and noisy test sets respectively. Specifically, the system performance on each noise type is evaluated separately, where the testing noise clips are added at five different SNR levels: {−10, −5, 0, 5, 10}dB. At last, the testing results on different noise types and SNR levels will be averaged to obtain the final noisy WER result. ## B.5 Training And Inference Training. The noisy training data is synthesized by adding random noise from MUSAN (Snyder et al., 2015) or DEMAND (Thiemann et al., 2013) of 0dB at a probability of 0.25. We load the pretrained AV-HuBERT8for front-ends and downstream speech recognition model, and then follow its sequence-to-sequence (S2S) finetuning configurations (Shi et al., 2022b) to train our system. We use Transformer decoder to decode the encoded features into unigram-based subword units (Kudo, 2018), where the vocabulary size is set to 1000. The weighting parameters λGAN /λrec/λvar in Eq. 12 are set to 0.1/0.2/0.5, respectively. The entire system is trained for 60K steps using Adam optimizer (Kingma and Ba, 2014), where the learning rate is warmed up to a peak of 0.001 for the first 20K updates and then linearly decayed. The training process takes ∼ 2.5 days on 4 NVIDIA-V100-32GB GPUs, where in comparison the AV-HuBERT finetuning takes ∼ 1.3 days on 4 NVIDIA-V100-32GB GPUs. Inference. As shown in Table 1, the testing noises "Babble", "Music" and "Natural" are sampled from MUSAN, and "Speech" is drawn from LRS3, following prior work (Shi et al., 2022b). No language model is used during inference. We employ beam search for decoding, where the beam width and length penalty are set to 50 and 1 respectively. All hyper-parameters in our systems are tuned on validation set. Since our experimental results are quite stable, a single run is performed for each reported result. ## B.6 Details Of Univpm Optimization As detailed in Alg. 2, we design a two-step adversarial learning strategy for UniVPM optimization, where the discriminator and generator play a twoplayer minimax game. First, we maximize LGAN to update the discriminator, where generator is detached from optimization. According to Eq. 11, maximizing the first term of LGAN increases the MI between visual and audio sequences, while maximizing the second term is actually decreasing the Algorithm 2 UniVPM Optimization. ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) Require: Training data Dtrain that contains visual-audio pairs (xv, xa) and the text transcription y. The UniVPM network θ that consists of visual front-end θvf , audio frontend θaf , viseme bank Bv, phoneme bank Ba, AMIE θ*AMIE* and speech recognition model θASR. Hyperparameter weights λGAN , λrec, λvar. 1: Load pre-trained AV-HuBERT for θvf , θaf and θASR, randomly initialize θ*AMIE*. 2: Initialize empty banks Bv and Ba. 3: **while** not converged do 4: for (xv, xa) ∈ Dtrain do 5: FORWARD-PROPAGATION: 6: fv = θvf (xv), fa = θaf (xa) ▷ front-ends 7: Update Bv and Ba according to Alg. 1 8: Obtain viseme sequence sv from fv and Bv 9: Obtain phoneme sequence sa from fa and Ba 10: Generate restored audio ˆfa in Eq. 9 and 10 11: yˆ = θASR(fv ⊕ fa ⊕ ˆfa) ▷ recognition 12: TRAINING OBJECTIVES: 13: LGAN (LD and LG) in Eq. 11 14: Lrec = ∥ ˆfa − fa∥2 15: Lvar = Var(c 1 v*, ..., c*N v ) + Var(c 1 a*, ..., c*N a ) 16: LASR = CrossEntropy(ˆ*y, y*) 17: BACK-PROPAGATION: ▷ adversarial learning 18: UPDATE AMIE: ▷ unfreeze θ*AMIE* 19: arg max θAMIELGAN 20: UPDATE REST NETWORK: ▷ freeze θ*AMIE* ![15_image_5.png](15_image_5.png) LASR +λGAN ·LG +λrec ·Lrec+ MI between visemes and phonemes, as well as the ![15_image_2.png](15_image_2.png) ![15_image_3.png](15_image_3.png) ![15_image_4.png](15_image_4.png) ![15_image_6.png](15_image_6.png) MI between visual and restored audio sequences (this is opposite to our desired viseme-phoneme mapping and modality transfer). Second, we freeze discriminator and update the rest network, where minimizing LG increases the MI between visemes and phonemes, as well as MI between visual and restored audio sequences. In addition, LASR optimizes the downstream speech recognition model, Lrec supervise the quality of restored clean audio, and Lvar disperses the viseme and phoneme centers to ease their mapping construction. The entire system is trained in an end-to-end manner. In actual experiments, to save computation cost, we update Bv and Ba once every 10 epochs, which has been proved with no affect on the system performance. One can refer to our attached code for more implementation details. ## B.7 Baselines In this section, we describe the baselines for comparison. - **TM-seq2seq** (Afouras et al., 2018a): TMseq2seq proposes a Transformer-based AVSR system to model the A-V features separately and then attentively fuse them for decoding, and uses cross-entropy based sequence-tosequence loss as training criterion. - **TM-CTC** (Afouras et al., 2018a): TM-CTC shares the same architecture with TM-seq2seq, but uses CTC loss (Graves et al., 2006) as training criterion. - **Hyb-RNN** (Petridis et al., 2018): Hyb-RNN proposes a RNN-based AVSR model with hybrid seq2seq/CTC loss (Watanabe et al., 2017), where the A-V features are encoded separately and then concatenated for decoding. - **RNN-T** (Makino et al., 2019): RNN-T adopts the popular recurrent neural network transducer (Graves, 2012) for AVSR task, where the audio and visual features are concatenated before fed into the encoder. - **EG-seq2seq** (Xu et al., 2020): EG-seq2seq builds a joint audio enhancement and multimodal speech recognition system based on RNN (Zhang et al., 2019), where the A-V features are concatenated before decoding. - **LF-MMI TDNN** (Yu et al., 2020): LF-MMI TDNN proposes a joint audio-visual speech separation and recognition system based on time-delay neural network (TDNN), where the A-V features are concatenated before fed into the recognition network. - **Hyb-AVSR** (Ma et al., 2021b): HybAVSR proposes a Conformer-based (Gulati et al., 2020) AVSR system with hybrid seq2seq/CTC loss, where the A-V input streams are first encoded separately and then concatenated for decoding. - **MoCo+w2v2** (Pan et al., 2022): MoCo+w2v2 employs self-supervised pre-trained audio and visual front-ends, *i.e.*, wav2vec 2.0 (Baevski et al., 2020) and MoCo v2 (Chen et al., 2020b), to generate better audio-visual features for fusion and decoding. - **AV-HuBERT** (Shi et al., 2022a,b): AVHuBERT employs self-supervised learning to capture deep A-V contextual information, where the A-V features are masked and concatenated before fed into Transformer encoder to calculate masked-prediction loss for pretraining, and sequence-to-sequence loss is then used for finetuning. - **u-HuBERT** (Hsu and Shi, 2022): u-HuBERT extends AV-HuBERT to a unified framework of audio-visual and audio-only pre-training. - **Distill-PT** (Ma et al., 2022): Distill-PT proposes a Conformer-based VSR framework with additional distillation from pre-trained ASR and VSR models. ## C Clustering Algorithms C.1 Uniform Effect In K**-Means** K-Means (MacQueen, 1967) is the most popular and successful clustering algorithm, where sample re-allocation and center renewal are executed alternatively to minimize the intra-cluster distance. However, Xiong et al. (2006) points out that KMeans algorithm tends to produce balanced clustering result, *a.k.a.*, uniform effect. This preference seriously degrades the performance when the clusters are imbalanced-sized. The consequence is that the center of minority clusters will gradually move to the territory of majority cluster, as illustrated in Fig. 3 (a). In other words, the K-Means algorithm will be over-fitted to majority clusters, leaving the samples in minority clusters not well modeled. ## C.2 K**-Means++** The performance of K-Means clustering relies on the center initialization, where the vanilla algorithm initialize cluster centers randomly. KMeans++ (Arthur and Vassilvitskii, 2006) is an improved version with dispersed initial centers. It determines cluster centers one by one, and each newly initialized center is pushed as distant as possible to the existed centers. As a result, the K initial cluster centers would separate from each other and benefit the subsequent clustering process. ## C.3 Details Of Online Clustering Baseline For comparison, we build an Online Clustering algorithm as baseline. It is similar to Alg. 1 but employs a vanilla random pruning strategy, instead of re-sampling, to control the total memory of the bank. Our strategy is to randomly keep Sthr samples in the cluster if its number of samples exceeds Sthr. Compared to the proposed Online Balanced Clustering algorithm, this baseline also controls ![17_image_0.png](17_image_0.png) memory size but ignores the imbalanced clusters, as indicated by the dashed ellipses in Fig. 3 (a). ## C.4 Principles Of Online Balanced Clustering According to Alg. 1, the main idea of proposed Online Balanced Clustering is the re-sampling operation to balance cluster sizes. For majority clusters, we perform undersampling to maintain the Sthr nearest samples to cluster center, so that the gathered clusters in Fig. 3 (a) can be separated. For minority clusters, we introduce oversampling to interpolate a new sample near the center, so that the minority clusters are highlighted. As a result, all the clusters are balanced-sized and separated from each other as shown in Fig. 3 (b), so that the over-fitting problem is resolved. As a result, all of the visemes and phonemes can get well represented, which enables the subsequent visemephoneme mapping construction. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations is after conclusion without section number A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 4 And Appendix A ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 and Appendix B C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix B D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
huang-etal-2023-extensible
An Extensible Plug-and-Play Method for Multi-Aspect Controllable Text Generation
https://aclanthology.org/2023.acl-long.849
Recently, multi-aspect controllable text generation that controls the generated text in multiple aspects (e.g., sentiment, topic, and keywords) has attracted increasing attention. Although methods based on parameter efficient tuning like prefix-tuning could achieve multi-aspect controlling in a plug-and-play way, the mutual interference of multiple prefixes leads to significant degeneration of constraints and limits their extensibility to training-time unseen aspect combinations. In this work, we provide a theoretical lower bound for the interference and empirically found that the interference grows with the number of layers where prefixes are inserted. Based on these analyses, we propose using trainable gates to normalize the intervention of prefixes to restrain the growing interference. As a result, controlling training-time unseen combinations of aspects can be realized by simply concatenating corresponding plugins such that new constraints can be extended at a lower cost. In addition, we propose a unified way to process both categorical and free-form constraints. Experiments on text generation and machine translation demonstrate the superiority of our approach over baselines on constraint accuracy, text quality, and extensibility.
# An Extensible Plug-And-Play Method For Multi-Aspect Controllable Text Generation Xuancheng Huang1† Zijun Liu2,4† Peng Li3,5 Tao Li1 Maosong Sun2,4∗ **Yang Liu**2,3,4,5∗ 1Meituan, China 2Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China 3Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China 4Beijing National Research Center for Information Science and Technology 5Shanghai Artificial Intelligence Laboratory, Shanghai, China ## Abstract Recently, multi-aspect controllable text generation that controls the generated text in multiple aspects (e.g., sentiment, topic, and keywords) has attracted increasing attention. Although methods based on parameter efficient tuning like prefix-tuning could achieve multi-aspect controlling in a plug-and-play way, the mutual interference of multiple prefixes leads to significant degeneration of constraints and limits their extensibility to training-time unseen aspect combinations. In this work, we provide a theoretical lower bound for the interference and empirically found that the interference grows with the number of layers where prefixes are inserted. Based on these analyses, we propose using trainable gates to normalize the intervention of prefixes to restrain the growing interference. As a result, controlling training-time unseen combinations of aspects can be realized by simply concatenating corresponding plugins such that new constraints can be extended at a lower cost. In addition, we propose a unified way to process both categorical and free-form constraints. Experiments on text generation and machine translation demonstrate the superiority of our approach over baselines on constraint accuracy, text quality, and extensibility.1 ## 1 Introduction Multi-aspect controllable text generation (MCTG), which aims at generating fluent text while satisfying multiple aspects of constraints simultaneously, has attracted increasing attention in recent years (Chan et al., 2021; Qian et al., 2022; Gu et al., 2022). To effectively control diverse aspects such as sentiment, topic, and detoxification, extensive efforts have been devoted to the task, including methods based on conditional generative model (Keskar et al., 2019), decoding-time regulation (Lin and Riedl, 2021; Kumar et al., 2021), and parameter efficient tuning (Qian et al., 2022; Gu et al., 2022). Despite their effectiveness, existing methods still suffer from low extensibility. Ideally, suppose a multi-aspect controllable text generation system has learned how to control sentiment, topic and keywords separately, it should be extensible to any combinations of the three aspects, e.g., generating a *sports-themed* sentence with *negative sentiment* containing *keywords "New York"* (see Figure 1). Moreover, an extensible system should also be easily extended to control new aspects in a plug-andplay way. However, it is non-trivial for existing methods to achieve this goal. Specifically, the dedicated conditional generative models (Keskar et al., 2019) mostly need to be trained from scratch or finetuned when facing new aspect combinations. The decoding-time regulation based methods (Lin and Riedl, 2021; Kumar et al., 2021) intervene in the probabilities of sentences by light-weight attribute classifiers or language models during inference, which significantly impairs text fluency when multiple distributions are interpolated. The parameter efficient tuning based methods (Qian et al., 2022; Gu et al., 2022) control aspects by inserting trainable prompts or prefixes into the model, referred to as plugins. By leveraging one plugin for each aspect, these methods can naturally work in a plug-and-play way, showing better potential to achieve high extensibility. However, existing studies show that directly combining multiple plugins results in significantly lower controllability of the corresponding aspects than before combining (i.e., attribute degeneration) (Qian et al., 2022; Gu et al., 2022). Gu et al. (2022) argue that *mutual interference* of the plugins is the major reason for attribute degeneration, which is further justified by our theoretical and empirical analyses. Previous works alleviate the problem by introducing connectors to connect mul- ![1_image_0.png](1_image_0.png) tiple plugins (Yang et al., 2022), latent variables to represent the unsupervised aspects (Qian et al., 2022), or objectives to narrow the discrepancy of aspects (Gu et al., 2022). However, these methods require joint training of plugins and are designed for pre-defined closed-set constraints. In consequence, their extensibility is limited. In this paper, we propose an extensible plug-andplay method, PROMPT GATING, for multi-aspect controllable text generation. We derive a theoretical lower bound for the interference of plugins and reveal that it accumulates with the increasing number of layers where prefixes are inserted. Based on these findings, we propose attaching trainable gates to the plugins, which normalize the interventions of plugins. As a result, the mutual interference has been significantly reduced such that the control of arbitrary combinations of aspects can be realized by simply concatenating the corresponding plugins. Thus, our method is both extensible and plug-andplay. Moreover, we represent the constraints of the aspects in textual form, which makes our method applicable not only to categorical aspects (e.g., sentiment) but also to free-form aspects (e.g., lexical constraint). Our contributions are three-fold: - We propose an extensible plug-and-play method, PROMPT GATING, for multi-aspect controllable text generation, which is able to control training-time unseen aspect combinations by simply concatenating plugins. - We provide a theoretical lower bound along with empirical analyses for the mutual interference problem, which we believe will facilitate future research. - Experiments show that our approach has lower mutual interference, leading to superiority over strong baselines on text quality, constraint accuracy, and extensibility. ## 2 Background In this section, we illustrate the widely-used prefixtuning-based method (Qian et al., 2022; Gu et al., 2022; Yang et al., 2022) for multi-aspect controllable text generation. Generally, prefix-tuning (Li and Liang, 2021) prepends light-weight continuous vectors to the multi-head attention sublayer of each Transformer layer (Vaswani et al., 2017): $$\mathbf{H}=\mathrm{Att}{\Big(}\mathbf{Q},\big[\mathbf{P}^{K};\mathbf{K}\big],\big[\mathbf{P}^{V};\mathbf{V}\big]{\Big)},\quad\quad(1)$$ where Att(·) is the attention function, Q are queries of inputs, K and V are activations of inputs, PK and PVare trainable prefixes, [·; ·] denotes the concatenation operation, H is the output of the attention sublayer. We use ϕ to denote the set of prefixes in all Transformer layers. Specifically, for multi-aspect controllable text generation, we assume that there are N aspects of constraints. Due to the lack of multi-aspect labeled data, each set of prefixes, which usually represents a specific constraint (e.g., "positive" for the sentiment aspect), is trained on the corresponding single-aspect labeled data: $${\hat{\phi}}_{i}={\underset{\phi_{i}}{\operatorname{argmax}}}\left\{P(\mathbf{y}|\mathbf{x};\theta,\phi_{i})\right\},1\leq i\leq N,\ (2)$$ where θ are the fixed parameters of the pretrained model, y is the controlled target sentence, x is the input sentence2, P(y|x; θ, ϕi) is the conditional probability of y, and ϕˆi are the learned parameters of prefixes for the i-th aspect. During inference, for a combination of multiple aspects, corresponding prefixes are selected and synthesized by either concatenating (Qian et al., 2022; Yang et al., 2022) or finding their intersection (Gu et al., 2022), and then the generation is conditioned on the synthesis. Without loss of generality, we take two aspects as an example. The conditioned probability can be represented as $$P{\Bigl(}{\hat{\mathbf{y}}}|\mathbf{x};{\boldsymbol{\theta}},\operatorname{syn}({\hat{\boldsymbol{\phi}}}_{1},{\hat{\boldsymbol{\phi}}}_{2}){\Bigr)},$$ , (3) where syn(·) is a synthesize function, yˆ is the candidate sentence, ϕˆ1 and ϕˆ2 are two sets of prefixes corresponding to two aspects (e.g., "positive" for sentiment and "sports" for topic), respectively. Although existing methods alleviate the mutual interference of prefixes by joint training (Qian et al., 2022; Gu et al., 2022; Yang et al., 2022), they are based on pre-defined closed-set constraints, which increases the overhead of extending a new constraint and thus limits extensibility. Thus, to maintain high extensibility, reducing mutual interference without joint training still remains a challenge. ## 3 Analyses On Mutual Interference To alleviate mutual interference while maintaining extensibility, we conduct theoretical and empirical analyses. First, we provide a definition of mutual interference as follows. Definition. Mutual interference (MI) is the interference between multiple plugins which are trained separately during training but are combined to guide the pretrained model simultaneously during inference (i.e., in the zero-shot setting). However, the exact interference is hard to analyze because of the complexity of deep neural networks. Intuitively, suppose multiple plugins are optimized simultaneously during training, which requires multi-aspect labeled data, their interference will be minimized because they have learned to work cooperatively under supervision (i.e., in the supervised setting). Therefore, we use the differences between the hidden states of the supervised and zero-shot settings ![2_image_0.png](2_image_0.png) $$({\mathfrak{I}})$$ to approximate the mutual interference of multiple plugins. Specifically, let ϕˆi and ϕ˜i be the parameters of plugins learned from the single- and multi-aspect labeled data, respectively. Taking twoaspect controlling as an example, the output of a Transformer layer is given by H(x, ϕ1, ϕ2), where x is the layer input, then mutual interference can be defined as $$\mathrm{MI}\approx\bigg\|\mathbf{H}(\mathbf{x},{\hat{\phi}}_{1},{\hat{\phi}}_{2})-\mathbf{H}(\mathbf{x},{\tilde{\phi}}_{1},{\tilde{\phi}}_{2})\bigg\|.\quad(4)$$ Empirical Analysis. Then, as mutual interference has been defined as the norm of gap between hidden states in the zero-shot and supervised settings, we can empirically estimate it on the authentic dataset. By calculating the averaged norm on the Yelp dataset (Lample et al., 2019), we plot the variations of mutual interference with the number of Transformer layers for Prompt-tuning (Lester et al., 2021) and Prefix-tuning (Li and Liang, 2021) in Figure 2. We can find that the interference accumulates with insertions of trainable parameters. Moreover, the magnitude of mutual interference at the last Transformer layer (i.e., the 12-th layer in Figure 2) is consistent with the performance gap, which is the difference between the fulfillment of constraints in single- and multi-aspect settings (see Table 1). Meanwhile, too few trainable parameters cannot guide the pretrained model effectively. In summary, the key point for remaining effective in the zero-shot setting is restraining the growth of mutual interference (for a lower performance gap) while *providing sufficient trainable parameters* (for better supervised performance). Theoretical Analysis. Next, to find a way to alleviate mutual interference, we conducted a theoretical analysis.3 As a result, we found that the mutual interference, which is caused by the interactions in attention sublayers, has a theoretical lower bound4: $$\mathrm{MI}>\alpha\|\Delta\mathbf{h}_{1}(\mathbf{x},{\hat{\phi}}_{1})\|+\beta\|\Delta\mathbf{h}_{2}(\mathbf{x},{\hat{\phi}}_{2})\|,$$ where 0 *< α, β <* 1, and ∥∆hi(x, ϕˆi)∥ is a norm that is positively related to the magnitude of ϕˆi. Moreover, the lower bound might accumulate with Transformer layers like in Figure 2. Intuitively, applying normalization (e.g., gates) to the parameters of the i-th plugin to reduce its magnitude will decrease the lower bound of mutual interference. ## 4 Prompt G**Ating** We propose a novel approach that attaches trainable gates to the plugins, which alleviates the mutual interference of multiple plugins and makes the model highly extensible. Figure 3 shows the architecture of our approach. We first provide intuition in §4.1, then define our approach formally in §4.2. ## 4.1 Intuition Although prefix-tuning provides sufficient interventions and avoids long-range dependencies by inserting continuous vectors into each attention sublayer, it suffers from the accumulation of mutual interference of these plugins (see §3). On the one hand, the vectors are inserted into the attention sublayer, where they interact with each other, which directly enhances mutual interference. On the other hand, the vectors are not normalized, which leads to a large lower bound of mutual interference (Eq. (5)). Intuitively, injecting the vectors in a position-wise manner will avoid direct interaction between them. Moreover, normalizing the vectors can limit the magnitude of the lower bound, which might decrease mutual interference. Therefore, we first propose attaching vectors outside the attention sublayer, which can be realized by appending trainable vectors to the output of the embedding layer and adding trainable vectors to the hidden states in each Transformer layer (see Figure 3). Then, trainable gates are applied to these hidden states to alleviate 3Please refer to Appendix A for more detail. 4For brevity, we show the lower bound in one head of attention (obtained by simplifying Eq. (13)), and a similar conclusion can be obtained on the multi-head attention (Eq. (14)). ![3_image_0.png](3_image_0.png) mutual interference further. In this way, we expect our approach to restrain the growth of mutual interference while providing sufficient interventions. ## 4.2 Method Prompting. We present our model in the order of forward propagation. To change how the trainable parameters are injected into the model, we first follow prompt-tuning (Lester et al., 2021) to append trainable prompts to the output of the embedding layer. Moreover, to make our model applicable not only to categorical aspects (e.g., sentiment) but also to free-form aspects (e.g., lexical constraint), we present the constraints of aspects in textual form and feed them to the model. When two aspects of constraints are required during inference, the model input is given by $$\mathbf{H}^{(0)}={\Big[}\mathrm{E}(\mathbf{x});\mathrm{E}(\mathbf{c}_{1});\mathrm{E}(\mathbf{c}_{2});\mathbf{P}_{1}^{(0)};\mathbf{P}_{2}^{(0)}{\Big]},\quad(6)$$ where E(·) is the embedding function, and x is the source sentence for sequence-to-sequence generation like machine translation and can be eliminated for text generation. c1 and c2 are textual form of constraints (e.g., "This is a positive review" for positive review generation, and "New York" for lexical constraint). P (0) 1, P (0) 2 ∈ R p×dare continuous prompts, where the right superscript (j)represents the j-th layer, p is the number of continuous vectors, and d is the dimension of hidden states. To avoid the discrepancy between training and inference in position, each textual sequence has its own position indexes starting from 1 and its own segment embedding (Devlin et al., 2019). Note that only one textual constraint and one set of trainable parameters are injected during training. Gating. Then, the model input H(0) is fed to the encoder, where trainable gates are attached to the hidden states in a position-wise manner, which alleviates mutual interference as well as steers the model. Formally, A(j) = Self-Att(H(j−1)) is the output of the j-th attention sublayer, and it is normalized by the gates: $$\tilde{\mathbf{A}}^{(j)}=\Big{[}\mathbf{A}_{X}^{(j)};\sigma\big{(}\mathbf{G}_{1}^{(j)}\big{)}\odot\big{(}\mathbf{A}_{P_{1}}^{(j)}+\mathbf{P}_{1}^{(j)}\big{)};$$ $$\sigma\big{(}\mathbf{G}_{2}^{(j)}\big{)}\odot\big{(}\mathbf{A}_{P_{2}}^{(j)}+\mathbf{P}_{2}^{(j)}\big{)}\Big{]},\tag{7}$$ where $\mathbf{A}_{X}^{(j)}\in\mathbb{R}^{(|\mathbf{x}|+|\mathbf{c}_{1}|+|\mathbf{c}_{2}|)\times d}$ and $\mathbf{A}_{P_{i}}^{(j)}\in\mathbb{R}^{p\times d}$ are hidden states split from $\mathbf{A}^{(j)}$, $\mathbf{P}_{i}^{(j)}\in\mathbb{R}^{p\times d}$ i ∈ R are trainable vectors that add to the hidden states, σ is the sigmoid(·) function and G (j) i ∈ R p×d are trainable vectors. ⊙ denotes the Hadamard product and the normalized vectors σ(G (j) i) serve as **gates** to selectively rescale the output of the attention sublayer in a position-wise manner and A˜ (j) ∈ R (|x|+|c1|+|c2|+2p)×dis the result of the normalization. Next, the normalized output is fed to the feed-forward sublayer: H(j) = FFN(A˜ (j)). Finally, the output of the last encoder layer is fed to a standard Transformer decoder to guide the generation. Training & Inference. As shown in Figure 1, during training, each plugin (including prompts and gates) for a single aspect of constraints is attached to the pretrained generative model and optimized by corresponding single-aspect labeled data separately (refer to Eq. (2)). In contrast, during inference, the control of arbitrary combinations of aspects can be realized by simply concatenating the corresponding plugins (refer to Eq. (3)). Moreover, our approach treats the training and inference processes for pre-existing and newlyintroduced constraints identically. The total training cost of N pre-existing aspects and M newlyadded aspects is O((N + M)C), where C denotes the cost of training on a single aspect. In this way, the cost of introducing new constraints is relatively low. ## 5 Experiments We conducted experiments on two representative tasks in natural language generation, which are text generation and machine translation. ## 5.1 Multi-Aspect Controllable Text Generation Dataset. Following previous work (Yang et al., 2022), we adopted the widely-used Yelp dataset (Lample et al., 2019), which contains restaurant reviews with the sentiment (positive and negative) and the topic (American, Mexican, and Asian) labels. To evaluate the extensibility of methods, we added two additional aspects of constraints: keywords (He, 2021) and tense (past and present) (Ficler and Goldberg, 2017), where their labels are automatically extracted from the reviews. Due to the page limit, please refer to Appendix B for more details about the experimental setup. Evaluation. Following previous work, we adopted automatic and human evaluations for constraint accuracy and text quality (Lyu et al., 2021; Dathathri et al., 2019; Gu et al., 2022). Specifically, we finetuned two RoBERTa-based (Liu et al., 2019) classifiers for the evaluations of sentiment and topic. The tense accuracy was evaluated by the same tool adopted in the training set, and we used word-level Copy Success Rate (CSR) (Chen et al., 2020) to evaluate the lexical constraint. In addition, we used the perplexity (PPL) given by GPT-2medium (Radford et al., 2019) and averaged distinctness (Li et al., 2016) to evaluate the fluency and diversity of the generated text, respectively. For human evaluation, each sentence received a score of 1 to 5 on sentiment and topic relevance as well as fluency given by three evaluators. The final scores are averaged over three ratings. Baselines. We compared our approach with several representative methods for multi-aspect controllable text generation: - GEDI (Krause et al., 2021): a decoding-time regulation method that uses light-weight con- | Category | Method | Dist.↑ | Sent.↑ | Topic↑ | Average↑ | PPL↓ | |----------------------|--------------|----------------|----------------|----------------|------------------|------------------| | DTR | GEDI | 0.75 (0.00) | 99.47 (+0.13) | 51.36 (-45.98) | 75.41 (-22.92) | 616.92 (+253.23) | | PET w/ JT | DIST. LENS | 0.26 (-0.10) | 77.47 (-17.17) | 66.98 (-14.95) | 72.22 (-16.06) | 52.59 (+19.73) | | PROMPT-TUNING | 0.42 (-0.06) | 48.29 (-5.84) | 48.11 (-8.82) | 48.20 (-7.33) | 40.89 | (-6.83) | | PREFIX-TUNING | 0.31 (-0.10) | 47.53 (-37.27) | 69.11 (-9.38) | 58.32 (-23.32) | 147.47 (+125.17) | | | TAILOR | 0.39 (-0.04) | 80.68 (-8.12) | 68.72 (-9.94) | 74.70 (-9.03) | 40.29 | (+8.52) | | PROMPT GATING (Ours) | 0.42 (0.00) | 84.80 (-10.93) | 75.02 (-8.00) | 79.91 (-9.47) | 21.77 | (+0.14) | # Aspects Method PPL ↓ Sent. ↑ Topic ↑ Tense ↑ Lex. ↑ Ave.↑ ∆**Time** 3 PREFIX-TUNING 154.69 44.91 54.38 24.49 / 41.26 +6.42 h DIST. LENS 63.13 65.31 55.84 54.25 / 58.47 +30.13 h PROMPT GATING (*Ours*) 21.87 76.93 62.73 62.24 / 67.30 **+6.30 h** 4PREFIX-TUNING 159.80 (+8.35) 37.33 (-7.58) 32.51 (-21.87) 18.82 (-5.68) 29.55 15.47 +2.77 h PROMPT GATING (*Ours*) 20.90 (-0.96) 75.32 (-1.61) 62.52 (-0.21) 60.05 (-2.20) 54.50 63.10 **+2.01 h** Table 3: Human evaluation on double-aspect controllable text generation. Sentences are rated 1 to 5 each for sentimental and topical relevance and fluency. ditional generative discriminator to guide pretrained models. The distributions given by multiple discriminators are normalized for controlling multiple aspects of target sentences. - DIST. LENS (Gu et al., 2022): a prefix-tuningbased method that introduces an autoencoder and additional objectives to map several constraints of attributes to one latent space (i.e., joint training of prefixes). It finds the intersection of prefixes of multiple constraints during inference. | Method | Sent. ↑ | Topic ↑ | Fluency ↑ | |----------------------|-----------|-----------|-------------| | GEDI | 1.67 | 2.72 | 3.12 | | DIST. LENS | 3.71 | 3.20 | 3.72 | | PROMPT GATING (Ours) | 4.44 | 4.23 | 4.19 | - PROMPT-TUNING (Lester et al., 2021): a pa- rameter efficient method that appends continuous prompts to the model input. Multiple prompts are trained separately and are simply concatenated during inference. - PREFIX-TUNING (Li and Liang, 2021): a parameter efficient method that appends continuous prefixes to the activations at attention sublayers. Multiple prefixes are trained separately and are simply concatenated during inference. - TAILOR (Yang et al., 2022): a prompt-tuningbased method that further modifies the attention mask and position indexes during inference to narrow the gap between training and inference. Results. Table 1 shows the automatic evaluation on double-aspect controllable text generation. We demonstrate the averaged accuracies to represent the overall performance on satisfying multiple aspects of constraints. Furthermore, we provide the performance gap between double- and single- | Method | Lex. ↑ | Tense↑ | Average↑ | BLEU↑ | |----------------------|----------------|----------------|----------------|--------------| | PREFIX-TUNING | 7.51 (-77.77) | 43.46 (-39.83) | 25.48 (-58.80) | 0.4 (-36.3) | | PARALLEL ADAPTER | 48.44 (-43.38) | 67.87 (-15.68) | 58.15 (-29.53) | 21.8 (-15.5) | | LORA | 50.79 (-37.15) | 74.16 (-10.16) | 62.47 (-23.65) | 25.0 (-11.2) | | PROMPT-TUNING | 64.64 (-10.29) | 81.12 (-0.07) | 72.88 (-5.18) | 34.2 (-1.2) | | PROMPT GATING (Ours) | 85.29 (-4.61) | 85.75 (+1.76) | 85.52 (-1.42) | 36.8 (-0.3) | aspect settings to represent the ability to combine multiple plugins in a zero-shot manner. Although GEDI achieves the highest scores on sentiment accuracy and distinctness, its perplexity explodes, and its tense accuracy is significantly decreased, which can be attributed to the interpolation of multiple discriminators. As PROMPT-TUNING does not have sufficient trainable parameters, it performs poorly on constraint accuracies. However, it has a relatively minor performance gap due to only inserting vectors once. PREFIX-TUNING suffers from severe mutual interference because of the insertions in all Transformer layers, leading to poor performance on either constraint accuracies or perplexity. Compared with PREFIX-TUNING, DIST. LENS has better constraint accuracies and lower performance gaps because of the joint training of prefixes. We found that DIST. LENS is sensitive to constraint distributions in the training set because it attempts to find the intersection of multiple constraints. Our approach (PROMPT GATING) achieves the highest constraint accuracies, lowest perplexity and a relatively small performance gap while our plugins are trained separately. Table 2 shows the extensibility of the methods. When extended from double-aspect to triple-aspect, DIST. LENS has to be retrained because of its joint training strategy. In contrast, our approach and PREFIX-TUNING only need to train one plugin, then combine plugins and plug them into the pretrained model. Unfortunately, when extended from triple-aspect to quadruple-aspect, as plugins of PREFIX-TUNING badly interfere with each other, its ability to control multiple aspects significantly degenerates. However, our approach has a slight performance gap with a relatively small training cost, revealing its high extensibility. Table 3 with an inter-annotator agreement of 0.31 (Fleiss' κ). Experiments indicate that our approach significantly outperforms both baselines with p < 0.01 on all three aspects, determined by paired bootstrap and t-test using a popular opensource tool (Dror et al., 2018) 5. Unlike automatic evaluations, GEDI performs the worst in sentiment relevance. It can probably be attributed to the fact that GEDI often generates ambivalent-sentiment and non-fluent sentences, and human annotators tend to give low ratings to them. The other results are in line with automatically evaluated results. ## 5.2 Multi-Aspect Controllable Machine Translation Dataset. To thoroughly compare our approach with baselines, we also adopted a sequence-tosequence generation task (i.e., machine translation (Bahdanau et al., 2015)). Experiments are conducted on the WMT14 German → English benchmark. We adopted three aspects of constraints in machine translation, and the labels are all automatically obtained from target sentences. We use keywords (Post and Vilar, 2018) and tense (Ficler and Goldberg, 2017) like in the text generation task to control translations. Specifically, we adopt French sentences with the same meaning as the German sources, which can be seen as an external knowledge to improve translation quality (Zoph and Knight, 2016), as the third constraint. Evaluation. We adopted SACREBLEU6(Post, 2018) to calculate BLEU scores (Papineni et al., 2002) to evaluate the translation quality. Similar to text generation (§5.1), we used CSR (Chen et al., 2020) and tense accuracy to evaluate lexical and The human evaluation results are illustrated in | Method | Sent. ↑ | Topic ↑ | PPL ↓ | |---------------------------------|-----------|-----------|---------| | PROMPT GATING (Ours) | 84.80 | 75.02 | 21.77 | | - Textual context for attribute | 83.60 | 71.89 | 22.00 | | - Normalization of gates | 76.53 | 61.02 | 27.55 | | Move the gates behind FFN | 56.71 | 32.49 | 36.74 | ## Temporal Constraints, Respectively. Baselines. Besides PROMPT-TUNING (Lester et al., 2021) and PREFIX-TUNING (Li and Liang, 2021) (§5.1), we adopted another two representative parameter efficient methods as baselines: - LORA (Hu et al., 2021): a method that adds trainable rank decomposition matrices into attention sublayers. - PARALLEL ADAPTER (Houlsby et al., 2019): a method that parallelly inserts feed-forward sublayers between pre-trained sublayers. Similar to PREFIX-TUNING, for both LORA and PARALLEL ADAPTER, each plugin is trained separately, and multiple plugins are simply concatenated for multi-aspect setting during inference. Results. Table 4 shows the results on controllable machine translation. Unlike text generation, constraints in machine translation do not merely contain attribute-based constraints. Therefore, methods specially designed for attribute-based constraints cannot be applied to this task. Surprisingly, PROMPT-TUNING achieves the highest constraint accuracies and translation quality among baselines because it largely retains the capabilities of plugins to satisfy constraints. PREFIX-TUNING faces the severe degeneration of both accuracies of constraints and BLEU scores, which might be attributed to the more complicated model structure in machine translation than text generation. Our approach outperforms all baselines in machine translation, and the consistent superiorities on both tasks show its generalizability. ## 5.3 Analysis Mutual Interference. Similar to empirical analysis on mutual interference for PREFIX-TUNING and PROMPT-TUNING (see §3), we also plotted the variation of the mutual interference with the number of injections of our approach in Figure 2. With the gates to selectively rescale the interventions of plugins, the growth of interference is restrained. Ablation Study. Table 5 shows the ablation study and comparison with the variant of our approach. According to the performance gaps corresponding to the changes, we can find that the textual context of constraints slightly affects the constraint accuracies, and the normalization of the trainable gates is a key point for good performance. Moreover, the trainable gates should be placed where the interactions have just happened (i.e., after attention sublayers). Please refer to Appendix C and D for more results, analyses, and cases. ## 6 Related Work Multi-aspect controllable text generation (MCTG) (Qian et al., 2022; Yang et al., 2022; Gu et al., 2022) that simultaneously satisfies multiple constraints is a challenging task for which highly extensible methods make more practical sense. Approaches to it can be roughly divided into the following three categories. Dedicated Model. The dedicated conditional generative models (Keskar et al., 2019; Dou et al., 2021; Huang et al., 2021; Chen et al., 2020) can accept multiple constraints by training from scratch or full-parameter finetuning on the multi-aspect labeled data. However, the multi-aspect labeled data is hard to obtain, and the constraints that can be satisfied are already determined during training. Thus it is usually too expensive to apply dedicated models to MCTG. Decoding-Time Regulation. Although multiaspect controlling can be achieved by interpolating distributions of multiple discriminators (Dathathri et al., 2019; Chen et al., 2021; Krause et al., 2021; Lin and Riedl, 2021) or optimizing towards multiple objectives (Qin et al., 2022; Kumar et al., 2021), they usually significantly impair text fluency because of the intervention in the decoding stage (Gu et al., 2022). Parameter Efficient Tuning. Unlike the above two branches, PET introduces plugins trained with fixed pretrained models for generating required text (Li and Liang, 2021; Lester et al., 2021; Wang et al., 2022). Because of its potential to achieve high extensibility in a plug-and-plug manner, our work also falls in this line. However, when multiple constraints are required, joint training of plugins is introduced to alleviate the mutual interference of plugins (Chan et al., 2021; Qian et al., 2022; Yang et al., 2022; Gu et al., 2022), which hurts extensibility. Differently, our work aims at reducing mutual interference while maintaining separate training. Similar to our work, Yang et al. (2022) proposes preventing two prompts from interactions in attention layers by modifying attention masks. Nevertheless, their method like prompt-tuning (Lester et al., 2021) only introduces trainable parameters to the model input, leading to insufficient trainable parameters and dissatisfied constraints. In contrast, we propose a novel PET method that attaches trainable gates to the pretrained model, alleviating mutual interference while providing sufficient interventions, leading to both desired extensibility and effectiveness. ## 7 Conclusion In summary, we propose an extensible plug-andplay method for multi-aspect controllable text generation. By replacing trainable prefixes with trainable prompts and gates, our approach alleviates the mutual interference of multiple plugins while providing sufficient interventions. Experiments on text generation and machine translation show its superiorities over baselines on the cost of extending to new combinations of aspects, the fulfillment of constraints, and text fluency. ## Limitations First, although our approach and existing methods for controllable text generation can improve the constraint accuracies, they are currently unable to achieve 100% accuracies in the vast majority of aspects (e.g., sentiment or topic). This makes them not yet applicable in scenarios with requirements of 100% control fulfillment. Second, there is still a gap between the automatic and human evaluation of text generation, which makes there a trade-off between precision and efficiency in the evaluation of controllable text generation. Third, although our approach reduces the mutual interference of plugins so that multiple plugins can be combined at a relatively small cost (a decrease in constraint accuracy), this cost will not be zero, which puts an upper limit on the number of plugins can be applied simultaneously. Fortunately, for controllable text generation, the number of controls applied simultaneously is generally not too large (e.g., four or five aspects). ## Ethics Statement Since the text generation model is trained on data collected from the web and often not thoroughly cleaned, it can generate offensive or toxic text. We must state that the texts generated by our approach do not represent our opinion. To alleviate these issues, we can take detoxification and politeness as the default aspects of constraints in our multiaspect controllable method. ## Acknowledgments This work is supported by the National Key R&D Program of China (2022ZD0160502), the National Natural Science Foundation of China (No. 61925601, 62276152), the National Social Science Fund of China (20&ZD279), and a grant from the Guoqiang Institute, Tsinghua University. We thank Kaiyu Huang, Fuchao Wei, Yuanhang Zheng and all the anonymous reviewers for their valuable comments and suggestions on this work, as well as all the volunteers who participated in the human evaluation. ## References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015. Alvin Chan, Yew-Soon Ong, Bill Pung, Aston Zhang, and Jie Fu. 2021. Cocon: A self-supervised approach for controlled text generation. In *Proceedings of* ICLR 2021. Guanhua Chen, Yun Chen, and Victor O. K. Li. 2021. Lexically constrained neural machine translation with explicit alignment guidance. In *Proceedings of AAAI* 2021. Guanhua Chen, Yun Chen, Yong Wang, and Victor O. K. Li. 2020. Lexical-constraint-aware neural machine translation via data augmentation. In Proceedings of IJCAI 2020. Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and play language models: A simple approach to controlled text generation. arXiv preprint arXiv:1912.02164. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*. Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2021. GSum: A general framework for guided neural abstractive summarization. In *Proceedings of NAACL-HLT 2021*. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In *Proceedings of ACL 2018*. Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In *Proceedings of Style-Var 2017*. Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, and Bing Qin. 2022. A distributional lens for multi-aspect controllable text generation. *arXiv preprint arXiv:2210.02889*. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366. Xingwei He. 2021. Parallel refinements for lexically constrained text generation with BART. In *Proceedings of EMNLP 2021*. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In *Proceedings of ICML 2019*. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. *arXiv preprint* arXiv:2106.09685. Xuancheng Huang, Jingfang Xu, Maosong Sun, and Yang Liu. 2021. Transfer learning for sequence generation: from single-source to multi-source. In *Proceedings of ACL-IJCNLP 2021*. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. *arXiv preprint arXiv:1909.05858*. Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2021. GeDi: Generative discriminator guided sequence generation. In Findings of EMNLP 2021. Sachin Kumar, Eric Malmi, Aliaksei Severyn, and Yulia Tsvetkov. 2021. Controlled text generation as continuous optimization with multiple constraints. In Proceedings of NeurIPS 2021. Guillaume Lample, Sandeep Subramanian, Eric Michael Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2019. Multipleattribute text rewriting. In Proceedings of ICLR 2019. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of EMNLP 2021*. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of ACL 2020*. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT 2016. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of ACL-IJCNLP 2021. Zhiyu Lin and Mark Riedl. 2021. Plug-and-blend: A framework for controllable story generation with blended control codes. In Proceedings of NUSE 2021. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *TACL*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yiwei Lyu, Paul Pu Liang, Hai Pham, Eduard Hovy, Barnabás Póczos, Ruslan Salakhutdinov, and LouisPhilippe Morency. 2021. StylePTB: A compositional benchmark for fine-grained controllable text style transfer. In *Proceedings of NAACL-HLT 2021*. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çaglar Gulçehre, and Bing Xiang. 2016. ˘ Abstractive text summarization using sequence-to-sequence RNNs and beyond. In *Proceedings of SIGNLL 2016*. Dat Quoc Nguyen and Karin Verspoor. 2018. An improved neural network model for joint POS tagging and dependency parsing. In *Proceedings of CoNLL* 2018. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of WMT 2018*. Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of NAACL 2018. Jing Qian, Li Dong, Yelong Shen, Furu Wei, and Weizhu Chen. 2022. Controllable natural language generation with contrastive prefixes. In Findings of ACL 2022. Lianhui Qin, Sean Welleck, Daniel Khashabi, and Yejin Choi. 2022. COLD decoding: Energy-based constrained text generation with langevin dynamics. arXiv preprint arXiv:2202.11705. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog. Zhixing Tan, Jiacheng Zhang, Xuancheng Huang, Gang Chen, Shuo Wang, Maosong Sun, Huanbo Luan, and Yang Liu. 2020. THUMT: An open-source toolkit for neural machine translation. In *Proceedings of* AMTA 2020. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proceedings of NeurIPS 2017*. Shuo Wang, Zhixing Tan, and Yang Liu. 2022. Integrating vectorized lexical constraints for neural machine translation. In *Proceedings of ACL 2022*. Kevin Yang and Dan Klein. 2021. FUDGE: controlled text generation with future discriminators. In *Proceedings of NAACL-HLT 2021*. Kexin Yang, Dayiheng Liu, Wenqiang Lei, Baosong Yang, Mingfeng Xue, Boxing Chen, and Jun Xie. 2022. Tailor: A prompt-based approach to attributebased controlled text generation. *arXiv preprint* arXiv:2204.13362. Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of NAACL-HLT 2016. ## A Theoretical Analysis In this section, we theoretically analyze mutual interference (MI) and derive a lower bound of MI for prefix-tuning (Li and Liang, 2021). As Feed Forward and Layernorm sublayers are position-wise operations (Vaswani et al., 2017) which would not introduce the interference of plugins, we focus on analyzing the multi-head attention (MHA) sublayers. According to the previous study (He et al., 2021), the output of a single head of attention with prefixes of the i-th plugin, which is represented by hi, could be described as $$\begin{array}{r l}{\mathbf{h}_{i}=\lambda(\mathbf{x}_{i}){\overline{{\mathbf{h}}}}_{i}+\left(1-\lambda(\mathbf{x}_{i})\right)\Delta\mathbf{h}_{i}}\\ {\ }&{{}=s_{i}{\overline{{\mathbf{h}}}}_{i}+t_{i}\Delta\mathbf{h}_{i},}\end{array}$$ where hi denotes the original output of the pretrained generative model with xi as input. λ(xi) is a scalar related to the attention weights, where λ(xi) = si = 1 − ti ∈ (0, 1). In addition, ∆hi is an offset determined by the i-th plugin, and its magnitude is positively correlated with the magnitude of ϕi, where ϕiis the set of parameters of the i-th plugin. Following the pattern above, when the i-th and jth plugins are inserted at the same time, the output of the head (i.e., hi,j ) turns to be $${\bf h}_{i,j}=\gamma\overline{{{\bf h}}}_{i,j}+\alpha\Delta{\bf h}_{i}+\beta\Delta{\bf h}_{j},$$ where hi,j is the output of pretrained generative model, and α, β, γ ∈ (0, 1), α < ti, β < tj *, γ <* si*, γ < s*j . Similarly, ∆hi and ∆hj are determined by the i-th and j-th plugins. According to the definition in Eq. (4), let h˜i,j and hˆi,j be the outputs like hi,j after training on multi- and single-aspect labeled data, respectively. The mutual interference of two plugins in a single head (i.e., MIs) can be measured by the norm of the gap between outputs under supervised and zeroshot inference: $$\mathrm{MI}_{s}=\left\|\hat{\mathbf{h}}_{i,j}-\hat{\mathbf{h}}_{i,j}\right\|$$ $$=\left\|\hat{\mathbf{h}}_{i,j}-(\gamma\overline{\mathbf{h}}_{i,j}+\alpha\Delta\hat{\mathbf{h}}_{i}+\beta\Delta\hat{\mathbf{h}}_{j})\right\|$$ $$\geq\left\|\hat{\mathbf{h}}_{i,j}-\gamma\overline{\mathbf{h}}_{i,j}\right\|-\left\|\alpha\Delta\hat{\mathbf{h}}_{i}+\beta\Delta\hat{\mathbf{h}}_{j}\right\|,\tag{10}$$ where ∆hˆi and ∆hˆj correspond to offsets that plugins are trained on single-aspect labeled data. Considering that the intervention caused by two plugins simultaneously should larger than the sum of two interventions caused by two plugins respectively because of the interaction between two plugins, we assume that there is $$||\hat{\mathbf{h}}_{i,j}-\overline{\mathbf{h}}_{i,j}||>||\hat{\mathbf{h}}_{i}-\overline{\mathbf{h}}_{i}||+||\hat{\mathbf{h}}_{j}-\overline{\mathbf{h}}_{j}||.\tag{11}$$ Based on this, we can derive $$\begin{array}{c}\mbox{MI}_{s}>\left|\left|\hat{\mathbf{h}}_{i}-\gamma\overline{\mathbf{h}}_{i}\right|\right|+\left|\left|\hat{\mathbf{h}}_{j}-\gamma\overline{\mathbf{h}}_{j}\right|\right|\\ \qquad\qquad-\left|\left|\alpha\Delta\hat{\mathbf{h}}_{i}+\beta\Delta\hat{\mathbf{h}}_{j}\right|\right|.\end{array}\tag{12}$$ Given that $s_{i}>\gamma,s_{j}>\gamma$, and $\hat{\mathbf{h}}_{i}=s_{i}\overline{\mathbf{h}}_{i}+t_{i}\Delta\hat{\mathbf{h}}_{i}$ (Eq. (8)), MIs satisfies $$\mathrm{MI}_{s}>\|\hat{\mathbf{h}}_{i}-s_{i}\overline{\mathbf{h}}_{i}\|+\|\hat{\mathbf{h}}_{j}-s_{j}\overline{\mathbf{h}}_{j}\|$$ $$-\|\alpha\Delta\hat{\mathbf{h}}_{i}+\beta\Delta\hat{\mathbf{h}}_{j}\|$$ $$=\|t_{i}\Delta\hat{\mathbf{h}}_{i}\|+\|t_{j}\Delta\hat{\mathbf{h}}_{j}\|-\|\alpha\Delta\hat{\mathbf{h}}_{i}+\beta\Delta\hat{\mathbf{h}}_{j}\|$$ $$\geq(t_{i}-\alpha)\|\Delta\hat{\mathbf{h}}_{i}\|+(t_{j}-\beta)\|\Delta\hat{\mathbf{h}}_{j}\|,\tag{13}$$ where $1>t_{i}-\alpha>0$ and $1>t_{j}-\beta>0$. $$({\boldsymbol{\delta}})$$ Therefore, the mutual interference of two plugins in a single head has a positive lower bound, and it is positively correlated with the magnitude of ϕˆi and ϕˆj . To step further, we derive the lower bound of MI in the multi-head scenario. Assume that K denotes the number of heads, Wo denotes the fixed output projection matrix in the MHA, Wo = QoRo is the QR-decomposition format of Wo, λˆo is the average of absolute eigenvalues. Specifically, hˆk i,j and h˜k i,j denotes hˆi,j and h˜i,j in the k-th head, respectively. Then, the lower bound of MI in MHA (i.e., MIm) can be derived as (viewing Ro as a diagonal matrix for simplicity) MIm = concat(h˜k i,j − hˆk i,j ) K k=1Wo = concat(h˜k i,j − hˆk i,j ) K k=1QoRo ≈ λˆo concat(h˜k i,j − hˆk i,j ) K k=1Qo = λˆo vuutX K k=1 h˜k i,j − hˆk i,j 2 ≥ λˆo √n X K k=1 h˜k i,j − hˆk i,j >λˆo √n X K k=1 (t k i − α k)∥∆hˆk i ∥ + (t k j − β k)∥∆hˆk j ∥ , (14) where 1 > tk i − α k > 0 and 1 > tk j − β k > 0, and ∆hˆk i and ∆hˆk j are also positively correlated with the magnitude of ϕˆi and ϕˆj , respectively. Therefore, the mutual interference of multiple plugins has a theoretical positive lower bound, which implies concatenating prefixes that are separately trained has an irreparable gap against supervised-trained prefixes. As a result, MI might accumulate along with the depth of the model, like in Figure 2. Intuitively, introducing gates, which contain trainable coefficients between 0 to 1, to ϕˆi is helpful for decreasing the offsets in Eq. (14) and thus mutual interference. ## B Reproducibility B.1 Data Preparation For text generation, we adopted the widely-used Yelp dataset7(Lample et al., 2019), which contains restaurant reviews with sentiment (positive and negative) and topic (American, Mexican, and Asian) labels. Specifically, following previous work (Yang et al., 2022), we randomly sampled 30K/3K sentences for each attribute for training/validation while ensuring the balance of different attributes in the final dataset (Table 6). For evaluation, we sampled 25 sentences for each given textual prefix and combination of aspects. In addition, we eliminated the sentences rated 3 in sentiment. To evaluate the extensibility of methods, we added two additional aspects of constraints: keywords (He, 2021) and tense (past and present) (Ficler and Goldberg, 2017), where their labels are automatically extracted from the reviews. More precisely, we randomly extracted 1 to 3 words as keywords for each sentence (Post and Vilar, 2018), and the tenses of sentences are labeled by an open-source toolkit8 that is based on a POS tagger (Nguyen and Verspoor, 2018). For machine translation, we adopted the WMT14 German → English benchmark9. Specifically, the training, validation, and test sets contain 4,500K, 3K, and 3K sentences, respectively. We adopted three aspects of constraints in machine translation, and they are all automatically obtained from target sentences. We use keywords (Post and Vilar, 2018) and tense (Ficler and Goldberg, 2017) like the text generation task to control translations. For the third constraint, we adopt French synonymous sentences as external knowledge, which is | Task | Training | Validation | Test | |---------------------|------------|--------------|--------| | Text Generation | 30K | 3K | 375∗ | | Machine Translation | 4,500K | 3K | 3K | beneficial to disambiguation. Note that it does not directly control any attribute of translations but will improve the translation quality (Zoph and Knight, 2016). The French synonymous sentences are given by a Transformer-based English→French translation model (Vaswani et al., 2017). ## B.2 Evaluation Metrics For text generation, following previous work (Lyu et al., 2021; Dathathri et al., 2019; Gu et al., 2022), we adopted automatic and human evaluation for constraint accuracy and text quality. For the evaluation of sentiment and topic, we finetuned two RoBERTa-based (Liu et al., 2019) classifiers on the Yelp dataset. Specifically, we randomly over-sampled 1,500K/15K/15K sentences for training/validation/test set of topic and 1,380K/1K/1K sentences for training/validation/test set of sentiment. The F1 scores for sentiment and topic are 98.71 and 89.62, respectively. The same toolkit as training evaluated the accuracy of tense, and we used word-level Copy Success Rate (CSR) (Chen et al., 2020) to evaluate the lexical constraint. In addition, we used the perplexity (PPL) given by GPT-2medium (Radford et al., 2019) and averaged distinctness (Li et al., 2016) to evaluate the fluency and diversity of the generated text, respectively. Similar to previous work (Dathathri et al., 2019; Yang and Klein, 2021), we used 15 textual prefixes10 and asked models to start writing from them for each combination of constraints during inference. For human evaluation, each sentence received a score of 1 to 5 on sentiment and topic relevance as well as fluency given by three evaluators. The final scores are averaged over three ratings. Specifically, the 15 textual prefixes are: "Once upon a time","The book","The chicken","The 10https://github.com/uber-research/PPLM | Hyper-parameter | TG | MT | |-----------------------|--------------|--------------| | Pretrained Model | | | | Encoder layers | 12 | 12 | | Decoder layers | 12 | 12 | | Attention heads | 16 | 16 | | Attention head size | 64 | 64 | | Hidden size | 1, 024 | 1, 024 | | FFN hidden size | 4, 096 | 4, 096 | | Max sentence length | 1, 024 | 1, 024 | | Training | | | | Optimizer | Adam | Adam | | Adam beta | (0.9, 0.999) | (0.9, 0.999) | | Training steps | 50, 000 | 150, 000 | | Warmup steps | 10, 000 | 10, 000 | | Batch size | 1, 024 | 512 | | Learning rate | 1 × 10−4 | 4 × 10−4 | | Initial learning rate | 5 × 10−8 | 5 × 10−8 | | Residual dropout | 0.1 | 0.1 | | Attention dropout | 0.0 | 0.0 | | Activation dropout | 0.0 | 0.0 | | Inference | | | | Length penalty | 0.6 | 1.2 | | Top K | 10 | / | | Beam size | / | 5 | Table 7: The commonly-used hyper-parameters in text generation (TG) and machine translation (MT). | Method | # Trainable Parameters | |------------------|-----------------------------------------------| | GEDI | 345M | | DIST. LENS | 110M + 7682 + 768 × 2 × 20 × 1024 × 24 = 866M | | PARALLEL ADAPTER | 2 × 19 × 1024 × (36 + 24) ≈ 2.33M | | LORA | 4 × 17 × 1024 × 36 ≈ 2.51M | | PROMPT-TUNING | 100 × 1024 ≈ 0.10M | | PREFIX-TUNING | 2 × 33 × 1024 × 36 ≈ 2.43M | | TAILOR | 100 × 1024 ≈ 0.10M | | PROMPT GATING | 1024 × 100 × 25 ≈ 2.56M | Table 8: The number of trainable parameters of a single aspect for each method. city","The country","The lake","The movie","The painting","The weather","The food","While this is happening","The pizza","The potato","The president of the country","The year is 1910.". For machine translation, we adopted SACREBLEU11 (Post, 2018) to evaluate the translation quality. Similar to text generation, we used CSR (Chen et al., 2020) and tense accuracy to evaluate lexical and tense constraints, respectively. ## B.3 Model And Hyper-Parameters ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) generation and mBART-large-cc2513 (Liu et al., 2020) for machine translation. For GEDI (Krause et al., 2021), DIST. LENS (Gu et al., 2022), and TAILOR (Yang et al., 2022), we follow the settings in their paper. Specifically, we found that the weights for attribute balance and the number of candidates in the decoding stage of DIST. LENS significantly affect constraint accuracies. For the weights for attribute balance and the number of candidates, we searched in {0.1, 0.2, 0.5, 1, 1.5, 2, 13https://huggingface.co/facebook/ mbart-large-cc25 | Method | Speed | | |------------------|-----------------------|--------| | Training (hours) | Inference (sec/sent.) | | | GEDI | 5.4032 | 1.2020 | | DIST. LENS | 30.1320 | 2.5705 | | PROMPT-TUNING | 4.0983 | 0.2122 | | PREFIX-TUNING | 3.9025 | 0.2220 | | TAILOR | 4.1055 | 0.4640 | | PROMPT GATING | 4.5204 | 0.2108 | 3, 4, 5, 8, 10, 25, 50} and {1, 2, 5, 10}, respectively. For the other methods, we demonstrate their hyperparameters in Table 7. Table 8 shows the number of trainable parameters of each method. The learning rates were determined by searching in {1 × 10−5, 2×10−5, 4×10−5, 8×10−5, 1×10−4, 2×10−4, 4 × 10−4, 1 × 10−3} on the development set. In our approach, the textual contexts used for attributebased constraints (see §4.2) are: - Sentiment: "This is a {} review." for "positive/negative". - Topic: "The following is about {} food." for "Asian/American/Mexican". - Tense: "The tense of this sentence is the {} tense." for "past/present/future", and "The tense of this sentence is undecided." for sentences that do not have an explicit tense. Note that our use of existing artifact(s) was consistent with their intended use. The source code of this work is available at https://github.com/ THUNLP-MT/PromptGating4MCTG and it was developed based on THUMT (Tan et al., 2020), an opensource toolkit for machine translation. ## C Experimental Results C.1 More Analyses Visualization of PROMPT G**ATING**. To further investigate how our approach alleviates the mutual interference of plugins, we visualized the trainable parameters in PROMPT GATING. Specifically, we first extracted P (j) iand σ(G (j) i) in Eq. (7) from each layer j for every single aspect. Then we calculated the average of σ(G (j) i) and the L1 norm of P (j) i | Method | Speed | | |------------------|-----------------------|--------| | Training (hours) | Inference (sec/sent.) | | | PREFIX-TUNING | 10.0150 | 0.2220 | | LORA | 10.8805 | 0.1920 | | PARALLEL ADAPTER | 12.0330 | 0.2150 | | PROMPT-TUNING | 8.7225 | 0.2122 | | PROMPT GATING | 11.0330 | 0.2108 | over all the layers, which are represented by red and blue bars respectively in Figure 4 and Figure 5. We can find that when the magnitude of P (j) i(i.e., trainable vectors added to hidden states) becomes larger, the values of σ(G (j) i) (i.e., trainable gates) tend to become smaller. In other words, these trainable gates attempt to normalize or stabilize the magnitude of hidden states and thus alleviate mutual interference. Efficiency. Table 9 and Table 10 show the training and inference speeds of each method in text generation and machine translation. All training and inference were run on a single GeForce RTX 3090 GPU. ## C.2 Detailed Results Table 11 and 12 are the detailed versions of Table 1 and 4, respectively. We provide detailed results of both single- and multi-aspect models. For text generation, we further demonstrate the accuracy of each attribute. ## D Case Study To further investigate the fulfillment and text quality of each combination of constraints of these methods, Table 13 and Table 14 demonstrate examples of text generation and machine translation, respectively. Models only trained on single-aspect data are required to give results satisfying multiple aspects of constraints. ## E Details In Human Evaluation In this section, we show more details about the human annotation adopted for evaluating model performance on text generation. We recruited three volunteers from schools, shuffled the output of models and provided it to them for scoring. Since they are volunteers, they were not paid. Their average age is 25 years old and they have enough daily English communication skills. The instruction we provided to them like "This human evaluation aims to evaluate the model-generated review texts in three aspects: sentiment and topic relevance, and text fluency. All three integer scores are on a scale of 1-5, with a higher degree of topic/sentiment relevance representing a more consistent theme/sentiment, and a higher degree of text fluency representing a more fluent text. Your personal information will not be retained and these scores will only be used for human evaluation in research". | Method | Constraint | Sentiment↑ | Topic↑ | PPL↓ | | | | |-----------------------------------------------------------|--------------|--------------|----------|---------|---------|--------|--------| | Positive | Negative | Asian | American | Mexican | | | | | Decoding-Time Regulation Method 98.67 / / | / | / | 227.87 | | | | | | / | 100.00 | / | / | / | 839.69 | | | | / | / | 94.93 | / | / | 206.97 | | | | / | / | / | 99.73 | / | 246.36 | | | | / | / | / | / | 97.33 | 297.54 | | | | single-aspect | 98.67 | / | 28.27 | / | / | 363.51 | | | 99.20 | / | / | 87.73 | / | 1834.65 | | | | 99.47 | / | / | / | 37.87 | 378.73 | | | | / | 100.00 | 44.53 | / | / | 329.38 | | | | / | 99.73 | / | 97.87 | / | 423.21 | | | | / | 99.73 | / | / | 11.87 | 372.03 | | | | single (avg.) | 98.67 | 100.00 | 94.93 | 99.73 | 97.33 | 363.69 | | | multi (avg.) | 99.11 | 99.82 | 36.40 | 92.80 | 24.87 | 616.92 | | | Parameter Efficient Tuning Method with Joint Training | | | | | | | | | GEDI | multi-aspect | 91.33 | / | / | / | / | 28.48 | | / | 97.95 | / | / | / | 28.70 | | | | / | / | 77.33 | / | / | 39.01 | | | | / | / | / | 88.98 | / | 29.87 | | | | / | / | / | / | 79.47 | 38.26 | | | | single-aspect | 36.27 | / | 43.73 | / | / | 45.89 | | | 57.87 | / | / | 71.73 | / | 49.84 | | | | 74.67 | / | / | / | 54.67 | 47.59 | | | | / | 99.73 | 70.13 | / | / | 56.20 | | | | / | 97.60 | / | 78.93 | / | 59.24 | | | | / | 98.67 | / | / | 82.67 | 56.77 | | | | single (avg.) | 91.33 | 97.95 | 77.33 | 88.98 | 79.47 | 32.86 | | | multi (avg.) | 56.27 | 98.67 | 56.93 | 75.33 | 68.67 | 52.59 | | | Parameter Efficient Tuning Methods without Joint Training | | | | | | | | | DIST. LENS | multi-aspect | 52.00 | / | / | / | / | 136.10 | | / | 56.27 | / | / | / | 26.26 | | | | / | / | 46.67 | / | / | 25.83 | | | | / | / | / | 84.53 | / | 26.07 | | | | / | / | / | / | 39.60 | 24.36 | | | | single-aspect | 57.07 | / | 40.80 | / | / | 65.96 | | | 59.47 | / | / | 83.87 | / | 30.45 | | | | 45.87 | / | / | / | 20.13 | 47.43 | | | | / | 49.73 | 30.27 | / | / | 40.78 | | | | / | 38.67 | / | 84.00 | / | 33.43 | | | | / | 38.93 | / | / | 29.60 | 27.30 | | | | single (avg.) | 52.00 | 56.27 | 46.67 | 84.53 | 39.60 | 47.72 | | | multi (avg.) | 54.13 | 42.44 | 35.53 | 83.93 | 24.87 | 40.89 | | | PROMPT-TUNING | multi-aspect | 70.67 | / | / | / | / | 23.53 | | / | 98.93 | / | / | / | 21.55 | | | | / | / | 77.87 | / | / | 21.89 | | | | / | / | / | 84.00 | / | 21.99 | | | | / | / | / | / | 73.60 | 22.55 | | | | single-aspect | 64.53 | / | 70.27 | / | / | 141.71 | | | 67.20 | / | / | 80.00 | / | 243.93 | | | | 51.20 | / | / | / | 65.73 | 125.30 | | | | / | 33.87 | 60.67 | / | / | 118.65 | | | | / | 27.32 | / | 78.13 | / | 138.91 | | | | / | 41.07 | / | / | 59.87 | 116.34 | | | | single (avg.) | 70.67 | 98.93 | 77.87 | 84.00 | 73.60 | 22.30 | | | multi (avg.) | 60.98 | 34.08 | 65.47 | 79.07 | 62.80 | 147.47 | | | PREFIX-TUNING | multi-aspect | | | | | | | | Method | Constraint | Sentiment↑ | Topic↑ | PPL↓ | | | | |----------------------|--------------|--------------|----------|---------|-------|-------|-------| | Positive | Negative | Asian | American | Mexican | | | | | 81.87 | / | / | / | / | 32.80 | | | | / | 95.73 | / | / | / | 24.21 | | | | / | / | 72.00 | / | / | 34.38 | | | | / | / | / | 88.00 | / | 33.35 | | | | / | / | / | / | 76.00 | 34.10 | | | | single-aspect | 72.73 | / | 67.47 | / | / | 43.08 | | | 72.53 | / | / | 70.07 | / | 32.87 | | | | 68.27 | / | / | / | 69.33 | 43.12 | | | | / | 90.67 | 68.00 | / | / | 44.64 | | | | / | 90.40 | / | 70.93 | / | 33.54 | | | | / | 89.47 | / | / | 66.53 | 44.50 | | | | single (avg.) | 81.87 | 95.73 | 72.00 | 88.00 | 76.00 | 31.77 | | | multi (avg.) | 71.18 | 90.18 | 67.73 | 70.50 | 67.93 | 40.29 | | | TAILOR | multi-aspect | 91.73 | / | / | / | / | 21.65 | | / | 99.73 | / | / | / | 20.39 | | | | / | / | 77.87 | / | / | 21.29 | | | | / | / | / | 89.87 | / | 21.88 | | | | / | / | / | / | 81.33 | 22.93 | | | | single-aspect | 73.07 | / | 62.13 | / | / | 21.28 | | | 77.60 | / | / | 82.93 | / | 21.65 | | | | 75.20 | / | / | / | 72.00 | 22.47 | | | | / | 93.33 | 73.87 | / | / | 21.64 | | | | / | 95.73 | / | 81.33 | / | 20.09 | | | | / | 93.87 | / | / | 77.87 | 23.50 | | | | single (avg.) | 91.73 | 99.73 | 77.87 | 89.87 | 81.33 | 21.63 | | | multi (avg.) | 75.29 | 94.31 | 68.00 | 82.13 | 74.93 | 21.77 | | | PROMPT GATING (Ours) | multi-aspect | | | | | | | | Method | Constraint | Lex. ↑ | Tense↑ | BLEU↑ | |------------------------------------------------------------------------|--------------|----------|----------|---------| | PREFIX-TUNING PARALLEL ADAPTER LORA PROMPT-TUNING PROMPT GATING (Ours) | | | | | w/o control 50.98 79.52 32.7 lexical 85.28 80.99 36.7 temporal 50.80 83.28 33.0 knowledgeable 50.20 79.29 32.8 single (max.) 85.28 83.28 36.7 multi (avg.) 7.51 43.46 0.4 lexical 91.82 81.12 37.3 temporal 50.93 83.55 33.1 knowledgeable 50.16 79.35 32.8 single (max.) 91.82 83.55 37.3 multi (avg.) 48.44 67.87 21.8 lexical 87.94 81.59 36.2 temporal 50.79 84.32 33.0 knowledgeable 50.57 80.22 32.7 single (max.) 87.94 84.32 36.2 multi (avg.) 50.79 74.16 25.0 lexical 74.93 80.99 35.4 temporal 50.25 81.19 33.0 knowledgeable 50.17 78.82 32.5 single (max.) 74.93 81.19 35.4 multi (avg.) 64.64 81.12 34.2 lexical 89.90 81.29 37.1 temporal 50.95 83.98 33.2 knowledgeable 51.18 79.55 33.1 single (max.) 89.90 83.98 37.1 multi (avg.) 85.29 85.75 36.8 | WARNING: Next may contain contents that are offensive in nature. | | | |--------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| | Constraints | Method | Generation Result | | While this is happening to people of colour over and over again in schools and hospitals regardless of | | | | GEDI | how kids should behave, let alone thats happening here? Is we as a society humane towards minks that poo on each other before death leaves them. While this is happening i was very impressed. food is average and the service is great. | | | DIST. LENS | Advertisements: -) ***EDIT: The vegetarian menu is amazing! While this is happening in my hometown. In New York, New York, and the UK , are in need of something, they say. It's a good year in my blood. In New Orleans, is it is good for the heart to get rid of it.In New York is getting all of its emotions out because we don't want to make the traditional church | | | PROMPT-TUNING | in-state.It was also all the land that is, and it is not good is all are for me is all a rock, and we don' | | | PREFIX-TUNING | While this is happening food. is about Asian. isaboutAsian.is-foodsfood-trending.s.e...trend-stings-food-trends While this is happening in my hometown, i will say that it's a little pricey but worth it for a good meal. the food was good, the service was good, the service was fast and the food was tasty. the food arrived on time, so i could eat it. i did get a couple of things wrong with this place. the rice is bland. | | | TAILOR | i think the rice is too salty. my boyfriend ordered the fried squid. it was a bit greasy. the shrimp had a good flavor and it was fresh. i would give it a try again but it won't be here for at least 3 years. While this is happening, i've never been here to the japanese section and have never eaten here before. the place is small and the place is crowded. the food is small, and the service isn't very friendly. i did not expect to spend a lot of money for a sushi place because it's pretty small. they have a few places where they can take a large order and they give it to those who pay | | | Neg., Asian | PROMPT GATING (Ours) | for the food. also the staff seems to ignore you if you come for sushi. The food sucks!! Luckily we're Gilmore Girls and I still feel pretty strong | | GEDI | from an adrenaline rush, not being adequately clothed. Return to Content The food is great and a little overpriced. service was great though. i will never go back. | | | DIST. LENS | Advertisements for the sake of getting good food The food you don't need to worry about it. This is the most important food. This could be years. This would also have you look for new business to move to. This is a big one and we're not looking for one.This might get more people. It may get people | | | PROMPT-TUNING | looking for work elsewhere, it could get people lookin at work in the area. The food. is about negative aspects, and about negative food." - Michael Siegel. | | | PREFIX-TUNING | following is a negative review? –@siegel. The food was good for what you paid for it, but the service was terrible, the food tasted bad. the waiter was rude and was constantly taking orders that were not on the table. when he asked why the food was so hot or that they were out of food. | | | TAILOR | he said that the menu was out of order because of a busy night. after that he told me to call him back to make sure that the order was in order. the food itself was not that great, | | | Neg., American | but that's ok because it was only $6. The food was very bland. i had a burger that was really dry. they had the "tender beef patty". | | | PROMPT GATING (Ours) | the beef was dry so i didn't like the "tasty" flavor and the burger tasted stale. i don't eat that type of beef. | | | GEDI | The country didn't learn its lesson soon. The country fried is great, not the best. service was decent and we'll be back! | | | DIST. LENS | Advertisements are good! Related Posts: ( The country. hell yes I am. and I am off. I am off to do some drugs and drink with a woman. I am off. and I am also off on drugs with the women! I can see I could go. and have sex with someone. I don't think I am a homa, though I could have sex sometime, which wouldn't be good with the men.I | | | PROMPT-TUNING | am off my period and I'm not ready yet for that, except sex, it is good with the women The country and Mexican food. is about Mexican food. for a reason. It's about Mexican food. | | | PREFIX-TUNING | following is about Chinese food.is...Read Full Article and more »Zachary: Food is delicious...!}> More...</show>">Chef-in-chief"...Zach!</show><Chev...">ZACH! !"</show> The country style, fresh, delicious tacos, and fresh tortilla chips. the salsa, the burrito and rice were good. the salsa was very good, but the rice was a bit dry. i'm going to give the burrito a 10/10. the burrito was not as fresh as i would like, and the salsa was lacking, not fresh enough. my brother got | | | TAILOR | the burrito with the green salsa, he liked it. The country food at this place has become so bland that i have stopped coming here. the service was slow and the food didn't have any flavor to it at all. it just felt stale to the touch. i have never been in the place. i have been to many different mexian restaurants in my time in vegas, | | | Neg., Mexican | PROMPT GATING (Ours) | however i can only say they are the worst in vegas, at least to this point, in my memory | | Constraints | Method | Generation Result | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------| | Once upon a time…freedom—an old story…"" As Vegeta's hands encased his claws, Virankhan extended them to enhance their effects. His powers - including Frieza's tail transformation, long range ki blasts and history-defining transformations like Ash Ketchum's samurai thrill and Bulma & | | | | GEDI | Pikkonü putting on their samurai mastership - swept through the crowd as they cheerfully flooded gutter comforts grounds majors like dubbed Kingston area. His qir | | | DIST. LENS | Once upon a time, great food. the service is great and very friendly. Advertisements - yummy! *** Helpful!! :) Once upon a time travel to another part of the park and back to this point, the park was still full of black. And this was the point after which we were left back there. And now I want to get up and go for another part. | | | PROMPT-TUNING | This is going back to this point but we are still left back in town to get rid of this.And you're left with that spot in the park.And you're left at the end of time for two. I'm getting rid from it but it has to go back | | | PREFIX-TUNING | Once upon a time. following is about food.Advertisements…comments…comments | | | Pos., Asian | Once upon a time, i was a big fan of this establishment. i had been to them a couple times already | | | TAILOR | with friends and the place was always packed with people. this place was a hit, so much so that now I am going to go there every time. Once upon a time i was at the restaurant and had dinner at a friend's place, the food was great. | | | PROMPT GATING (Ours) | the chef is very knowledgeable (the owner's daughter is a sushi professional), but i can only say that the restaurant was very clean and the food was very flavorful. The year is 1910. Ephraim Legrand spurges explores Europe's forgotten North Atlantic island, exploring its rich history, diverse landscapes, richside civic pride and communion. Through photography held in private | | | GEDI | property events concentrated throughout the year, content creators can collaborate creatively and celebrate individuality night-out! | | | DIST. LENS | The year is 1910. great food and the service was great. will be back! Advertisements for good sushi!!! Follow me @wendie The year is 1910. We are still going out to dinner. This time we're having fun with it but I don't like to leave it for another night.And this night has just been getting started so it's time for everyone | | | PROMPT-TUNING | to leave.And we've been eating this night for a week but not yet ready to leave the place to go to bed yet.And we've just eaten.But this morning has just started leaving this place so I am getting over it.You're not over it yet."And yet he is | | | PREFIX-TUNING | The year is 1910. is about American food. is… http://news-and-review-food-tourism.org/article.asp/?articleId=9c0c4-a2a-e0-b9-a-9c5d7a8b8f0_0_0.0 | | | TAILOR | The year is 1910. this is not your average sushi bar. the atmosphere was very casual and cozy. the food was good. the service was prompt and fast. the food was fresh and delicious. | | | Pos., American | The year is 1910. we went to this place for the first year. it looks nice. there are a couple of booths, a table | | | PROMPT GATING (Ours) | for 4 (and a counter) to get seated in. we had an early reservation. i ordered the steak. the steak was great. my boyfriend enjoyed the chicken wings. i was very impressed by the steak. While this is happening in Scotland, though, we come a good distance north to Saltire Mountains National Park where we sit beside mountains on the Welsh coast called Myrta Tor. It's amazing to discover more about these beloved mountain peaks that haven't seen much mainstream attention until now. We'll travel along | | | GEDI | beautiful clifftees via Priordeuorth during walking & hiking treks around Myrta Tor 'check out our full itinerary here: Also visit Stonyrithenview Chantry | | | DIST. LENS | While this is happening, great food! the margaritas are fantastic and friendly staff. Advertisements = good mexican restaurant! Related While this is happening. It has already been happening. It's not like it's not a bad thing to have it but it's still bad. I don't like that either. It's got to get out of here before we can make any significant changes to it this is | | | PROMPT-TUNING | not that big thing anymore, but you don't have to do that. That's just a few days ago in the USA.This is the best way I could have done it and not done yet but this is way worse than that. It has to | | | PREFIX-TUNING | While this is happening!Advertisements?Sponsors?Media? | | | Pos., Mexican | While this is happening to be my new favorite restaurant, and my new favorite location, i can't say that i've ever been to this place with all the other reviews. there is a lot of good food at the table, but the wait staff | | | TAILOR | and food is just as great! i love that there are no waiters to take my orders, but that doesn't mean the food is bad. the food here was good. While this is happening i'm going here for dinner for the first time. the food here was very, very good and very tasty!! we ordered a couple of different salads and some tacos. i got a vego beef taco with a spicy sauce (it is very good). | | | PROMPT GATING (Ours) | i also got an onion rings (it does not have any onions, nor are there many onions in this recipe), and it was delicious! | | | Table 13: Examples of multi-aspect controllable text generation. The given textual prefixes (see §B.2) are underlined. | | | | WARNING: Next may contain contents that are offensive in nature. Example 1 Constraint | | |------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Keywords | "This", "bedroom", "completely" | | Tense | The tense of this sentence is the past tense. | | Knowledge (French) | Cette chambre et une autre ont été complètement brûlées. Source and Reference | | Source (German) | Dieses und ein weiteres Zimmer brannten vollständig aus. | | Reference | This and another bedroom were completely burnt out. Translation | | PROMPT-TUNING | This and another room burned completely. | | PREFIX-TUNING | This is the 'room under the sun'. | | LORA | This and another room burned out completely. | | PARALLEL ADAPTER | This tense and another room is completely burnt out. | | PROMPT GATING (Ours) | This and another bedroom burned completely out. Example 2 Constraint | | Keywords | "The", "transgender", "employment," | | Tense | The tense of this sentence is the present tense. | | Knowledge (French) | Le rapport donne également un aperçu de la discrimination à laquelle sont confrontées les personnes intersexes et transgenres dans le domaine de l'emploi, ainsi que des niveaux de harcèlement, de violence et de préjugés. Source and Reference | | Source (German) | Der Bericht gibt auch einen Überblick über die Diskriminierung, der sich intersexuelle und Transgender-Personen im Berufsleben ausgesetzt sehen sowie über das Ausmaß von Belästigung, Gewalt und Vorurteilskriminalität. | | Reference | The report also gives an overview of the discrimination faced by intersex and transgender people in the realm of employment, as well as levels of harassment, violence and bias crimes. Translation | | PROMPT-TUNING | The report also gives an overview of the discrimination to which inter-sexual and transgender people are subjected in their professional life and the extent of harassment, violence and prejudice crime. | | PREFIX-TUNING | The subject of the report is the subject of the report. | | LORA | The report also gives an overview of the discrimination faced by inter-sexual and transgender people in their working lives and the extent of harassment, violence and prejudice. | | PARALLEL ADAPTER | The report also gives an overview of the present discrimination faced by inter-sexual and transgender people in the workplace, as well as the extent of harassment, violence and prejudice. | | PROMPT GATING (Ours) | The report also gives an overview of the discrimination suffered by inter-sexual and transgender people in employment, as well as the extent of harassment, violence and prejudice. Example 3 Constraint | | Keywords | "attempt" | | Tense | The tense of this sentence is the future tense. | | Knowledge (French) | Demain, il tentera de s'entraîner avec l'équipe. Source and Reference | | Source (German) | Morgen wird er versuchen, mit der Mannschaft zu trainieren. | | Reference | Tomorrow he will attempt to train with the team. Translation | | PREFIX-TUNING | This is the subject of this article. | | PARALLEL ADAPTER | Tomorrow he will try to train with the team. | | LORA | The team he will try to train with the future. | | PROMPT-TUNING | Tomorrow he will try to train with the team. | | PROMPT GATING (Ours) | Tomorrow he will attempt to train with the team. | | Table 14: Examples of multi-aspect controllable machine translation. "Keywords" denotes the given keywords | | Table 14: Examples of multi-aspect controllable machine translation. "**Keywords**" denotes the given keywords that should be included in the translation. "**Tense**" denotes the input indicating the tense of the translation results. Similarly, "**Knowledge (French)**" denotes the external knowledge (i.e., French synonymous sentence). Some translations that satisfy the constraints are highlighted in blue, while some translations that fail to satisfy the constraints are highlighted in red. 15254 ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section* Limitations after the conclusion. ✓ A2. Did you discuss any potential risks of your work? Section* Ethics Statement after the limitations. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 Introduction ✓ A4. Have you used AI writing assistants when working on this paper? Grammarly, grammar error correction, the whole paper. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section B Reproducibility ✓ B1. Did you cite the creators of artifacts you used? Section B Reproducibility ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section B Reproducibility ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section B Reproducibility B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section B Reproducibility ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section B Reproducibility ## C ✓ **Did You Run Computational Experiments?** Section 5 Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section C Experimental Results The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section B Reproducibility ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section C Experimental Results ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section B Reproducibility D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Seciton E Details in Human Evaluation ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Seciton E Details in Human Evaluation ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Seciton E Details in Human Evaluation ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Seciton E Details in Human Evaluation D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Seciton E Details in Human Evaluation
xu-etal-2023-double
Double-Branch Multi-Attention based Graph Neural Network for Knowledge Graph Completion
https://aclanthology.org/2023.acl-long.850
Graph neural networks (GNNs), which effectively use topological structures in the knowledge graphs (KG) to embed entities and relations in low-dimensional spaces, have shown great power in knowledge graph completion (KGC). KG has abundant global and local structural information, however, many GNN-based KGC models cannot capture these two types of information about the graph structure by designing complex aggregation schemes, and are not designed well to learn representations of seen entities with sparse neighborhoods in isolated subgraphs. In this paper, we find that a simple attention-based method can outperform a general GNN-based approach for KGC. We then propose a double-branch multi-attention based graph neural network (MA-GNN) to learn more expressive entity representations which contain rich global-local structural information. Specifically, we first explore the graph attention network-based local aggregator to learn entity representations. Furthermore, we propose a snowball local attention mechanism by leveraging the semantic similarity between two-hop neighbors to enrich the entity embedding. Finally, we use Transformer-based self-attention to learn long-range dependence between entities to obtain richer representations with the global graph structure and entity features. Experimental results on five benchmark datasets show that MA-GNN achieves significant improvements over strong baselines for inductive KGC.
# Double-Branch Multi-Attention Based Graph Neural Network For Knowledge Graph Completion Hongcai Xu, Junpeng Bao ∗ , Wenbo Liu Xi'an Jiaotong University xajt1822@stu.xjtu.edu.cn, baojp@mail.xjtu.edu.cn, stu_Lwb@stu.xjtu.edu.cn ## Abstract Graph neural networks (GNNs), which effectively use topological structures in the knowledge graphs (KG) to embed entities and relations in low-dimensional spaces, have shown great power in knowledge graph completion (KGC). KG has abundant global and local structural information, however, many GNN-based KGC models cannot capture these two types of information about the graph structure by designing complex aggregation schemes and are not designed well to learn representations of seen entities with sparse neighborhoods in isolated subgraphs. In this paper, we find that a simple attention-based method can outperform a general GNN-based approach for KGC. We then propose a double-branch multi-attentionbased graph neural network (MA-GNN) to learn more expressive entity representations that contain rich global-local structural information. Specifically, we first explore the graph attention network-based local aggregator to learn entity representations. Furthermore, we propose a snowball local attention mechanism by leveraging the semantic similarity between two-hop neighbors to enrich the entity embedding. Finally, we use Transformer-based selfattention to learn long-range dependence between entities to obtain richer representations with the global graph structure and entity features. Experimental results on five benchmark datasets show that MA-GNN achieves significant improvements over strong baselines for inductive KGC. ## 1 Introduction Knowledge graphs (KGs), which are collections of structured knowledge represented by factual triples (head entity, relation, tail entity), are crucial for many applications, including semantic search (Xiong et al., 2017), question-answering (Kaiser et al., 2021), recommendation systems (Wang et al., 2021), etc. However, even large-scale KGs (e.g., ∗The corresponding author. Freebase and DBpedia) that have billions of triples, are inevitably incomplete, limiting their real-world applications. Therefore, knowledge graph completion (KGC) recently attracted extensive attention and attempts to automatically predict the missing head (tail) given the relation and tail (head) in a factual triple. To address these challenges, knowledge graph embedding (KGE) methods have been recently introduced as an effective solution to solve the incompletion problem (Bordes et al., 2013; Dettmers et al., 2018; Sun et al., 2019; Trouillon et al., 2016). KGE aims to define a scoring function and embeds entities and relations into a low-dimensional vector space to assess the plausibility of triplets based on the observed triples. However, it is significantly reliant on the pre-defined scoring function and rather challenging to encode structural information about an entity into a single vector. To this end, graph neural networks (GNNs) (Defferrard et al., 2016; Kipf and Welling, 2017) have recently been proposed for inductive KGC, due to the intrinsic graph structure of KGs, which learn the hidden representation of each entity by aggregating its corresponding local neighbors' information (Hamaguchi et al., 2017; Albooyeh et al., 2020). These methods still suffer from restricted expressiveness and problems because of the shallow network architecture. First, they can only capture local information within the neighborhood of individual entities, lacking the capability to exploit global information. Second, a large knowledge graph consists of multiple isolated and small subgraphs that are not connected to the main subgraph, as shown in Figure 1(a). Existing GNN methods often suffer from over-smoothing or over-squashing (Chen et al., 2020) with the increase of GNN layers and limited entities and relations in isolated subgraphs. In contrast, attention-based methods are effective in aggregating multi-hop neighbor information, leading to richer entity representation 15257 The number of entites ![1_image_0.png](1_image_0.png) 2-hop neighbors 1-hop neighbors 2-hop neighbors 0 10000 20000 30000 40000 WN18RR ![1_image_1.png](1_image_1.png) (Nathani et al., 2019; Zhao et al., 2022; Li et al., 2022b). Despite the success of these attentionbased GNN methods, most of them concentrate on encoding high-order topological information (a path or random walking sequence) while ignoring the rich structural information from neighborhood entities (Li et al., 2018; Nathani et al., 2019). Besides they have not paid enough attention to the integration of local and global information, which is important for encoding complex information. As shown in Figure 1(a), three one-hop neighbors are connected to e i0 , and the three one-hop neighboring entities connect the diverging two-hop neighboring entities. Besides, we also note that the number of two-hop neighboring entities is much higher than the number of one-hop neighborhood entities, as shown in Figure 1(b). It is unreasonable to fuse two-hop neighborhood entities for each target entity directly. Thus, it is meaningful for a target entity to pay more attention to its multi-hop neighbors and incorporate both global and local structural information in order to learn effective knowledge representations. In this paper, we propose a Double-Branch Multi-Attention Graph neural network (MA-GNN) for KGC in order to preserve global-local structural information. MA-GNN is an encoder-decoder model where three types of attention mechanisms (Graph attention network (GAT), Snowball local attention-based mechanism, and Transformerbased self-attention module) and ConvE (Dettmers et al., 2018) play the roles of an encoder and decoder, respectively. Furthermore, three types of attention mechanisms are utilized in the two branches to extract global and local features due to the differences in these two branches' characteristics. Specifically, we first employ GAT and Transformer-based self-attention to learn entity representation in order to capture global-local structure information. Second, a snowball local attention module is proposed to compute the local semantic similarity between two-hop neighborhood entities. In summary, we make the following contributions: - We propose a double-branch multi-attention graph neural network for KGC. The network consists of two parallel branches, i.e., the global branch and the local branch. Compared with other attention-based GNN methods, our method can capture local information and long-term dependence between entities through GAT and Transformer based selfattention. - To extract more discriminative features, we design a snowball local attention mechanism that can learn the entity similarity between 2hop neighborhood entities of the target entity and encode more information like a snowball. - We compare our method with previous KGC methods on five benchmark datasets. Experiments demonstrate that MA-GNN leads to significant improvement, achieving Hits@10 scores of 0.679 on the WN18RR dataset, 0.823 on the NELL-995 dataset, and 0.932 on the FB15K dataset, outperforming state-ofthe-art methods by 12.7%, **4.3%** and **15.1%**, respectively. ## 2 Methods An encoder-decoder framework is used in the MAGNN model. The framework of MA-GNN is shown in Figure 2. We only need to stack the encoder into several layers to obtain entity representation. MA-GNN presents four main modules: GAT(Encoder), Transformer-based self-attention module (Encoder), Snowball local attention module (Encoder), and Knowledge graph completion module (Decoder). ## 2.1 Graph Attention Network Module Assuming that each entity eiin the knowledge graph G = (*E, R*) has an initial feature vector z0 ∈ Rd, we employ a graph attention network (GAT) to gather local information from the target entity's neighbors, which calculates the attention score α l i,j in the l-th layer given the entity ei and 15258 ![2_image_0.png](2_image_0.png) its one-hop neighbors ej ∈ Ni, Niis the one-hop neighbors of entity ei. $$\alpha_{i,j}^{l}=\frac{e x p\left(\sigma\left(f^{l}\left[W_{n}^{l}z_{i}^{l}:W_{n}^{l}z_{j}^{l}\right]\right)\right)}{\sum\limits_{k\in N_{i}}e x p\left(\sigma\left(f^{l}\left[W_{n}^{l}z_{i}^{l}:W_{n}^{l}z_{k}^{l}\right]\right)\right)}\tag{1}$$ where $z_{i}^{l}$ is the embedding of entity $e_{i}$ in the l- i is the embedding of entity eiin the lth layer. For entity features, there is a learnable weight matrix called Wln . f lrefers to a fully connected neural network. Here, we use ReLU as the activation function, which is σ. By summing the weighted features of entity ei's neighbors, we can aggregate local information for entity ei: $$s_{i}^{l}=\sum_{j\in N_{i}}\alpha_{i,j}^{l}z_{i}^{l}\tag{2}$$ The updated entity representation is then computed. by: passed between neighbors. Wlw is the parameter by. $$z_{i}^{l+1}=z_{i}^{l}+\lambda W_{w}^{l}s_{i}^{l}\tag{3}$$ where $\lambda$ determines how much information is that needs to be trained. 2.2 Transformer-based self-attention module Transformer is presented as a unique encoderdecoder architecture constructed from several selfattention components (Vaswani et al., 2017). Let Z ∈ Rn×d be the input for each Transformer based self-attention layer, where n is the number of entities and d is the dimension of each entity. Then, a function fW : Rn×d → Rn×d with the formula fW (Z) = z can be used as one Transformer-based self-attention layer: $$A=\frac{1}{\sqrt{d}}Z W_{Q}\left(Z W_{K}\right)^{\top}$$ $$\quad(4)$$ $$({\boldsymbol{S}})$$ ⊤ (4) $$Z^{\prime}=S o f t M a x\left(A\right)\left(Z W_{V}\right)$$ $$(6)$$ $$M=L a y e r N o r m_{1}\left(Z^{\prime}W_{O}+f Z\right)$$ $${\mathrm{(2)}}$$ ′WO + fZ(6) $$F=\sigma\left(M W_{1}+b_{1}\right)W_{2}+b_{2}$$ $$z=L a y e r N o r m_{2}\left(M+F\right)$$ (7) (8) $\frac{1}{2}$ $$({\boldsymbol{3}})$$ where LayerNorm() is the layer normalization function, Softmax() is the row-wise softmax function, and σ is the activation function (e.g., ReLU). In the layer, the following parameters are trainable: WQ, WK, WV , WO, W1, b1, W2, and b2. In more detail, WQ, WK, and WV are broken down into H heads using the formulas W (h) Q, W (h) K , W (h) V, and 15259 the matrices Z (h)from the attention heads are then concatenated to produce Z′. $$A^{(h)}=\frac{1}{\sqrt{d}}Z W_{Q}^{(h)}\left(Z W_{K}^{(h)}\right)^{\top}\qquad\qquad(9)$$ $$Z^{\prime}=\parallel_{h=1}^{H}\left(S o f t M a x\left(A^{(h)}\right)Z W_{V}^{(h)}\right)$$ (10) Once we obtain the final per-entity representation z l i encoded by GAT, where l is the total number of GAT layers, we pass it to the Transformer-based self-attention subnetwork of MA-GNN as shown in Figures 2 and 3. N and K refer to the number of stacked modules. In order to normalize the embedding, we first project the z l i into the Transformerbased self-attention dimension and normalize the embedding using a layer normalization: $$p_{i}^{l}=L a y e r N o r m\left(W_{p}z_{i}^{l}\right)\qquad(11)$$ where WP ∈ RdT ×dG is a learnable weight matrix, RdT is the self-attention dimension, and RdG is the dimension of the final GAT embeddings. Since the Transformer-based self-attention is permutationinvariant without a positional encoding (Wu et al., 2021), we utilize the random walk method (Perozzi et al., 2014) to obtain entity sequences. $$\alpha_{i,j}^{l(h)}=\frac{1}{\sqrt{d_{T}}}p_{i}^{l-1}W_{Q}^{l(h)}\left(p_{j}^{l-1}W_{K}^{l(h)}\right)^{\top}$$ ⊤(12) $$p_{i}^{l}=\|_{h=1}^{H}\left(\text{SoftMax}\left(\alpha_{i,j}^{l(h)}\right)p_{k}^{l-1}W_{V}^{l(h)}\right)\tag{13}$$ where $W_{Q}^{l}$, $W_{K}^{l}$, and $W_{V}^{l}$ are the learned query. $$(13)$$ key, and value matrices for a single attention head in layer l, respectively, and h is the number of attention heads. ## 2.3 Snowball Local Attention Module In this subsection, we first construct one-hop neighborhood subgraphs that focus on the neighbors of the target entities (e.g., e i0 , e j 0 , e k 0 ), as shown in Figure 4, and then utilize the proposed snowball local attention mechanism to aggregate local graph structure information. The snowball local attention mechanism samples two-hop neighbors and onehop neighbors of the target entity, and is capable of capturing entity similarity between two-hop neighborhood entities according to the attention scores between two-hop neighbors. Here, we only present ![3_image_0.png](3_image_0.png) $$(10)$$ one formulation for the snowball local attention layer: $$\bar{z}_{k}^{i}=\sigma\left(\sum_{e_{m}^{i},e_{k}^{i}\in{\mathcal{N}}_{i}}\alpha_{km}^{i}Wz_{k}^{i},k,m=1,2,3\ldots\right)\tag{14}$$ where z¯ i k and z i k denotes the embeddings of the entity e i k . Ni refers to the two-hop neighbors of entity e i. i in z i k refers to the i-th one-hop neighborhood subgraph or the target entity e i0 (see Figures 2 and 4), k = 0 refers to the target entity, and k, m = 1, 2, 3... refers to the two-hop neighborhood entities. α i km is the semantic similarity between two-hop neighbors. ![3_image_1.png](3_image_1.png) $$(12)$$ $$\alpha_{k m}^{i}=\frac{e x p\left(\left(z_{m}^{i}\right)^{T}\left(z_{k}^{i}\right)\right)}{e^{\sum\limits_{k}e_{m}^{i},\,e_{n}^{i}\in\mathcal{N}_{i}}}\quad(15)$$ After passing through the snowball local attention layer, the two-hop neighborhood entities are normalized. The normalization output is then fed into a feedforward neural network, and the output vector z of the feedforward neural network is added with the output vectors of the GAT module and Transformer-based self-attention module. The "Add" is comparable to simplified feature fusion. The Snowball local attention module is stacked into M layers, as shown in Figure 2, with M = 2. As illustrated in Figure 4, e i0 is the target entity, and the one-hop neighboring subgraph belongs to it. Graph attention network module performs linear combination on embeddings of one-hop neighbors according to the attention scores and then aggregates these one-hop neighborhood entities on the target entity to learn its new entity representation. However, GAT takes two stages of graph attention for entity e i0 to aggregate entity e j 1 . e j 1 is a one-hop neighboring entity of e j 0 (e i1 ) and also is a two-hop neighborhood entity of the target entity e i0 . Following this, e k 0 is a two-hop neighbor entity of e i1 , snowball local attention mechanism looks like a snowball effect. To capture richer graph structure information and entity features, we use a snowball local attention mechanism that learns different semantic similarity information to generate entity features and then utilizes these features to fuse feature vectors resulting from the graph attention network module and Transformer-based self-attention architecture. ## 2.4 Knowledge Graph Completion Module We specifically choose ConvE (Dettmers et al., 2018) as the decoder. In our experiments, relational features are represented using the initialization feature representations. ConvE computes the knowledge triple scores based on the reshaped tensors after first reshaping the embeddings of triples (h, r, t) into 2D tensors. The prediction from (h, r, ?) to t or from (?, r, t) to h is then carried out by using the output embeddings. The ConvE score function is: $$f(h,r,t)=\sigma\left(f(\sigma([\widetilde{h};\widetilde{r}]*\psi))W_{Q}\right)t\tag{16}$$ where eh and re refer to 2D reshapings of h and r, ∗ stands for the convolution operator, and ψ represents a set of convolution kernels. The vectorization function is f(), and the weight matrix is WQ. σ refers to the ReLU activation function. ConvE implies that triples with higher scores than those with lower scores are positive. The proposed MAGNN model's loss function is defined as follows: $$\mathcal{L}=\sum_{(h,r,t)\in\mathcal{T}_{t}}-\frac{1}{N}\sum_{i=1}^{N}\left(y_{(h,r,t_{i})}\left(g\left(f\left(h,r,t_{i}\right)\right)\right)\right.$$ $$\left.+\left(1-y_{(h,r,t_{i})}\right)\log\left(1-g\left(f\left(h,r,t_{i}\right)\right)\right)\right)\tag{17}$$ where $y(h,r,t_{i})$ is the triple $(h,r,t_{i})$'s label (1 or 0). The sigmoid function is represented by g, and N stands for the number of candidates for the tail entity. ## 3 Experiments 3.1 Experiment Settings 3.1.1 Datasets Following the standard train/test split, we evaluate our proposed method using the five benchmark datasets: FB15K-237 (Toutanova and Chen, 2015), WN18RR (Dettmers et al., 2018), FB15K (Bordes et al., 2011), WN18 (Bordes et al., 2011), and NELL-995 (Zhao et al., 2022). We further investigate the number of subgraphs in each dataset and the number of entities contained in them. WN18RR, WN18, and NELL-995 are much sparser than FB15K-237 and FB15K, which indicates that these datasets contain less structural information. Table 5 in the appendix shows statistics for all datasets. 3.1.2 Baselines We compare MA-GNN with geometric methods including TransE (Bordes et al., 2013), RotatE (Sun et al., 2019), ATTH (Chami et al., 2020), TimE (Zhang et al., 2021), Rot-Pro (Song et al., 2021), BiQUE (Guo and Kok, 2021), HBE (Pan and Wang, 2021), DeepER (Zeb et al., 2022), RotatE-IAS (Yang et al., 2022), HousE (Li et al., 2022a), and GIE (Cao et al., 2022); Tensor decomposition methods including ComplEx (Trouillon et al., 2016), Procrustes (Peng et al., 2021), HypER (Balaževic et al. ´ , 2019); Negative sampling (NS) methods including CAKE (Niu et al., 2022a), KGTuner (Zhang et al., 2022b); Deep learning and attentionbased methods including ConvE (Dettmers et al., 2018), Interact (Vashishth et al., 2020a), HittER (Chen et al., 2021), KGA (Wang et al., 2022), PUDA (Tang et al., 2022), JointE (Zhou et al., 2022), EC2 (Niu et al., 2022b), and StructurE (Zhang et al., 2022a); and Graph neural network methods including R-GCN (Schlichtkrull et al., 2018), KBGAT (Nathani et al., 2019), CompGCN (Vashishth et al., 2020b), SE-GNN (Li et al., 2022a), Rethinking (Zhang et al., 2022c), MRGAT (Li et al., 2022b), and EIGAT (Zhao et al., 2022). ## 3.2 Overall Results Table 1, 6, and 7 shows the link prediction performance on the test set on standard benchmarks. From the experimental results, we observe that our method significantly outperforms the benchmark methods, especially for sparse knowledge graphs, i.e., WN18RR and NELL-995, in which our method outperforms the second-best methods by **12.7%** and **15.1%** on the Hits@10, respectively. Our approach ranks second on the H@10 metric on the FB15K-237, but it still performs best on the other metrics. On WN18, FB15K and NELL995, MA-GNN also achieves competitive results compared to baseline models, with significant improvement on NELL-995 dataset. As shown in the table 1, other methods have competition on some measures, but our method has significant results on all metrics, which demonstrates the robust performance of the proposed approaches in capturing global-local structure information in KGs. In Table 8, we find that the result of predicting the tail entity in both the validation set and the test set is significantly higher than the result of predicting the head entity, on FB15K-237 and WN18RR, which indicates that our method is more capable of capturing extra information by aggregating neighboring entities when it predicts the tail entity. ## 3.3 Ablation Study There are three primary modules in MA-GNN: GAT, snowball local attention module, and Transformer based self-attention module. On FB15K237 and WN18RR, we evaluate the effect of different modules. "MA-GNN w/o A" is the model without using the snowball local attention module, "MA-GNN w/o T" is the model without using Transformer based self-attention module. Besides, "GAT w/ MLP" indicates that the snowball local attention module is replaced by a multilayer perceptron in the model, and the Transformer based self-attention module is removed. From Table 2, we can see that MA-GNN performs better than the model after removing different modules. Compared with "MA-GNN w/o A", "MA-GNN w/o T ", and "MA w/ MLP", MA-GNN is more powerful because it can capture the rich global-local features by using multi-attention methods. On the WN18RR ![5_image_0.png](5_image_0.png) 100 200 300 WN18RR Epochs Hits@1 GAT w/ MLP MA-GNN w/o A MA-GNN w/o T MA-GNN 100 200 300 WN18RR MA-GNN w/o A MA-GNN w/o T MA-GNN 100 200 300 WN18RR GAT w/ MLP MA-GNN w/o A MA-GNN w/o T MA-GNN dataset, compared to MA-GNN, the Hits@10-value of "GAT w/ MLP" is **20.8%** lower, the Hits@10value of "MA-GNN w/o A" is **11.8%** lower, and the Hits@10-value of "MA-GNN w/o T" is **7.5%** lower. The fact that "GAT w/ MLP" performed poorly shows how powerful our snowball local attention and Transformer-based self-attention modules are. Additionally, as observed in Figure 5, it is evident that the values of "GAT w/ MLP" undergo the most significant changes, while "MA-GNN w/o A" and "MA-GNN w/o T" exhibit the least variation. This observation implies that the snowball local attention module possesses a greater capacity for encoding information. Due to our early stopping strategy, the convergence curves in Figure 5 are not all the same length. The integration of the GAT module into MA-GNN is crucial for two reasons. Firstly, the snowball local attention module aggregates the feature representations of two-hop neighbors, resulting in the exclusion of certain target entities during calculation. Secondly, the transformer-based self-attention module truncates subgraphs with a high number of multi-hop neighborhood entities during batch computation, further preventing the calculation of specific target entities. | Models | FB15K-237 | WN18RR | | | | | | | | | |-------------------------------------------------------------------------------------------------------------|-------------|----------|-------|-------|-------|-------|-------|-------|-------|-------| | MRR | MR | H@1 | H@3 | H@10 | MRR | MR | H@1 | H@3 | H@10 | | | Geometric methods TransE (2013) 0.330 | 173 | 0.231 | 0.369 | 0.528 | 0.223 | 3380 | 0.014 | 0.401 | 0.529 | | | RotatE (2019) | 0.338 | 177 | 0.241 | 0.375 | 0.533 | 0.476 | 3340 | 0.428 | 0.492 | 0.571 | | ATTH (2020) | 0.348 | - | 0.252 | 0.384 | 0.540 | 0.486 | - | 0.443 | 0.499 | 0.573 | | TimE (2021) | 0.346 | 171 | 0.250 | 0.382 | 0.537 | 0.477 | 2858 | 0.428 | 0.493 | 0.577 | | Rot-Pro (2021) | 0.344 | 201 | 0.246 | 0.383 | 0.540 | 0.457 | 2815 | 0.397 | 0.482 | 0.577 | | BiQUE (2021) | 0.365 | - | 0.270 | 0.401 | 0.555 | 0.504 | - | 0.459 | 0.519 | 0.588 | | HBE (2021) | 0.336 | - | 0.239 | 0.372 | 0.534 | 0.488 | - | 0.448 | 0.502 | 0.570 | | RotatE-IAS (2022) | 0.339 | 195 | 0.242 | 0.374 | 0.532 | 0.483 | 3862 | 0.467 | 0.502 | 0.570 | | HousE (2022) | 0.361 | 153 | 0.266 | 0.399 | 0.551 | 0.511 | 1303 | 0.465 | 0.528 | 0.602 | | GIE (2022) | 0.362 | - | 0.271 | 0.401 | 0.552 | 0.419 | - | 0.452 | 0.505 | 0.575 | | Tensor decomposition methods ComplEx (2016) 0.323 165 | 0.229 | 0.353 | 0.513 | 0.468 | 5542 | 0.427 | 0.485 | 0.554 | | | | Procrustes (2021) | 0.345 | - | 0.249 | 0.379 | 0.541 | 0.474 | - | 0.421 | 0.502 | 0.569 | | Negative sampling (NS) methods CAKE (2022) 0.321 170 | 0.226 | 0.355 | 0.515 | - | - | - | - | - | | | | KGTuner (2022) | 0.352 | - | 0.263 | 0.387 | 0.530 | 0.484 | 3380 | 0.440 | 0.506 | 0.562 | | Deep learning and attention-based methods ConvE (2018) 0.325 244 0.237 0.356 | 0.501 | 0.430 | 4187 | 0.400 | 0.440 | 0.520 | | | | | | HittER (2021) | 0.373 | - | 0.279 | 0.409 | 0.558 | 0.503 | - | 0.462 | 0.516 | 0.584 | | KGA (2022) | 0.357 | - | 0.265 | - | 0.540 | - | - | - | - | - | | PUDA (2022) | 0.369 | - | 0.268 | 0.408 | 0.578 | 0.481 | - | 0.436 | 0.498 | 0.582 | | JointE (2022) | 0.356 | 177 | 0.262 | 0.393 | 0.543 | 0.471 | 4655 | 0.438 | 0.483 | 0.537 | | StructurE (2022) | 0.351 | 160 | 0.252 | 0.390 | 0.546 | 0.479 | 2865 | 0.425 | 0.500 | 0.585 | | Graph neural network methods CompGCN (2020) 0.355 197 | 0.264 | 0.390 | 0.535 | 0.479 | 3533 | 0.443 | 0.494 | 0.546 | | | | Rethinking (2022) | 0.355 | 249 | 0.264 | 0.389 | 0.535 | 0.472 | 3434 | 0.437 | 0.485 | 0.544 | | SE-GNN (2022) | 0.365 | 157 | 0.271 | 0.399 | 0.549 | 0.484 | 3211 | 0.446 | 0.509 | 0.572 | | MRGAT (2022) | 0.358 | - | 0.266 | 0.386 | 0.542 | 0.481 | - | 0.443 | 0.501 | 0.568 | | MA-GNN | 0.379 | 145 | 0.282 | 0.415 | 0.569 | 0.565 | 886 | 0.507 | 0.592 | 0.679 | | Table 1: Link prediction results on FB15K-237 and WN18RR. The best results are in bold. Results for TransE, | | | | | | | | | | | Table 1: Link prediction results on FB15K-237 and WN18RR. The best results are in bold. Results for TransE, ConvE, RotatE, and ComplEx are from (Li et al., 2022a). Other results are from the published paper. | Models | FB15K-237 | WN18RR | | | | | | | | | |--------------|-------------|----------|-------|-------|-------|-------|------|-------|-------|-------| | MRR | MR | H@1 | H@3 | H@10 | MRR | MR | H@1 | H@3 | H@10 | | | GAT w/ MLP | 0.345 | 214 | 0.252 | 0.380 | 0.532 | 0.465 | 3493 | 0.426 | 0.481 | 0.538 | | MA-GNN w/o A | 0.365 | 159 | 0.272 | 0.400 | 0.550 | 0.491 | 1179 | 0.437 | 0.512 | 0.599 | | MA-GNN w/o T | 0.371 | 142 | 0.275 | 0.409 | 0.561 | 0.509 | 888 | 0.446 | 0.540 | 0.628 | | MA-GNN | 0.379 | 145 | 0.282 | 0.415 | 0.569 | 0.565 | 886 | 0.507 | 0.592 | 0.679 | Table 2: Comparison of the results for different variants of MA-GNN. ## 4 Related Work 4.1 Gnn-Based Models To date, the most existing GNN-based methods have been proposed to deal with the multirelational edges in KGs (Nathani et al., 2019; Vashishth et al., 2020b; Yu et al., 2021; Zhang et al., 2022c; Li et al., 2022a). These approaches design different message-passing mechanisms to capture the graph structure and attributes of entities. CompGCN (Vashishth et al., 2020b) describes a compositional operator across each edge connecting the neighborhood entities of the target entity. SE-GNN (Li et al., 2022a) proposes a novel semantic evidence-aware graph neural networkbased KGE model to assist KGE extrapolation. Rethinking (Zhang et al., 2022c) aims to explore the real effect of GCNs in KGC, and proposes the LTE-KGE framework, which combines linearly transformed entity embeddings with existing KGE models. MRGAT designed a heterogeneous GNN-based framework for KGs that directly applies GNN to the multi-relational graph (Li et al., 2022b). KBGAT utilizes GAT to incorporate both entity and relational features in any entity's neighborhood (Nathani et al., 2019). ## 4.2 Attention-Based Models Recently, attention-based methods, which measure the semantic similarity of neighbors, have become increasingly popular for knowledge graphs. The recent approaches (Zhang et al., 2020a; Rong et al., 2020; Dwivedi and Bresson, 2020) propose GNNbased frameworks that adopt Transformer-based attention to aggregate neighbor information. (Zhang et al., 2020a) and (Rong et al., 2020) investigate the issue of learning long-distance relationships without over-smoothing by focusing on graph structure rather than one-hop neighbors. In addition to offering different views of attention over the nodes and edges, we attempt to describe link prediction in knowledge graphs. The first effort to forecast missing entities in KGs using attention-based approaches was presented in (Nathani et al., 2019). Additionally, attention techniques are used to incorporate additional information into the learned entity representation (Wang et al., 2021; Zhang et al., 2022d). (Nathani et al., 2019) introduces a triplelevel attention model that incorporates the combined information of both entities and relationships in the neighbors of a target entity. (Zhang et al., 2020b) offered a two-level hierarchical attention mechanism, decoupling the triple-level attention of (Nathani et al., 2019) into relationship-level attention and entity-level attention. MRGAT encodes each relation-path-based neighbors feature through the entity-level attention(Li et al., 2022b). Most recently, (Chen et al., 2021; Bi et al., 2022) propose the Transformer-based model to handle the heterogeneity of relations. For a variety of multimodal knowledge graph completion tasks, MKGformer (Chen et al., 2022) utilizes a hybrid transformer architecture with a unified input-output. In comparison, our method focuses on extracting all-sided semantic features by leveraging self-attention to learn long-distance global information on multihop paths and snowball local attention to capture local information. ## 5 Conclusion In this paper, we propose MA-GNN, a simple and efficient framework for learning global-local structure information based on multi-attention. We incorporate a Transformer-based self-attention module into a standard GAT module to encode local graph structure information and learn long-range relationships. We design a snowball local attention mechanism to enrich the entity embeddings based on the similarities between two-hop neighborhood entities. On the five commonly known benchmark datasets, empirical experiments show that our proposed model achieves competitive performance compared with state-of-the-art performance. ## 6 Limitations The paper has only focused on graphs with multitype relations (knowledge graphs). When MAGNN shows improvement over baselines, someone may doubt if MA-GNN will do well on single-type relation graphs. The limitations of the representational power of the MA-GNN model should be discussed more deeply. ## 7 Ethics Statement MA-GNN focuses on improving the performance of link prediction and has no particular ethical consideration. ## Acknowledgements We appreciate the reviewers and the program committee for their helpful comments and suggestions. The work is supported by the CAAI-Huawei Mindspore Fund. ## References Marjan Albooyeh, Rishab Goel, and Seyed Mehran Kazemi. 2020. Out-of-sample representation learning for knowledge graphs. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2657–2666. Ivana Balaževic, Carl Allen, and Timothy M ´ Hospedales. 2019. Hypernetwork knowledge graph embeddings. In *International Conference on Artificial Neural Networks*, pages 553–565. Springer. Zhen Bi, Siyuan Cheng, Ningyu Zhang, Xiaozhuan Liang, Feiyu Xiong, and Huajun Chen. 2022. Relphormer: Relational graph transformer for knowledge graph representation. arXiv preprint arXiv:2205.10852. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. *Advances in neural information processing systems*, 26. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In *Twenty-fifth AAAI conference on artificial intelligence*. Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, and Qingming Huang. 2022. Geometry interaction knowledge graph embeddings. In *AAAI Conference on Artificial Intelligence*. Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi, and Christopher Ré. 2020. Lowdimensional hyperbolic knowledge graph embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6901–6914, Online. Association for Computational Linguistics. Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. 2020. Measuring and relieving the oversmoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3438–3445. Sanxing Chen, Xiaodong Liu, Jianfeng Gao, Jian Jiao, Ruofei Zhang, and Yangfeng Ji. 2021. Hitter: Hierarchical transformers for knowledge graph embeddings. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 10395–10407. Xiang Chen, Ningyu Zhang, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, and Huajun Chen. 2022. Hybrid transformer with multi-level fusion for multimodal knowledge graph completion. SIGIR '22, page 904–915. Association for Computing Machinery. Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in neural information processing systems, 29. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In *Proceedings of the AAAI* conference on artificial intelligence, volume 32. Vijay Prakash Dwivedi and Xavier Bresson. 2020. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699. Jia Guo and Stanley Kok. 2021. Bique: Biquaternionic embeddings of knowledge graphs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8338–8351. Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. 2017. Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach. International Joint Conferences on Artificial Intelligence. Magdalena Kaiser, Rishiraj Saha Roy, and Gerhard Weikum. 2021. Reinforcement learning from reformulations in conversational question answering over knowledge graphs. In *Proceedings of the 44th International ACM SIGIR Conference on Research and* Development in Information Retrieval, pages 459– 469. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In International Conference on Learning Representations. Qimai Li, Zhichao Han, and Xiao-Ming Wu. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In *Thirty-Second AAAI* conference on artificial intelligence. Ren Li, Yanan Cao, Qiannan Zhu, Guanqun Bi, Fang Fang, Yi Liu, and Qian Li. 2022a. How does knowledge graph embedding extrapolate to unseen data: a semantic evidence view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 5781–5791. Zhifei Li, Yue Zhao, Yan Zhang, and Zhaoli Zhang. 2022b. Multi-relational graph attention networks for knowledge graph completion. Knowledge-Based Systems, 251:109262. Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*. Association for Computational Linguistics. Guanglin Niu, Bo Li, Yongfei Zhang, and Shiliang Pu. 2022a. Cake: A scalable commonsense-aware framework for multi-view knowledge graph completion. In ACL. Guanglin Niu, Bo Li, Yongfei Zhang, Yongpan Sheng, Chuan Shi, Jingyang Li, and Shiliang Pu. 2022b. Joint semantics and data-driven path representation for knowledge graph reasoning. *Neurocomputing*, 483:249–261. Zhe Pan and Peng Wang. 2021. Hyperbolic hierarchyaware knowledge graph embedding for link prediction. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 2941–2948. Xutan Peng, Guanyi Chen, Chenghua Lin, and Mark Stevenson. 2021. Highly efficient knowledge graph embedding learning with orthogonal procrustes analysis. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710. Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. 2020. Self-supervised graph transformer on largescale molecular data. *Advances in Neural Information Processing Systems*, 33:12559–12571. Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In *European semantic web conference*, pages 593–607. Springer. Tengwei Song, Jie Luo, and Lei Huang. 2021. Rotpro: Modeling transitivity by projection in knowledge graph embedding. *Advances in Neural Information Processing Systems*, 34:24695–24706. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In *International* Conference on Learning Representations. Zhenwei Tang, Shichao Pei, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Robert Hoehndorf, and Xiangliang Zhang. 2022. Positive-unlabeled learning with adversarial data augmentation for knowledge graph completion. In *IJCAI*, pages 2248–2254. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd workshop on continuous vector space models and their compositionality, pages 57–66. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In *International conference on machine learning*, pages 2071– 2080. PMLR. Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, Nilesh Agrawal, and Partha Talukdar. 2020a. Interacte: Improving convolution-based knowledge graph embeddings by increasing feature interactions. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 3009–3016. Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. 2020b. Composition-based multirelational graph convolutional networks. In *International conference on learning representations*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Jiang Wang, Filip Ilievski, Pedro Szekely, and Ke-Thia Yao. 2022. Augmenting knowledge graphs for better link prediction. In *IJCAI*, pages 2277–2283. Xiang Wang, Tinglin Huang, Dingxian Wang, Yancheng Yuan, Zhenguang Liu, Xiangnan He, and Tat-Seng Chua. 2021. Learning intents behind interactions with knowledge graph for recommendation. In *Proceedings of the Web Conference 2021*, pages 878– 887. Zhanghao Wu, Paras Jain, Matthew Wright, Azalia Mirhoseini, Joseph E Gonzalez, and Ion Stoica. 2021. Representing long-range context for graph neural networks with global attention. *Advances in Neural* Information Processing Systems, 34:13266–13279. Chenyan Xiong, Russell Power, and Jamie Callan. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. In *Proceedings of the* 26th international conference on world wide web, pages 1271–1279. Jinfa Yang, Xianghua Ying, Yongjie Shi, Xin Tong, Ruibin Wang, Taiyan Chen, and Bowei Xing. 2022. Knowledge graph embedding by adaptive limit scoring loss using dynamic weighting strategy. In *Findings of the Association for Computational Linguistics:* ACL 2022, pages 1153–1163. Donghan Yu, Yiming Yang, Ruohong Zhang, and Yuexin Wu. 2021. Knowledge embedding based graph convolutional network. In Proceedings of the Web Conference 2021, pages 1619–1628. Adnan Zeb, Summaya Saif, Junde Chen, and Defu Zhang. 2022. Learning knowledge graph embeddings by deep relational roto-reflection. *KnowledgeBased Systems*, 252:109451. Jiawei Zhang, Haopeng Zhang, Congying Xia, and Li Sun. 2020a. Graph-bert: Only attention is needed for learning graph representations. *arXiv preprint* arXiv:2001.05140. Qianjin Zhang, Ronggui Wang, Juan Yang, and Lixia Xue. 2021. Knowledge graph embedding by translating in time domain space for link prediction. Knowledge-Based Systems, 212:106564. Qianjin Zhang, Ronggui Wang, Juan Yang, and Lixia Xue. 2022a. Structural context-based knowledge graph embedding for link prediction. *Neurocomputing*, 470:109–120. Yongqi Zhang, Zhanke Zhou, Quanming Yao, and Yong Li. 2022b. Efficient hyper-parameter search for knowledge graph embedding. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2715–2735. Zhanqiu Zhang, Jie Wang, Jieping Ye, and Feng Wu. 2022c. Rethinking graph convolutional networks in knowledge graph completion. In Proceedings of the ACM Web Conference 2022, pages 798–807. Zhao Zhang, Fuzhen Zhuang, Hengshu Zhu, Zhiping Shi, Hui Xiong, and Qing He. 2020b. Relational graph neural network with hierarchical attention for knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9612–9619. Zhenghao Zhang, Jianbin Huang, and Qinglin Tan. 2022d. Association rules enhanced knowledge graph attention network. *Knowledge-Based Systems*, 239:108038. Yu Zhao, Huali Feng, Han Zhou, Yanruo Yang, Xingyan Chen, Ruobing Xie, Fuzhen Zhuang, and Qing Li. 2022. Eigat: Incorporating global information in local attention for knowledge representation learning. Knowledge-Based Systems, 237:107909. Zhehui Zhou, Can Wang, Yan Feng, and Defang Chen. 2022. Jointe: Jointly utilizing 1d and 2d convolution for knowledge graph embedding. *Knowledge-Based* Systems, 240:108100. ## A Appendix A.1 Setup In our model, Both the modules (GATs and snowball local attention module) have 2 layers with the ReLU activation function. Transformer-based selfattention is configured with eight heads and five layers. Additionally, we use the following measures to evaluate our models against baselines: (1) Mean Rank (MR, the mean of all expected rankings); (2) Mean Reciprocal Rank (MRR, the mean of all reciprocals of predicted ranks) (3) Hits@n, n=1, 3, 10 (H@1, H@3, H@10). We perform a hyperparameter search on the dimensions, learning rate, optimizer, negative sample size, and batch size for our proposed model. For each dataset, we list the optimal hyperparameters (dimensionality, learning rate, optimizer, batch normalization, batch size, label smooth, entity dropout, convolution dropout, FC dropout, and Convolution kernel size) as follows: WN18RR:400, 0.0015, ``` 0 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 ``` Adam, True, 256, 0.1, 0.2, 0.1, 0.4, 7; FB15K-237: 400, 0.00035, Adam, True, 1024, 0.1, 0.3, 0.3, 0.5, 7; FB15K: 400, 0.00035, Adam, True, 1024, 0.1, 0.3, 0.3, 0.5, 7; WN18: 400, 0.00035, Adam, True, 1024, 0.1, 0.3, 0.3, 0.5, 7; NELL995: 400, 0.00035, Adam, True, 1024, 0.1, 0.3, 0.3, 0.5, 7. ## A.2 Number Of Parameters We compare the number of parameters of the baselines and the MA-GNN on the FB15K-237 dataset in Table 3. MA-GNN is substantially more parameter-efficient than the ConvE while improving the test Hits@10 score from 0.497 to 0.569. ## A.3 Dimension Selection To show the impact of entity embedding dimensions on performance, the performance of link prediction with the different number of embedding dimensions is shown in Table 4. It shows that MA-GNN's performance first rises and then falls, as the entity embedding dimension increases, and when the embedding dimension is 400, MAGNN achieves optimal results on FB15K-237 and WN18RR datasets. This verifies that the entity embedding dimension has an impact on the link prediction task. ## A.4 Impact Of Transformer-Based Self-Attention To study the effects of Transformer-based selfattention in MA-GNN, we random an instance and visualize its attention matrix to analyze the impact of transformer-based self-attention. From Figure 6, given the long-distance sequential paths, we observe that models with transformer-based selfattention have a vital impact on global weights and can capture the semantic similarity between longrange entities. 0 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 ![10_image_0.png](10_image_0.png) Models Parameters MRR Hits@1 Hits@10 ConvE (2018) 5.05M 0.312 0.225 0.497 HypER (2019) **4.30M** 0.341 - 0.520 InteractE (2020) 10.70M 0.354 0.263 0.535 JointE (2022) 4.54M 0.356 0.262 0.537 MA-GNN 9.34M **0.379 0.282 0.569** Table 4: Impact of different embedding dimensions. Table 5: Statistics of datasets | Dimensions | FB15K-237 | WIN18RR | | | | | | | | | |--------------|-------------|-----------|-------|-------|-------|-------|------|-------|-------|-------| | MRR | MR | H@1 | H@3 | H@10 | MRR | MR | H@1 | H@3 | H@10 | | | 200 | 0.338 | 151 | 0.250 | 0.370 | 0.512 | 0.504 | 1875 | 0.455 | 0.524 | 0.601 | | 300 | 0.363 | 138 | 0.271 | 0.397 | 0.544 | 0.541 | 1040 | 0.488 | 0.564 | 0.646 | | 400 | 0.379 | 145 | 0.282 | 0.415 | 0.569 | 0.565 | 886 | 0.507 | 0.592 | 0.679 | | 500 | 0.372 | 149 | 0.277 | 0.408 | 0.560 | 0.556 | 1788 | 0.505 | 0.580 | 0.657 | | 600 | 0.371 | 147 | 0.274 | 0.409 | 0.565 | 0.564 | 1487 | 0.510 | 0.587 | 0.667 | Table 6: Link prediction results on NELL-995. | Datasets | Entities | Relations | Train triplets | Validation Triplets | Test triplets | |------------|------------|-------------|------------------|-----------------------|-----------------| | FB15K-237 | 14541 | 237 | 272115 | 17535 | 20466 | | WN18RR | 40943 | 11 | 86835 | 3034 | 3134 | | FB15K | 14951 | 1345 | 483142 | 50000 | 59071 | | WN18 | 40943 | 18 | 141442 | 5000 | 5000 | | NELL-995 | 75492 | 200 | 149678 | 543 | 3992 | | Models | NELL-995 | | | | |----------------|------------|-------|-------|-------| | MRR | H@1 | H@3 | H@10 | | | TransE (2013) | 0.401 | 0.344 | 0.472 | 0.501 | | ComplEx (2016) | 0.482 | 0.399 | 0.528 | 0.606 | | ConvE (2018) | 0.491 | 0.403 | 0.531 | 0.613 | | R-GCN (2018) | 0.12 | 0.082 | 0.126 | 0.188 | | KBGAT (2019) | 0.530 | 0.447 | 0.564 | 0.695 | | EC2 (2022) | 0.350 | 0.281 | 0.402 | 0.475 | | EIGAT (2022) | 0.545 | 0.464 | 0.584 | 0.715 | | MA-GNN | 0.714 | 0.645 | 0.766 | 0.823 | | Models | FB15K | WN18 | | | | | | | |----------------|---------|--------|-------|-------|-------|-------|-------|-------| | MRR | H@1 | H@3 | H@10 | MRR | H@1 | H@3 | H@10 | | | TransE (2013) | 0.463 | 0.297 | 0.578 | 0.749 | 0.495 | 0.113 | 0.888 | 0.943 | | RotatE (2019) | 0.797 | 0.746 | 0.830 | 0.884 | 0.949 | 0.944 | 0.952 | 0.959 | | ConvE (2018) | 0.657 | 0.558 | 0.723 | 0.831 | 0.943 | 0.935 | 0.946 | 0.956 | | R-GCN (2018) | 0.348 | 0.252 | 0.384 | 0.540 | 0.486 | 0.443 | 0.499 | 0.573 | | ComplEx (2016) | 0.692 | 0.599 | 0.759 | 0.839 | 0.941 | 0.936 | 0.945 | 0.947 | | HypER (2019) | 0.790 | 0.734 | 0.829 | 0.885 | 0.951 | 0.947 | 0.955 | 0.958 | | CompGCN (2020) | 0.801 | 0.715 | 0.834 | 0.862 | 0.942 | 0.921 | 0.933 | 0.941 | | DeepER (2022) | 0.759 | 0.692 | 0.804 | 0.875 | 0.952 | 0.948 | 0.955 | 0.958 | | EC2 (2022) | 0.715 | 0.651 | 0.750 | 0.857 | 0.946 | 0.940 | 0.945 | 0.952 | | MRGAT (2022) | 0.813 | 0.741 | 0.853 | 0.893 | 0.947 | 0.932 | 0.946 | 0.971 | | MA-GNN | 0.817 | 0.747 | 0.871 | 0.932 | 0.905 | 0.851 | 0.959 | 0.991 | Table 7: Link prediction results on FB15K and WN18. | Models | FB15K-237 | WN18RR | | | | | | | | | |-------------|-------------|----------|-------|-------|-------|-------|------|-------|-------|-------| | MRR | MR | H@1 | H@3 | H@10 | MRR | MR | H@1 | H@3 | H@10 | | | Valid(Head) | 0.289 | 192 | 0.193 | 0.316 | 0.476 | 0.525 | 891 | 0.468 | 0.548 | 0.639 | | Valid(Head) | 0.487 | 100 | 0.384 | 0.525 | 0.672 | 0.595 | 615 | 0.541 | 0.583 | 0.704 | | Valid(Head) | 0.285 | 183 | 0.189 | 0.310 | 0.473 | 0.519 | 1036 | 0.457 | 0.548 | 0.636 | | Valid(Head) | 0.473 | 107 | 0.375 | 0.520 | 0.665 | 0.612 | 738 | 0.557 | 0.636 | 0.722 | Table 8: Evaluation for predicting head-and-tail entities. ## A.5 Effectiveness Of Snowball Local Attention On the WN18RR dataset, we visualize the similarity weights of the two-hop neighbors of entity e0. From the dark color in the red dashed box in Figure 7, we observe some degree of similarity between the two-hop neighbors. This result matches our motivation that captures richer graph structure information and entity features using two-hop neighbors and verifies the effectiveness of our proposed snowball local attention mechanism. ![12_image_0.png](12_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✓ A2. Did you discuss any potential risks of your work? 6 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✓ A4. Have you used AI writing assistants when working on this paper? Grammarly was used to check for grammatical errors in this paper. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Appendix ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 3 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3, Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
guo-etal-2023-dual
Dual Cache for Long Document Neural Coreference Resolution
https://aclanthology.org/2023.acl-long.851
Recent works show the effectiveness of cache-based neural coreference resolution models on long documents. These models incrementally process a long document from left to right and extract relations between mentions and entities in a cache, resulting in much lower memory and computation cost compared to computing all mentions in parallel. However, they do not handle cache misses when high-quality entities are purged from the cache, which causes wrong assignments and leads to prediction errors. We propose a new hybrid cache that integrates two eviction policies to capture global and local entities separately, and effectively reduces the aggregated cache misses up to half as before, while improving F1 score of coreference by 0.7 5.7pt. As such, the hybrid policy can accelerate existing cache-based models and offer a new long document coreference resolution solution. Results show that our method outperforms existing methods on four benchmarks while saving up to 83{\%} of inference time against non-cache-based models. Further, we achieve a new state-of-the-art on a long document coreference benchmark, LitBank.
# Dual Cache For Long Document Neural Coreference Resolution Qipeng Guo1, Xiangkun Hu1, Yue Zhang2, Xipeng Qiu3**, Zheng Zhang**1 1Amazon AWS AI, 2School of Engineering, Westlake University 3School of Computer Science, Fudan University {gqipeng, xiangkhu, zhaz}@amazon.com, zhangyue@westlake.edu.cn, xpqiu@fudan.edu.cn ## Abstract Recent works show the effectiveness of cachebased neural coreference resolution models on long documents. These models incrementally process a long document from left to right and extract relations between mentions and entities in a cache, resulting in much lower memory and computation cost compared to computing all mentions in parallel. However, they do not handle cache misses when high-quality entities are purged from the cache, which causes wrong assignments and leads to prediction errors. We propose a new hybrid cache that integrates two eviction policies to capture global and local entities separately, and effectively reduces the aggregated cache misses up to half as before, while improving F1 score of coreference by 0.7 ∼ 5.7pt. As such, the hybrid policy can accelerate existing cache-based models and offer a new long document coreference resolution solution. Results show that our method outperforms existing methods on four benchmarks while saving up to 83% of inference time against non-cache-based models. Further, we achieve a new state-of-the-art on a long document coreference benchmark, LitBank. ## 1 Introduction Coreference Resolution (CR) is fundamental in document-level Natural Language Processing. Its goal is to identify mentions belonging to the same entity. These mentions often scatter throughout the document, but they can be linked together through coreference resolution. CR is a building block for common sense understanding (Levesque et al., 2012; Sakaguchi et al., 2021; Balahur et al., 2011; Liu et al., 2021a), reading comprehension (Dasigi et al., 2019; Storks et al., 2019), information extraction (Yao et al., 2019; Ji and Grishman, 2008; Lu and Ng, 2018), and text summarization (Liu et al., 2021b; Azzam et al., 1999; Xu et al., 2020). Conventional CR models enumerate every pair of mentions in a document in parallel, so the com- ![0_image_1.png](0_image_1.png) ![0_image_0.png](0_image_0.png) Figure 1: For a long document, the topic switching often happens, such as the discussion might shift from "Marie Curie" to "Pierre Curie," followed by "Radium," and then "Nobel Prize." These shifts in topic inevitably lead to certain entities being temporarily excluded from the ongoing discussion but reintroduced after a series of topic changes. putation and memory cost is quadratic to the number of mentions in a document (Xia et al., 2020), where mention is a pronoun, a noun phrase, or a text span that can be referred. This quadratic overhead poses a challenge for long-doc CR. Recent work (Xia et al., 2020; Toshniwal et al., 2020, 2021) proposes cache-based CR models which scan mentions in a document from left to right, storing resolved entities in a cache and determining whether to assign a new mention to an entity in the cache or push it to the cache as a new entity. When the cache is full, it will evict entities according to a certain eviction policy, such as LRU (Least Recently Used). We denote the cache with LRU eviction policy as a local cache (L-cache) because it keeps the more recent entities. Since the small cache size bounds the number of potential coreference relations that the model probes for each mention, it reduces the computation and memory cost. Specifically, the cost reduces to O(|C||D|) from O(|D| 2), where the cache size |C*| ≪ |*D|, |C| and |D| are cache size and document length, respectively. However, multiple topics are common when doc15272 ![1_image_0.png](1_image_0.png) ument is long and the focus of narrative may switch (Figure 1). This phenomenon impacts the performance of LRU policy due to the repeated alternation of topics, causing an entity to be mentioned after a long span of text, increasing its chance of eviction from the cache. When processing a mention that requires the evicted entity, it results in a cache miss. Empirically, we find 11.1% mentions encounter cache misses when adopting LRU policy on real data (see Section 5.1). Our analysis of cache miss reveals that the pattern of entity usages follows the Pareto principle (Koch, 2011) (or 80-20 rule). In other words, a few high frequency entities are used globally (i.e., key concepts for each topic) and account for most of the cache misses. In LitBank (Bamman et al., 2019), 6% of the entities account for 97% of the cache miss. Another example that fits this observation is that the 20 main characters are mentioned 1,722 times in Animal Farm (Orwell, 2021), a book of 30,000 words (see Section 5.5). Inspired by this insight, we propose a dual cache to address different patterns of word usage. One cache stays unchanged as before, i.e., an L-cache using LRU policy. Another cache is devoted to global and high frequent entities evicted from Lcache, called G-cache. Intuitively, G-cache adopts the classical cache policy LFU (Least Frequently Used). We denote the proposed hybrid cache as Dual cache. The idea is to promote a division of labor in order to make more effective use of cache: L-cache deals with local, clustered mentions, typically in one topic, whereas G-cache targets global entities within a longer range, typically across topics. A diagram of a model with Dual cache is shown in Figure 2. We conduct extensive experiments on coreference resolution benchmarks, including OntoNotes (Pradhan et al., 2012), LongtoNotes (Shridhar et al., 2022), LitBank (Bamman et al., 2019), and WikiCoref (Ghaddar and Langlais, 2016). Results show that a CR model with our Dual cache outperforms the prior work (Toshniwal et al., 2020; Xia et al., 2020) which use unbounded cache while saving 56% of the inference time. The model achieves a new state-of-the-art on LitBank (Bamman et al., 2019). The results shown in Section 4 demonstrate the effectiveness of the Dual cache on long-doc CR benchmarks. To further explore the capability of our Dual cache on long documents, we annotate a book with 30,000 words, our method outperforms the baseline by offering 10 points improvement of F1 while saving 70% of the inference time. The data and code are available https://github.com/ QipengGuo/dual-cache-coref. ## 2 Related Work Conventional CR methods (Joshi et al., 2019; Xu and Choi, 2020; Jiang and Cohn, 2021) are designed for short documents with length less than 1,000 words. They typically encode the input document and propose candidate mentions with a pretrained model. Their model enumerates all pairs of mentions in parallel to identify coreference relations, resulting in memory and computation cost quadratic to document length. These models also require a post-processing to gather connected mentions as entities and need to compute a large tensor of O(n × n × d), where n is the number of mentions, and d is the dimension of hidden states in the relation classifier. This is often impractical for long documents with thousands of mentions. Another direction is to reformulate CR, Wu et al. (2020) treats CR as a question answering problem and achieves higher performance on some benchmarks but costs more computation since the model needs to do question answering repeatedly. ## Cache-Based Neural Coreference Resolution Since directly applying conventional CR on long documents suffers from heavy computation and memory cost, recent works (Xia et al., 2020; Toshniwal et al., 2020, 2021) proposed cache-based CR models. Probing entities in the cache instead of the whole document largely saves the computation and memory cost but also brings errors when encountering a cache miss. Thus, the eviction policy plays an important role. Xia et al. (2020) follow the LRU principle, and Toshniwal et al. (2020) discuss more eviction policies, such as a variant of LRU and a learnable scorer to rank elements. However, these works lack the ability to capture the topic switching phenomena described in Figure 1. Datasets for Long document Coreference Existing CR datasets are typically on short documents, such as the most commonly used corpus, OntoNotes (Pradhan et al., 2012), whose average document length is 487 words. However, as Bamman et al. (2019) have shown, text in scientific papers and literature, which can be much longer than OntoNotes articles, exhibit different usages of coreference relations. Note that the original document collected for OntoNotes is much longer, but they split documents to lower the annotation cost, and a recent work LongtoNotes(Shridhar et al., 2022) attempts to recover the long document. Besides, recent work proposes benchmarks for long documents, such as LitBank (Bamman et al., 2019) for literature articles, ACL-Coref (Schäfer et al., 2012) for scholarly papers, and Wiki-Coref (Ghaddar and Langlais, 2016) for documents in Wikipedia. These corpora not only increase document length against traditional coreference benchmarks but also largely increase the spread of entities (the distance between the first and the last mention of an entity , which is defined in Toshniwal et al. (2020)), which requires the model to memorize longer history. Text books, story books, and professional books capture much of the human knowledge, but book-scale CR has not caught attention, and we made the first attempt to address this issue. We annotate a 30,000-word book, Animal Farm, taking the 20 characters as entities, and annotate the 1,722 mentions of them throughout the book. ## 3 Method A common workflow of neural CR models is to detect candidate mentions, encode them into vector representations, and identify coreference relations between each mention and past entities by feeding their representations to an MLP classifier. We denote candidate mentions as {m1, m2, · · · , mn}, and an entity is defined as a set of mentions, ek = {mk 1 , mk 2 , *· · · }*. We use a special operator H(·) to represent vector representations, for example, H(m1)is the representation of the mention m1. The key component of our method is a dual cache which contains an L-cache CL = {e1, · · · , eNL} and a G-cache CG = {e1, · · · , eNG}, where e is an entity in the cache, NL and NG are the sizes of the two caches respectively. Since we focus on cache design, we adopt Longformer (Beltagy et al., 2020) as the document encoder and mention detection model (an MLP scorer) from Toshniwal et al. (2021) which are described in Toshniwal et al. (2020). ## 3.1 Dual Cache When an entity is brought to use, it tends to be reused in a local context intensively, exhibiting clustering behavior that an LRU can effectively capture. As mentioned above, topic switching may cause some entities to be intensively used in certain paragraphs scattered in the document but completely not used in others. Thus, these entities are sensitive to be purged by an LRU cache due to a long absence, but their defining characteristic is its high frequency, easily captured by an LFU policy. As such, neither policy can handle both patterns, and the integration is an intuitive way to cover both cases. We adopt a divide-and-conquer approach to use two caches and adopt different eviction policies. Lcache captures the locally active terms and follows the LRU policy. G-cache focuses on the global terms and follows the LFU policy. When a new mention comes, we first ask the model to classify whether it is a new entity or is is assigned to an existing entity within the cache. Next, we will check the frequency of this new or updated entity, and we will move it to the G-cache if its frequency is higher than any entity in the G-cache and trigger an eviction of the G-cache when it is full. If the entity does not enter the G-cache, we will move it to the L-cache and trigger an eviction of the Lcache when it is full. L-cache will evict the least recently used entity and G-cache will evict the least frequently used entity. At the beginning of processing a document, both L-cache and G-cache are empty, and we will fill the G-cache first since the frequency of an empty slot equals zero and the frequency of a newly entered entity is at least one. ## 3.2 **Coreference Resolution With A Dual Cache** We follow Toshniwal et al. (2021) to design an incremental neural coreference model, which scans mentions one by one and computes pair-wise scores for entities in the cache plus a placeholder vector indicating new entity (noted as N ). As its name implies, if this placeholder achieves the highest prediction score, the model will create a new entity and push it to the cache. Otherwise, the model assigns the input mention to the entity with the highest score and updates its representation. Note that the cache structure is transparent to the coreference resolution model, and the model will access all entities in the Dual cache. The score between a mention m, an entity e in the Dual cache C = {CL, CG}, and a placeholder vector N can be computed as, $$\begin{array}{l c r}{{s(m,{\mathcal{N}})=f_{c}(\mathbf{H}(m),{\mathcal{N}}),}}&{{}}&{{\mathrm{(1)}}}\\ {{s(m,e)=f_{c}(\mathbf{H}(m),\mathbf{H}(e)),}}&{{}}&{{e\in C}}\end{array}\tag{2}$$ where H(m), H(e) are representations of a mention and an entity output by a pre-trained language model, respectively. fc means an MLP classifier. For the dataset that requires the model to distinguish singletons, we record the s(m, N ) as a measure. For an entity only contains one mention after processing the whole document, if the recorded score exceeds the threshold (set to 0 in this work), the model determines it as a singleton. Otherwise, it will not be considered as entity and dropped from the prediction results. When the model assigns a new mention to an existing entity, the entity representation gets updated by a gated sum aggregator. $$\begin{array}{r l}{\mathrm{Update}(e,m)=\alpha\mathbf{H}(m)+(1-\alpha)\mathbf{H}(e),}&{{}{\mathrm{(3)}}}\\ {\mathrm{~where}}&{{}\alpha=f_{g}(\mathbf{H}(e),\mathbf{H}(m)),}&{{}{\mathrm{(4)}}}\end{array}$$ where α is the aggregation coefficient and fg is an MLP module with sigmoid function. For a new created entity, its representation is set with the first mention's representation. ## 3.3 Learning And Inference There are three learnable components in our method, a mention-level representation extractor, which consists of a pre-trained language model and an MLP classifier and is the same as Toshniwal et al. (2021), a pair-wise scoring module eq. (2) to classify coreference relations, and an update function eq. (3) used to update clusters' representation when assigning new mentions to an entity. Since we only introduce a new cache policy, we can reuse other cache-based models' parameters like Xia et al. (2020) and Toshniwal et al. (2021) to perform coreference resolution without further training. We can also retrain or finetune the model with the Dual cache for long documents, which has two potential advantages: 1) the Dual cache has a higher cache hit ratio so that it contains more ground-truth entities, which means the model can see more positive relations during training; 2) it avoids feature space shift since we get the representation of entities by merging mentions sequentially. Different cache polices lead to different merging orders of mentions so that affect the entities' representation. We follow Toshniwal et al. (2021) to adopt Cross Entropy as the loss function and take the groundtruth of coreference relations and mention boundaries as labels. ## 4 Experiments We conduct experiments on four CR benchmarks and compare with prior cache-based CR methods. ## 4.1 Datasets The four datasets for our experiments are: OntoNotes and its extension LongtoNotes for longer documents; WikiCoref with only testing documents; LitBank which provides an official split of ten-fold cross-validation and we follow prior work (Toshniwal et al., 2021) to report the result of the first fold in this work. The statistics of the datasets are listed in Table 1. | Dataset | Train | Val | Test | Avg. Length | |-------------|---------|-------|--------|---------------| | OntoNotes | 2,802 | 343 | 348 | 487 | | LongtoNotes | 1,959 | 234 | 222 | 674 | | WikiCoref | - | - | 30 | 1,996 | | LitBank | 80 | 10 | 10 | 2,105 | | Animal Farm | - | - | 1 | 37,000 | ## 4.2 Baselines Xia et al. (2020) proposed a simple cache-based CR model built on SpanBERT (Joshi et al., 2020) with an L-cache. The model architecture is inherited from a strong baseline of conventional CR model (Joshi et al., 2019) and is finetuned based on their released model. They freeze SpanBERT during finetuning and only report results on OntoNotes instead of benchmarks with longer documents. Toshniwal et al. (2021) 1 provided more results of cache-based CR models for long documents, including results on OntoNotes, LitBank, and WikiCoref. They replace the backbone encoder with Longformer (Beltagy et al., 2020) for better performance and update all parameters during finetuning. We follow their training scripts and hyperparameters 2to train our model. Thirukovalluru et al. (2021) largely accelerates the model by changing span-level CR to token-level CR, which considers the relations between token pairs first, and then maps them as relations between mentions and entities. However, looking at the tokens in two mentions is not enough to identify a coreference relation, especially for long mentions. Their code 3 was not released when writing this paper. Thus, we only compare the F1 score with them. They report the result on a book of 2M tokens. Since their predictions are not released, we plan to compare with them on book-scale document in the future. ## 4.3 Setup We initialize our model with the parameters released by Toshniwal et al. (2021), including a document encoder, a mention proposer, and a CR model. We also use a unified mention detection strategy | Test set | S-subset | | | |-------------------------|------------|------|------| | Method | Cache Size | F1 | F1 | | Toshniwal et al. (2021) | Unbound | 77.8 | 76.4 | | Toshniwal et al. (2021) | 50 | 72.6 | 71.0 | | Dual cache | 25+25 | 76.2 | 74.8 | | Toshniwal et al. (2021) | 200 | 77.2 | 75.4 | | Dual cache | 100+100 | 77.7 | 76.3 | | Toshniwal et al. (2021) | 500 | 77.7 | 76.0 | | Dual cache | 250+250 | 77.9 | 76.3 | and keep the top 0.4|D| candidate mentions as Toshniwal et al. (2021) did for a fair comparison except the one on Animal Farm, where |D| is the document length. We focus on CR model and report its computation cost and inference time, as the document encoder and mention proposer enable plug-and-play. All experiments are conducted on NVIDIA T4 GPU (16GB). We report MAC (Multiply Accumulate operations) to measure the computation cost. 4 ## 4.4 Main Results Table 3 compares our method and prior method on three benchmarks. We denote the size of the Dual cache as "A+B", meaning there are A slots in the L-cache and B slots in the G-cache. We report the F1 score and inference time as measurements of performance and efficiency, respectively. We list more metrics like MUC (Vilain et al., 1995), BCUBED (Bagga and Baldwin, 1998), and CEAFE (Luo, 2005) in the Appendix C. Besides the overall performance, we compare with the strongest baseline (Toshniwal et al., 2021) under the setting of different cache sizes. Results demonstrate that our method consistently outperforms previous approaches, albeit with a slightly higher time consumption for a given cache size. Furthermore, our method achieves comparable performance in less inference time. In particular, our Dual cache with 50 slots outperforms the previous SOTA approach that utilizes an unbounded cache on LitBank, while only requiring 48% of the inference time. Additionally, we provide results on LongtoNotes, a recent extension of OntoNotes that combines pas-4The script for calculating MAC is adapted from https: //github.com/Lyken17/pytorch-OpCounter/ | LitBank | OntoNotes | WikiCoref | Avg | | | | | | | |-------------------------------|-------------|-------------|-------|-------------|-------|------|--------|-------------|-------| | Method | Cache Size | F1 | Time | F1 | Time | F1 | Time | F1 | Time | | Thirukovalluru et al. (2021)† | Unbound | 75.9 | - | 78.0 | - | - | - | - | - | | Xia et al. (2020) | Unbound | 76.7 | 5.72s | 79.6 | 0.98s | 58.7 | 12.35s | 71.6 | 6.35s | | Toshniwal et al. (2021) | Unbound | 78.6 | 5.17s | 80.6 | 0.66s | 63.5 | 13.72s | 74.2 | 6.52s | | Toshniwal et al. (2021) | 50 | 72.9 | 2.14s | 75.1 | 0.40s | 53.0 | 2.10s | 67.0 | 1.55s | | Dual cache | 25+25 | 78.8 ± 0.33 | 1.95s | 79.6 ± 0.21 | 0.41s | 59.9 | 2.20s | 72.7 ± 0.18 | 1.52s | | Toshniwal et al. (2021) | 200 | 78.2 | 3.32s | 79.8 | 0.53s | 61.4 | 3.96s | 73.1 | 2.60s | | Dual cache | 100+100 | 79.3 ± 0.32 | 3.53s | 81.0 ± 0.18 | 0.61s | 62.4 | 4.17s | 74.3 ± 0.17 | 2.77s | | Toshniwal et al. (2021) | 500 | 78.5 | 4.77s | 80.0 | 0.55s | 62.8 | 6.63s | 73.8 | 3.98s | | Dual cache | 250+250 | 79.5 ± 0.37 | 5.09s | 81.1 ± 0.19 | 0.63s | 63.0 | 7.15s | 74.5 ± 0.18 | 4.29s | ![5_image_0.png](5_image_0.png) sages into longer documents. Table 2 exhibits the same trend, where our Dual cache outperforms the baselines on both the complete test set and a subset of long documents. Notably, the performance gain is more significant in the long document subset, highlighting the effectiveness of the Dual cache structure in processing such documents. To better illustrate the effectiveness and efficiency, we report more results on WikiCoref in Figure 3. In this setting, we adopt a CR model trained with unbounded memory, so it does not have a preference of the cache structure. The curves show that the Dual cache always has the highest performance/cost ratio among different methods and outperforms both the L-cache and the G-cache either using a fixed cache size or consuming a fixed amount of time, demonstrating the benefit of integrating the L-cache and G-cache. ## 5 Analysis 5.1 Cache Miss Ratio By comparing the performance gain when using different cache sizes in Table 3 and Table 2, we find that the improvement of the Dual cache is more significant for a small cache, such as the performance gain for 50 slots is 5.7pt F1 and the gain for 200 slots is 1.2pt F1. The reason is that the cache miss ratio decreases rapidly as the cache gets larger, so the absolute improvement of our method is also getting smaller. In this section, we quantitatively discuss how the Dual cache reduces the cache miss ratio. We adopt an off-the-shelf mention detection model (Toshniwal et al., 2021) to detect mentions and use ground truth to replace the relation classifier to get rid of the model difference, except the cache structure. ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) | Cache Size | | | | | | |--------------|-------|------|-------|-------|-------| | Model | 10 | 30 | 50 | 70 | 100 | | OntoNotes | | | | | | | L-cache | 7.4% | 2.8% | 1.5% | 0.98% | 0.54% | | G-cache | 6.4% | 3.7% | 2.5% | 1.8% | 1.1% | | Dual cache | 5.0% | 1.5% | 0.82% | 0.51% | 0.27% | | LitBank | | | | | | | L-cache | 7.2% | 3.1% | 1.9% | 1.4% | 0.98% | | G-cache | 3.9% | 2.3% | 2.0% | 1.9% | 1.5% | | Dual cache | 3.5% | 1.2% | 0.69% | 0.48% | 0.33% | | WikiCoref | | | | | | | L-cache | 11.1% | 6.3% | 4.8% | 4.0% | 3.2% | | G-cache | 8.4% | 5.8% | 4.8% | 4.2% | 3.5% | | Dual cache | 7.8% | 4.1% | 3.0% | 2.4% | 1.8% | Table 4 gives the cache miss ratio for different caches on three benchmarks. Results show that the Dual cache reduces the chance of cache miss significantly, especially in the case of 10 slots and 30 slots. A cache miss will lead to an inevitable prediction error since the groundtruth does not lie in the solution space, which means the model only considers assigning the mention to clusters within the cache, but the groundtruth cluster is not in the cache. Figure 5 illustrates the impact of cache miss ratio, indicating a lower cache miss ratio and suggesting a higher performance. ![6_image_2.png](6_image_2.png) In addition, we provide case studies of four entities in Figure 4, which shows the heatmap of an entity's occurrences. The color reveals the density of occurrences. A light color means the entity is frequently used there, and a dark color means no usage. We highlight regions with long periods of an entity's absence by red boxes. The region length can reach 500 words, meaning the L-cache will evict it unless the cache has more than 500 slots. The second case ("Katharine Hilbery") contains multiple relatively long periods (>100 words) of absence. If the cache size is less than 100, it will encounter the cache miss 5 times and cause errors. ## 5.2 Impact Of Entity Spread As we have shown in Figure 4, an entity may spread in a long range in a document, and this type of en- ![7_image_0.png](7_image_0.png) tities is challenging for cache mechanisms. One basic measurement is the Entity Spread, which describes how an entity scatters throughout a document. A large spread means the entity spans a large text area, which is likely to be missed by an L-cache if the L-cache is not large enough and the entity is absent for a while. Figure 6 reports models' results on different entity spread scales. We show that the Dual cache helps the model resolve entities with a large spread, resulting in nearly a 30 F1 gain. The reason is that an entity with a large spread is often frequently used, which can be captured by G-cache easily. We can also see that L-cache is good at entities with small spreads while G-cache is good at entities with large spreads, and the results also demonstrate that neither of them could solve the coreference alone. The dual cache achieves better results by taking advantage of both L-cache and G-cache. ## 5.3 Impact Of Cache Size Cache size is an essential factor for a cache design, and there are two caches in our Dual Cache structure, so we study the influence of total cache size and the allocation of L-cache and G-cache and discuss them in this section. We test our method with the cache size varying from 10 to 100. We report the results of the models without finetuning to avoid the potential noise of introducing new parameters and training processes. Figure 5 shows results on three benchmarks, and we can see the Dual cache outperforms both the L-cache and the G-cache, especially for a small cache. Since the Dual cache consists of two caches, it is | Cache Size | | | | | | |--------------|------|------|-------|-------|-------| | Model | 10 | 30 | 50 | 70 | 100 | | OntoNotes | | | | | | | L-first | 5.1% | 1.5% | 0.80% | 0.53% | 0.28% | | G-first | 5.0% | 1.5% | 0.82% | 0.51% | 0.27% | | LitBank | | | | | | | L-first | 3.4% | 1.2% | 0.71% | 0.50% | 0.34% | | G-first | 3.5% | 1.2% | 0.69% | 0.48% | 0.33% | | WikiCoref | | | | | | | L-first | 7.8% | 4.0% | 2.9% | 2.4% | 1.9% | | G-first | 7.8% | 4.1% | 3.0% | 2.4% | 1.8% | | Method | F1 | Time | Memory | |-------------------------|------|--------|----------| | Toshniwal et al. (2021) | 25.8 | 1284s | 5.2 GB | | Dual cache | 36.3 | 375s | 4.5 GB | valuable to discuss how their allocation affects the performance. We provide a heatmap in Figure 7, where the X-axis is the size of G-cache and the Y-axis is the size of L-cache. We enumerate all their combinations (200 settings) when varying the size from 1 to 20. Results show the G-cache is more important when the total size is less than 20 (the lower triangle). The allocation of two caches becomes less matter when the total size exceeds 20 (the upper triangle). ## 5.4 Priority Of Two Caches The positions of the L-cache and the G-cache are not equivalent in our Dual cache. What we have described in this work is actually a G-first Dual cache, which means the priority of the G-cache is higher than the L-cache. Particularly, it means we will push an entity to the G-cache when both the G-cache and L-cache has empty slots. We also tried the L-first version and report it in Table 5. The results show that the difference between G-first and L-first is around 0.1%, which is negligible. ## 5.5 Results On Book-Scale Cr The proposed Dual cache structure is for long-doc CR. However, the maximum length of documents in the four benchmarks is fewer than 10,000 words. To test our approach on a more challenging scenario, we annotate a book of approximately 30,000 words, Animal Farm. We choose the 20 characters in this book as entities and annotate all their 1,722 mentions across the book. Appendix A describes the details of the book annotation. We compare our model and the baseline trained on LitBank since its documents are also from books. We tokenize the book into 37,000 tokens and split them into 11 segments so that we can feed them into the document encoder. We set the cache size to 500 for the baseline and 250+250 for the Dual cache for a fair comparison. Our model can parse the entire book in 10 minutes when selecting 0.4|D| top-scored mentions, but the baseline can not finish the inference process in an hour. Thus, we reduce the number of mentions to 0.3|D| without changing other settings. We report F1 score, inference time, and inference memory in Table 6. The results reveal that the model with a Dual cache significantly outperforms the baseline in both efficiency and effectiveness on a book-scale document. ## 6 Conclusion We proposed a new cache structure, Dual cache, to tackle long document coreference resolution. It contains two caches governed by different eviction polices. The local cache follows LRU policy to deal with local, clustered mentions, whereas the global cache follows LFU policy to target global entities within a long span. Empirical results show that the Dual cache lowers the cache miss up to half as before and improves the F1 score consistently on four benchmarks by offering an average gain of 5.7 F1 score for small caches while taking the same inference time. We also achieve a new stateof-the-art, 79.5 F1 score, on LitBank. ## Limitations This work is motivated by the intuition that a mention is more likely to refer to an entity that occurs shortly earlier and refers to a high frequency entity but has not recently used. For the latter pattern, we show examples of topic switching to explain why these phenomena happened, but we have not found a rigorous linguistic explanation to support this finding. We provide empirical results on four benchmarks plus a book. However, the scarcity of long-doc CR benchmarks hinders us from verifying on a larger scale. Our major contribution is a new cache design, but we also find the cache design becomes less matter when using a huge cache. NVIDIA A100 has 80G memory, which means it can handle a document of 100,000 words with a conventional CR model. As the GPU becomes larger and cheaper, the importance of studying cache design is weaker. ## Ethics Statement We comply with the ACL Ethics Policy. Coreference resolution is fundamental in NLP, which often serves as a component of other NLP applications. Since it does not directly interact with users, there are no additional ethics concerns. All the data used in this paper is public. We confirm that the scientific artifacts used in this paper comply with their license and intended use. We list the licenses in Table 7. ## References Saliha Azzam, Kevin Humphreys, and Robert J. Gaizauskas. 1999. Using coreference chains for text summarization. In *COREF@ACL*. Association for Computational Linguistics. Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first international conference on language resources and evaluation workshop on linguistics coreference, volume 1, pages 563–566. Citeseer. Alexandra Balahur, Jesús M. Hermida, and Andrés Montoyo. 2011. Detecting implicit expressions of sentiment in text based on commonsense knowledge. In WASSA@ACL, pages 53–60. Association for Computational Linguistics. David Bamman, Sejal Popat, and Sheng Shen. 2019. An annotated dataset of literary entities. In NAACL-HLT (1), pages 2138–2144. Association for Computational Linguistics. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. *CoRR*, abs/2004.05150. Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In *EMNLP/IJCNLP (1)*, pages 5924–5931. Association for Computational Linguistics. Abbas Ghaddar and Philippe Langlais. 2016. Wikicoref: An english coreference-annotated corpus of wikipedia articles. In *LREC*. European Language Resources Association (ELRA). Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In ACL, pages 254–262. The Association for Computer Linguistics. Fan Jiang and Trevor Cohn. 2021. Incorporating syntax and semantics in coreference resolution with heterogeneous graph attention network. In *NAACL-HLT*, pages 1584–1591. Association for Computational Linguistics. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. *Trans. Assoc. Comput. Linguistics*, 8:64– 77. Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel S. Weld. 2019. BERT for coreference resolution: Baselines and analysis. In EMNLP/IJCNLP (1), pages 5802–5807. Association for Computational Linguistics. Richard Koch. 2011. The 80/20 Principle: The Secret of Achieving More with Less: Updated 20th anniversary edition of the productivity and business classic. Hachette UK. Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In KR. AAAI Press. Chunhua Liu, Trevor Cohn, and Lea Frermann. 2021a. Commonsense knowledge in word associations and conceptnet. In *CoNLL*, pages 481–495. Association for Computational Linguistics. Zhengyuan Liu, Ke Shi, and Nancy F. Chen. 2021b. Coreference-aware dialogue summarization. In *SIGDIAL*, pages 509–519. Association for Computational Linguistics. Jing Lu and Vincent Ng. 2018. Event coreference resolution: A survey of two decades of research. In IJCAI, pages 5479–5486. ijcai.org. Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In *HLT/EMNLP*, pages 25–32. The Association for Computational Linguistics. George Orwell. 2021. *Animal farm*. Oxford University Press. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In EMNLP-CoNLL Shared Task, pages 1–40. ACL. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: an adversarial winograd schema challenge at scale. *Commun.* ACM, 64(9):99–106. Ulrich Schäfer, Christian Spurk, and Jörg Steffen. 2012. A fully coreference-annotated corpus of scholarly papers from the ACL anthology. In *COLING (Posters)*, pages 1059–1070. Indian Institute of Technology Bombay. Kumar Shridhar, Nicholas Monath, Raghuveer Thirukovalluru, Alessandro Stolfo, Manzil Zaheer, Andrew McCallum, and Mrinmaya Sachan. 2022. Longtonotes: Ontonotes with longer coreference chains. *CoRR*, abs/2210.03650. Shane Storks, Qiaozi Gao, and Joyce Y. Chai. 2019. Commonsense reasoning for natural language understanding: A survey of benchmarks, resources, and approaches. *CoRR*, abs/1904.01172. Raghuveer Thirukovalluru, Nicholas Monath, Kumar Shridhar, Manzil Zaheer, Mrinmaya Sachan, and Andrew McCallum. 2021. Scaling within document coreference to long texts. In *ACL/IJCNLP (Findings)*, volume ACL/IJCNLP 2021 of *Findings of ACL*, pages 3921–3931. Association for Computational Linguistics. Shubham Toshniwal, Sam Wiseman, Allyson Ettinger, Karen Livescu, and Kevin Gimpel. 2020. Learning to ignore: Long document coreference with bounded memory neural networks. In *EMNLP (1)*, pages 8519–8526. Association for Computational Linguistics. Shubham Toshniwal, Patrick Xia, Sam Wiseman, Karen Livescu, and Kevin Gimpel. 2021. On generalization in coreference resolution. *CoRR*, abs/2109.09667. Marc B. Vilain, John D. Burger, John S. Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In MUC, pages 45–52. ACL. Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, and Jiwei Li. 2020. Corefqa: Coreference resolution as querybased span prediction. In ACL, pages 6953–6963. Association for Computational Linguistics. Patrick Xia, João Sedoc, and Benjamin Van Durme. 2020. Incremental neural coreference resolution in constant memory. In *EMNLP (1)*, pages 8617–8624. Association for Computational Linguistics. Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Discourse-aware neural extractive text summarization. In ACL, pages 5021–5031. Association for Computational Linguistics. Liyan Xu and Jinho D. Choi. 2020. Revealing the myth of higher-order inference in coreference resolution. In *EMNLP (1)*, pages 8527–8533. Association for Computational Linguistics. Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. Docred: A large-scale document-level relation extraction dataset. In ACL (1), pages 764–777. Association for Computational Linguistics. ## A Annotation Of Animal Farm The text of Animal Farm is accessed from the site of Project Gutenberg Australia.5 We describe the annotation process as follows. Preprocessing We append the chapter titles in front of the texts of the chapters as the first sentences of the chapters, and concatenate the texts of the chapters into a single document. Before the annotation, we adopt a character list containing the 20 characters as the 20 entities,6and first extract the their mentions with string matching. Annotation Tool We developed a web-based annotation tool with a server developed with Flask7 as shown in Figure 8. The entities are listed in the top of the web page, shown in the blue boxes with the texts from the character list. The text of the book are splitted into multiple segments so we can view each segment in one page, and the annotator can switch the segment by the "Previous" and "Next" buttons. In each page, the segment of text is splitted into multiple paragraphs to avoid getting lost in the long text. When we click one entity in the blue box, e.g. "Mr. Jones", it means we are now annotating this entity now, and the mentions of this entity in the text gets gray colored. We annotate the mentions by clicking the words in the text. Once the word is clicked, it gets gray colored (e.g. "his", "he", "himeself"). Consecutive words are treated as one span of mention. When finished annotation for the current page, we click the "Save" button to save the annotation to server. Annotation Standard The mentions of the entities include pronouns (e.g. "he", "she", "I" "they", etc), possessive pronouns (e.g. "his", "her", "my", "their", etc) as the case in Ontonotes 5.0, and other noun phrases (e.g. "Jones", "Major", "the three dogs", etc). The annotation is done by a human expert for 5 hours and there are totally 1,722 mentions annotated. ## B Data And Model License Following the instruction of ACL, we list the science artifacts used in this work in Table 7. | Dataset | License | |-------------------------|-------------------| | OntoNotes | LDC | | LongtoNotes | CC 4.0 + LDC | | LitBank | CC 4.0 | | WikiCoref | No license stated | | Toshniwal et al. (2021) | No license stated | | Longformer | Apache-2.0 | Table 7: License of science artifacts. ## C More Metrics Besides the F1 score that we reported in the main text, we give results that are evaluated by commonly used metrics for the comparison with others, including MUC (Vilain et al., 1995), B-CUBED (Bagga and Baldwin, 1998), and CEAFE (Luo, 2005). For better visualization, we split results into three tables, Table 8, Table 9, and Table 10. There metrics show the same trend of F1 score that are shown in the main text. Figure 8: The annotation tool for annotating the book Animal Farm. | LitBank | | | | | |------------|------------|-------------|-------------|-------------| | Method | Cache Size | MUC | B 3 | CEAFE | | Dual cache | 25+25 | 87.7 ± 0.28 | 78.8 ± 0.72 | 69.8 ± 0.04 | | Dual cache | 100+100 | 88.1 ± 0.28 | 79.1 ± 0.28 | 70.8 ± 0.03 | | Dual cache | 250+250 | 88.2 ± 0.31 | 79.2 ± 0.71 | 71.0 ± 0.13 | Table 8: Results on LitBank using three different metrics. | OntoNotes 3 | CEAFE | | | | |---------------|------------|-------------|-------------|-------------| | Method | Cache Size | MUC | B | | | Dual cache | 25+25 | 85.3 ± 0.10 | 78.9 ± 0.23 | 74.5 ± 0.29 | | Dual cache | 100+100 | 86.2 ± 0.09 | 80.2 ± 0.22 | 76.7 ± 0.23 | | Dual cache | 250+250 | 86.3 ± 0.09 | 80.3 ± 0.22 | 76.8 ± 0.25 | Table 9: Results on OntoNotes using three different metrics. | WikiCoref | | | | | |-------------|------------|------|------|-------| | Method | Cache Size | MUC | B 3 | CEAFE | | Dual cache | 25+25 | 70.2 | 58.8 | 50.7 | | Dual cache | 100+100 | 71.7 | 61.6 | 54.1 | | Dual cache | 250+250 | 72.1 | 62.1 | 54.7 | Table 10: Results on WikiCoref using three different metrics. | LongtoNotes | | | | | | |---------------|------------|-------------|-------------|-------------|-------------| | Method | Cache Size | F1 | MUC | B 3 | CEAFE | | Dual cache | 25+25 | 76.2 ± 0.14 | 84.3 ± 0.23 | 74.4 ± 0.22 | 70.1 ± 0.09 | | Dual cache | 100+100 | 77.7 ± 0.11 | 85.2 ± 0.10 | 75.9 ± 0.21 | 72.1 ± 0.03 | | Dual cache | 250+250 | 77.9 ± 0.13 | 85.2 ± 0.12 | 76.3 ± 0.24 | 72.2 ± 0.04 | Table 11: Results on LongtoNotes using three different metrics. | LongtoNotesS | | | | | | |----------------|------------|-------------|-------------|-------------|-------------| | Method | Cache Size | F1 | MUC | B 3 | CEAFE | | Dual cache | 25+25 | 74.8 ± 0.11 | 85.2 ± 0.25 | 72.0 ± 0.21 | 67.1 ± 0.07 | | Dual cache | 100+100 | 76.3 ± 0.10 | 85.9 ± 0.09 | 73.9 ± 0.20 | 69.3 ± 0.03 | | Dual cache | 250+250 | 76.3 ± 0.08 | 85.7 ± 0.11 | 74.2 ± 0.27 | 69.2 ± 0.03 | Table 12: Results on the long-doc subset LongtoNotesS using three different metrics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations section ✓ A2. Did you discuss any potential risks of your work? Ethics Statement section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Yes, we did. ✓ A4. Have you used AI writing assistants when working on this paper? We used ChatGPT for polishing the camera ready section, mainly for the introduction part. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1, 4.2 ✓ B1. Did you cite the creators of artifacts you used? Section 4.1, 4.2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Ethics Statement section and Appendix ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Ethics Statement section and Appendix B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.1, 4.2 ## C ✓ **Did You Run Computational Experiments?** Section 4.4, 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.3, 4.4, we report the computation cost (MACs) and computing devices. The number of parameters can be founded in cited papers . The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.4, Appendix C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 5.5, Appendix ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. The data is annotated by authors as a test set. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
huang-etal-2023-knowledge
Knowledge Transfer in Incremental Learning for Multilingual Neural Machine Translation
https://aclanthology.org/2023.acl-long.852
In the real-world scenario, a longstanding goal of multilingual neural machine translation (MNMT) is that a single model can incrementally adapt to new language pairs without accessing previous training data. In this scenario, previous studies concentrate on overcoming catastrophic forgetting while lacking encouragement to learn new knowledge from incremental language pairs, especially when the incremental language is not related to the set of original languages. To better acquire new knowledge, we propose a knowledge transfer method that can efficiently adapt original MNMT models to diverse incremental language pairs. The method flexibly introduces the knowledge from an external model into original models, which encourages the models to learn new language pairs, completing the procedure of knowledge transfer. Moreover, all original parameters are frozen to ensure that translation qualities on original language pairs are not degraded. Experimental results show that our method can learn new knowledge from diverse language pairs incrementally meanwhile maintaining performance on original language pairs, outperforming various strong baselines in incremental learning for MNMT.
## Knowledge Transfer In Incremental Learning For Multilingual Neural Machine Translation Kaiyu Huang1, Peng Li∗1,4, Jin Ma5,6, Ting Yao5**, Yang Liu**∗ 1,2,3,4 1Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China 2Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China 3Beijing National Research Center for Information Science and Technology 4Shanghai Artificial Intelligence Laboratory, Shanghai, China 5Tencent 6Sch. of Comp. Sci. & Tech., University of Science and Technology of China {huangkaiyu,lipeng}@air.tsinghua.edu.cn; tessieyao@tencent.com majin01@mail.ustc.edu.cn; liuyang2011@tsinghua.edu.cn ## Abstract In the real-world scenario, a longstanding goal of multilingual neural machine translation (MNMT) is that a single model can incrementally adapt to new language pairs without accessing previous training data. In this scenario, previous studies concentrate on overcoming catastrophic forgetting while lacking encouragement to learn new knowledge from incremental language pairs, especially when the incremental language is not related to the set of original languages. To better acquire new knowledge, we propose a knowledge transfer method that can efficiently adapt original MNMT models to diverse incremental language pairs. The method flexibly introduces the knowledge from an external model into original models, which encourages the models to learn new language pairs, completing the procedure of knowledge transfer. Moreover, all original parameters are frozen to ensure that translation qualities on original language pairs are not degraded. Experimental results show that our method can learn new knowledge from diverse language pairs incrementally meanwhile maintaining performance on original language pairs, outperforming various strong baselines in incremental learning for MNMT.1 ## 1 Introduction Multilingual neural machine translation (MNMT) aims at handling multiple translation directions in a single model and achieves great success in recent years (Wenzek et al., 2021; Goyal et al., 2022; Cheng et al., 2022). However, the powerful MNMT models need to be retrained from scratch ∗Corresponding authors: Peng Li (lipeng@air.tsinghua. edu.cn) and Yang Liu (liuyang2011@tsinghua.edu.cn) 1Our code will be released at https://github.com/ THUNLP-MT/ktnmt | Method | en→uk | en→bn | |----------------------|---------|---------| | From-scratch | 23.70 | 15.54 | | Replay-Based | 21.37 | 10.32 | | Regularization-Based | 20.33 | 7.84 | | Adapter | 19.92 | 9.46 | Table 1: The BLEU scores of incremental learning methods on new translation directions. The original model is mBART25 which does not support the languages of Ukrainian (uk) and Bengali (bn). using a mixture of original and incremental training data when new language pairs arrive (Tang et al., 2020). Considering that the original training data of MNMT models is often large-scale (Fan et al., 2021; Costa-jussà et al., 2022), and thus the method that utilizes original data to train incrementally is time-consuming and cumbersome (Ebrahimi and Kann, 2021). Therefore, a practical scenario is that these models can continually support new language pairs while preserving the previously learned knowledge without accessing previous training data, which belongs to incremental learning, and has drawn much attention recently (Dabre et al., 2020; Gu et al., 2022; Zhao et al., 2022). In this scenario, existing studies attempt to overcome the issue of catastrophic forgetting (French, 1993) on original language pairs, such as replaybased methods (Garcia et al., 2021; Liu et al., 2021) and regularization-based methods (Huang et al., 2022; Zhao et al., 2022). However, the methods primarily focus on balancing performance between old and new translation directions and use only incremental data to acquire new knowledge, restricting the development of new language pairs (Escolano et al., 2021; Ke and Liu, 2022). As shown in Table 1, prior incremental learning methods cannot 15286 achieve comparable performance on new translation directions, compared with training a bilingual translation model from scratch. Therefore, is it possible to leverage external knowledge without increasing the amount of incremental training data to facilitate the learning procedure of new language adaptation? Fortunately, the development of Transformerbased language models unveils that Feed-Forward Networks (FFN) might be a core component that stores the knowledge (Geva et al., 2021; Dai et al., 2022; Geva et al., 2022; Vázquez et al., 2022). In these efforts, external knowledge is injected into FFN layers to enhance the performance of pre-trained language models. More importantly, it opens the door to leveraging the knowledge from neural models in adapting MNMT models to incremental language pairs. In this work, considering the knowledge in neural networks, we propose a knowledge transfer (KT) method that can efficiently adapt MNMT models to diverse incremental language pairs. First, we convert incremental training data into continuous representation by additional parameters, forming pluggable modules. Then the pluggable modules can be flexibly introduced in the embedding layer and FFN layers of original MNMT models, respectively. The two stages are regarded as a process of knowledge transfer and equip original models with knowledge of unseen languages before adaptation, alleviating the representation gap between original models and introduced parameters. And the knowledge transfer method can further provide better optimization than training from scratch for the introduced parameters. Moreover, except for the pluggable modules, all the parameters of the original model are frozen. Therefore, our architecture can also retain previously learned knowledge from the original translation model to completely maintain the translation qualities on original language pairs. To sum up, our contributions are as follows: - We propose a knowledge transfer method with pluggable modules to acquire more knowledge of new languages, which achieves competitive translation qualities on incremental language pairs. - Our architecture can efficiently adapt to diverse language pairs in incremental learning and naturally retain the performance on origi- nal language pairs when the original training data is not available. - Experiments show that our method can learn knowledge from the other large-scale translation models for adapting original models with different sizes to new language pairs. ## 2 Related Work Replay-Based Methods. The first branch of works utilizes previous training data or create pseudo data that is essentially a replay on old tasks (de Masson D'Autume et al., 2019; Liu et al., 2021; Kanwatchara et al., 2021). Specifically, previous data sometimes cannot be accessed due to data protection and security (Feyisetan et al., 2020; Qu et al., 2021). In this scenario, Sun et al. (2019) replay pseudo samples of previous tasks, which can avoid forgetting previously learned knowledge. However, the pseudo data with noise significantly hurts the performance for both old and new tasks, and the data generation procedure requires additional time costs, restricting the efficiency of incremental learning (Peng et al., 2020; Garcia et al., 2021). In contrast to these methods, our approach does not require extra data and is more flexible in the real-world scenario. Regularization-Based Methods. The second branch of works introduces additional penalty terms to the learning objective on the parameters, alleviating the issue of catastrophic forgetting (Kirkpatrick et al., 2017; Thompson et al., 2019; Castellucci et al., 2021; Gu et al., 2022). In particular, Shao and Feng (2022) propose a complementary online knowledge distillation method that utilizes previous models (teachers) to supplement current model (student) training. In contrast to these methods, our architecture can naturally avoid forgetting previously learned knowledge and retain the performance of old tasks. It allows our method to focus solely on learning new knowledge better instead of preserving old knowledge. Parameter-Isolation Based Methods. The third branch of works introduces extra parameters to support new tasks and freeze all original parameters to completely retain the performance on previous tasks (Bapna and Firat, 2019; Madotto et al., 2021; Zhu et al., 2021). However, the methods only utilize incremental training data to optimize the additional parameters which are randomly initialized (Escolano et al., 2021; He et al., 2021), ![2_image_0.png](2_image_0.png) hindering old models from learning knowledge of incremental languages (Dabre et al., 2020; Ke and Liu, 2022). Chalkidis et al. (2021) combine pseudo data with the prompt-tuning method to alleviate this issue for multilingual tasks. Our method attempts to exploit the potentiality of incremental training data to acquire new knowledge via knowledge transfer while not leveraging extra data, compared with existing methods. ## 3 Method In this work, we aim to completely maintain the performance of previous translation tasks without original training data. As shown in Figure 1, we introduce additional components for new language pairs and adopt a strategy that does not disturb the parameters of the original model. We hope to minimize the impact on the original model during the incremental learning process. As a result, we exclusively concentrate on how to handle the situation of learning new language pairs. Furthermore, the additional components are transferred from parameters in another pre-trained translation model, in a similar way to a pluggable module, rather than randomly initialized, as shown in Figure 2. It can also reduce the cost of learning new language pairs during the training stage, enhancing the practicability and efficiency of incremental learning methods in the real-world scenario. ## 3.1 Task Definition An ideal requirement is that original MNMT models can be continually updated to support new language pairs while retaining translation qualities on original language pairs without accessing previous training data, as shown in Figure 1. Formally, an MNMT model is trained on initially selecting a set of available parallel data D = {D1, ..., Di*, ...,* DN } which covers N languages, and Di represents the original parallel training corpus on the i-th language pair. The training objective of the initial MNMT model is to maximize the log-likelihood L: $${\mathcal{L}}_{{\mathcal{D}}}(\theta)=\sum_{{\mathcal{D}}_{i}\in{\mathcal{D}}}\sum_{(\mathbf{x},\mathbf{y})\in{\mathcal{D}}_{i}}\log p(\mathbf{y}|\mathbf{x};\theta)\quad(1)$$ where θ represents the trainable parameters of MNMT models. The source sentence is denoted as x, while the target sentence is denoted as y. In order to specify the source and target languages, two language tokens are added at the beginning of each source and target sentence, respectively. Incremental learning is updating the original MNMT model on an updated set of parallel data D(U) = {D1, ..., DN *, ...,* DM} which covers M languages, and *N > M*. A dilemma, though, is that the original training data D is often unavailable due to data security. Thus, we can only utilize the new data D′ = {DN+1, ..., Dj *, ...,* DM} to incrementally train the original MNMT model, and Dj represents the incremental parallel training corpus on the j-th language pair. The optimization objective in incremental learning is given by: $${\mathcal{L}}_{{\mathcal{D}}^{\prime}}(\theta)=\sum_{{\mathcal{D}}_{j}\in{\mathcal{D}}^{\prime}}\sum_{(\mathbf{x,y})\in{\mathcal{D}}_{j}}\log p(\mathbf{y}|\mathbf{x};\theta)\quad(2)$$ As a result, the number of language pairs that the MNMT model support increases from N to M. ## 3.2 Knowledge Transfer Via Pluggable Modules Based on the original MNMT model, we open up additional spaces for adapting to new language pairs, as shown in Figure 2. However, the additional spaces with initialized parameters are weak in their abilities to capture the shared linguistic features among different languages. Because the linguistic distribution of the original model is fitted previously, which leads to a representation gap between the original model and introduced spaces. Thus, we not only leverage new training data but also exploit the potentiality of these data by introducing two types of pluggable modules to bridge the representation gap. ↔︎ ↔︎ ↔︎ Vocabulary Adaptation. On the one hand, if the new language has a distinct script with the ![3_image_0.png](3_image_0.png) set of original languages, a certain proportion of the out-of-vocabulary (OOV) tokens with unclear semantics will occur due to different character sets between the original and incremental languages, which hinders performance on new language pairs (Zhang et al., 2022). However, the external model is not troubled by this situation and covers sufficient tokens for the incremental language pairs. Therefore, we expand an extra space in the embedding layer and concatenate the embeddings of non-overlap tokens between the original model and the external model, bridging the representation gap through vocabulary adaptation. Feed-Forward Adaptation. On the other hand, FFN layers can be seen as key-value memories, which has previously been investigated by (Sukhbaatar et al., 2019). Each FFN layer consists of two linear networks with a non-linearity activation function and is given by: $$\mathrm{FFN}(\mathbf{H})=f(\mathbf{H}\cdot K^{\top})\cdot V$$ where *K, V* ∈ R dffn×dmodel are parameter matrices, dffn is the dimension of the FFN, H is the input hidden state of FFN layers and the dimension is dmodel, f represents the non-linearity activation function, e.g., GELU and ReLU. The first linear network is regarded as the keys and activates a set of intermediate neurons. The second linear network integrates the corresponding value matrices by weighted sum, using the activated neurons as weights. The FFN layers store knowledge in the key-value memories manner. Thus, we leverage the continuous representation in the FFN layers of the external model that stores useful language knowledge and transfer the knowledge into the FFN layers of the original model, forming a type of pluggable module. We combine the output of the original FFN layers and the injected pluggable modules to share linguistic knowledge, alleviating the representation gap through FeedForward layers adaptation. The fusion FFN output H(f) is given by: $\bf H^{(f)}=$ FFNoriginal(H) + FFNexternal(H) (4) ## 3.3 Training And Inference During the training stage, previously learned knowledge can be naturally preserved with a frozen training strategy, which can avoid the issue of catastrophic forgetting. The training procedure of our method is divided into two stages, as shown in Figure 2. Stage 1: External Model Training. To convert incremental training data into continuous representation by additional parameters. We first leverage the incremental training data to train an external Transformer-based neural network. $${\mathcal{L}}_{{\mathcal{D}}^{\prime}}({\hat{\theta}})=\sum_{{\mathcal{D}}_{j}\in{\mathcal{D}}^{\prime}}\sum_{(\mathbf{x},\mathbf{y})\in{\mathcal{D}}_{j}}\log p(\mathbf{y}|\mathbf{x};{\hat{\theta}})\quad(5)$$ where ˆθ represents the trainable parameters of the external neural models. We only retain the parameters in the embedding layer (ˆθe) and FFN layers ( ˆθf ) of the external model as the pluggable modules for the next training stage. Stage 2: Pluggable Module Tuning. Directly transferring the additional parameters limits the MNMT model capacity, especially for the language pairs with sufficient data. Therefore, we further train the pluggable modules in the second stage: $$\mathcal{L}_{\mathcal{D}^{\prime}}(\hat{\theta}_{e},\hat{\theta}_{f})=\sum_{\mathcal{D}_{j}\in\mathcal{D}^{\prime}}\sum_{(\mathbf{x},\mathbf{y})\in\mathcal{D}_{j}}\log p(\mathbf{y}|\mathbf{x};\hat{\theta}_{e},\hat{\theta}_{f})\tag{6}$$ where ˆθe and ˆθf represent the trainable parameters of pluggable modules in the embedding layer and FFN layers, respectively. Inference. For the inference stage, the original translation directions follow the original model without any pluggable modules while the incremental translation directions require the concatenation of the original model and pluggable modules. ## 4 Experiments 4.1 Datasets To ensure the reliability of the experiments, the original MNMT model is implemented on a multilingual machine translation dataset2(WMT-7) that covers seven languages (Farhad et al., 2021). And we provide four incremental languages considered for incremental adaptation3. An extensive description and comprehensive information regarding the datasets for all languages can be found in Appendix A. All training data are sourced from the WMT (Workshop on Machine Translation) and FLoRes datasets, ensuring reliable quality. Language Choice. As contrasted to previous studies for incrementally adapting translation models to new languages, we further provide a comprehensive language setting. Previous works often investigate the situation of the related languages 2https://www.statmt.org/ 3https://data.statmt.org/cc-matrix/ which are similar language families and scripts to the original languages. In our setting, the incremental languages have distinct scripts and belong to several language families, which leads to a serious language representation gap. Please refer to Appendix A.2 for more details of language consideration in our setting. ## 4.2 Implementation Details Baselines. We implement a vanilla Transformer for original languages as the initial model which is trained on multiple parallel data jointly (Johnson et al., 2017). And we compare the proposed method with different architectures for adapting the original model to new language pairs. All methods utilize the preprocessing script of a shared BPE model with 32k tokens based on the Sentencepiece library4. The baselines can be listed as follows: From-scratch (Johnson et al., 2017): A vanilla Transformer is trained from scratch on the incremental languages with the multilingual training strategy. Note that the models do not support the original translation directions. Adapter (Bapna and Firat, 2019): We follow previous adapter architectures and introduce extra parameters in each FFN layer of the original MNMT model. All original parameters are frozen and only the adapters are trainable. Extension (Lakew et al., 2018): On the basis of the adapter architecture, we extend the original vocabulary (VP) for new languages adaptation. Initially, a supplementary vocabulary (VQ) is created using the standard Byte-Pair Encoding (BPE) procedure from the incremental training data. Subsequently, VP and VQ are combined to form a unified vocabulary V, which is defined as V = VP ∪ VQ. The embeddings of the original models are expanded to match the size of the complete vocabulary (V), and the additional embeddings are initialized using a Gaussian distribution. Serial/Parallel (Zhu et al., 2021): We follow Zhu et al. (2021) to introduce adapters in the serial or parallel connection manner. Our pluggable modules in the FFN layers can also be converted into a serial manner. Training Setup. We implement all models based on the open-source toolkit fairseq5(Ott et al., 2019). For a fair comparison, we employ the same configuration of Transformer-Big (Vaswani et al., 2017) 4https://github.com/google/sentencepiece 5https://github.com/pytorch/fairseq $$\mathbf{.}\,\mathrm{{\large{org/}}}$$ | Method | Modules | WMT16 | WMT14 | FLoRes | FLoRes | | | | | |--------------|-----------|---------|---------|----------|----------|-------|-------|-------|-------| | ro→en | en→ro | de→en | en→de | bn→en | en→bn | uk→en | en→uk | | | | From-scratch | - | 30.63 | 23.58 | 31.35 | 26.55 | 30.58 | 15.54 | 28.22 | 23.70 | | Adapter | Serial | 34.39 | 22.85 | 30.34 | 19.43 | 16.94 | 0.12 | 28.47 | 19.01 | | Parallel | 32.98 | 20.74 | 25.97 | 17.24 | 13.51 | 0.11 | 26.04 | 15.42 | | | Extension | Serial | 32.06 | 22.36 | 30.53 | 22.05 | 27.78 | 13.42 | 30.65 | 21.12 | | Parallel | 32.31 | 20.87 | 28.66 | 20.77 | 27.51 | 12.49 | 30.69 | 20.60 | | | KT (Ours) | Serial | 34.38 | 25.52 | 31.24 | 26.33 | 30.73 | 15.63 | 31.91 | 24.46 | | Parallel | 34.44 | 25.62 | 32.04 | 26.73 | 30.81 | 15.71 | 32.01 | 25.20 | | Table 2: Results in BLEU of adding a single language pair merely for MNMT in incremental learning. The highest score on each translation direction is highlighted in **bold**. The second highest score on each translation direction is highlighted in underline. Method Modules WMT16 WMT14 FLoRes FLoRes ro→en en→ro de→en en→de bn→en en→bn uk→*en en*→uk From-scratch - 35.42 25.90 31.33 25.34 30.40 14.66 32.71 25.05 Adapter Serial 31.02 15.11 24.34 7.86 10.04 0.54 23.19 9.91 Parallel 35.38 14.58 29.57 6.56 24.95 0.34 31.52 9.40 Extension Serial 32.17 19.88 26.90 15.94 24.40 9.88 27.56 14.33 Parallel 36.28 23.17 30.45 20.92 28.78 11.62 31.94 19.59 KT (Ours) Serial 36.34 25.53 30.96 24.52 29.82 12.95 32.95 24.31 Parallel 36.88 26.48 31.65 25.65 30.10 15.10 33.86 **25.42** in our experiments. All original parameters are frozen during the incremental learning procedure. More model training details are provided in Appendix B.1. Evaluation. We report the detokenized casesensitive BLEU of models by the SacreBLEU evaluation script (Post, 2018) 6. We show the training time of each method in terms of kiloseconds and use the beam search decoding algorithm with a beam size of 5 and a length penalty of 1.0. ## 4.3 Main Results Adding A Single Language As shown in Table 2, we investigate the translation qualities when a new language pair arrive. The results demonstrate that our proposed method (KT) outperforms several baselines in terms of average BLEU scores for all incremental translation directions. Specifically, KT achieves an average BLEU score of 27.14 for xx→en and 21.32 for en→xx translations. In particular, based on KT, the pluggable modules that are injected in a parallel manner further improve the performance over the serial manner on all incremental language pairs. The Adapter methods are more vulnerable to adapting original models to some incremental language pairs, e.g., 0.12 BLEU scores on en→bn and 16.94 BLEU scores on bn→en. Because the methods with an unaltered vocabulary result in the sentence being broken up into semantically meaningless OOV tokens. Although the Extension methods can alleviate the issue of OOV tokens and fragmentary semantics by rebuilding embedding layers, the | Method | WMT-7 (∆BLEU) | WMT16 (BLEU) | | | |--------------|-----------------|----------------|-------|-------| | xx→en | en→xx | ro→en | en→ro | | | From-scratch | - | - | 30.63 | 23.58 | | Fine-tuning | -18.51 | -14.00 | 33.87 | 25.22 | | Replay | -0.74 | -1.24 | 29.33 | 19.00 | | Replay† | -3.86 | -2.28 | 27.51 | 21.93 | | EWC | -0.99 | -2.82 | 28.14 | 17.87 | | Self-KD | -5.29 | -8.33 | 29.55 | 19.47 | | LFR | -1.01 | -2.56 | 32.13 | 22.73 | | Prompt | -0.00 | -0.00 | 31.33 | 15.64 | | Prompt† | -0.00 | -0.00 | 33.21 | 22.77 | | Prefix | -0.00 | -0.00 | 32.71 | 18.74 | | Prefix† | -0.00 | -0.00 | 33.17 | 22.62 | | KT (Ours) | -0.00 | -0.00 | 34.44 | 25.62 | | No. | Method | Model Size | Incremental Language Pairs (BLEU) | | | | | |----------|----------|--------------|-------------------------------------|-------|-------|-------|-------| | Original | External | ro→en | en→ro | uk→en | en→uk | | | | 1 | KD | 0.4B | 0.4B | 32.68 | 24.36 | 30.67 | 23.48 | | 2 | KD | 0.4B | 1.2B | 33.98 | 24.33 | 31.56 | 24.76 | | 3 | KT | 0.4B | 0.4B | 34.15 | 24.54 | 32.33 | 24.93 | | 4 | KT | 0.4B | 1.2B | 34.37 | 24.74 | 32.77 | 25.53 | | 5 | KT+KD | 0.4B | 0.4B | 33.31 | 25.72 | 32.49 | 25.18 | | 6 | KT+KD | 0.4B | 1.2B | 35.34 | 26.43 | 33.66 | 26.01 | | 7 | KD | 1.2B | 0.4B | 31.52 | 21.94 | 30.60 | 24.55 | | 8 | KD | 1.2B | 1.2B | 32.89 | 23.38 | 31.77 | 24.89 | | 9 | KT | 1.2B | 0.4B | 34.44 | 24.50 | 32.59 | 25.67 | | 10 | KT | 1.2B | 1.2B | 35.07 | 24.58 | 32.96 | 26.05 | | 11 | KT+KD | 1.2B | 0.4B | 34.56 | 23.21 | 32.66 | 25.98 | | 12 | KT+KD | 1.2B | 1.2B | 36.15 | 27.18 | 34.23 | 27.14 | extended parameters are still hard to optimize. The knowledge transfer method can further guide additional parameters to achieve greater improvements on these language pairs. Considering the different introduced manners of pluggable modules, based on the baselines, parallel modules tend to be weaker than serials. It demonstrates that parallel architecture is more difficult to learn new knowledge from limited training data. And the results show that the knowledge transfer method mitigates this issue and explores the potential of parallel architectures, achieving obvious improvement on all eight translation directions, even outperforming the model training from scratch. ## Adding Multiple Languages Simultaneously As shown in Table 3, we examine the translation qualities in incremental learning when eight new language pairs arrive simultaneously. The results show that our proposed method can also achieve better performance compared with the baselines. Notably, in the low resource scenario (ro and uk), our method of adding multiple languages obtains better performance compared with adding a single language. Besides, adding multiple languages simultaneously in incremental training makes more training samples available and it facilitates the optimization of challenging pluggable modules in a parallel manner. In this setting, the parallel pluggable modules of all methods demonstrate better performance than the serial. Moreover, the situation of incremental language pairs that are difficult to learn is still alive with the Adapter. It even shows more severe degeneration on the other incremental languages (17.24 BLEU on en→de of adding a single language while 6.56 BLEU of adding multiple languages simultaneously). However, our method does not significantly been disturbed by different conditions in incremental learning, which exhibits good stability, as shown in Table 2 and Table 3. ## Degeneration In Incremental Learning. As shown in Table 4, to demonstrate the reliability and effectiveness, we investigate the degeneration on the original translation directions, compared with various outstanding continual learning methods. The results demonstrate that our method achieves competitive performance on the incremental translation directions and even outperforms the fine-tuning strategy (up to +0.57/+0.40 for ro→en and en→ro respectively). Please refer to Appendix 4.6 and B.2 for more details of the original models and all baselines. Besides, the results also show that prior replaybased and regularization-based methods still suffer from pronounced degeneration on the original translation directions without the original data. Although no degradation has occurred using Prompt and Prefix, they are vulnerable to learning new knowledge from updated training samples incrementally. More importantly, considering the reliability of the comparison, we have only selected the translation directions between Romanian and English. Because previous methods cannot obtain comparable results when the incremental languages are not related to the set of original languages. ## 4.4 Results On Pre-Trained Models As shown in Table 5, we leverage pre-trained M2M100 models (Fan et al., 2021) as the external model and investigate the effectiveness of different knowl- No. Transfer Scopes Incremental Language Pairs(BLEU) Embedding FFN ro→en en→ro de→en en→de bn→en en→bn uk→*en en*→uk 1 ✗ ✗ 32.31 20.87 28.66 20.77 27.51 12.49 30.69 20.60 2 ✗ ✓ 33.04 24.48 30.59 25.35 29.04 14.12 30.67 23.72 3 ✓ ✗ 34.08 24.56 28.74 21.15 28.67 13.99 31.66 23.44 4 ✓ ✓ **34.44 25.62 32.04 26.73 30.81 15.71 32.01 25.20** edge transfer methods. Knowledge distillation (KD) (Hinton et al., 2015) is a widely used technique to transfer knowledge between models. The results show that only utilizing KD cannot achieve comparable performance for incremental language adaptation. However, KD can be arbitrarily integrated into our method (KT) and further facilitate the procedure of knowledge transfer. The combination of KD and KT achieves better translation qualities than only using one alone based on all model settings. Besides, both KD and KT are better at learning knowledge from the large pre-trained models. It proves that the large pre-trained model contains more useful knowledge. And we find that the size between models also determines the performance on incremental translation directions. The small M2M-100 model (0.4B) is beneficial for the same size original model (0.4B) but is insufficient to support the large original model (1.2B). In contrast, the large M2M-100 model (1.2B) plays a positive role in both small and large original models by knowledge transfer. However, the small original model (0.4B) limits learning sufficient knowledge from the large M2M-100 model according to the comparison between No.6 and No.12, as shown in Table 5. | Method | ro→en | en→ro | bn→en | en→bn | |-----------------|---------|---------|---------|---------| | Ours | 34.44 | 25.62 | 30.81 | 15.71 | | +Self-Attention | 34.14 | 25.35 | 30.37 | 15.28 | | +Gate-Fusion | 33.39 | 24.52 | 29.85 | 14.54 | | +Dropout | 33.98 | 25.42 | 30.01 | 14.13 | ## 4.5 Ablation Studies Effects On Transfer Areas As shown in Table 6, we further investigate the effectiveness of our method in different transfer areas. The results demonstrate that our method can help each pluggable module to be better optimized separately and achieves better performance when both two pluggable modules are injected through knowledge transfer for all incremental languages. Specifically, the method improves translation qualities related to Romanian and Ukrainian when it affects the pluggable module in the embedding layer. On the contrary, it is more effective to transfer the knowledge for the pluggable modules in the FFN layers on translation directions related to German and Bengali, according to the comparison between 2 and 3. A possible reason is that the resource of different language pairs influences the efficiency of knowledge transfer. ## Effects On Pluggable Modules Previous parameter-isolation based methods propose various components to introduce additional parameters in the hidden layers (He et al., 2021). As shown in Table 7, inspired by them, we modify the usage of the pluggable modules in the hidden layers and our method is stable on the four translation directions. In particular, we also inject the pluggable modules in the Self-Attention layer. However, the special modification of pluggable modules does not demonstrate effective performance in incremental learning for MNMT. ## 4.6 Results On Original Language Pairs To demonstrate the validity and reliability of our method, we build two powerful MNMT models as the original models. As shown in Table 8, the original models achieve state-of-the-art performance on all original translation directions, compared with the other powerful MNMT models. ## 4.7 More Comparisons Due to space limitation, we provide a more detailed analysis of our method in Appendix C, including the training cost of the incremental learning, the | Model | Size | en→ha | en→is | en→ja | en→pl | en→ps | en→ta | AVG. | |---------|--------|---------|---------|---------|---------|---------|---------|--------| | Ours | 0.4B | 12.41 | 21.30 | 14.48 | 26.43 | 4.93 | 11.26 | 15.14 | | Ours | 1.2B | 13.42 | 22.62 | 16.12 | 27.91 | 5.41 | 12.03 | 16.25 | | M2M-100 | 0.4B | 2.75 | 13.12 | 9.51 | 22.55 | 2.92 | 1.67 | 8.75 | | M2M-100 | 1.2B | 6.14 | 18.60 | 11.67 | 28.08 | 4.79 | 1.82 | 11.85 | | Model | Size | ha→en | is→en | ja→en | pl→en | ps→en | ta→en | AVG. | | Ours | 0.4B | 12.88 | 31.19 | 19.14 | 30.20 | 11.21 | 16.54 | 20.19 | | Ours | 1.2B | 13.41 | 32.31 | 19.27 | 31.91 | 12.29 | 17.62 | 21.14 | | M2M-100 | 0.4B | 4.78 | 22.44 | 10.59 | 25.75 | 7.65 | 2.82 | 12.34 | | M2M-100 | 1.2B | 9.24 | 29.33 | 13.43 | 28.87 | 10.91 | 2.70 | 15.75 | visualization of sentence representations on all language pairs, and the case study on new language pairs, demonstrating the effectiveness of the knowledge transfer method in incremental learning for new language adaptation. ## 5 Conclusion In this work, we propose a knowledge transfer method in incremental learning for MNMT, which leverages the knowledge from neural models. It can encourage original models to learn new knowledge from updated training data while naturally mitigating the issue of degradation on previous translation directions. Moreover, it is more efficient to utilize the knowledge transfer scheme than introducing randomly initialized parameters in incremental learning. Experimental results demonstrate that the proposed method outperforms several strong baselines in the comprehensive language consideration. ## Limitations In this work, we attempt to extend an existing MNMT model to support new language pairs with an acceptable expense. In addition to the advantages, our method has the following limitations: (1) Additional introduced parameters. We utilize the parameter-isolation based method to support new language pairs. The total parameters of the MNMT model have been increased by pluggable modules to achieve better performance than prior studies. In the future, we will compress the number of parameters to the same size of original models meanwhile preserve the performance on all translation directions. (2) The gap between our scenario and the realworld scenario. Our proposed method is a whitebox service in incremental learning. Thus, we train a powerful MNMT model as the original model instead of directly utilizing existing models from the Internet. And we only consider eight incremental language pairs due to the limitation of computation resources. We try our best to simulate the realworld scenario and we will apply our proposed method for large-scale pre-trained MNMT models (e.g., NLLB 54.5B and M2M 12B) to validate the effectiveness in industrial scenarios. ## Acknowledgements This work is supported by the National Key R&D Program of China (2022ZD0160502) and the National Natural Science Foundation of China (No. 61925601, 62276152, 62236011). We sincerely thank the reviewers for their insightful comments and suggestions to improve the quality of the paper. ## References Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. *arXiv preprint arXiv:1907.05019*. Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538–1548. Giuseppe Castellucci, Simone Filice, Danilo Croce, and Roberto Basili. 2021. Learning to solve nlp tasks in an incremental number of languages. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 837–847. Ilias Chalkidis, Manos Fergadiotis, and Ion Androutsopoulos. 2021. Multieurlex-a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6974–6996. Yong Cheng, Ankur Bapna, Orhan Firat, Yuan Cao, Pidong Wang, and Wolfgang Macherey. 2022. Multilingual mix: Example interpolation improves multilingual neural machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4092–4102. Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. 2020. A survey of multilingual neural machine translation. *ACM Computing Surveys (CSUR)*, 53(5):1– 38. Damai Dai, Wenbin Jiang, Qingxiu Dong, Yajuan Lyu, Qiaoqiao She, and Zhifang Sui. 2022. Neural knowledge bank for pretrained transformers. *arXiv preprint* arXiv:2208.00399. Cyprien de Masson D'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. 2019. Episodic memory in lifelong language learning. Advances in Neural Information Processing Systems, 32. Abteen Ebrahimi and Katharina Kann. 2021. How to adapt your pretrained multilingual model to 1600 languages. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4555–4567. Carlos Escolano, Marta R Costa-Jussà, and José AR Fonollosa. 2021. From bilingual to multilingual neural-based machine translation by incremental training. Journal of the Association for Information Science and Technology, 72(2):190–203. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric multilingual machine translation. *J. Mach. Learn. Res.*, 22(107):1–48. Akhbardeh Farhad, Arkhangorodsky Arkady, Biesialska Magdalena, Bojar Ondˇrej, Chatterjee Rajen, Chaudhary Vishrav, Marta R Costa-jussa, España-Bonet Cristina, Fan Angela, Federmann Christian, et al. 2021. Findings of the 2021 conference on machine translation (wmt21). In Proceedings of the Sixth Conference on Machine Translation, pages 1–88. Association for Computational Linguistics. Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, and Tom Diethe. 2020. Privacy-and utility-preserving textual analysis via calibrated multivariate perturbations. In *Proceedings of the 13th International Conference* on Web Search and Data Mining, pages 178–186. Robert French. 1993. Catastrophic interference in connectionist networks: Can it be predicted, can it be prevented? *Advances in Neural Information Processing Systems*, 6. Xavier Garcia, Noah Constant, Ankur Parikh, and Orhan Firat. 2021. Towards continual learning for multilingual machine translation via vocabulary substitution. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1184–1192. Mor Geva, Avi Caciularu, Kevin Ro Wang, and Yoav Goldberg. 2022. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. *arXiv preprint arXiv:2203.14680*. Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 5484–5495. Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzman, and Angela Fan. 2022. The flores-101 evaluation benchmark for low-resource and multilingual machine translation. *Transactions of the Association for* Computational Linguistics, 10:522–538. Shuhao Gu, Bojie Hu, and Yang Feng. 2022. Continual learning of neural machine translation within low forgetting risk regions. *arXiv preprint* arXiv:2211.01542. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. In *International Conference on Learning Representations*. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015. Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531, 2(7). Yichong Huang, Xiaocheng Feng, Xinwei Geng, and Bing Qin. 2022. Omniknight: Multilingual neural machine translation with language-specific selfdistillation. *arXiv preprint arXiv:2205.01620*. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Kasidis Kanwatchara, Thanapapas Horsuwan, Piyawat Lertvittayakumjorn, Boonserm Kijsirikul, and Peerapon Vateekul. 2021. Rational lamol: A rationalebased lifelong learning framework. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2942–2953. Zixuan Ke and Bing Liu. 2022. Continual learning of natural language processing tasks: A survey. arXiv preprint arXiv:2211.12701. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521–3526. Surafel M Lakew, Aliia Erofeeva, Matteo Negri, Marcello Federico, and Marco Turchi. 2018. Transfer learning in multilingual neural machine translation with dynamic vocabulary. In International Workshop on Spoken Language Translation. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582– 4597. Zihan Liu, Genta Indra Winata, and Pascale Fung. 2021. Continual mixed-language pre-training for extremely low-resource neural machine translation. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2706–2718. Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul A Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Pascale Fung, and Zhiguang Wang. 2021. Continual learning in task-oriented dialogue systems. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7452–7467. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Wei Peng, Chongxuan Huang, Tianhao Li, Yun Chen, and Qun Liu. 2020. Dictionary-based data augmentation for cross-domain neural machine translation. arXiv preprint arXiv:2004.02577. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Chen Qu, Weize Kong, Liu Yang, Mingyang Zhang, Michael Bendersky, and Marc Najork. 2021. Natural language understanding with privacy-preserving bert. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management, CIKM 2021. Association for Computing Machinery. Chenze Shao and Yang Feng. 2022. Overcoming catastrophic forgetting beyond continual learning: Balanced training for neural machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2023–2036, Dublin, Ireland. Association for Computational Linguistics. Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Herve Jegou, and Armand Joulin. 2019. Augmenting self-attention with persistent memory. *arXiv* preprint arXiv:1907.01470. Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. 2019. Lamol: Language modeling for lifelong language learning. In International Conference on Learning Representations. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401. Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In *Proceedings of the* 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2062–2068. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. *Journal of machine* learning research, 9(11). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6000–6010. Raúl Vázquez, Hande Celikkanat, Vinit Ravishankar, Mathias Creutz, and Jörg Tiedemann. 2022. A closer look at parameter contributions when training neural language and translation models. In *Proceedings of* the 29th International Conference on Computational Linguistics, pages 4788–4800. Guillaume Wenzek, Vishrav Chaudhary, Angela Fan, Sahir Gomez, Naman Goyal, Somya Jain, Douwe Kiela, Tristan Thrush, and Francisco Guzmán. 2021. Findings of the wmt 2021 shared task on large-scale multilingual machine translation. In *Proceedings of* the Sixth Conference on Machine Translation, pages 89–99. Shiyue Zhang, Vishrav Chaudhary, Naman Goyal, James Cross, Guillaume Wenzek, Mohit Bansal, and Francisco Guzman. 2022. How robust is neural machine translation to language imbalance in multilingual tokenizer training? arXiv preprint arXiv:2204.14268. Yang Zhao, Junnan Zhu, Lu Xiang, Jiajun Zhang, Yu Zhou, Feifei Zhai, and Chengqing Zong. 2022. Life-long learning for multilingual neural machine translation with knowledge distillation. arXiv preprint arXiv:2212.02800. Yaoming Zhu, Jiangtao Feng, Chengqi Zhao, Mingxuan Wang, and Lei Li. 2021. Counter-interference adapter for multilingual machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2812–2823. ## A Dataset Details We utilize six language pairs to train the original MNMT model that covers 12 translation directions and 7 languages (WMT-7). All the original training data comes from the recent WMT general translation track. And we conduct eight incremental language pairs in incremental learning from the WMT news translation track and FLoRes. All data follow the license that can be freely used for research purposes (Farhad et al., 2021). The license of FLoRes dataset is CC-BY-SA 4.0. In addition, we follow Fan et al. (2021) to clean the training sample. We introduce the characteristics of different languages to analyze the linguistic diversity, as shown in Table 9. All language pairs are Englishcentric and the statistics of training data are shown in Table 10. ## A.1 Data Statistics As the general setting, all language pairs are divided into three categories in terms of the amount of parallel data, including high resource (>10M), medium resource (1M~10M), and low resource (100k~1M). Specifically, the original language pairs are, High resource: Japanese and Polish; Medium resource: Icelandic and Pashto; Low resource: Hausa and Tamil. And the incremental language pairs are, Medium resource: German and Bengali; Low resource: Ukrainian and Romanian. Note that incremental training data is often a nonhigh resource in the real-world scenario. ## A.2 Language Consideration In this work, we explore a more complex and comprehensive scenario for MNMT in incremental learning, taking into account the diversity of incremental languages. These incremental languages differ from the original languages in terms of their scripts and belong to different language families, which leads to a serious vocabulary and linguistic gap. Inspired by Zhang et al. (2022), if the incremental language has a distinct script with the set of original languages, a certain proportion of OOV tokens with unclear semantics will occur between the original and incremental languages and hinder the performance on new language pairs. Moreover, it is important to note that a language family refers to a group of languages that share a common ancestry, known as the proto-language7. This concept | Code | Language | Genus | Script | Order | |--------|------------|-------------|----------|---------| | ha | Hausa | West Chadic | Latin | SVO | | is | Icelandic | Germanic | Latin | SVO | | ja | Japanese | Japanese | Kanji | SOV | | pl | Polish | Slavic | Latin | SVO | | ps | Pashto | Iranian | Arabic | SOV | | ta | Tamil | Dravidian | Tamil | SOV | | de | German | Germanic | Latin | SVO | | ro | Romanian | Romance | Latin | SVO | | uk | Ukrainian | Slavic | Cyrillic | SVO | | bn | Bengali | Indic | Bengali | SOV | highlights the historical connections among languages and their evolution over time. Additionally, differences in grammar and word order can be observed across distinct language families8. These linguistic variations further contribute to the existing gap between incremental languages, making their translation more challenging. In our setting, the 4 incremental languages include: Bengali, which is not related to any of the original 7 languages, and has a distinct script; Ukrainian, which is related to the original language Polish with the language family Slavic, but has a distinct script with Cyrillic; Romanian, is Romance language that is not related to all the original languages, but has a share script with Latin characters; German, which is similar to the original languages in the language families and scripts. The statistics and details of datasets for original and incremental languages are shown in Table 9. ## B Model Details B.1 Training Setup We implement Transformer translation models in all our experiments. In particular, the small original model (0.4B) consists of 6 stacked encoder layers, 6 stacked decoder layers, and 16 multiattention heads, followed by the configuration of Transformer-Big (Vaswani et al., 2017). The dimensions of dmodel and dffn are 1024 and 4096 respectively. The large original model (1.2B) consists of 24 stacked encoder layers, 24 stacked decoder layers, and 16 multi-attention heads, followed by the configuration of M2M-100 (Fan et al., 2021). The 7https://en.wikipedia.org/wiki/Languagefamily | Language Pair | Data Sources | # Samples | | | | | |-----------------|----------------|-------------|--------|------------|-------|-------| | Train | Dev | Test | Train | Dev | Test | | | ja-en | WMT21 | WMT20 | WMT21 | 18,001,428 | 993 | 1,005 | | pl-en | WMT20 | WMT20 | WMT20 | 10,206,520 | 2,000 | 1,001 | | is-en | WMT21 | WMT21 | WMT21 | 4,376,282 | 2,004 | 1,000 | | ps-en | WMT20 | WMT20 | WMT20 | 1,155,942 | 2,698 | 2,719 | | ha-en | WMT21 | WMT21 | WMT21 | 744,856 | 2,000 | 997 | | ta-en | WMT20 | WMT20 | WMT20 | 660,818 | 1,989 | 997 | | de-en | WMT14 | WMT13 | WMT14 | 4,508,785 | 3,000 | 3,003 | | ro-en | WMT16 | WMT16 | WMT16 | 610,320 | 1,999 | 1,999 | | uk-en | FLoRes | FLoRes | FLoRes | 8,604,580 | 997 | 1,012 | | bn-en | FLoRes | FLoRes | FLoRes | 925,896 | 997 | 1,012 | dimensions of dmodel and dffn are 1024 and 8192 respectively. We use Adam (Kingma and Ba, 2014) and a half-precision training scheme to optimize the parameters of all MNMT models. In addition, we reset the optimizer and learning scheduler in incremental learning and use the temperature-based sampling scheme (Arivazhagan et al., 2019) with a temperature of T = 5 to balance the training data between diverse language pairs. We adopt the early stop (patience is 10) strategy in incremental learning and the batch size is 4096×4 in all training procedures. To eliminate the randomness of the result, we report the mean BLEU scores of the models that are trained in five seeds. All incremental models are trained on 2 NVIDIA A100 GPUs. ## B.2 Continual Learning Baselines We compare our method with various representative baselines in continual learning. The baselines are as follows: - Replay (Sun et al., 2019): creating pseudo data for the original language pairs and training new language pairs jointly with the pseudo data and incremental training data. - EWC (Kirkpatrick et al., 2017): computing the importance of the parameters with Fisher matrix and employing an additional penalty into the loss function to preserve original knowledge. - Self-KD (Castellucci et al., 2021): utilizing the original models as the teacher model to distill old knowledge. - LFR (Gu et al., 2022): constraining the parameters of original models with low forget- | Method | ja(old)→ro(new) | de(old)→ro(new) | |-------------|-------------------|-------------------| | Adapter | 1.09 | 5.37 | | Adapter+LSE | 10.12 | 18.05 | | KT | 15.12 | 22.20 | Table 11: The BLEU scores on zero-shot direction. ting risk regions. We choose the LRF-CM for adapting new language pairs. - Prompt (Chalkidis et al., 2021): prepending prompts to the input embedding in the first layer. - Prefix (Li and Liang, 2021): prepending prefixes to the keys and values of the attention at every layer. ## C More Comparisons C.1 Training Cost To further illustrate the efficiency of our method, we investigate the training time compared with the stronger baselines, as shown in Figure 3. The results show that the knowledge transfer method can reduce the training time of incremental learning, which is more efficient and practical than the other methods. ## C.2 Visualization Of Multilingual Representations As shown in Figure 4, we visualize the sentence representations on xx-to-English translation directions to investigate the representation gap between languages. Due to comparability in one representation space, we need multi-source sentences that represent the same meaning in different languages. ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) We use "FLoRes" and reduce the 1024-dim representations to 2-dim with t-SNE (Van der Maaten and Hinton, 2008) for visualization. As Figure 4 shows, the sentence representations using our method are drawn closer than the standard Adapter method (one of the baselines). It demonstrates that our method can well adapt to the new language. Moreover, previous studies have shown that if sentences with similar semantics are closer together in the representation space, it can usually improve the translation performance of zero-shot translation. Experimental results in two translation directions show that our method can achieve better performance for zero-shot translation, which is consistent with our visualization. ## C.3 Case Study We present several translation examples to provide a comprehensive understanding of the knowledge transfer method, as shown in Table 12. The examples demonstrate that our method can effectively adapt original models to new languages especially when the incremental language is not related to the set of original languages. In particular, due to the vocabulary gap, the Adapter method is vulnerable to learning incremental languages that have a distinct script with Latin. Although the Extension alleviates this issue by expanding the embedding layer, the additional parameters are not fully optimized to suffer from the off-target problem for MNMT. ## D Potential Risks Of Our Method Since our proposed method can increase the unlimited number of translation directions, it is possible for some malicious users to use the MNMT model to provide translation services for politically sensitive languages. For instance, a malicious user may utilize our model to generate hateful or offensive sentences in some politically sensitive languages. Source en→bn S1: **Widespread** looting reportedly continued overnight, as **law enforcement officers were not present** on Bishkek's streets. S2: Bishkek was described as sinking into a state of "anarchy" by one observer, as **gangs of people** roamed the streets and plundered stores of consumer goods. S3: **Several Bishkek** residents blamed protesters from the south for the lawlessness. S1: রাতজু ন চলেছ কারণ িবশেকক'স পক লু ট এ আইন েয়াগকারী ক ক রা উপ ত িছেলন না। S2: "একজন প িছল ""অরাজকতা"", কারন একদল েব েকর রা িবশেকক বলেত ঝা হ ক রা ের ত এবং প র িজিনসপ লুঠ Reference করত।" ড়ে ব্যা ণ্ঠ স্ট্রি প্র র্ম র্তা স্থি র্য ক্ষ দ্বা বো নো য়ে লো স্তায় ঘু বেড়া ভোগ্য ন্যে ত্র বে ছু ন্দা ন্য ক্ষি প্র দো য়ে স্তায় ক্ষা র্ম র্তা স্থি নো ব্যা য়ে র্য ক্ষ দ্বা য় স্থায় ডু য়া র্ণ য়ে স্তায় ঘো ভোক্তা ণ্যে দো য়ে বে য়ে ন্দা বে ন্য ক্ষি থে প্র দো রো স্তায় প্র র্ম র্তা স্থি য় ব্যা ব্যা গে র্য ক্ষ দ্বা ন্ত্র স্থা ধ্যে ডু য়া র্ণ দে য়া য়ে লো স্তায় ঘো ভোগ্য ণ্যে দো য়ে য় বে য়ে ন্দা ক্ষি থে প্র র্মে ন্য দো রো S3: িবশেকেকর শ িক বািস আইনশৃNলার ঘাটিতর জ দ েনর | Adapter (fail) | |-------------------| িতবাদকারীেদর ষ িদ েছন। Extension | S1: বাংলা বাংলা বাংলা বাংলা, বাংলা বাংলা বাংলা, বাংলা বাংলা, বাংলা বাংলা বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বালা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বালা S2: বাংলা, বাংলা, বাংলা, বাংলা, বালা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, ব S3: বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, বাংলা, ব | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | S1: িবশেকেকর রা | আইনশৃNলা র | কারী বািহনীর কক | রা উপ | ত িছেলন না বেলই রাতারািত লুটপাট চালা | অ | হত িছল বেল জানা িগেছ । | | |---------------------|-------------|--------------|--------|-----------------------------|-----------|--------------------|----------| | S2: িবশেক একজন পেব | েকর | রা 'রাজকী' অব | েব যাও | বেল বনা করা হিছল, কারণ জনতার দল রা | রােফরা কের এবং | প | র | | কান লুট কের িদিছল । | | | | | | | | | S3: শ ককজন িবশেকেকর বািস | আইিন আচরেণর জ | দ | ণ | েক | িতবাদকারীেদর | ষা | প কেরেছন । | S1: িবশেকেকর রা আইন েয়াগকারী ক ক রা উপ ত না থাকা রাতভর পক লুটপাট অ হত িছল বেল জানা েছ । S2: িবশেককেক একজন প েব েকর রা 'রাåজত ' অব র ম েব যাও র ব না ও হ িছল, কারণ একদল ক রা রােফরা কের এবং Ours প র কান লুট কের িন যা । S3: িবশেকেকর শ ক কজন বািস দ ণ িদক েক আসা িতবাদকারীেদরেক এই অধ র জ ষা প কেরেছ । | Source en→uk | |-----------------| S1: **Angel** (2006), explains the **Continuum** approach as a method being used to help organizations reach a higher level of performance. S2: It has been known for a long time that different types of brain damage, traumas, **lesions, and tumours** affect behaviour and cause changes in some mental functions. S3: **The Sundarbans has been declared a UNESCO World Heritage Site**. The part of the forest within Indian territory is called **Sundarbans** National Park. Reference | S1: "Ангел" (2006) пояснює підхід Континуума як метод, що використовується для допомоги організаціям у досягненні вищого рівня продуктивності. S2: "Було давно відомо, що різні типи пошкоджень мозку, травми, ураження та пухлини впливають на поведінку і призводять до змін у деяких психічних функціях. S3: Сундарбан був оголошений об'єктом Світової спадщини ЮНЕСКО. Частина лісу в межах індійської території називається Національним парком Сундарбан. S1: Angel (2006), explică abordarea Continuum ca metodă utilizată pentru a ajuta organizaţiile la un nivel mai mare de performanţă. S2: Відомо давного часу відомилося, що різні типи мозоку пошкоду, травми, резіонів і туморів впливають поведінку і викликають зміни в деяких психічних функціях. S3: Die Sundarbans wurde eine UNESCO Weltkulturbebe erklärt, die Teil des Waldes innerhalb indischen Territoriums wird Sundarbans National Park genannt. | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Adapter (fail) | |------------------| Extension | видатності. S2: З давно відомо, що різні типи пошкодження мозку, травми, перізаціями і тумари впливають на поведінку і викликають зміни в деяких психічних функціях. S3: Сундарбанські були оголошені UNESCO Світньої Спадщини, частина лісу в межах Індії називається Сандарбанською Національним парком. S1: Ангел (2006), пояснює підхід Континуума як метод, який використовується, щоб допомогти організаціям досягти більш високого рівня продуктивності. S2: Давно відомо, що різні типи пошкоджень мозку, травм, ураження та пухлини впливають на поведінку і викликають зміни в деяких психічних функціях. S3: Сундарбан був оголошений об'єктом Світової спадщини ЮНЕСКО. Частина лісу на території Індії називається Національний парком Сундарбан. | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Ours Table 12: Examples of several baselines and our method on en-to-bn and en-to-uk translation directions for each label based on FLoRes testset. We only highlight some words and fragments to show the representative difference between various methods. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? We provide the limitations of our work in the section 'Limitation'. ✓ A2. Did you discuss any potential risks of your work? We provide the potential risks of our work in Appendix D. ✓ A3. Do the abstract and introduction summarize the paper's main claims? The paper's main claims are summarized in the section 'Abstract' and the section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We provide the dataset and open toolkit in the section 4.1, 4.2, and Appendix A. ✓ B1. Did you cite the creators of artifacts you used? We cite the dataset and open toolkit in the section 4.1, 4.2, and Appendix A. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We discuss the license of dataset in Appendix A. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We discuss the existing artifact was consistent with their intended use n the section 4.2 and Appendix A. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We check the details of dataset in Appendix A. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We provide the details of domains and languages in Appendix A. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We carefully provide the statistics of all data in Appendix A. ## C ✓ **Did You Run Computational Experiments?** We provide the computational experiments in Appendix B and C. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We provide all implementation details and training setup in section 4, Appendix B.1 and Appendix B.2. The section 4.4 also contains the number of parameters in the models used. We report the training cost in Appendix C.1. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? We provide the experimental setup with hyper-parameters and configuration in Appendix B.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We report the statistics about our results in Appendix B.1. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We used existing packages of scripts and toolkit and we report these details in section 4.2. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
aragon-etal-2023-disorbert
{D}isor{BERT}: A Double Domain Adaptation Model for Detecting Signs of Mental Disorders in Social Media
https://aclanthology.org/2023.acl-long.853
Mental disorders affect millions of people worldwide and cause interference with their thinking and behavior. Through the past years, awareness created by health campaigns and other sources motivated the study of these disorders using information extracted from social media platforms. In this work, we aim to contribute to the study of these disorders and to the understanding of how mental problems reflect on social media. To achieve this goal, we propose a double-domain adaptation of a language model. First, we adapted the model to social media language, and then, we adapted it to the mental health domain. In both steps, we incorporated a lexical resource to guide the masking process of the language model and, therefore, to help it in paying more attention to words related to mental disorders. We have evaluated our model in the detection of signs of three major mental disorders: Anorexia, Self-harm, and Depression. Results are encouraging as they show that the proposed adaptation enhances the classification performance and yields competitive results against state-of-the-art methods.
# Disorbert: A Double Domain Adaptation Model For Detecting Signs Of Mental Disorders In Social Media Mario Ezra Aragónα β, A. Pastor López-Monroyγ**, Luis C. González**δ David E. Losadaα**, Manuel Montes-y-Gómez**β α Centro Singular de Investigación en Tecnoloxias Intelixentes (CiTIUS), Universidade de Santiago de Compostela, Spain βInstituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Mexico γ Centro de Investigación en Matemáticas (CIMAT), Mexico δ Facultad de Ingeniería, UACh, Mexico {ezra.aragon,david.losada}@usc.es, pastor.lopez@cimat.mx, lcgonzalez@uach.mx, mmontesg@inaoep.mx ## Abstract Mental disorders affect millions of people worldwide and cause interference with their thinking and behavior. Through the past years, awareness created by health campaigns and other sources motivated the study of these disorders using information extracted from social media platforms. In this work, we aim to contribute to the study of these disorders and to the understanding of how mental problems reflect on social media. To achieve this goal, we propose a double-domain adaptation of a language model. First, we adapted the model to social media language, and then, we adapted it to the mental health domain. In both steps, we incorporated a lexical resource to guide the masking process of the language model and, therefore, to help it in paying more attention to words related to mental disorders. We have evaluated our model in the detection of signs of three major mental disorders: Anorexia, Self-harm, and Depression. Results are encouraging as they show that the proposed adaptation enhances the classification performance and yields competitive results against state-of-the-art methods. ## 1 Introduction Mental disorders are among the most common illnesses worldwide. Some estimates1indicate that more than 50% of the population will be diagnosed with a mental disorder at some point in their lives. The prevalence of these disorders is highly concerning since they alter the way people think, feel, and take action, resulting in the incapacity of daily life routines. In addition, the recent COVID-19 pandemic triggered a serious global social and economic disruption, which had a direct effect on people's lives, and brought many challenges that can be 1Center for Disease Control and Prevention, https://www.cdc.gov/mentalhealth/learn/index.htm stressful and overwhelming (Li et al., 2020). This situation was particularly difficult for people with mental health conditions and caused an increase in the prevalence of anxiety and depression (World Health Organization, 2022). There is therefore an increasing need for developing new tools to monitor the presence of mental disorders and to respond to early signs of psychological concerns. Nowadays, social media content is massive and provides an opportunity to do research on how people undergo difficulties. Many people use online platforms to publicly share their daily routines and important events, while others take advantage of the anonymity of these spaces to explicitly discuss mental health issues and to seek help (Ríssola et al., 2021; Crestani et al., 2022). In this work, we aim to contribute to the detection of signs of mental disorders by automatically analyzing social media posts. This type of analysis is expected to support new technologies able to warn about the onset of mental disorders and provide supporting evidence. As argued by Neuman et al. (2012), these new forms of screening should not be taken as "magic substitutes for the human expert" but, instead, as computational tools that can substantially reduce the workload of public health systems, e.g., by facilitating preventive measures. Latest developments in Natural Language Processing (NLP) encourage the fine-tuning of pretrained language models for a wide variety of tasks. This approach often yields good results, but it is problematic for tasks where the language is highly domain-specific (Villa-Cueva et al., 2022), such as in the case of mental disorders. An alternative is to pre-train the model –e.g., BERT (Devlin et al., 2019)– with data from the target domain. However, pre-training is expensive and complex, and collecting a sufficiently large training corpus can be difficult for certain domains, as exposed by the creators of MentalBERT (Ji et al., 2022). Instead of pre-training the model, we propose to perform a domain adaptation, similar to the one proposed in (Howard and Ruder, 2018) but refined in two stages (Villa-Cueva et al., 2022) that exploit a novel lexicon-driven learning. This process takes an already trained model and continues its training using a (relatively) small corpus focused on the target domain (Gururangan et al., 2020). In particular, we propose DisorBERT, a two-step domain adaptation model of BERT for detecting signs of mental disorders in social media. First, we teach BERT the general structure of the language used in social media texts (e.g., in Reddit posts), then we specialize it in the kind of language used to express information about mental disorders. Furthermore, we exploit lexical knowledge to guide the language model's masking process. Instead of learning the occurrence of general words, this "guided masking" opts to bias the learning process towards words that are important to the target application, which in our case is the detection of Anorexia, Depression, and Self-harm. We can summarize our contributions as follows: 1. We introduce DisorBERT, a simple yet effective double-domain adaptation model for detecting signs of mental disorders in social media. 2. We explore the use of lexical knowledge, extracted from a depression lexicon, to guide and enhance the masking process of the language model. 3. We empirically evaluate the proposed model and provide quantitative and qualitative evidence of its robustness for the detection of signs of Anorexia, Depression, and Self-harm in social media. ## 2 Related Work The detection of mental disorders is an interdisciplinary research area that has been fostered thanks to the current availability of a variety of data sources and computational models (Velupillai et al., 2019). In recent years, several works have explored social media platforms to study the manifestation of mental disorders (Skaik and Inkpen, 2020; Ríssola et al., 2021). Social media sources have been exploited to detect features that help to identify signs of need of medical or psychological support (Calvo et al., 2017). For example, expressions of distress or negative feelings, particularly published by young people (Robinson et al., 2016), abound in online media. A variety of methods have been applied to find relevant and discriminative patterns from user-generated text. For example, some studies employed words or word sequences as features (Tsugawa et al., 2015; Schwartz et al., 2014; Ning et al., 2018). Other groups of studies have applied sentiment analysis techniques to model emotional properties of users' posts (Ramírez-Cifuentes and Freire, 2018; Preo¸tiuc-Pietro et al., 2015), or exploited a set of psychological categories to capture social relationships, thinking styles as well as individual differences (Coppersmith et al., 2015). In Cheng et al. (2017) and O'Dea et al. (2017), the authors explored the association between linguistic inquiry features and suicide risk factors, extracting patterns in different linguistic profiles. Similarly, for the detection of signals related to suicide attempts, Coppersmith et al. (2018) used word embeddings and a bidirectional Long Short-Term Memory (LSTM) to capture contextual information. Furthermore, for detecting suicidal ideation, Ramírez-Cifuentes et al. (2020) explored the incorporation of images into text-based representations. These authors analyzed the combination and relationship of textual and visual information, identifying significant differences in the use of both. More recently, with the increasing popularity of transformers, BERT-based classifiers have been fine-tuned for detecting different mental disorders (Martínez-Castaño et al., 2020; Parapar et al., 2022). On the other hand, a number of studies have trained language models for specific domains (Zihan et al., 2021). For example, Beltagy et al. (2019) exploited a large-scale annotated dataset with scientific data to adapt BERT to the scientific domain, showing improvements over the BERT base model in multiple classification tasks. Similarly, in Lee et al. (2019), BERT was pre-trained on largescale biomedical corpora outperforming the original BERT in a variety of biomedical text mining tasks. In recent work, Nguyen et al. (2020) trained a BERT model with approximately 850M tweets achieving outstanding results in several tweet analysis tasks. Closer to our work, Ji et al. (2022) adapted BERT to the mental health domain by collecting specific ![2_image_0.png](2_image_0.png) data from Reddit. Their model, named MentalBERT, was trained using 13,671,785 sentences for around eight days using four Tesla v100 GPUs. The aforementioned pre-training approach is usually effective, but it is also expensive as collecting a corpus of a suitable size to pre-train BERT can be a big challenge in many domains. To overcome this issue, we propose to perform a double-domain adaptation of BERT, a less-expensive process that takes advantage of the already trained general model. We explain this strategy in detail in the following section. ## 3 From Bert To Disorbert: A Mental Disorder Detection Model This section introduces DisorBERT, a language model specially suited for the detection of signs of mental disorders in Social Media. As mentioned above, the construction of DisorBERT consists of a two-stage domain adaptation of BERT. In short, the idea is to first teach BERT the general structure of the language used in a large social media platform (e.g., in Reddit), and next, to specialize the model in the language of users with mental disorders. This whole process is depicted in Figure 1. For domain adaptation, we follow the procedure suggested by Howard and Ruder (2018) and Wolf et al. (2022), which consists of continuing BERT's pre-training through the fine-tuning of a Masked Language Model for more epochs. The process first uses the Reddit corpus for fine-tuning, and then, in a second adaptation, uses a collection of documents related to mental health. The idea is to adapt the language model from the general data of Wikipedia and books corpora (BERT's sources for training) to the more specific language of Reddit and mental health. For this process, we trained our model for three epochs, using a batch size of 128, a learning rate of 2e−5, on a GPU NVIDIA Tesla V100 32GB SXM2. It is particularly relevant to note that, in each of these steps, we employed a depression lexicon to guide the model's masking process and, therefore, to bias the learning towards words that are important to the target application domain. ## 3.1 Adapting The Model To Reddit The first step of our domain adaptation involves the adjustment of the model to the language style used in social media sources. To this end, we used the corpus from Kim et al. (2019), which contains pre-processed posts from Reddit for the task of text summarization. There are more than 120k text-summary pairs discussing diverse topics and interests; for our experiments, we used both. As data pre-processing, we concatenated all the examples and then split the whole corpus into chunks of equal size (128 words). The next step was to mask some of the words in each batch, picking 20% of them. It is worth mentioning that this percentage is within the typical range used for BERT, and it is a common choice in the literature. In Subsection 3.3, we explain in more detail the masking strategy. ## 3.2 Adapting The Model To Mental Health Our second step adapts the model to the mental health domain. To accomplish this, we used four datasets extracted from subreddits containing information related to mental health and depression2,3,4. Overall, we obtained more than 105k posts related to mental disorders. Focusing on these topics, our model can learn to identify how users with mental disorders express themselves as well as the language they use in social networks. Observe that this includes text from a large variety of users (e.g., some users may express negative feelings in their publications but they might not suffer from psychological problems). In this case, we also concatenated all the examples and split the whole data set into chunks for the fine-tuning of the language model. ## 3.3 Guided Masking Of Language Model In order to improve the learning process, we incorporate lexical knowledge related to mental disorders. To train masked language models, it is common to employ random masking. This technique consists of selecting a random number of words within a sentence and asking the model to predict the hidden word. With this technique, the main idea is that the model can learn the context in which words occur. For our study, we incorporated knowledge from a lexical resource during the masking process. Instead of randomly masking words, we first checked if the text had words from our lexicon. If so, the lexicon words are masked to begin the training. In the event that the masked words within the original text did not complete the required 20%, we added additional random words to the masking. This new form of masking helps the model pay more attention to words that are related to mental disorders. The hypothesis is that the model built in this way should be able to more easily identify users who show signs of disorders. Once the model has been trained, we can proceed to specialize it in the downstream detection tasks by applying a traditional fine-tuning process. The reference resource for the proposed "guided masking" is a depression lexicon built by Losada and Gamallo (2020). This is one of the few publicly available lexicons focusing on depression. Its word list resulted from expanding an already existing terminological resource by exploiting distributional strategies and lexical relationships such as synonymy. Here, we augmented this lexicon by adding different verb tenses to all the verbs, for example, the verb "abandon" will lead to words such as "abandons", "abandoned", and "abandoning". This addition helps to cover cases where people describe situations not happening in the present (e.g. when they refer to past events). Observe that we used a single (depression-oriented) lexicon to support the identification of risks for three different tasks, including Anorexia and Self-harm. ## 4 Experimental Settings 4.1 Collections For the evaluation, we used the datasets from the eRisk 2019-2020 evaluation tasks (Losada et al., 2019, 2020). These tasks propose the detection of signs of anorexia, depression, and self-harm. Table 1 shows general information about the collections and how the classes are distributed. For depression, we used the data set from eRisk 2018 (Losada et al., 2018) for training our models. | Data set | Train | Test | | | |-------------|---------|--------|-------|-------| | P | C | P | C | | | Anor'19 | 61 | 411 | 73 | 742 | | avg # posts | 407.8 | 556.9 | 241.4 | 745.1 | | avg # words | 37.3 | 20.9 | 37.2 | 21.7 | | Dep'20 | 214 | 1493 | 40 | 49 | | avg # posts | 440.9 | 660.8 | 493.0 | 543.7 | | avg # words | 27.5 | 22.75 | 39.2 | 45.6 | | SH'20 | 41 | 299 | 104 | 319 | | avg # posts | 169.0 | 546.8 | 112.4 | 285.6 | | avg # words | 24.8 | 18.8 | 21.4 | 11.9 | Table 1: Datasets used for experimentation. P indicates the positive users and C is used for control users. The eRisk organizers provided datasets containing the post history of several users from Reddit. For each task, we have two categories: i) positive users affected by either anorexia, depression, or self-harm, and ii) a control group composed of people who do not suffer from these mental disorders. For the anorexia and self-harm tasks, the organizers obtained the positive users searching for people who explicitly mentioned that they were diagnosed by a medical specialist with one of these conditions. Vague expressions like "I think I have anorexia" or "I am anorexic" were not considered as expressions of a diagnostic. On the other hand, the control group contains random users from different subReddits and users who often interact in the anorexia, depression, or self-harm threads. This adds more realism to the data as the control group includes, for example, expert clinicians who are active in mental health subReddits because they give support and advice to other people. Thus, risk prediction technology cannot be merely based on distinguishing the topic of the conversations. For the depression task, organizers asked users to fill out the BDI questionnaire (Beck et al., 1961) (thus obtaining the estimated level of severity of their depression). In the eRisk depression task of 2020, participants were given the thread of users' social media postings and they were asked to estimate the severity of the depression (where the real BDI questionnaires acted as the ground truth). For our study, we exclusively focused on a binary task (similar to anorexia and self-harm), i.e., to distinguish between positive and control users. So, we split the users into two categories based on the BDI scores. The positive class contains the users that obtained 21 or more points in the questionnaire (according to the medical literature a score higher than 20 is indicative of the presence of moderate or severe depression). The control group contains the rest of the individuals (BDI scores lower than 21). ## 4.2 Model Configuration Pre-processing: We performed a simple preprocessing of the texts by lowercasing all words and removing special characters like URLs, emoticons, and hashtags. Training and predictions: For each user, we separated the post history into N = 35 segments. We selected this value empirically, after testing some sizes of sequences recommended in the literature. For training, we processed each segment of the post history as an individual input or item and trained the model. For the test, each segment receives a label of 1 or 0; then, if the majority of the items are positive, the user is classified as a possible case of risk. The main idea is to consistently detect the presence of major signs of anorexia, depression, or self-harm through all the user posts. Parameters: We used the models provided by HuggingFace v4.24.0 (Wolf et al., 2022), and Pytorch v1.13.0 (Paszke et al., 2019). In particular, for training the model we used a batch size of 256, Adam optimizer, with a learning rate of 1e−5, and cross-entropy as a loss function. We trained the models for three epochs using a GPU NVIDIA Tesla V100 32GB SXM2. ## 4.3 Baseline Approaches Bag-of-Words: We employed a traditional Bagof-Words (BoW) approach considering word unigrams and TF-IDF weights. For feature selection, we applied the Chi-Square test and used a Support Vector Machine (SVM) with a linear kernel as a classifier. We also explored alternative BoW classifiers, but an SVM was the best-performing choice in our experiments. Deep Neural Networks: We used CNN and BiLSTM networks. These neural networks used 100 neurons, an Adam optimizer, and Glove (Pennington et al., 2014) embeddings with a dimension of 300. For the CNN, we used 100 random filters of sizes 1 and 2. BERT: We employed a BERT-based classification model with fine-tuning over each training set. MentalBERT: This is a pre-trained language model for the mental healthcare domain. It was built from a large collection of sentences extracted from Reddit (Ji et al., 2022). Similar to BERT, we fine-tuned this model over each training set. For each baseline, we explored different parameters using manual and grid search (depending on the model) and selected the best-performing setting. In addition to the previous approaches, we also compared our results against those of the participants of the eRisk evaluation shared tasks. For this comparison, we considered the F1 score, precision, and recall over the positive class, as reported in (Crestani et al., 2022). ## 5 Evaluation And Discussion Table 2 shows the results of our approach and all baseline methods. It also includes the results of our approach using only Reddit adaptation, only mental health adaptation, and random masking instead of guided masking. The first thing to notice is that BERT performed well but MentalBERT and DisorBERT are better choices, highlighting the importance of having domain-oriented models. Going into detail, we can observe that most of our proposed models outperformed the baselines in terms of F1. Our singledomain adaptations obtained slight improvement in comparison with baselines, while the doubledomain adaptation further increased performance, particularly with the incorporation of lexical knowledge. Here it is important to highlight that the lexicon employed is not specific to the language of | Method | Anorexia | Depression | Self-Harm | | | | | | | | |--------------------------------------------------|------------|--------------|-------------|------|------|------|------|------|------|------| | Masking | F1 | P | R | F1 | P | R | F1 | P | R | | | Baselines | | | | | | | | | | | | BoW-SVM | - | 0.67 | 0.85 | 0.55 | 0.58 | 0.56 | 0.60 | 0.50 | 0.95 | 0.34 | | RNN-GloVe | - | 0.65 | 0.92 | 0.51 | 0.58 | 0.59 | 0.57 | 0.57 | 0.62 | 0.53 | | CNN-GloVe | - | 0.67 | 0.93 | 0.52 | 0.61 | 0.56 | 0.68 | 0.57 | 0.62 | 0.53 | | BERT | Random | 0.77 | 0.70 | 0.85 | 0.62 | 0.55 | 0.72 | 0.60 | 0.44 | 0.94 | | MentalBERT | Random | 0.76 | 0.66 | 0.89 | 0.67 | 0.57 | 0.80 | 0.71 | 0.62 | 0.84 | | Our methods: Single and Double Domain Adaptation | | | | | | | | | | | | BERT w/Reddit | Random | 0.81 | 0.75 | 0.88 | 0.66 | 0.56 | 0.80 | 0.71 | 0.66 | 0.76 | | BERT w/Reddit | Guided | 0.82 | 0.82 | 0.82 | 0.68 | 0.55 | 0.90 | 0.72 | 0.65 | 0.82 | | BERT w/Health | Random | 0.80 | 0.77 | 0.84 | 0.67 | 0.53 | 0.93 | 0.69 | 0.60 | 0.82 | | BERT w/Health | Guided | 0.82 | 0.81 | 0.84 | 0.68 | 0.57 | 0.85 | 0.74 | 0.72 | 0.76 | | DisorBERT | Random | 0.82 | 0.83 | 0.81 | 0.68 | 0.54 | 0.93 | 0.72 | 0.65 | 0.80 | | DisorBERT | Guided | 0.83 | 0.82 | 0.85 | 0.69 | 0.56 | 0.89 | 0.72 | 0.73 | 0.71 | anorexia or self-harm but, still, it was also beneficial for these two target tasks. From another perspective, DisorBERT showed a good balance between precision and recall, whereas other variants (e.g., RNN-GloVe) improved precision at the expense of recall. DisorBERT has, therefore, a solid retrieval behavior and it can effectively find multiple traces of psychological risks. This is an important outcome since high recall is essential for clinical screening tools. However, there might be some potential use cases where high recall is not the most preferable choice, e.g., a social network that wants to focus on the riskiest behavior. For these scenarios, it may be necessary to modify our model to prioritize precision. In Figure 2, we plot the precision and recall of DisorBERT and the baselines. Our model tends to locate in the main diagonal region (indicating its good balance), while other methods have high precision or recall but score low in the other dimension. For a more detailed analysis, we applied McNemar's statistical test (Rotem et al., 2020) to compare the best DisorBERT results with the best baseline results, Table 3 shows this comparison. The symbol '=' means not significantly different (p > 0.5), '*' means significantly different (p < 0.05), '**' means very significantly different (p < 0.01), and '***' (highly significantly different: p < 0.001). The results suggest that the proposed approaches differ significantly from the baselines. Task MB BERT CNN SVM BH Anor *** *** *** *** * Dep * *** *** *** * SH * *** *** *** ** Table 3: Pairwise significance differences between DisorBERT and the baseline models using McNemar's test comparison. MB = MentalBERT, BH = BERT w/Health. ## Comparison Against Erisk Participants: Figure 3 presents a boxplot of the F1 scores of all participants for the anorexia and self-harm eRisk shared tasks5. The red circles represent the best DisorBERT model. For both tasks DisorBERT gets to the highest quartile, and, especially, in the anorexia detection task, our result is above the highest-scoring participant. These results indicate that our approach is competitive in comparison with the participants. However, it is important to mention that eRisk participants focused on obtaining early and accurate predictions of the users, while our approach focuses exclusively on determining accurate classifications. Overall, we can highlight the following conclusions of the experimental results: - The combined effect of double domain adaptation and guided masking is effective at cap- turing signs of mental disorders in social media interactions; DisorBERT performed better than the original BERT model. - Our approach also obtained better results than those achieved by MentalBERT, a model trained with a larger amount of data and with higher consumption of computational resources. The proposed double-domain adaptation is effective and computationally lightweight. - The evaluation showed a solid balance between finding users and labeling them correctly, making DisorBERT suitable for clinical detection applications. ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) ## 6 Analysis Of The Models 6.1 Bert Vs Disorbert BERT is a language model trained from a general corpus, while DisorBERT is a model guided to the mental health domain. Let us illustrate the behavior of the learned model, and the kind of textual segments it tends to pay more attention to. First, we analysed the most likely words the models generate when given a sentence with masked words. As sentences, we used examples from the Beck Depression Inventory (BDI) (Beck et al., 1961). This clinical tool, which consists of 21 items, aims to identify and measure the severity of typical symptoms of depression in adults and adolescents aged 17 and older. For example, it measures mood, pessimism, sense of failure, self-dissatisfaction, and guilt, among others. This test gives several responses for each item. We selected one of the answers for each one, masked a keyword, and looked at the words predicted by BERT and DisorBERT. In Table 4, we can see some examples of these sentences and the answers returned (ordered by decreasing likelihood). With DisorBERT, the answers tend to have a more negative meaning or psychological orientation compared to BERT. Take, for example, the sentence "I used to be able to [MASK]", where DisorBERT predicts the words "focus", "talk", "breathe", "sleep", and "eat". These words are related to common problems that are associated with mental disorders and cause interference in the thinking and behavior of the affected person. The BERT model is more general whereas DisorBERT learns to focus on issues related to mental disorders. Let us now look at the models in a different way. For each BDI sentence, we know the target masked word and we can extract the position of this word in the ranked list provided by each model. For example, in the third case of Table 4, DisorBERT made a perfect job because the correct word ("killing") was the top-ranked word, while BERT put the correct word in the second position of the list. Mean reciprocal rank (MRR) is a natural way to quantitatively measure the ability of the models to find the correct word. It is a standard search effectiveness measure that compares systems in terms of their ability to rank the correct answer at top rank positions. To calculate this value, we generated the top 5 words for each sentence and averaged the reciprocal ranks6for all the answers (if the correct word is not in the ranked list then the system gets a RR equal to 0 for that sentence). BERT obtained an MRR of 0.2436 and DisorBERT 0.4325. This demonstrates that DisorBERT does a substantially better job at learning the language of the BDI inventory, which is a reference clinical tool to measure the prevalence of depression symptoms. Nevertheless, our model still struggled with several BDI items, showing the difficulty of the task. Phrase **BERT DisorBERT** "I used to be able to cry" read focus "I used to be able to " talk talk [MASK]. fly breathe walk sleep breathe eat "I hate myself." asked hate "I [MASK] myself." told kill shook killed brace blame hugged love "I have thoughts of him killing killing myself." killing hurting "I have thoughts of it dying [MASK] myself." death kill her destroying ![7_image_0.png](7_image_0.png) weighted by their frequency. This figure results from applying the two models to the entire set of 21 BDI items. Similar to what happened before, BERT tends to generate more general words, while our model tends to be biased toward words related to mental disorders. Even with words in common such as "kill" or "hate", the frequency of occurrence of those words is higher for DisorBERT (i.e., it predicted several of these words more times than BERT). Finally, to measure how different these two groups of words are, we calculated the cosine similarity between the two sets of words and obtained a value of 0.4767. This suggests that there is some agreement among the proposed completions, but the predictions between the two models largely differ. ## 6.2 **Adding Interpretability To The Detection Of** Signs Of Mental Disorders Although incorporating transformers led to enhanced performance in contrast to other techniques, it also complicates the analysis and visualization of the model, which is important to understand its behavior. For models based on transformers, the attention scores in the head modules provide useful information about the words or sequences that are relevant for detection. However, with multiple heads and layers, the analysis becomes more difficult. Pardo-Sixtos et al. (2022) proposed a visualization tool that provides an interactive head view in the form of a graph. This tool starts with the [CLS] token, then, searches for the tokens in the previous layer that are important (highest attention values) for [CLS]. This visualization allows us to analyze the most important sequences of the text by obtaining the most relevant words and sentences in each layer of the transformer module. For this analysis, we selected a depression user with the highest score in the BDI questionnaire and a user who self-harmed, and computed the attention scores of the user's posts. We used the attention scores in the head module to visualize the parts of the posts that are important for the classification. Figure 5 shows an example of the graphs generated. In this way, we can understand the words and contexts that are relevant to the classification. For example, in the upper graph the most prominent words are related to anxiousness and medication, topics that are highly relevant to depression. In the lower graph (self-harm case), the prominent words are related to low self-confidence. It is interesting to see how the model can focus on mental health issues and pay more attention to related contexts. ## 7 Conclusion ![8_Image_0.Png](8_Image_0.Png) In this study, we explored a Double Domain Adaptation approach for the tasks of detecting signs of Anorexia, Depression, and Self-harm in social media. The first step of the domain adaptation focused on learning the writing style of social media users. The second step was oriented to learn about mental disorders and how users refer to psychological issues. In both steps, we incorporated lexical knowledge to guide the model toward words that are highly indicative of mental-related topics. Results suggest that combining domain adaptation with lexical knowledge helps in detecting traces of mental disorders. This approach outperformed traditional and state-of-the-art baselines and is competitive with the performance of top early-risk algorithms. Furthermore, the analysis of our method revealed that the context learned by the model is important in getting a better understanding of the concerns expressed by people. In future work, we want to explore the application of different lexical resources that are even more specialized for the target tasks, as well as the usage of clinical data to train more specialized language models, e.g., MIMIC (Johnson et al., 2016). On the other hand, emojis are often important features for social media analysis, and we want to explore their incorporation into our training process. Also, we are interested in expanding this study to different languages, since most of the work related to mental disorders has focused on English. ## Limitations This study aims to detect signs of Anorexia, Selfharm, and Depression in users of social media environments through a double-domain adaptation of a language model. This study presents some limitations, mainly because these datasets are observational studies and we do not have access to the personal and medical information that is often considered in risk assessment studies. For example, we cannot discard that some users who publicly expressed that they have been diagnosed with anorexia are actually non-anorexia cases. However, the identification of positive users from selfexpressions of diagnosis is a common practice in this area (Coppersmith et al., 2014), and the test collections built in this way are regarded as solid experimental benchmarks. There are also some limitations given by the nature of the data, as the users in these datasets might differ from users at risk who do not have exposure to social media (e.g., elderly people or individuals who do not have an online account or decided to not make their profiles public). ## Ethics Statement When we analyze social media content, we may have concerns regarding individual privacy or certain ethical considerations. These concerns appear due to the usage of information that could be sensitive and personal (e.g., references to emotions and health concerns). It is also important to mention that these datasets could contain biases belonging to the nature of social media data, e.g., a gender, age, or sexual orientation profile that could cause someone to be mislabeled as having a mental disorder. The experiments and usage of this data are for research and analysis only, and the misuse or mishandling of information is prohibited. In any case, the datasets we employed (corpus by Kim (Sect 3.1), Reddit datasets (Sect 3.2), lexicon data set (Sect 3.3), and eRisk collections (Sect 4.1) are publicly available and we strictly followed the terms of use and user agreement of these collections (see e.g. https://tec.citius.usc.es/ir/code/eRisk2019.html). Furthermore, these collections are anonymized and our research does not involve any contact with social media users. Under such conditions, this research does not require review and approval by the Ethics Committee Board. ## Acknowledgements Mario Ezra Aragon and David E. Losada thank the support obtained from (i) project PLEC2021007662 (MCIN/AEI/10.13039/501100011033, Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, Plan de Recuperación, Transformación Resiliencia, Unión Europea-Next GenerationEU), and (ii) Consellería de Educación, Universidade e Formación Profesional (accreditation 2019–2022 ED431G-2019/04, ED431C 2018/29) and the European Regional Development Fund, which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System. Mario Ezra Aragon also thanks to INAOE for the collaboration grant awarded from August to December 2022. ## References Aaron Beck, C.H. Ward, M. Mendelson, J. Mock, and J. Erbaugh. 1961. An inventory for measuring depression. *Arch Gen Psychiatry*. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computational Linguistics. Rafael A. Calvo, David N. Milne, M. Sazzad Hussain, and Helen Christensen. 2017. Natural language processing in mental health applications using non-clinical texts. *Natural Language Engineering*, 23(5):649–685. Qijin Cheng, Tim Mh Li, Chi-Leung Kwok, Tingshao Zhu, and Paul Sf Yip. 2017. Assessing suicide risk and emotional distress in chinese social media: A text mining and machine learning study. J Med Internet Res, 19(7). Glen Coppersmith, Mark Dredze, and Craig Harman. 2014. Quantifying mental health signals in Twitter. In *Proceedings of the Workshop on Computational* Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 51–60, Baltimore, Maryland, USA. Association for Computational Linguistics. Glen Coppersmith, Mark Dredze, Craig Harman, and Kristy Hollingshead. 2015. From ADHD to SAD: Analyzing the language of mental health on Twitter through self-reported diagnoses. In *Proceedings* of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 1–10, Denver, Colorado. Association for Computational Linguistics. Glen Coppersmith, Ryan Leary, Patrick Crutchley, and Alex Fine. 2018. Natural language processing of social media as screening for suicide risk. Biomed Inform Insights, 10. Fabio Crestani, David E. Losada, and Javier Parapar. 2022. *Early Detection of Mental Health Disorders* by Social Media Monitoring: The First Five Years of the eRisk Project. Springer Verlag, Englewood Cliffs, NJ. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Suchin Gururangan, Ana Marasovic, Swabha ´ Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Shaoxiong Ji, Tianlin Zhang, Luna Ansari, Jie Fu, Prayag Tiwari, and Erik Cambria. 2022. MentalBERT: Publicly available pretrained language models for mental healthcare. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 7184–7190, Marseille, France. European Language Resources Association. Alistair E. Johnson, Tom J. Pollard, Lu Shen, Li Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and G. Mark Roger. 2016. Mimic-iii, a freely accessible critical care database. *Scientific Data*, 3:1–9. Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. 2019. Abstractive summarization of Reddit posts with multi-level memory networks. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240. Xiaoya Li, Mingxin Zhou, Jiawei Wu, Arianna Yuan, Fei Wu, and Jiwei Li. 2020. Analyzing COVID19 on online social media: Trends, sentiments and emotions. *CoRR*, abs/2005.14464. David Losada and Pablo Gamallo. 2020. Evaluating and improving lexical resources for detecting signs of depression in text. *Lang Resources & Evaluation*, 54:1–24. David E. Losada, Fabio Crestani, and Javier Parapar. 2018. Overview of erisk: Early risk prediction on the internet. In *Experimental IR Meets Multilinguality, Multimodality, and Interaction*, pages 343–361, Cham. Springer International Publishing. David E. Losada, Fabio Crestani, and Javier Parapar. 2019. Overview of erisk 2019 early risk prediction on the internet. In *Experimental IR Meets Multilinguality, Multimodality, and Interaction*, pages 340– 357, Cham. Springer International Publishing. David E. Losada, Fabio Crestani, and Javier Parapar. 2020. Overview of erisk 2020: Early risk prediction on the internet. In *Experimental IR Meets Multilinguality, Multimodality, and Interaction*, pages 272–287, Cham. Springer International Publishing. Rodrigo Martínez-Castaño, Amal Htait, Leif Azzopardi, and Yashar Moshfeghi. 2020. Early risk detection of self-harm and depression severity using bert-based transformers : ilab at clef erisk 2020. *CEUR Workshop Proceedings*, 2696. Working Notes of CLEF 2020 - Conference and Labs of the Evaluation Forum, Thessaloniki, Greece, September 22-25, 2020. urn:nbn:de:0074-2696-0. Yair Neuman, Yohai Cohen, Dan Assaf, and Gabbi Kedma. 2012. Proactive screening for depression through metaphorical and automatic text analysis. Artificial Intelligence in Medicine, 56(1):19–25. Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English tweets. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing: System Demonstrations, pages 9–14, Online. Association for Computational Linguistics. Liu Ning, Zhou Zheng, Xin Kang, and Ren Fuji. 2018. Tua1 at erisk 2018. Proceedings of the 9th International Conference of the CLEF Association, CLEF 2018, Avignon, France. Bridianne O'Dea, Mark E Larsen, Philip J Batterham, Alison L Calear, and Helen Christensen. 2017. A linguistic analysis of suicide-related twitter posts. *Crisis*, 35(5):319–329. Javier Parapar, Patricia Martín-Rodilla, David E. Losada, and Fabio Crestani. 2022. Overview of erisk 2022: Early risk prediction on the internet. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 233–256, Cham. Springer International Publishing. L. Fernando Pardo-Sixtos, A. Pastor López-Monroy, Mahsa Shafaei, and Thamar Solorio. 2022. Hierarchical attention and transformers for automatic movie rating. *Expert Systems with Applications*, 209:118164. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32*, pages 8024–8035. Curran Associates, Inc. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Daniel Preo¸tiuc-Pietro, Johannes Eichstaedt, Gregory Park, Maarten Sap, Laura Smith, Victoria Tobolsky, H. Andrew Schwartz, and Lyle Ungar. 2015. The role of personality, age, and gender in tweeting about mental illness. In *Proceedings of the 2nd Workshop on* Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 21– 30, Denver, Colorado. Association for Computational Linguistics. Diana Ramírez-Cifuentes and Ana Freire. 2018. Upf's participation at the CLEF erisk 2018: Early risk prediction on the internet. In *Working Notes of CLEF* 2018 - Conference and Labs of the Evaluation Forum, Avignon, France, September 10-14, 2018, volume 2125 of *CEUR Workshop Proceedings*. CEURWS.org. Diana Ramírez-Cifuentes, Ana Freire, Ricardo Baeza-Yates, Joaquim Puntí, Pilar Medina-Bravo, Diego Alejandro Velazquez, Josep Maria Gonfaus, and Jordi González. 2020. Detection of suicidal ideation on social media: Multimodal, relational, and behavioral analysis. *J Med Internet Res*, 22. Esteban A. Ríssola, David E. Losada, and Fabio Crestani. 2021. A survey of computational methods for online mental state assessment on social media. ACM Trans. Comput. Healthcare, 2(2). Jo Robinson, Georgina Cox, Eleanor Bailey, Sarah Hetrick, Rodrigues Maria, Steve Fisher, and Helen Herrman. 2016. Social media and suicide prevention: a systematic review. *Early Interv Psychiatry*, 10(2):103–121. Dror Rotem, Peled-Cohen Lotem, Shlomov Segev, and Reichart Roi. 2020. *Statistical Significance Testing* for Natural Language Processing. Springer Chamg. H. Andrew Schwartz, Johannes Eichstaedt, Margaret L. Kern, Gregory Park, Maarten Sap, David Stillwell, Michal Kosinski, and Lyle Ungar. 2014. Towards assessing changes in degree of depression through Facebook. In *Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From* Linguistic Signal to Clinical Reality, pages 118–125, Baltimore, Maryland, USA. Association for Computational Linguistics. Ruba Skaik and Diana Inkpen. 2020. Using social media for mental health surveillance: A review. ACM Comput. Surv., 53(6). Sho Tsugawa, Yusuke Kikuchi, Fumio Kishino, Kosuke Nakajima, Yuichi Itoh, and Hiroyuki Ohsaki. 2015. Recognizing depression from twitter activity. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, page 3187–3196, New York, NY, USA. Association for Computing Machinery. Sumithra Velupillai, Gergö Hadlaczky, Genevieve M. Gorrell, Nomi Werbeloff, Dong Nguyen, Rashmi Patel, Daniel Leightley, Johnny Downs, Matthew Hotopf, and Rina Dutta. 2019. Risk assessment tools and data-driven approaches for predicting and preventing suicidal behavior. *Frontiers in Psychiatry*, 10. Emilio Villa-Cueva, Ivan Gonzalez-Franco, Fernando Sanchez-Vega, and Adrian Pastor Lopez-Monroy. 2022. Nlp-cimat at politices 2022: Politibeto, a domain-adapted transformer for multi-class political author profiling. In Iberian Languages Evaluation Forum, volume 69. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2022. Fine-tuning a masked language model. WHO World Health Organization. 2022. Covid-19 pandemic triggers 25% increase in prevalence of anxiety and depression worldwide. Liu Zihan, Xu Yan, Yu Tiezheng, Dai Wenliang, Ji Ziwei, Cahyawijaya Samuel, Madotto Andrea, and Fung Pascale. 2021. Crossner: Evaluating crossdomain named entity recognition. AAAI Conference on Artificial Intelligence, 35:13452–13460. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? We add a limitation section after the conclusions. ✓ A2. Did you discuss any potential risks of your work? In the Ethics Statement section after the conclusions. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4 "Experimental Settings" ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 - subsection 4.2 "Model Configuration" The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 - subsection 4.2 "Model Configuration" ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 "Evaluation and Discussion" and Section 6 "Analysis of the Models" ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3 and Section 4 - subsection 4.2 "Model Configuration" ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
li-etal-2023-toward
Toward Interactive Dictation
https://aclanthology.org/2023.acl-long.854
Voice dictation is an increasingly important text input modality. Existing systems that allow both dictation and editing-by-voice restrict their command language to flat templates invoked by trigger words. In this work, we study the feasibility of allowing users to interrupt their dictation with spoken editing commands in open-ended natural language. We introduce a new task and dataset, TERTiUS, to experiment with such systems. To support this flexibility in real-time, a system must incrementally segment and classify spans of speech as either dictation or command, and interpret the spans that are commands. We experiment with using large pre-trained language models to predict the edited text, or alternatively, to predict a small text-editing program. Experiments show a natural trade-off between model accuracy and latency: a smaller model achieves 30{\%} end-state accuracy with 1.3 seconds of latency, while a larger model achieves 55{\%} end-state accuracy with 7 seconds of latency.
# Toward Interactive Dictation Belinda Li♠∗ Jason Eisner✸ Adam Pauls✸ **Sam Thomson**✸ ♠MIT CSAIL ✸Microsoft Semantic Machines ♠bzl@mit.edu ✸{jason.eisner,adam.pauls,samuel.thomson}@microsoft.com ## Abstract Voice dictation is an increasingly important text input modality. Existing systems that allow both dictation and editing-by-voice restrict their command language to flat templates invoked by trigger words. In this work, we study the feasibility of allowing users to interrupt their dictation with spoken editing commands in *open-ended* natural language. We introduce a new task and dataset, TERTiUS, to experiment with such systems. To support this flexibility in real-time, a system must incrementally segment and classify spans of speech as either dictation or command, and interpret the spans that are commands. We experiment with using large pre-trained language models to predict the edited text, or alternatively, to predict a small text-editing program. Experiments show a natural trade-off between model accuracy and latency: a smaller model achieves 28% singlecommand interpretation accuracy with 1.3 seconds of latency, while a larger model achieves 55% with 7 seconds of latency. ## 1 Introduction Speech can be preferable for text entry, especially on mobile devices or while the user's hands are occupied, and for some users for whom typing is always slow or impossible. While fast and accurate automatic speech recognition (ASR) is now ubiquitous (Kumar et al., 2012; Xiong et al., 2016; Chiu et al., 2018; Radford et al., 2022), ASR itself only *transcribes* speech. In practice, users may also wish to *edit* transcribed text. The ASR output might be incorrect; the user might have misspoken; or they might change their mind about what to say or how to phrase it, perhaps after seeing or hearing their previous version. Azenkot and Lee (2013) found that users with visual impairment spent 80% of time editing text vs. 20% dictating it. ∗ Work performed during a research internship at Microsoft Semantic Machines. ![0_image_0.png](0_image_0.png) In this work, we study the task of **interactive** dictation, in which users can both perform verbatim dictation and utter open-ended commands in order to edit the existing text, in a single uninterrupted speech stream. See Figure 1 for an example. Unlike commercial systems like Dragon (DNS; Nuance, 1997, 2022) and dictation for Word (Microsoft, 2022) that require reserved trigger words for commanding, the commands in our data are invoked using unrestricted natural language (NL). For example, in Figure 1, both (b) and (d) invoke replace commands, but (d) uses nested syntax to specify both an edit action and location, while (b) is implicit (as natural speech repairs often are). In interactive dictation, users do not need to memorize a list of specific trigger words or templates in order to invoke their desired functionality. A dictation system should be as intuitive as dic15319 tating to a *human* assistant—a situation in which people quite naturally and successfully intersperse speech repairs and commands with their dictation. Beyond eliminating the learning curve, letting users speak naturally should also allow them to focus on what they want to say, without being repeatedly distracted by the frustrating separate task of getting those words into the computer. Because we accept unrestricted NL for commands, both *segmentation* and *interpretation* become nontrivial for a system to perform.1 Segmentation requires capturing (sometimes subtle) changes in intent, and is especially difficult in cases where command boundaries do not align with ASR boundaries.2 We collect a dataset of 1320 documents dictated in an interactive environment with live, incremental ASR transcription and Wizard-ofOz–style interpretation of user commands. Annotators were not told a set of editing features they were allowed to use, but simply instructed to make their commands understandable and executable by a hypothetical human helper. Collection required designing a novel data collection interface. Both the interface and dataset will be publicly released to help unlock further work in this area.3 Finally, we experiment with two strategies for implementing the proposed system: one that uses a pre-trained language model to directly predict the edited text given unedited text and a command, and another that interprets the command as a program specifying how to edit. Predicting intermediate programs reduces latency because the programs are short, at the expense of accuracy. This strategy also requires additional work to design and implement a set of editing functions and annotate commands with programs that use these functions. For each strategy, we also experimented with two choices of pre-trained language model: a small finetuned T5 model and a large prompted GPT3 model. Using the smaller model significantly improves latency, though again at the cost of accuracy. In summary, our contributions are: (1) a novel 1In template-based systems, by contrast, commands can be detected and parsed using regular expressions. An utterance is considered a command if and only if it matches one of these regular expressions. 2In Figure 1, for example, we must segment the first sentence into two parts, a dictation (*"Just wanted to ask about the* event on the 23rd") and a command (*"on Friday the 23rd"*). ASR can also *overpredict* boundaries when speakers pause in the middle of a sentence. For example, in our data "Change elude mansion to elude mentioned." was misrecognized by MSS as *"Change. Elude mansion to elude mentioned."* 3https://aka.ms/tertius task (interactive dictation), (2) a novel *data collection interface* for the task, with which we collect a new *dataset*, and (3) a *system* that implements said task, with experiments and analysis. ## 2 Background & Related Work Many modern speech input tools only support direct speech-to-text (*e.g.*, Radford et al., 2022). Occasionally, these models also perform disfluency correction, which includes removing filler words (e.g., um), repeated words, false starts, etc. (*e.g.*, Microsoft Azure, 2022). One form of disfluency that has received particular attention is speech repair, where the speaker corrects themself midutterance. For example, let's chat tomorrow uh I mean Friday contains a speech repair, where the user corrects "tomorrow" with "Friday." The repaired version of this should be *let's chat Friday*. Prior work has collected datasets and built systems specifically for speech repair (Heeman and Allen, 1994, 1999; Johnson and Charniak, 2004). Additionally, ASR systems themselves make errors that humans may like to correct post-hoc; there has been work on correcting ASR errors through respeaking misdetected transcriptions (McNair and Waibel, 1994; Ghosh et al., 2020; Vertanen and Kristensson, 2009; Sperber et al., 2013). Beyond disfluencies that were not automatically repaired but were transcribed literally, humans must fix many other mistakes while dictating. They often change their mind about what to say—the human writing process is rarely linear—and ASR itself commonly introduces transcription errors. Most systems require the user to manually fix these errors through keyboard-and-mouse or touchscreen editing (*e.g.*, Kumar et al., 2012), which can be inconvenient for someone who already relies on voice for dictation. Furthermore, most commercial systems that support editing through speech (DNS, Word) require templated commands. Thus, while speech input is often used to write short-form, imprecise text (*e.g.*, search queries or text messages), it is not as popular as it might be, and it is used less when writing longer and more precise documents. In our work, we study making edits through spoken natural language commands. Interpreting flexible natural language commands is a wellstudied problem within NLP, with work in semantic parsing (Zelle and Mooney, 1993; Zettlemoyer and Collins, 2009; Artzi and Zettlemoyer, 2013), instruction-following (Chen and Mooney, 2011; Branavan et al., 2009; Tellex et al., 2011; Anderson et al., 2018; Misra et al., 2017), and task-oriented dialogue (Budzianowski et al., 2018). Virtual assistants like Siri (Apple, 2011), Alexa (Amazon, 2014), and Google Assistant (Google, 2016) have been built to support a wide range of functionalities, including interacting with smart devices, querying search engines, scheduling events, etc. Due to advances in language technologies, modern-day assistants can support flexible linguistic expressions for invoking commands, accept feedback and perform reinterpretation (Semantic Machines et al., 2020), and work in an online and incremental manner (Zhou et al., 2022). Our work falls in this realm but: (1) in a novel interactive dictation setting, (2) with unrestricted commanding, and (3) where predicting boundaries between dictations and commands is part of the task. Recently, a line of work has emerged examining how large language models (LLMs) can serve as collaborative writing/coding assistants. Because of their remarkable ability to generate coherent texts over a wide range of domains and topics, LLMs have proven surprisingly effective for editing, elaboration, infilling, etc., across a wide range of domains (Malmi et al., 2022; Bavarian et al., 2022; Donahue et al., 2020). Though our system also makes use of LLMs, it supports a different mode of editing than these prior works. Some works use edit models for other types of sequence-tosequence tasks (e.g. summarization, text simplification, style transfer) (Malmi et al., 2019; Dong et al., 2019; Reid and Zhong, 2021), while others use much coarser-grained editing commands than we do, expecting the LLM to (sometimes) generate new text (Bavarian et al., 2022; Zhang et al., 2023). In addition to these differences, our editing commands may be misrecognized because they are spoken, and may be misdetected/missegmented because they are provided through the same channel as text entry. ## 3 Task Framework We now formalize our interactive dictation setting. A user who is editing a document speaks to a system that both transcribes user dictation and responds to user commands. This process results in a **interactive dictation trajectory**—a sequence of timestamped events: the user keeps speaking, several trained modules keep making predictions, and the document keeps being updated. Supervision could be provided to the predictive modules in various ways, ranging from direct supervision to delayed indirect reward signals. In this paper, we collect supervision that can be used to bootstrap an initial system. We collect **gold trajectories** in which every prediction is correct—except for ASR predictions, where we preserve the errors since part of our motivation is to allow the user to fix dictation errors.4 All predictions along the trajectory are provided in the dataset. Our dataset is not completely generic, since it assumes that certain predictive modules will exist and interact in particular ways, although it is agnostic to how they make their predictions. It is specifically intended to train a system that is a pipeline of the following modules (Figure 2): (a) ASR As the user speaks, the ASR module proposes transcripts for spans of the audio stream. Due to ASR system latency, each ASR result normally arrives some time *after* the end of the span it describes. The ASR results are transcripts of successive disjoint spans of the audio, and we refer to their concatenation as the **current transcript** (U in Figure 2(a)). (b) Segmentation When the current transcript changes, the system can update its segmentation. It does so by partitioning the current transcript U into a sequence of segments ui, labeling each as being either a **dictation** or a **command**. (c) Normalization *(optional)* Each segment ui can be passed through a normalization module, which transforms it from a literal transcript into clean text that should be inserted or interpreted. This involves speech repair as well as text normalization to handle orthographic conventions such as acronyms, punctuation, and numerals. While the module (a) may already attempt some version of these transformations, an off-the-shelf ASR module does not have access to the document state or history. It may do an incomplete job and there may be no way to tune it on gold normalized results. This normalization module can be trained to finish the job. Including it also ensures that our gold trajectories include the intended normalized text of the commands. (d) Interpretation Given a document state di−1 and a segment ui, the interpretation module predicts the new document state dithat uiis meant ![3_image_0.png](3_image_0.png) to achieve.5 The document is then immediately updated to state di; the change could be temporarily highlighted for the user to inspect. Here di−1 is the result of having already applied the updates predicted for segments u1*, . . . , u*i−1, where d0 is the initial document state. Concretely, we take a document state to consist of the document content together with the current cursor position.6 When uiis a dictation segment, no prediction is needed: the state update simply inserts the current segment at the cursor. However, when uiis a command segment, predicting the state update that the user wanted requires a text understanding model. Note that commands can come in many forms. Commonly they are imperative commands, as in Figure 1d. But one can even treat speech repairs such as Figure 1b as commands, in a system that does not handle repairs at stage (a) or (c). Rather than predict di directly, an alternative design is to predict a program pi and apply it to di−1 to obtain di. In this case, the gold trajectory in our dataset includes a correct program pi, which represents the intensional semantics of the command ui (and could be applied to different document states). Change Propagation The ASR engine we use for module (a) sometimes revises its results. It may replace the most recent of the ASR results, adding new words that the user has spoken and/or improving the transcription of earlier words. The engine marks an ASR result as partial or **final** according to whether it will be replaced.7 To make use of streaming partial and final ASR results, our pipeline supports change propagation. This requires the predictive modules to compute additional predictions. If a module is notified that its input has changed, it recomputes its output accordingly. For example, if module (a) changes the current transcript, then module (b) may change the segmentation. Then module (c) may recompute normalized versions of segments that have changed. Finally, module (d) may recompute the document state di for all i such that di−1 or ui has changed. The visible document is always synced with the last document state. This sync can revert and replace the effects on the document of previous incorrectly handled dictations and commands, potentially even from much earlier segments. To avoid confusing the user with such changes, and to reduce computation, a module can freeze its older or more confident inputs so that they reject change notifications (Appendix B). Modules (b)–(d) could also adopt the strategy of module (a)—quickly return provisional results from a "first-pass" system with the freedom to revise them later. This could further 7Full details and examples can be found in Appendix A.1. ## 4 Dataset Creation To our knowledge, no public dataset exists for the task of interactive dictation. As our task is distinct from prior work in a number of fundamental ways (§2), we create a new dataset, TERTiUS.8 Our data collection involves two stages. First, a human **demonstrator** speaks to the system and provides the gold segmentations, as well as demonstrating the normalizations and document state updates for the command segments. Later, for each command segment, an **annotator** fills in a gold program that would yield its gold state update. For a command segments, we update the document during demonstration using the demonstrated state updates—that is, they do double duty as *gold* and *actual* state updates. Thus, we follow a gold trajectory, as if the demonstrator is using an oracle system that perfectly segments their speech into dictations (though these may have ASR errors) versus commands, and then perfectly interprets the commands. A future data collection effort could instead update the document using the imperfect system that we later built (§5), in which case the demonstrator would have to react to cascading errors. ## 4.1 Collecting Interactive Dictation We build a novel data collection framework that allows us to collect speech streams and record gold and actual events. We used an existing ASR system, Microsoft Speech Services (MSS; Microsoft Azure, 2022). We asked the demonstrator to play both the role of the *user* (issuing the speech stream), and also the roles of the segmentation, *normalization*, and *interpretation* parts of the system (Figures 2b–d). Thus, we collect actual ASR results, while asking the demonstrator to demonstrate gold predictions for segmentation, normalization, and interpretation. The demonstration interface is shown in Figure 3. demonstrators were trained to use the interface, and told during training how their data would be used.9 A demonstrator is given a task of dictating an email into our envisioned system (shown in the yellow textbox). We collected data in three scenarios: 1. **Replicate doc:** Exactly recreate an email from the Enron Email Dataset (Klimt and Yang, 2004).10 2. **Elaborate doc:** Expand a terse description of an email into an full email. The exact wording of the full email is up to the demonstrator. 3. **Replicate segment:** Exactly recreate the poststate di of a single command segment ui (randomly sampled from the already-collected Replicate doc and Elaborate doc data), starting from its pre-state di−1. This does not have to be done with a single command. A demonstrator must then reach the target state (either exactly for Replicate doc or Replicate segment, or to their satisfaction for Elaborate doc), following these three steps: Step 1 (ASR, segmentation) The demonstrator starts speaking, which gets transcribed in real time by the built-in ASR system into ASR results. As they speak, they demonstrate what the segmentation system should do by holding down a key whenever they are speaking a command (as opposed to dictating). They can specify consecutive commands by quickly releasing and re-pressing the key.11 This gives us a list of time intervals when the key was held down. By matching these to the ASR timestamps, we identify the gold command segments in the ASR transcript. The remaining segments of the transcript are labeled as dictation.12 Step 2 (normalization) All labeled segments are displayed in the right column of the UI. After the demonstrator has finished speaking, they fill in the normalized text for each command segment. (The segment shows original and normalized text in the ASR and Gold ASR fields.) ![5_image_0.png](5_image_0.png) mouse and keyboard until it reflects the desired post-state after applying command ui. For reference, the UI also displays the pre-state di−1 and a continuously updated visual diff ∆(di−1, di)). Demonstrators can move freely among these steps, editing normalizations or state updates at any time, or appending new segments by speaking.14 We believe our framework is well-equipped to collect natural, flexible, and intuitive dictation and commanding data, for several reasons: (1) We do not restrict the capabilities of commands or the forms of their utterances, but instead ask demonstrators to command in ways they find most natural. (2) We simulate natural, uninterrupted switching between segments by making it easy for demonstrators to specify segment boundaries in real-time. (3) We collect a realistic distribution over speech errors and corrections by using an existing ASR system and asking demonstrators to replicate real emails. In future, the distribution could be made more realistic if we sometimes updated the document by using predicted normalizations and state updates rather than gold ones, as in the DAgger imitation learning method (Ross et al., 2011). 14They are also allowed to back up and remove the final segments, typically in order to redo them. $\exists\;\;\forall\;\;\forall\;\;\exists$ ## 4.2 Annotating Programs For Commands After obtaining sequences of demonstrated dialogues using the above procedure, we extract each command segment and manually annotate it with a **program** pithat represents the intensional semantics of the command. This program should in theory output the correct di when given di−1 as input. Program annotation is done post-hoc with a different set of annotators from §4.1. We design a domain-specific Lisp-like language for text-manipulating programs, and an execution engine for it. We implement a library consisting of composable actions, *constraints*, and *combinators*. A program consists of actions applied to one or more text targets, which are specified by contraints. Combinators allow us to create complex constraints by composing them. For example, in Figure 2, Capitalize the S in eSpeak, has the program (capitalize (theText (and (like "S") (in (theText (like "eSpeak")))))) where capitalize is the action, (like "S") and (like "eSpeak") are constraints, and and is a combinator. More examples are in Appendix A.4. | Trajectories | Segments | | | | |-------------------|------------|------|-------|------| | Task | Dict. | Cmd. | Total | | | Replicate doc | 372 | 473 | 1453 | 1926 | | Elaborate doc | 343 | 347 | 473 | 820 | | Replicate segment | 605 | 139 | 1299 | 1438 | | Total | 1320 | 959 | 3225 | 4184 | Table 1: Dataset size statistics. ## 4.3 Handling Of Partial Asr Results The current transcript sometimes ends in a partial ASR result and then is revised to end in another partial ASR result or a final ASR result. All versions of this transcript—"partial" and "final"—will be passed to the segmenter, thanks to change propagation. During demonstration, we record the gold labeled segmentations for all versions, based on the timing of the demonstrator's keypresses. However, only the segments of the "final" version are shown to the demonstrator for further annotation. A segment of a "partial" version can simply copy its gold normalized text from the segment of the "final" version that starts at the same time. These gold data will allow us to train the normalization model to predict a normalized command based on partial ASR results, when the user has not yet finished speaking the command or the ASR engine has not yet finished recognizing it. In the same way, a command segment ui of the "partial" version could also copy its gold document post-state di and its gold program pi from the corresponding "final" segment. However, that would simply duplicate existing gold data for training the interpretation module, so we do not include gold versions of these predictions in our dataset.15 ## 4.4 Dataset Details & Statistics In the first stage (§4.1), eleven human demonstrators demonstrated 1372 interactive dictation trajectories (see Table 1 for details). In the second stage (§4.2), two human annotators annotated programs for 868 commands.16 The dataset was then split into training, validation, and test sets with 991 training trajectories (consisting of 3199 demonstrated segments), 173 validation trajectories (562 segments), and 156 test trajectories (423 segments). All demonstrators and annotators were native English speakers. The dataset is currently only English, and the editor supports unformatted plain text. However, the annotation framework could handle other languages that have spoken and written forms, and could be extended to allow formatted text. A key goal of our system is flexibility. We quantify how well TERTiUS captures flexibility by measuring the *diversity* of natural language used to invoke each state change.17 We count the number of distinct first tokens (mainly verbs) used to invoke each action. These results are reported in Table 4 in the Appendix, alongside a comparison with DNS.18 We see that TERTiUS contains at least 22 ways to invoke a correction, while DNS supports only 1. In short, these results show that doing well on TERTiUS requires a much more flexible system that supports a wider array of functions and ways of invoking those functions than what existing systems provide. ## 5 Modeling & Training The overall system we build for interactive dictation follows our pipeline from Figure 2 and §3: 1. A **segmentation model** MSEG takes the current transcript U, and predicts a segmentation u1*, . . . , u*n, simultaneously predicting whether each ui corresponds to a *dictation* or *command* segment. 2. Each dictation segment is directly spliced into the document at the current cursor position. 3. For each command segment: (a) A **normalization model** MNOR predicts the normalized utterance u′i , repairing any ASR misdetections. (b) An **interpretation model**, MINT(state) or MINT(program), either: 1. directly predicts the end state of the command di, or 2. predicts the command *program* pi, which is then executed to di by the 17The system we build can theoretically support more flexibility than what is captured in TERTiUS. However, for TERTiUS to be a useful testbed (and training set) for flexibility, we would like it to be itself diverse. 18We also measure the diversity of state changes captured by TERTiUS in Appendix A.5. execution engine. We experiment with both types of interpretation model. Below we describe the specific models we use. ## 5.1 Segmentation The segmentation model partitions U into segments ui, each of which is labeled by mi as being either dictation or command: $$\mathcal{M}_{\mathrm{SEG}}(\mathcal{U})=[(u_{0},m_{0}),\cdots,(u_{n},m_{n})],$$ s.t. $$\mathcal{U}=u_{0}+u_{1}+\cdots+u_{n}\tag{1}$$ $$m_{i}\in\{\text{command,dication}\}$$ Concretely, the segmentation model does this using BIOES tagging (Jurafsky and Martin, 2009, Chapter 5). Here each command is tagged with a sequence of the form BI∗E ("beginning, inside, . . . , inside, end") or with the length-1 sequence S ("singleton"). Maximal sequences of tokens tagged with O ("outside") then correspond to the dictation segments. Note that two dictation segments cannot be adjacent. We implement the segmentation model as a T5-base encoder (Raffel et al., 2022) followed by a two-layer MLP prediction module. More details on why each tag is necessary and how we trained this model can be found in Appendix C.1. ## 5.2 Normalization And Interpretation For each uithat is predicted as a command segment, we first predict the normalized utterance u′i , 19 $${\mathcal{M}}_{\mathrm{NOR}}(d_{i-1},u_{i})=u_{i}^{\prime}.$$ ′i. (2) We then interpret u′i in context to predict either the document state di or an update program pi. $$\begin{array}{c}{{{\mathcal{M}}_{\mathrm{{INT}}\,(\mathrm{state})}\,(d_{i-1},u_{i}^{\prime})=d_{i},}}\\ {{{\mathcal{M}}_{\mathrm{{INT}}\,(\mathrm{program})}\,(d_{i-1},u_{i}^{\prime})=p_{i}.}}\end{array}$$ $$(3)$$ We then update the document state accordingly. We experiment with two ways of implementing the two steps: we either fine-tune two separate T5-base models (Raffel et al., 2022) that run in a pipeline for each command, or we prompt GPT3 (Brown et al., 2020) 20 to generate both the normalized utterance21 and the interpretation output in a single inference step. Training and prompting details can be found in Appendix C.2. 19Note that the normalization step additionally conditions on the state di−1, allowing it to consider what command would have been sensible in this context. Concrete examples are ## 6 Results We evaluate the segmentation model in isolation, and the normalization and interpretation steps together. (Appendices D.2 and D.3 evaluate the normalization and interpretation steps in isolation.) For simplicity, we evaluate the models only on current transcripts U that end in **final** ASR results (though at training time and in actual usage, they also process transcripts that end in **partial** ones).22 ## 6.1 Segmentation Metrics Exact match (EM) returns 0 or 1 according to whether the entire labeled segmentation of the final transcript U is correct. We also evaluate macro-averaged **labeled F1**, which considers how many of the gold labeled segments appear in the model's output segmentation and vice versa. Two labeled segments are considered to be the same if they have the same start and end points in U and the same label (dictation or command). Results Segmentation results on an evaluation dataset of transcripts U (see Appendix D.1) are shown in the top section of Table 2. All results are from single runs of the model. The model performs decently on TERTiUS, and in some cases is even able to fix erroneous sentence boundaries detected by the base ASR system (Appendix D.1.2). However, these cases are also difficult for the model: a qualitative analysis of errors find that, generally, errors arise either when the model is misled by erroneous over- and under-segmentation by the base ASR system, or when commands are phrased in ways similar to dictation. Examples are in in Appendix D.1.1. ## 6.2 Normalization & Interpretation Metrics We evaluate normalization and interpretation in conjunction. Given a gold normalized command utterance ui and the document's gold pre-state di−1, we measure how well we can reconstruct its post-state di. We measure **state exact match (EM)**23 between the predicted and gold post-states. If the interpretation model predicts | Metric | T5 | GPT3 | | | | |-----------------|------------|--------|-------|-------|-------| | F1 | 90.9% | - | | | | | Segmentation EM | 85.3% | - | | | | | Runtime (s/it) | 0.097 | - | | | | | prog | state | prog | state | | | | ASR Repair + | State EM | 28.3% | 29.5% | 38.6% | 55.1% | | Interpretation | Program EM | 28.3% | - | 41.9% | - | | Runtime (s/it) | 1.28 | 3.46 | 5.32 | 6.92 | | intermediate programs, then we also measure **program exact match (EM)** between the predicted program and the gold program. Results The bottom of Table 2 shows these results. All results are from single runs of the model. GPT3 generally outperforms T5, likely due to its larger-scale pretraining. When we evaluated ASR repair and interpretation separately in Appendices D.2 and D.3, we found that GPT3 was better than T5 at both ASR repair and interpretation. Furthermore, we find that *both GPT3 and T5 are* better at directly generating states (55.1 vs. 38.6 state EM and 29.5 vs. 28.3 state EM). However, the gap is larger for GPT3. We suspect that GPT3 has a better prior over well-formed English text and can more easily generate edited documents d directly, without needing the abstraction of an intermediate program. T5-base, on the other hand, finds it easier to learn the distinctive (and more direct) relationship between u and the short program p. Other than downstream data distribution shift, we hypothesize that program accuracy is lower than state accuracy because the interpretation model is trained mostly on *auto-generated* program annotations, and because the execution engine is imperfect. We anticipate that program accuracy would improve with more gold program annotations and a better execution engine. ## 6.3 Efficiency Table 2 reports runtimes for each component. This allows us to identify bottlenecks in the system and consider trade-offs between model performance ![8_image_0.png](8_image_0.png) and efficiency. We see that segmentation is generally quick and the ASR repair and interpretation steps are the main bottlenecks. The T5 model also runs much faster than the GPT3 model,24 despite performing significantly worse, indicating a tradeoff between speed and accuracy. Figure 4 shows that by generating programs instead of states, we achieve faster runtimes (as the programs are shorter), at the expense of accuracy. ## 7 Conclusion Most current speech input systems do not support voice editing. Those that do usually only support a narrow set of commands specified through a fixed vocabulary. We introduce a new task for *flexible* invocation of commands through natural language, which *may be interleaved with dictation*. Solving this task requires both *segmenting* and *interpreting* commands. We introduce a novel data collection framework that allows us to collect a pilot dataset, TERTiUS, for this task. We explore tradeoffs between model accuracy and efficiency. Future work can examine techniques to push out the Pareto frontier, such as model distillation to improve speed and training on larger datasets to improve accuracy. Future work can also look at domains outside of (work) emails, integrate other types of text transformation commands (*e.g.*, formatting), and may allow the system to respond to the user in ways beyond updating the document. 24Note that GPT3 is called via an external API, while T5 is run on a local GPU. GPT3 runtimes thus include an unknown communication overhead, which will not be present when run on local hardware. ## 8 Limitations TERTiUS is a pilot dataset. In particular, its test set can support segment-level metrics, but is not large enough to support reliable dialogue-level evaluation metrics. Due to resource constraints, we also do not report inter-annotator agreement measurements. While we made effort to make our interface low-friction, the demonstration setting still differs from the test-time scenario it is meant to emulate, and such a mismatch may also result in undesired data biases. Because our dialogues were collected before having a trained interpretation model, trajectories always follow gold interpretations. Because of this, the main sources of errors are ASR misdetections or user speech errors. In particular, TERTiUS contains data on: 1. misdetections and speech errors in transcription, and how to fix them through commands, 2. misdetections and speech errors in edits, and what intent they correspond to. We leave to future work the task of addressing semantic errors and ambiguities which result from incorrect interpretation of user intent. Some of these limitations can be addressed by incorporating trained models into the demonstration interface, which will allow faster demonstration, and capture trajectories that include actual system (non-gold) interpretations. Though the trained system runs, we have not done user studies with it because it is not production-ready. The T5-base models are efficient enough, but the prompted GPT3 model is too slow for a responsive interactive experience. Neither model is accurate enough at interpretation. We welcome more research on this task! When a human dictates to another human, interleaved corrections and commands are often marked prosodically (by pitch melody, intensity, and timing). Our current system examines only the textual ASR output; we have given no account of how to incorporate prosody, a problem that we leave to future work. We also haven't considered how to make use of speech lattices or n-best lists, but they could be very useful if the user is correcting our mistranscription—both to figure out what text the user is referring to, and to fix it. ## 9 Impact Statement This work makes progress toward increasing accessibility for those who cannot use typing inputs. The nature of the data makes it highly unlikely that artifacts produced by this work could be used (intentionally or unintentionally) to quickly generate factually incorrect, hateful, or otherwise malignant text. The fact that all speakers in our dataset were native speakers of American English could contribute to exacerbating the already present disparity in usability for English vs. non-English speakers. Future work should look to expand the diversity of languages, dialects, and accents covered. ## References Amazon. 2014. Amazon Alexa. Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel. 2018. Visionand-language navigation: Interpreting visuallygrounded navigation instructions in real environments. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3674– 3683. ## Apple. 2011. Siri. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. *Transactions of the Association for Computational Linguistics*, 1:49–62. Shiri Azenkot and Nicole B. Lee. 2013. Exploring the use of speech input by blind people on mobile devices. Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility. Mohammad Bavarian, Angela Jiang, Heewoo Jun, and Henrique Pondé. 2022. New gpt-3 capabilities: Edit & insert. [Online; posted 15-March-2022]. S.R.K. Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In *Proceedings of* the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 82–90, Suntec, Singapore. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašic. 2018. ´ MultiWOZ - a largescale multi-domain Wizard-of-Oz dataset for taskoriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In *Proceedings of the* Twenty-Fifth AAAI Conference on Artificial Intelligence, AAAI'11, page 859–865. AAAI Press. Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao, Ekaterina Gonina, Navdeep Jaitly, Bo Li, Jan Chorowski, and Michiel Bacchiani. 2018. State-of-the-art speech recognition with sequence-to-sequence models. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), page 4774–4778. IEEE Press. Chris Donahue, Mina Lee, and Percy Liang. 2020. Enabling language models to fill in the blanks. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 2492– 2501, Online. Association for Computational Linguistics. Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 3393–3402, Florence, Italy. Association for Computational Linguistics. Debjyoti Ghosh, Can Liu, Shengdong Zhao, and Kotaro Hara. 2020. Commanding and re-dictation: Developing eyes-free voice-based interaction for editing dictated text. ACM Transactions on Computer-Human Interaction, 27. Google. 2016. Google Assistant. Peter Heeman and James F Allen. 1994. Detecting and correcting speech repairs. In *Proceedings of the 32nd* Annual Meeting of the Association for Computational Linguistics, pages 295–302, Las Cruces. New Mexico State University. Peter A. Heeman and James Allen. 1999. Speech repairs, intonational phrases and discourse markers: Modeling speakers' utterances in spoken dialog. Computational Linguistics, 25(4):527–572. Mark Johnson and Eugene Charniak. 2004. A TAGbased noisy-channel model of speech repairs. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 33–39, Barcelona, Spain. D. Jurafsky and J.H. Martin. 2009. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall series in artificial intelligence. Pearson Prentice Hall. Bryan Klimt and Yiming Yang. 2004. The Enron corpus: A new dataset for email classification research. In Proceedings of the 15th European Conference on Machine Learning, ECML'04, page 217–226, Berlin, Heidelberg. Springer-Verlag. Anuj Kumar, Tim Paek, and Bongshin Lee. 2012. Voice typing: A new speech interaction model for dictation on touchscreen devices. In Proceedings of CHI, 2012,, pages 2277–2286. ACM. Eric Malmi, Yue Dong, Jonathan Mallinson, Aleksandr Chuklin, Jakub Adamek, Daniil Mirylenka, Felix Stahlberg, Sebastian Krause, Shankar Kumar, and Aliaksei Severyn. 2022. Text generation with textediting models. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts*, pages 1–7, Seattle, United States. Association for Computational Linguistics. Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5054–5065, Hong Kong, China. Association for Computational Linguistics. Arthur E. McNair and Alex Waibel. 1994. Improving recognizer acceptance through robust, natural speech repair. In *Proc. 3rd International Conference on* Spoken Language Processing (ICSLP 1994), pages 1299–1302. Microsoft. 2022. Dictation for Microsoft Word. Microsoft Azure. 2022. Cognitive speech services. Dipendra Misra, John Langford, and Yoav Artzi. 2017. Mapping instructions and visual observations to actions with reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1004–1015, Copenhagen, Denmark. Association for Computational Linguistics. Nuance. 1997. Dragon NaturallySpeaking. Nuance. 2022. Dragon Speech Recognition Solutions. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. *PyTorch: An Imperative Style, High-Performance Deep Learning Library*. Curran Associates Inc., Red Hook, NY, USA. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale weak supervision. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2022. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1). Machel Reid and Victor Zhong. 2021. LEWIS: Levenshtein editing for unsupervised text style transfer. In *Findings of the Association for Computational* Linguistics: ACL-IJCNLP 2021, pages 3932–3944, Online. Association for Computational Linguistics. Stephane Ross, Geoff J. Gordon, and J. Andrew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of AISTATS. Semantic Machines, Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H. Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, and Alexander Zotov. 2020. Task-oriented dialogue as dataflow synthesis. *Transactions of the Association for Computational Linguistics*, 8:556–571. Matthias Sperber, Graham Neubig, Christian Fügen, Satoshi Nakamura, and Alex Waibel. 2013. Efficient speech transcription through respeaking. pages 1087– 1091. Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R. Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In *Proceedings of the* Twenty-Fifth AAAI Conference on Artificial Intelligence, AAAI'11, page 1507–1514. AAAI Press. Keith Vertanen and Per Ola Kristensson. 2009. Automatic selection of recognition errors by respeaking the intended text. In ASRU '09: IEEE Workshop on Automatic Speech Recognition and Understanding, pages 130–135. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, and G. Zweig. 2016. Achieving human parity in conversational speech recognition. John M. Zelle and Raymond J. Mooney. 1993. Learning semantic grammars with constructive inductive logic programming. In Proceedings of the Eleventh National Conference on Artificial Intelligence, AAAI'93, page 817–822. AAAI Press. Luke Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 976–984, Suntec, Singapore. Association for Computational Linguistics. Jiyang Zhang, Sheena Panthaplackel, Pengyu Nie, Junyi Jessy Li, and Milos Gligoric. 2023. Coditt5: Pretraining for source code and natural language editing. In *Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering*, ASE '22, New York, NY, USA. Association for Computing Machinery. Jiawei Zhou, Jason Eisner, Michael Newman, Emmanouil Antonios Platanios, and Sam Thomson. 2022. Online semantic parsing for latency reduction in task-oriented dialogue. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554–1576, Dublin, Ireland. Association for Computational Linguistics. ## A Dataset A.1 Asr Results Types of segments Below we describe the types of ASR results we collect in TERTiUS. As dialogues are uttered, we obtain a stream of timestamped partial and full ASR results from MSS. Examples of partial and full ASR results can be found below: 0:00.00: *attached* 0:00.30: *attached is* 0:00.60: *attached is the* 0:01.05: *attached is the draft* 0:02.15: *Attached is the draft.* The first four lines are partial ASR results u partial that are computed quickly and returned by MSS in real time as the user is speaking. The last line is the final ASR result, which takes slightly longer to compute, but represents a more reliable and polished ASR result. After a final result u final has been computed, it obsolesces prior partial ASR results. While not used in present experiments, collecting partial ASR results enables building an incremental system that can be faster and more responsive in real time; rather than waiting for ends of sentences to execute commands, a system can rely on partial ASRs to anticipate commands ahead of time (akin to Zhou et al. (2022)). Collecting timing information is also helpful for evaluating the speed of our system: the system runtime continges on the rate at which it obtains new ASR results and how long it takes to process them. Furthermore, MSS additionally returns n-best lists for each final ASR results. These are a list of candidate final ASRs that may feasibly correspond with user audio, *e.g.*, Attached is the draft. Attached his draft. Attacked is the draft. · · · Aggregation segments For long user audio streams, partial and final results are returned sequentially, each describing roughly a single sentence. The most recent ASR result is concatenated together with the previous history of final ASR results, to return the full partial or final ASR result for the entire stream. For example, after the user utters the first sentence in the example above, the user may continue by saying: please please re please review please review win please review when pause please review when possible Please review when possible. We concatenate each of these new ASR results with the previous final ASR results to obtain the current transcript U (see §3),which evolves over time as follows: Attached is the draft. please Attached is the draft. please re Attached is the draft. please review Attached is the draft. please review win Attached is the draft. please review when pause Attached is the draft. please review when possible Attached is the draft. Please review when possible. Segmenting ASR results into Segments During Annotation During annotation (§4.1), all these partial and final ASR results get mapped to segments, forming u final iand u partial i. This is done by identifying the *timestamp of each token* within each partial and final result. For example, in the example ASR results sequence at the beginning of this section A.1, suppose the user specifies an segment boundary at time 0:00.45, (separating "Attached is" from "*the draft.*"). We get the following ASR results for the first segment: attached attached is Attached is (we refer to the first two as partial ASRs for the segment, as they are derived from partial ASR, and the third as the final ASR for the segment), and the following ASR results for the second segment: the the draft the draft. ## A.2 Annotation Instructions (§4.1) The full text of written instructions given to annotators during the first round of annotation (§4.1) is provided below: ## 1. **Transcribing** Your goal is to replicate the prompt in the target box verbatim / expand the prompt in the yellow textbox into a coherent email, starting from the given (potentially non-empty) starting document in the 'Transcription output' box. You are expected to do so using a series of speech-to-text transcriptions and commands. Try to use the starting document as much as possible (i.e. do not delete the entire document and start over). You can easily see what changes are to be made by toggling the 'See Diff View' button. Once that mode is on, the text you need to add will be highlighted in green, while the text you need to delete will by highlighted in red. Once there is no colored text, your text box matches the target text box and you are done. Begin this process by hitting the 'Begin transcription' button. This will cause a new 'insertText' command to appear in the command log on the right. You are now in transcription mode. Whatever you say will appear in the 'Transcription output' box. ## 2. **Editing** You can fix mistakes in transcription, add formatting, etc. through adding 'editText' commands. Hold down 'ctrl' on your keyboard to issue a new 'editText' command. While holding down 'ctrl' you will be in edit mode. In this mode, you can manually use mouse-and-keyboard to change the output. However, you must describe the edit you are making before you make it. Begin by describing your edit using your voice. Whatever you say now will appear in the editText ASR box, but not in the 'Transcription output'. Because the ASR system is imperfect, the textual description may be faulty. Fix any mistakes in the detected speech in the 'Gold ASR' box. Finally, manually edit the 'Transcription output' box to correspond the effect of your edit command. Note: It is important that you vocalize your change before making any edits to either 'Gold ASR' or 'Transcription output', as the ASR system stops recording as soon as you click into either one of these boxes. ## 3. **Undoing, Reseting, Submitting, & Saving** You can click on previous commands in the command log to revisit them. Note that if you edit the output associated with a 'editText' prior in the history, you will erase the changes associated with subsequent 'editText' operations. If you would like to undo some portion of command log, you can use the 'Delete Selected Command & Afterwards' button. Simply click on the first command you would like to remove, then click the button to remove that command and all commands after it. You can clear the entire command log by hitting "Reset". If you would like to work on transcribing another target, use the green arrow keys below the target. This will present you with a new target while saving progress on your current target. To delete a target prompt, press the red 'X'. Once you are done editing, click "Submit" button. Please double-check each command before submission! In particular, commands will appear red if they are potentially problematic (e.g. they are not associated with any change to the underlying text). Please check to make sure there are no red commands that you do not intend to be there! ## A.3 Target Text Preprocessing For replicating **Enron** emails, we process emails from the Enron Email Dataset to create our target final states. We break the email threads into individual emails, filtering out email headers and non-well-formed emails (emails that are either less than 50 characters or more than 5000 characters long, or contain too many difficult-to-specify nonEnglish symbols). Annotators also had the option to skip annotating certain emails, if they found the email too difficult to annotate. ## A.4 Annotation Programs Examples of programs can be found below: | Actions | Constraints & Combinators | Command action | # of distinct | # of distinct | |------------------------------------------------|-----------------------------|------------------|-----------------|-----------------| | first tokens | first tokens | | | | | (TERTiUS) | (DNS) | | | | | capitalize | 12 | 2 | | | | replace | 83 | - | | | | delete | 22 | 5* | | | | quote | 2 | 1 | | | | parenthesize | 3 | 1 | | | | do | 44 | - | | | | insert | 51 | 1 | | | | correction | 22 | 1 | | | | lowercase | 12 | 1 | | | | allCaps | 8 | 1 | | | | spell | 17 | 1 | | | | move | 3 | - | | | | respell | 1 | - | | | | combineSentences | 7 | - | | | | moveCursor | 3 | 1 | | | | combine | 1 | - | | | | combineSentences | union | between | | | | parenthesize | or | endsWith | | | | allCaps | and | at | | | | do | in | atStart | | | | respell | nthToLast | atEnd | | | | delete | nth | exactly | | | | spell | findAll | hasSubstring | | | | capitalize | thePosition | passage | | | | combine | theText | line | | | | quote | empty | sentence | | | | lowercase | extra | parenthetical | | | | move | nextTo | phrase | | | | moveCursor | take | word | | | | replace | contains | letter | | | | insert | before | text | | | | correction | after | like | | | | startsWith | alwaysTrue | | | | | Table 3: List of functions present in TERTiUS. | | | | | 1. ASR: *Lower case the W in the word when.* Program: ![14_image_0.png](14_image_0.png) $$(\begin{array}{l}{{(\mathrm{word})}}\\ {{(\mathrm{like~"when")~)~)~)~)}}}\end{array}$$ 2. ASR: *Get rid of the space in between the two* words off site and replace that with a -. Program: (replace (theText (and xt ( like " ") between (theText (like "off")) (theText (like "site"))) ## "-") A.5 Dataset Analysis To assess the diversity of *state changes*, we quantify the number of distinct actions, *constraints*, and constraint combinators (see §4.2) that appear in the annotated programs. In Table 3, we list out all actions, constraints, and constraint combinators present in TERTiUS. TERTiUS contains at least 15 types of actions (and allows for action composition with sequential chaining operation do), with 34 types of constraint and constraint combinators. In Table 4, we approximate the invocation diversity represented in TERTiUS, by measuring the number of distinct first tokens used to invoke each type of actions. For actions that overlap in function with ones supported by DNS, we also report a similar diversity metric against the full set of trigger words supported by DNS.25 ## B Running Online When running the system online in real time, we must consider efficiency and usability. We introduce a "commit point" that signifies that the system cannot re-segment, re-normalize, or reinterpret anything before that point. We only want to consider recent ASR results because the system quickly becomes inefficient as the dialogue length grows (the interpretation step, which is the bottleneck of the system, must run for every single command.) Furthermore, users often refer to and correct only recent dictations and commands; reverting early changes can have potentially large and undesirable downstream effects, leaving users potentially highly confused and frustrated. Concretely, the commit point is implemented as the system treating the document state at that point as the new "initial state," so that it is unable to access segments and the history of document states from before that point. We implement this 25https://www.nuance.com/asset/en_us/ collateral/dragon/command-cheat-sheet/ ct-dragon-naturally-speaking-en-us.pdf point so that it must coincide with the end of a final ASR result. We feed into the system this state as the initial state, and the entire sequence of ASR results starting from that point. All dictations and command segments returned by the model are executed in sequence from the commit point. We decide to set a commit point based on system confidence and time since last commit. System confidence is derived from the confidences of each component model at each step of the prediction. We measure the system confidence of the end state predicted by the system, by summing the logprobabilities of: 1. the segmentation model result, (summing the log-probabilities of each BIOES tag predicted for each token), 2. the ASR repair model result for each command (log-probability of the resulting sentence), 3. the interpretation model result for each command (the log-probability of the end state or program). Once the system confidence exceeds a threshold τcommit, we decide to commit immediately at that point. Otherwise, if we have obtained more than 4 final ASR results since the last commit, we must commit at our most confident point from within the last 4 turns. ## C Model Training Details In this section, we describe how we trained each component of the system. See §5 for a description of the inputs, outputs, and architecture of each model. Our final system is *incremental*, able to process both partial and final ASR results. ## C.1 Segmentation Model We use BIOES for the segmentation model. Note that we cannot just predict a binary command/dictation tag for each token, because it would be unable to discern two consecutive commands from one continuous command. Thus, we need to use B to specify the beginning of a new command segment. E is also necessary for the model to predict whether the final segment, in particular, is an incomplete and ongoing (requiring the ASR repair model to predict the future completion) or complete (requiring the ASR repair model to only correct errors). We expect in the final online version of the endto-end system, the segmentation model will: 1. run often, being able to accept and segment both partial and final ASR results, 2. run on only the most recent ASR, to avoid completely resegmenting an entire document that's been transcribed. Thus, we construct the training data for this model in a way to simulate these conditions. We extract all sequences of turns of length between 1 - 4 from TERTiUS (capping to at most 4 for condition 2), take their segments u, and concatenate them to simulate U, asking the model to segment them back into their individual u. For the final turn of each chosen sequence, we include in the training data both the final ASR result and all partial ASR results. We fine-tune on this data with a learning rate of 1e-4 and batch size of 4 until convergence. ## C.2 Asr Repair & Interpretation Models Below we describe the concrete implementations and training details of each model: T5 In the T5 implementation, both MNOR and MINT are T5-base encoder-decoder models. As described in §4.4, we do not have annotations of programs for the full training split. Thus, we automatically generate the missing programs using GPT3. We have an initial training reservoir that consists solely of data points with program annotations Dannot. For each example in the remaining training set, we retrieve a subset of samples from Dannot to form the prompt. We also use GPT3 for this retrieval step26. We then annotate programs in the remaining training set in an iterative manner: as new programs are annotated, we use the execution engine to check whether it executes to the correct end state, and if so, we add it to Dannot, such that future examples can include these programs in their prompt. GPT3 In the GPT3 implementation, both the ASR repair and interpretation steps occur in a single inference step, with GPT3 being prompted to predict both outputs in sequence. Specifically, it is prompted with: [ `Input State:` ] $d_{i-1}$ `[Utterance ASR:` ] $u_{i}^{\prime}$ `[Gold Utterance:` ] $u_{i}$ `[Final State:` ] $d_{i}$ `[` ` ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) The model is shown demonstrations in this format from the training data, then asked to infer, for each test sample, the highlighted portions from the non-highlighted portions. 26we compute similarity between two prompts by looking at the the similarity over next-token distributions when conditioned on each of the prompts | Metric | T5 | GPT3 | | |----------------|----------|--------|------| | ASR Repair | EM | 47.3 | 70.7 | | Program EM | 36.1 | - | | | Interpretation | State EM | 33.7 | 54.2 | Table 5: We evaluate the ASR repair and interpretation components in isolation. We experiment with a finetuned T5 vs. a prompted GPT3 model. In the setting that we are predicting programs instead of end states, the final 2 lines are replaced with ## [Lispress:] ℓi D Results D.1 Segmentation We run all the error analyses in this section on a model trained and tested exclusively on the Replicate doc task (where annotators were asked to replicate emails from the Enron Email Dataset). We do not evaluate the segmentation model on all of the transcripts that arise during a trajectory, many of which are prefixes of one another. Doing so would pay too little attention to the later segments of the trajectory. (F1 measure on the final transcript will weight all of the segments equally, but F1 measure on the earlier transcripts does not consider the later segments at all.) Instead, we create an evaluation set of shorter transcripts. For each trajectory, we form its final full transcript by concatenating all of its final ASR result results. Each sequence of up to 4 consecutive gold segments of this full transcript is concatenated to form a short transcript that the segmentation model should split back into its gold segments. For example, if the full transcript consists of 8 gold segments, then it will have 8 + 7 + 6 + 5 evaluation examples of 1 to 4 segments each. ## D.1.1 Error Analysis Below, we list some examples of segmentation errors ([·] is used to specify segment boundaries, yellow-highlighted segments correspond to command segments, while non-highlighted segments correspond to dictation segments). 1. **Input:** *Take out the word it. Before the word* should. And then replace third with three. True Segmentation: [Take out the word it. Before the word should. And then replace third with *three.*] ## Pred Segmentation: [Take Out The Word It.] [Before The Word Should. And Then Replace Third With Three.] 2. **Input:** *You learned. You lie not you learned.* True Segmentation: [*You learned.*] [You lie not you *learned.*] Pred Segmentation: [You learned. You lie not you learned.] 3. **Input:** Skillings calendar is amazingly full! Let's shoot for one of the following.Skillings should be apostrophe s Let's schedule it ASAP. True Segmentation: [*Skillings calendar is* amazingly full! Let's shoot for one of the following.] [Skillings should be *apostrophe* s] [*Let's schedule it ASAP.*] Pred Segmentation: [Skillings calendar is amazingly full! Let's shoot for one of the following.Skillings should be apostrophe s Let's schedule it ASAP.] These examples illustrate two prototypical modes of errors: (i) the ASR system making erroneous judgments about sentence boundary locations, leading the segmentation model astray, and (ii) commands being phrased in ways that disguise them as dictations. The first example illustrate error type (i): the ASR system oversegments the input (which should've been a single sentence) into three separate sentences, consequently leading the segmentation system to believe "*Take out the word it*" and "*Before the word should...*" are separate commands. The second example illustrates error type (ii): "You lie not you learned." is supposed to be a replace command indicating "*You learned*" should be replaced with "*You lie*", but it is not phrased as an explicitly command. Finally, the third example illustrates both error types: we see that the ASR system undersegments the input and combines the sentence "*Skillings should be apostrophe s*" with the sentence "*Let's schedule it ASAP*" without a period. Combined with the fact that "*Skillings should* be apostrophe s" is not issued explicitly as a command, this confuses the segmentation model into thinking that it is in fact part of the dictation. ## D.1.2 Success Cases: Fixing Erroneous Segmentations The above examples illustrated certain cases where the segmentation model was misled by erroneous ASR judgments about sentence boundary locations. In some cases, however, the segmentation model is able to fix these judgements, as shown below: 1. **Input:** *Take out the extra space. In between* the two words, but and should. True/pred Segmentation: [Take out the extra space. In between the two words, but and should.] ## 2. **Input:** Replace The Period. With A Comma After Restructuring. True/Pred Segmentation: [Replace The Period. With A Comma After Restructuring.] D.2 Asr Repair Metrics To measure the ASR repair step in isolation, we take noisy utterances ui corresponding to each command and measure to what extent we are able to reconstruct the ground-truth utterance. We measure the percent of ui for which our predicted repaired utterance exactly matches the ground-truth utterance (EM). Results From Table 5, we see that the GPT3 model is much better at repairing speech disfluencies and ASR errors than the T5 model, achieving 70% EM. We suspect this is due to the fact that GPT3 was pretrained on much more (English) language data than T5, giving GPT3 a greater ability to produce grammatically coherent and permissible English sentences, and likely also a better sense of common disfluencies. Qualitative Analysis Recall that we designed the ASR repair step to condition on not just the utterance ui but the state di−1. This allows it take di−1 into account when repairing ui. For example, when given the following utterance: Delete the period after events. An ASR repair model that looks at ASR alone may not see any issue with this utterance. However, given the document state: Eric, I shall be glad to talk to you about it. The first three days of the next week would work for me. Vince. (note the word *events* does not appear anywhere in this text), the ASR repair model realizes that the actual utterance should've been, Delete the period after Vince. Indeed, the T5 ASR repair model is able to make the appropriate correction to this utterance. ## D.3 Interpretation Metrics To measure the interpretation step in isolation, we take normalized utterances u′i corresponding to each command and measure to how well the interpretation model is able to reconstruct the ground-truth final state for the command di. We use the same set of metrics described in §6.2 (state EM, program EM). However, insteading of feeding the interpretation model ASR repair results, we feed in ground-truth utterances u. Results We evaluate a T5 interpretation model that produces programs (which is then executed by our execution engine) and a GPT3 interpretation model that directly generates states. Results are reported in Table 5. We can also compare these isolated interpretation results with the joint ASR and interpretation results reported in Table 2. Due to error propagation, the T5 model is ∼5–8% worse when asked to jointly perform ASR repair and interpretation from noisy ASR, than when simply asked to interpret normalized utterances. Surprisingly however, the GPT3 model performs nearly as well in the joint evaluation as the isolated evaluation. We suspect that even if the GPT3 ASR repair model does return the exactly normalized utterances, it is still able to reconstruct a semantically equivalent/similar command. ## E Infrastructure And Reproducibility We trained 220M-parameter T5-base model on a single NVIDIA Tesla A100 GPU machine. Each training run for each component of the model took at most a few hours (<8). We also prompted a 12B-parameter GPT3 model. We used PyTorch (Paszke et al., 2019) and Huggingface Transformers (Wolf et al., 2020) for implementing and training T5-base models. We use OpenAI's API27 for querying GPT3. We use the text-davinci-003 model. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 ✓ A2. Did you discuss any potential risks of your work? Section 9 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract; Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4.1 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We are unsure of the license. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Enron was intended for research purposes, according to http://www.cs.cmu.edu/ enron/ ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The Enron email dataset creators took steps to anonymize. We did not take further steps. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.3 ## C ✓ **Did You Run Computational Experiments?** Section 5,6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix D The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5,6, Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix D D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 4 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 4, Appendix A.2 ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Disclosing this information may risk anonymity ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 4.1 ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? The dataset collection process posed no risks to annotators. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 4.3
li-etal-2023-codeie
{C}ode{IE}: Large Code Generation Models are Better Few-Shot Information Extractors
https://aclanthology.org/2023.acl-long.855
Large language models (LLMs) pre-trained on massive corpora have demonstrated impressive few-shot learning ability on many NLP tasks. A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it. However, it is nontrivial to perform information extraction (IE) tasks with NL-LLMs since the output of the IE task is usually structured and therefore is hard to be converted into plain text. In this paper, we propose to recast the structured output in the form of code instead of natural language and utilize generative LLMs of code (Code-LLMs) such as Codex to perform IE tasks, in particular, named entity recognition and relation extraction. In contrast to NL-LLMs, we show that Code-LLMs can be well-aligned with these IE tasks by designing code-style prompts and formulating these IE tasks as code generation tasks. Experiment results on seven benchmarks show that our method consistently outperforms fine-tuning moderate-size pre-trained models specially designed for IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further conduct a series of in-depth analyses to demonstrate the merits of leveraging Code-LLMs for IE tasks.
# Code**Ie: Large Code Generation Models Are Better Few-Shot Information** Extractors Peng Li1,∗ Tianxiang Sun2,∗ Qiong Tang2 **Hang Yan**2 Yuanbin Wu3 Xuanjing Huang2 **Xipeng Qiu**2,† 1Academy for Engineering & Technology, Fudan University 2School of Computer Science, Fudan University 3School of Computer Science and Technology, East China Normal University {lip21,qtang22}@m.fudan.edu.cn, ybwu@cs.ecnu.edu.cn {txsun19,hyan19,xjhuang,xpqiu}@fudan.edu.cn ## Abstract Large language models (LLMs) pre-trained on massive corpora have demonstrated impressive few-shot learning ability on many NLP tasks. A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it. However, it is nontrivial to perform information extraction (IE) tasks with NL-LLMs since the output of the IE task is usually structured and therefore is hard to be converted into plain text. In this paper, we propose to recast the structured output in the form of code instead of natural language and utilize generative LLMs of code (Code-LLMs) such as Codex to perform IE tasks, in particular, named entity recognition and relation extraction. In contrast to NL-LLMs, we show that Code-LLMs can be well-aligned with these IE tasks by designing code-style prompts and formulating these IE tasks as code generation tasks. Experiment results on seven benchmarks show that our method consistently outperforms fine-tuning moderate-size pre-trained models specially designed for IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further conduct a series of in-depth analyses to demonstrate the merits of leveraging Code-LLMs for IE tasks.1 ## 1 Introduction Information extraction (IE) aims to recognize structured information from plain text. It spans various tasks with diverse output structures such as named entity recognition (NER), relation extraction (RE), etc. (Sang and Meulder, 2003; Grishman, 2019; Wang et al., 2021a; Zhong and Chen, 2021; Lu et al., 2022). To express and address these different tasks in a unified framework, recent works propose to linearize the output structures into unstructured ![0_image_0.png](0_image_0.png) Figure 1: Illustrations of performing structured NER task with NL-LLMs and Code-LLMs, respectively. In contrast to prompting NL-LLMs with plain natural language, we utilize Code-LLMs with structured code-style prompts to mitigate the output discrepancy between the pre-training and inference stages. strings and solve the IE tasks with sequence generation models (Yan et al., 2021b; Huguet Cabot and Navigli, 2021; Paolini et al., 2021; Josifoski et al., 2022; Lu et al., 2022). For example, given the input sentence "Steve became CEO of Apple in 1998 ." of a NER task, UIE (Lu et al., 2022) generates the target as a sequence "((person: Steve) (organization: Apple))". While this kind of linearizing approach achieves promising results with sufficient training data, it still performs poorly under the few-shot scenario. For instance, compared with full-data training, the performance dropped by around 20% when applying UIE on a 5-shot NER task CoNNL03 (Lu et al., 2022). Considering the tremendous few-shot adapting capabilities of large language models (LLMs) (Brown et al., 2020; Rae et al., 2021; 15339 Chowdhery et al., 2022; Hoffmann et al., 2022), we manage to employ them to perform few-shot IE tasks, especially the few-shot NER task and RE task. Typically, for NLP tasks like text classification, previous works reformulate them into text-to-text generation formats and prompt the LLMs of natural language (NL-LLMs) like GPT-3 (Brown et al., 2020) to generate the answer. In contrast, due to the complex structure inside the targets of IE tasks, linearized targets of previous works like "((person: Steve) (organization: Apple))" are usually "unnatural", resulting in a mismatch between the output format at the pre-training time and the inference time (see Figure 1(a)). As a consequence, when using these flattening methods to perform IE tasks with pre-trained language models, the predicted outputs are fragile and often require complex decoding strategies to be post-processed into valid structures (Lu et al., 2022; Josifoski et al., 2022). In this paper, we propose to frame these two IE tasks into code generation tasks and leverage the LLMs of code (Code-LLMs) to address them. We argue the abundant structured code information encoded in the pretrained Code-LLMs can benefit these IE tasks. As demonstrated in Figure 1(b), it is easy to convert the text-to-structure IE task into a structure-to-structure code generation task, while transforming it into a text-to-text format can be difficult. Take the example input in Figure 1, *"Steve became CEO of Apple in 1998* .", we wrap it into a piece of Python code, and formulate the structured entity outputs as Python dictionaries with keys "text" and "type". We compose them into a Python function that is semantically equivalent to the NER example, which is shown as follows: ![1_image_0.png](1_image_0.png) After demonstrating a few training samples with the same format, we feed the code-style prompt (the highlighted lines with light grey color) into Code-LLMs and get the structured prediction. We conduct experiments on seven benchmarks of NER and RE tasks, and carefully analyze the benefits of our approach (named CODEIE). The findings are as follows: | Generative? Extremely Structured | | | | | |------------------------------------|------------|------------|----|----| | Model | Pre-train? | Few-Shot | | | | Large? | NER and RE | | | | | Type | Tasks | | | | | Unified | Few-shot | Structured | | | | Framework | Learning | Task | | | | Pre. Models (e.g., UIE) | ! | % | ! | % | | (e.g., GPT-3) | ! | ! | % | % | | NL-LLMs Code-LLMs (e.g., Codex) | ! | ! | ! | ! | 1) Prompting Code-LLMs (e.g., Codex (Chen et al., 2021)) with code-style inputs consistently outperforms fine-tuning UIE, a specially pre-trained model for IE tasks, and prompting NL-LLMs (e.g., GPT-3) under fewshot settings. 2) With the same LLM (either NL-LLM or CodeLLM), the code-style prompt performs better than the linearized text prompt, demonstrating the advantage of representing structured targets with code. 3) With the same prompt (either natural language or code), the Code-LLM (i.e., Codex) achieves better performance than the NLLLM (i.e., GPT-3), demonstrating the merits of performing IE tasks with Code-LLMs. 4) Compared with natural language prompts, using the code-style prompts showed higher fidelity to the output structures, i.e., the outputs have a lower structural error rate. The high-level differences between previous moderate-size models, NL-LLMs, and Code-LLMs for IE tasks are summarized in Table 1. ## 2 Codeie In this section, we first formulate the two IE tasks we focus on, named entity recognition (NER) and relation extraction (RE) in Section 2.1. Then we describe how we recast these structured prediction tasks into code generation tasks (Section 2.2) and prompt Code-LLMs to perform them (Section 2.3) under the few-shot scenario. We use Python language for our code generation tasks since public Python codebases are abundant and Code-LLMs are sufficiently pre-trained on them. ## 2.1 Task Formulation Given an input sentence x with l tokens x1, x2*, . . . , x*l, IE tasks are to predict structured target y from x. ![2_image_0.png](2_image_0.png) The target y of NER is a set of (*e, t*) pairs, where e is an entity span (e.g., "Steve") and t is the corresponding entity type (e.g., "person"). The entity span is a sequence of tokens from x, and the entity type belongs to a pre-defined entity type set T. The prediction target y of RE is comprised of a set of triplets (e1*, r, e*2), where e1 and e2 are two entity spans from x and r ∈ R is the semantic relation (e.g., "work for") between the two entities. Here R denotes a pre-defined relation type set. In addition to extracting the relation of entities, we are often interested in also predicting the entity types t1 and t2 of entities e1 and e2 at the same time. In the few-shot setting, we are given a small set of annotated samples {(xi, yi)} n i=1 that consists of k samples per class to compose a k-shot setting. ## 2.2 Formulating Ie Tasks Into Code Generation Task In order to utilize generative Code-LLMs for IE tasks, we reformulate IE tasks as code generation tasks. The code generation task is to predict the subsequent code sequence given an incomplete piece of code. Hence, we can recast the input and output of the IE task into an incomplete piece of code and the code to be predicted, respectively, such that they can compose a complete piece of code that is semantically equivalent to the original sample while maintaining the syntax of the programming language. In this work, we mainly use Python functions to represent IE tasks. We wrap the input text x into a code-style prompt x cand represent the output structure y with structured Python elements, such as the list, dictionary, etc. As shown in Figure 2, for NER and RE tasks, we first transform the task name into the name of the Python function and add a docstring to illustrate the goal of the task. We assign the input text string x to a variable input_text. Then we initialize an empty list to save the output and append a descriptive comment like "\# extracted named entities" to prompt Code-LLMs to put named entities into the list. We pack the above code as our code prompt x c. For the structured target y, we utilize the append method of Python list and represent each basic information unit (e.g., a pair for NER tasks or a triplet for RE tasks) as a Python dictionary. Hence, the target y cto be predicted by Code-LLMs is reformulated into a list of dictionaries. For NER, we add Python dictionaries with keys "text" and "type" to the list, where the values of the dictionaries are the corresponding entity span and entity type. For RE, we similarly add dictionaries with keys "rel_type", "ent1_type", "ent1_text", "ent2_type", and "ent2_text" to the list to represent the structured target. The Code-LLM is expected to complete the list conditioning on the function name, docstring, and input text. Figure 2 shows examples of formulating an original IE sample into a code-style one. It is worth noting that the design space of the code-style prompt is large and hard to be fully explored. The formulation described above is a straightforward instance using Python. We also explore several other formulations to recast IE tasks into code generation tasks, which can be found in Appendix A.1. ## 2.3 Prompting Code-Llms With In-Context Demonstrations Despite the carefully designed prompt, it is nontrivial to perform IE tasks by prompting CodeLLMs without any samples. Therefore, it is necessary to let Code-LLMs be aware of a few labeled samples in typical few-shot settings. With the increasing size of pre-trained language models, fine-tuning is becoming more and more expensive or even infeasible since recent LLMs are usually released as black-box APIs (Sun et al., 2022). Hence, instead of fine-tuning Code-LLMs on the few-shot dataset, we explore including the labeled samples in the context and performing incontext learning (Brown et al., 2020). We select n samples {(xi, yi)} n i=1 from the training dataset and convert them to corresponding code-style pairs {(x c i , yc i )} n i=1. We concatenate them as a string to compose the in-context demonstrations x c1 ⊕ y c1 . . . xcn ⊕ y cn . Given an arrived test sample x, we first convert it to a code prompt x cand prepend the demonstration context, i.e., x c1⊕y c1 . . . xcn⊕y cn⊕x c. After feeding the constructed input into the CodeLLM, we are expected to get an output y cthat is formatted as the same as y c1 , y c2 , *. . . y*cn (see Figure 2). We find that y calmost always retains the syntax of Python language and is easy to be converted back to its original structure y. ## 3 Experiments 3.1 Setup Datasets We evaluate our approach on NER task with CoNLL03 (Sang and Meulder, 2003), ACE04 (Doddington et al., 2004) and ACE05- E(Walker et al., 2006). For relation extraction, we evaluate on datasets CoNLL04 (Roth and Yih, 2004), ACE05-R (Walker et al., 2006), NYT (Riedel et al., 2010) and SciERC (Luan et al., 2018). Table 2 shows the dataset statistics. We follow Lu et al. (2022) to preprocess all these datasets. Code-LLMs For Code-LLMs, we conduct experiments mainly with the code-davinci-002 |Ents| |Rels| #Train #Val #Test CoNLL03 4 - 14,041 3,250 3,453 ACE04 7 - 6,202 745 812 ACE05-E 7 - 7299 971 1060 CoNLL04 4 5 922 231 288 ACE05-R 7 6 10,051 2,420 2,050 NYT 3 24 56,196 5,000 5,000 SciERC 6 7 1,861 275 551 version Codex from OpenAI. Codex is a large language model adapted from GPT-3 and further pre-trained on open-source codebases. The code-davinci-002 version Codex supports 8k input tokens at most. We get the model predictions by querying OpenAI API2in the few-shot in-context prompting way. We generate up to 280 tokens with greedy decoding. Baselines We compare our approach with two kinds of few-shot learning methods: 1) **Fine-tuning** We fine-tune the base and large versions of two moderate-size pre-trained models: T5 and UIE. T5 is a sequence-tosequence model pre-trained on large-scale text corpora. UIE is a model further pre-trained from T5 on the structured datasets. UIE utilizes the textual structured extraction language (SEL) to express the output structures. We use the same approach and parameters with Lu et al. (2022) when fine-tuning T5 and UIE. 2) **Prompting** We compare our approach with prompting NL-LLMs, in particular GPT-3. We mainly experiment with the text-davinci-002. We use a text prompt, of which the format is slightly modified from SEL. As shown in Figure 1(a), given an input text x, the text prompt and output format are like "The text is x. The named entities in the text: " and "((person: ...)(organization:...))", respectively. See Appendix A.2 for more details of the text prompt. The approach and super-parameters of NL-LLMs prompting and Code-LLMs prompting are identical. 2https://openai.com/api | Entity | Relation | | | | | | | | | |-----------------|-------------|-------------|------------|------------|-------------|------------|------------|-----------|-------| | Model | Prompt Type | CoNLL03 | ACE04 | ACE05-E | CoNLL04 | ACE05-R | NYT | SciERC | AVG | | Full Data | | | | | | | | | | | Pre. SoTA | - | 93.21 | 86.84 | 84.74 | 73.60 | 65.60 | 92.70 | 35.60 | 76.04 | | UIE-large | text | 92.99 | 86.89 | 85.78 | 75.00 | 66.06 | - | 36.53 | - | | Few Shot | | | | | | | | | | | #shot (#sample) | 5 (25) | 2 (16) | 2 (16) | 5 (25) | 2 (14) | 1 (24) | 2 (16) | | | | T5-base | text | 33.68±29.17 | 7.25±12.00 | 9.09±15.74 | 14.56±13.87 | 0.00±0.00 | 5.59±9.68 | 0.00±0.00 | 10.02 | | UIE-base | text | 70.37±0.54 | 44.31±1.61 | 39.71±0.91 | 45.63±1.50 | 8.69±1.41 | - | 5.69±0.49 | - | | T5-large | text | 53.08±7.71 | 24.67±5.26 | 24.31±4.74 | 10.03±8.75 | 1.41±0.74 | 15.29±8.76 | 0.25±0.43 | 18.43 | | UIE-large | text | 70.62±3.22 | 45.08±3.63 | 43.03±2.26 | 47.68±2.29 | 9.59±4.89 | - | 7.30±2.01 | - | | GPT-3 | text | 68.84±1.29 | 45.51±0.23 | 48.93±0.49 | 39.67±2.44 | 5.13±1.24 | 16.07±4.67 | 4.39±0.98 | 32.65 | | GPT-3 | code | 81.00±1.49 | 53.44±1.44 | 52.98±1.53 | 51.33±1.34 | 12.33±2.06 | 24.81±1.90 | 4.67±0.67 | 40.08 | | Codex | text | 72.66±0.66 | 49.58±1.37 | 49.55±1.14 | 47.30±2.25 | 10.08±2.06 | 24.63±6.74 | 5.40±2.65 | 37.03 | | Codex | code | 82.32±0.37 | 55.29±0.37 | 54.82±2.09 | 53.10±2.02 | 14.02±3.27 | 32.17±1.46 | 7.74±1.54 | 42.78 | Few-Shot Setting For each IE task, we randomly sample k training samples for each entity or relation type to construct a k-shot training set. The value of k varies across different datasets to satisfy the maximum length limitation (4097) of GPT-3. To be compatible with datasets that contain samples with empty targets, we regard those empty-target samples as an additional class and include k samples belonging to that class in the training set. Evaluation Same as previous work (Lu et al., 2022), we use **Entity F1** and **Relation Strict F1** as the evaluation metrics for NER tasks and RE tasks, respectively. Under these metrics, an entity span prediction is correct if its offsets and entity type match the golden entity. And a relation prediction is correct if the relation type is correct and the corresponding offsets and types of its entities are correct. Since few-shot training is of high variance, we perform 3 runs with different random seeds for each experiment and report the mean and standard deviation of the metric. ## 3.2 Results LLMs vs. Moderate-sized Models As shown in Table 3, LLMs (GPT-3 and Codex) achieve superior performance over moderate-sized models (T5 and UIE) under few-shot settings, demonstrating a strong few-show learning ability on IE tasks. Especially, on average performance over the seven con- ![4_image_0.png](4_image_0.png) sidered benchmarks, our proposed CODEIE (Codex + code prompt) achieves the best results, improving T5-large and T5-base by 132% and 327%, respectively. In addition, under 1-shot learning settings, CODEIE improves the performance of UIE-large by more than 60% on CoNLL03 and CoNLL04 benchmarks (see Table 6 in the Appendix). Code Prompt vs. Text Prompt We then compare the performance of code prompt vs. text prompt when using the same LLM, i.e., comparing ⟨GPT-3 + text prompt⟩ with ⟨GPT-3 + code prompt⟩ and comparing ⟨Codex + text prompt] with ⟨Codex + code prompt⟩. As a result, we find that prompting LLMs with code yields significant improvement (23% for GPT-3 and 16% for Codex). What is surprising is that code prompt is even more beneficial to GPT-3, which is not specif- | Model | Prompt | Entity | Relation | |--------------|-------------|------------|------------| | Design | CoNLL03 | CoNLL04 | | | GPT-3 | struct lang | 68.84±1.29 | 39.67±2.44 | | natural lang | 46.36±12.56 | 40.90±3.67 | | | func def | 82.32±0.37 | 53.10±2.02 | | | class init | 81.29±0.72 | 52.32±0.94 | | | Codex | func exec | 84.05±1.24 | 53.32±3.47 | | func init- | 81.95±1.01 | 53.59±1.10 | | ## Ically Trained On Code Data. Code-LLMs vs. NL-LLMs When using the same kind of prompt and comparing the used LLMs, i.e., comparing ⟨GPT-3 + text prompt⟩ and ⟨Codex + text prompt⟩ and comparing ⟨GPT-3 + code prompt⟩ and ⟨Codex + code prompt⟩, we find that Codex consistently surpasses GPT-3, demonstrating that code pre-training can be beneficial to IE tasks. Different Shot Numbers We further compare these approaches under different shot numbers on CoNLL03 and CoNLL04. As shown in Figure 3, we can see that the obtained phenomenons still hold when increasing the number of shots. Different Prompt Designs The design of the prompt can be an important factor affecting the model performance (Min et al., 2022). Hence, we explore additional prompt designs for both text prompt and code prompt. The detailed prompt deigns can be found in Appendix A. The experimental results are shown in Table 4, from which we find that code prompts consistently outperform text prompts. Hence, the superior performance of using code prompts is mainly contributed by the code style instead of some specific instance of prompt design. Different LLMs To verify the versatility of the proposed approach and the observed findings, we further conduct experiments with text-davinci-001 version of GPT-3 and code-davinci-001 version of Codex. As shown in Table 7, the previous findings still hold across the two different versions. ## 4 Analysis To take a closer look at the difference between prompting NL-LLMs with textual format input and ![5_image_0.png](5_image_0.png) prompting Code-LLMs with code format input, in this section, we define several informative metrics and conduct in-depth analyses to shed some light on the following question: *what contributes to the* final performance of CODEIE *for IE tasks?* ## 4.1 Format Consistency We can see from Figure 1(a) that an apparent inappropriateness to use NL-LLMs for IE tasks is the inconsistency between the structured output format at inference time and NL-LLMs that are trained on natural language at pre-training time, while the format of code-style output aligns well with Code-LLMs. It has been evidenced that adapting pre-trained models to downstream tasks in a manner that is well aligned with it pre-training paradigm usually achieves better few-shot learning performance. Hence we assume *the promising performance of* CODEIE *partly comes from the better* format consistency between the code-style sample and the pretrained code model. To verify this hypothesis, given a sample, we compare the perplexities of a pre-trained language model on its text format and a pre-trained code model on its code format. Formally, given a generative model M, the conditional perplexity ppl of a sample (*x, y*) is as follows, $$p p l_{M}(y|x)=\prod_{i=1}^{m}P_{M}(y_{i}|y_{1}\cdots y_{i-1},x)^{-{\frac{1}{l}}}.\quad(1)$$ For an original IE sample (*x, y*), we first transform it to its natural language text pair (x t, yt) and its code piece pair (x c, yc), and then compute the conditional perplexity of them with the language model Mnl and the code model Mc, respectively, i.e., the ![6_image_0.png](6_image_0.png) pplMnl (y t|x t) and pplMc (y c|x c). A lower conditional perplexity means the output format aligns well with the pre-training distribution of the model. Since LLMs usually limit user access by their black-box APIs, we instead utilize two agent models T5 (Raffel et al., 2020) and CodeT5 (Wang et al., 2021b) to calculate the perplexities. CodeT5 is a variant of T5 model that is further pre-trained on code data. We calculate the perplexities on the previous seven datasets with the base verison of the two models, namely T5-base and CodeT5-base. Figure 4 shows the mean perplexities of two base version models on the training samples of each task. We can observe the perplexity of the text format outputs measured by T5-base is usually larger than code format outputs measured by CodeT5-base. That means, transforming IE samples to code formats can better align with the data distribution of code pre-training and therefore the pre-trained code language model. ## 4.2 Model Fidelity Besides the low format consistency of prompting ML-LLMs, we find that NL-LLMs are more likely to generate outputs with structural and semantic errors when performing few-shot IE tasks than CodeLLMs. In other words, Code-LLMs seem to be more faithful to the demonstrated few-shot samples than NL-LLMs. To quantitatively measure the model fidelity, we define two metrics: Structure Fidelity Structure fidelity measures how faithful the model is to the structure of demonstrations provided in the context. This can be simply measured by calculating the structural error rate, which is the proportion of generated samples with structural errors. In particular, we construct a | Task | Error Type | Samples | |---------------------------------|---------------------------------------|---------------------------------------| | currency, company, time, event, | | | | NER | Entity type | profession,organizational indicator, | | not in T | finanical, object, event | | | RE | Relation type | called, organization, person, relate, | | not in R | specialize, assumption, cause, assign | | parser with a series of hand-written rules to transform the model-generated outputs back to the desired format and filter out samples with invalid structures. Figure 5 demonstrates the structure fidelity of different models with different prompts on the seven benchmarks. Results show that the outputs generated by GPT-3 and Codex using text prompts are fragile while using code prompts tends to generate nearly zero structurally erroneous samples. Besides, with the same text prompt, Codex tends to generate fewer structurally errant samples than GPT-3, demonstrating its superior understanding ability on general structured input instead of being limited to existing programming languages. Semantic Fidelity Another measurement of model fidelity is semantic fidelity, which is designed for those samples that have a valid structure and can succeed in our parser but are semantically incorrect. The difference between the defined semantic fidelity and the conventional prediction error is that semantic fidelity mainly considers model behaviours that violate the formulation of the task, e.g., predicting an entity type that does not exist in the given entity type set or extracting an entity span that does not appear in the input text. Some example semantic errors detected in our experiments are listed in Table 5. We report the statistical result of the tasks in Table 8 and Table 9 in the Appendix. As a result, we find that GPT-3 generated more semantic errors than Codex though some of the errors seem to be "correct" but are out of the pre-defined class set. In a nutshell, GPT-3 tends to generate free-form results and Codex is more faithful to the demonstrations provided in the context and therefore is more predictable for IE tasks. ## 4.3 Fine-Grained Performance In addition, we conduct a fine-grained evaluation to compare different approaches. In addition to the F1 score, precision and recall are also important metrics for NER and RE tasks. To investigate ![7_image_0.png](7_image_0.png) how different LLMs and prompting methods affect precision and recall, we report the two metrics in Figure 6. Results show that: (a) The code prompt improves model performance in both precision and recall; (b) Compared with GPT-3, Codex achieves higher recall and comparable precision on NER tasks and and achieves both higher precision and recall on RE tasks. ## 5 Related Work Generative Information Extraction Generative information extraction which frames IE tasks as token generation tasks receive more attention recently due to their potential to unify different tasks (Yan et al., 2021a; Josifoski et al., 2022). Yan et al. (2021a) designs various ways to linearize entities into a sentence to unify various named entity recognition subtasks. TANL (Paolini et al., 2021) uses augmented language to improve the effect of generative models. Lu et al. (2022) also proposes a structured extraction language (SEL) and pre-trains their UIE model with this language on multiple structured datasets. These works linearize the structure output of IE tasks into text format to align the pretrained models. Different from them, we propose to recast the structural samples of IE tasks into structural code format and utilize aligned pre-trained code models to perform the tasks. Code-LLMs for Complex Tasks Recent works show Code-LLMs perform better on complex tasks like commonsense and symbolic reasoning (Madaan et al., 2022; Cheng et al., 2022), mathematical logic (Suzgun et al., 2022) and event argument prediction (Wang et al., 2022) tasks. We focus on the two mainstream IE tasks different from them, i.e., NER and RE. Besides, in-depth analyses are conducted to provide more insights. LLMs for Few-Shot NER and RE While LLMs like GPT-3 have shown strong few-shot learning abilities in many NLP tasks, limited works have explored their capabilities on typical IE tasks like NER and RE. Epure and Hennequin (2021) evaluate GPT-2 (Radford et al., 2019) on open-domain NER tasks with few-shot demonstrating. A recent work (Gutiérrez et al., 2022) tests the performance of GPT-3 on biomedical NER and RE tasks and finds it underperforms compared to finetuning smaller pretrained models. Its concurrent work (Agrawal et al., 2022) finds that GPT-3 performs well on few-shot clinical IE tasks. We conduct our experiments on more general NER and RE datasets and find GPT-3 can achieve comparable performance to fine-tuning the UIE model. Besides, we successfully employ the LLMs of code with better performances for these IE tasks. ## 6 Conclusion We propose the first work to utilize the structured Code-LLMs with code-style prompts to perform the few-shot NER and RE tasks. Experiments show our approach consistently surpasses the UIE models and the NL-LLMs counterpart under the fewshot setting. We conducted extensive analysis and find the performances come from better format consistency and model fidelity, etc. We think these analyzes can facilitate future work. As the further works, we will employ CODEIE on more IE tasks in different domains, and inspect the robustness of it. ## Limitations Though our approach demonstrates better performances than the baseline models, how to design a good code-format prompt has not been fully inspected. Besides, we mainly conduct experiments on the black-box GPT-3 and Codex models but they are not open-sourced and querying the GPT-3 model cost the economic budget. And the use of LLMs may bring environmental pollution. Another limitation of our approach is that the Code-LLMs mainly trained on programming language datasets with English annotations. Exploring our model on non-English datasets (like Chinese datasets) is the future work. ## Acknowledgements We would like to express our gratitude to the reviewers for their helpful comments and suggestions. We are also very grateful to Yaojie Lu for his friendly assistance during our experiments. This work was supported by the National Natural Science Foundation of China (No. 62236004 and No. 62022027) and CCF-Baidu Open Fund. ## References Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David A. Sontag. 2022. Large language models are few-shot clinical information extractors. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 1998–2022. Association for Computational Linguistics. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:* Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. *CoRR*, abs/2107.03374. Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Tao Yu. 2022. Binding language models in symbolic languages. *CoRR*, abs/2210.02875. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *CoRR*, abs/2204.02311. George R. Doddington, Alexis Mitchell, Mark A. Przybocki, Lance A. Ramshaw, Stephanie M. Strassel, and Ralph M. Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In *Proceedings of the Fourth International* Conference on Language Resources and Evaluation, LREC 2004, May 26-28, 2004, Lisbon, Portugal. European Language Resources Association. Elena V. Epure and Romain Hennequin. 2021. A realistic study of auto-regressive language models for named entity typing and recognition. *CoRR*, abs/2108.11857. Ralph Grishman. 2019. Twenty-five years of information extraction. *Nat. Lang. Eng.*, 25(6):677–692. Bernal Jiménez Gutiérrez, Nikolas McNeal, Clay Washington, You Chen, Lang Li, Huan Sun, and Yu Su. 2022. Thinking about GPT-3 in-context learning for biomedical ie? think again. *CoRR*, abs/2203.08410. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. *CoRR*, abs/2203.15556. Pere-Lluís Huguet Cabot and Roberto Navigli. 2021. REBEL: Relation extraction by end-to-end language generation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 2370– 2381, Punta Cana, Dominican Republic. Association for Computational Linguistics. Martin Josifoski, Nicola De Cao, Maxime Peyrard, Fabio Petroni, and Robert West. 2022. GenIE: Generative information extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4626–4643, Seattle, United States. Association for Computational Linguistics. Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5755–5772, Dublin, Ireland. Association for Computational Linguistics. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, pages 3219–3232, Brussels, Belgium. Association for Computational Linguistics. Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. 2022. Language models of code are few-shot commonsense learners. *CoRR*, abs/2210.07128. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? *CoRR*, abs/2202.12837. Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cícero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation between augmented natural languages. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, H. Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew J. Johnson, Blake A. Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Edward Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2021. Scaling language models: Methods, analysis & insights from training gopher. *CoRR*, abs/2112.11446. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, European Conference, ECML PKDD 2010, Barcelona, Spain, September 20-24, 2010, Proceedings, Part III, volume 6323 of Lecture Notes in Computer Science, pages 148–163. Springer. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning, CoNLL 2004, Held in cooperation with HLT-NAACL 2004, Boston, Massachusetts, USA, May 6-7, 2004, pages 1–8. ACL. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooperation with HLT-NAACL 2003, Edmonton, Canada, May 31 - June 1, 2003, pages 142–147. ACL. Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, and Xipeng Qiu. 2022. Black-box tuning for language-model-as-a-service. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 20841–20855. PMLR. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. *CoRR*, abs/2210.09261. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus ldc2006t06. In *Philadelphia: Linguistic Data Consortium*. Web Download. Xingyao Wang, Sha Li, and Heng Ji. 2022. Code4struct: Code generation for few-shot structured prediction from natural language. *CoRR*, abs/2210.12810. Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei Li, and Junchi Yan. 2021a. Unire: A unified label space for entity relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 220–231. Association for Computational Linguistics. Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021b. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696–8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021a. A unified generative framework for aspect-based sentiment analysis. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2416–2429, Online. Association for Computational Linguistics. Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021b. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808–5822, Online. Association for Computational Linguistics. Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, ![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png) class **NamedEntityRecognition**: ![10_image_2.png](10_image_2.png) ## A Prompt Format Design A.1 Code Format Prompts For The Ner Task, The Format Is For The Ner Task, The Format Is NAACL-HLT 2021, Online, June 6-11, 2021, pages 50–61. Association for Computational Linguistics. We design several code-style prompt formats. We use the input sentence "Steve became CEO of Apple in 1998 ." and the corresponding entities ("Steve": person, "Apple": organization) and relations ("work for" of the two entities "Steve" and "Apple") as a running sample for the NER and RE tasks. The name of the format design is denoted with different font. we demonstrate the Python format prompt for NER and RE tasks. The prompt part is highlighted with grey color and the following codes are the expected output. We list the designed format as follows: 1. func def: our main code format prompt to transform the IE tasks into code formats. For the RE task, the format is 2. class init: we describe the IE tasks with the Python class. For the RE task, the format is ![11_image_0.png](11_image_0.png) 3. func exec: describe the IE tasks as a function execution procedure. For the NER task, the format is \# extract named entities from a sentence . input_text = "Steve became CEO of Apple in 1998 ." output = named_entity_recognition(input_text) \# the output is \# {"text": "Steve", "type": "person"} \# {"text": "Apple", "type": "organization"} For the RE task, the format is ``` # extract the relations of named entities from from a ,→ sentence . input_text = "Steve became CEO of Apple in 1998" output = relation_extraction(input_text) # the output is # {"rel_type": "work for", "ent1_type": "person", "ent1_text": "Steve", "ent2_type": "organization", "ent2_text": "Apple"} ,→ ,→ ``` 4. func init-: perturb the rational format design by exchanging the format design of NER and RE tasks. For the NER task, the format is def relation_extraction(input_text): ![11_image_1.png](11_image_1.png) """ extract the relations of named entities from the ,→ *input_text . """* input_text = "Steve became CEO of Apple in 1998 ." entity_relation_list = [] # extracted relations entity_relation_list.append({"text": "Steve", "type": ,→ "person"}) entity_relation_list.append({"text": "Apple","type": ,→ "organization"}) For the RE task, the format is def named_entity_recognition(input_text): """ extract named entities from the input_text . """ ![11_image_3.png](11_image_3.png) ## A.2 Text Format Prompts Similar to the above section (A.1), we describe the textual format prompt we used given the text input "Steve became CEO of Apple in 1998 .". The text input prompts are all the same and we highlighted the expected outputs with blue colour. 1. struct lang: our mainly used text format prompt. For the NER task, the transformed format is: The text is "Steve became CEO of Apple in 1998 .". The named entities in the text: ((person: Steve)(organization: Apple)) For the RE task, the transformed format is: The text is "Steve became CEO of Apple in 1998 .". The relations of named entities in the text: ((person: Steve (work for: Apple)) (organization: Apple)) 2. natural lang: a more "natural" format to describe the structures in natural language. For the NER task, the transformed format is: The text is "Steve became CEO of Apple in 1998 .". The named entities in the text: "Steve" is "person". "Apple" is "organization". For the RE task, the transformed format is: ![11_image_2.png](11_image_2.png) The text is "Steve became CEO of Apple in 1998 .". The relations of named entities in the text: person "Steve" work for organization "Apple". | Model | Prompt | Entity | Relation | | | | | | |-----------|----------|------------|------------|------------|------------|-----------|------------|-----------| | Type | CoNLL03 | ACE04 | ACE05-E | CoNLL04 | ACE05-R | NYT | SciERC | | | UIE-large | text | 46.75±6.13 | 35.25±2.31 | 34.29±1.93 | 25.81±5.93 | 5.15±4.43 | - | 4.65±0.61 | | Codex | code | 75.89±1.79 | 54.27±2.14 | 51.91±2.51 | 43.05±3.42 | 7.09±4.40 | 32.17±1.46 | 6.05±0.82 | | RoI↑ | 62.33 % | 53.96 % | 51.39 % | 66.80 % | 37.67 % | - | 30.11 % | | Table 6: The 1-shot results of UIE-large and Codex, and Rate of Increase (RoI) of Codex than UIE-large. | Model | Prompt | Entity | Relation | | | | | | |------------------|----------|------------|------------|------------|------------|------------|------------|-----------| | Type | CoNLL03 | ACE04 | ACE05-E | CoNLL04 | ACE05-R | NYT | SciERC | | | text-davinci-002 | text | 68.84±1.29 | 45.51±0.23 | 48.93±0.49 | 39.67±2.44 | 5.13±1.24 | 16.07±4.67 | 4.39±0.98 | | code-davinci-002 | code | 82.32±0.37 | 55.29±0.37 | 54.82±2.09 | 53.10±2.02 | 14.02±3.27 | 32.17±1.46 | 7.74±1.54 | | text-davinci-001 | text | 38.55±6.11 | 29.23±1.49 | 29.73±2.22 | 19.63±4.37 | 0.89±0.66 | - | 0.87±0.22 | | code-davinci-001 | code | 61.86±1.88 | 33.62±3.85 | 36.26±1.45 | 28.75±1.90 | 1.65±1.55 | - | 1.91±0.30 | Table 7: Performances of different LLMs. text-davinci-001 is an InstructGPT model based on the previous GPT-3 model with Feedback Made Easy strategy. code-davinci-001 is an earlier version of code-davinci-002. | Entity Label Error | Entity Span Error | | | | | | |----------------------|---------------------|---------|---------|-------|---------|------| | CoNLL03 | ACE04 | ACE05-E | CoNLL03 | ACE04 | ACE05-E | | | #test | 3453 | 812 | 1060 | 3453 | 812 | 1060 | | #in-context shot | 5 | 2 | 2 | 5 | 2 | 2 | | GPT-3+text | 15 | 298 | 414 | 113 | 140 | 114 | | GPT-3+code | 57 | 755 | 949 | 28 | 73 | 57 | | Codex+text | 3 | 30 | 64 | 90 | 88 | 141 | | Codex+code | 8 | 536 | 601 | 18 | 51 | 37 | Table 8: Detailed Errors on NER datasets. "Entity Label Error" means the predicted label is not in the predefined label set. "Entity Span Error" means the predicted span is not in the original input text. The reported error numbers are counted by summing 3 different seeds. | #test | 288 | 2050 | 5000 | 551 | 288 | 2050 | 5000 | 551 | 288 | 2050 | 5000 | 551 | |------------------|-------|--------|--------|-------|-------|--------|--------|-------|-------|--------|--------|-------| | #in-context shot | 5 | 2 | 1 | 2 | 5 | 2 | 1 | 2 | 5 | 2 | 1 | 2 | | GPT-3+text | 2 | 1078 | 669 | 335 | 26 | 266 | 491 | 160 | 169 | 617 | 3274 | 358 | | GPT-3+code | 3 | 410 | 102 | 410 | 13 | 105 | 1029 | 105 | 6 | 12 | 2000 | 50 | | Codex+text | 1 | 815 | 100 | 815 | 20 | 155 | 477 | 155 | 84 | 820 | 315 | 820 | | Codex+code | 0 | 346 | 10 | 346 | 1 | 108 | 544 | 108 | 2 | 0 | 141 | 17 | Table 9: Detailed Errors on RE datasets. "Ent1 Type Error" means the predicted entity type of the first entity is not in the predefined type set. "Ent1 Span Error" means the predicted span of the first entity is not in the original input text. "Relation Type Error" means the predicted label is not in the predefined relation type set. The reported error numbers are counted by summing 3 different seeds. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? section 'limitations" ✓ A2. Did you discuss any potential risks of your work? section 'limitations" ✓ A3. Do the abstract and introduction summarize the paper's main claims? section "abstract" and section 1 "introduction" ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 ✓ B1. Did you cite the creators of artifacts you used? Section 3 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? hard to find ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? complex procedure ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? complex procedure ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? complex procedure ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. examples number, details splits ## C ✓ **Did You Run Computational Experiments?** Section 3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
patra-etal-2023-beyond
Beyond {E}nglish-Centric Bitexts for Better Multilingual Language Representation Learning
https://aclanthology.org/2023.acl-long.856
In this paper, we elaborate upon recipes for building multilingual representation models that are not only competitive with existing state-of-the-art models but are also more parameter efficient, thereby promoting better adoption in resource-constrained scenarios and practical applications. We show that going beyond English-centric bitexts, coupled with a novel sampling strategy aimed at reducing under-utilization of training data, substantially boosts performance across model sizes for both Electra and MLM pre-training objectives. We introduce XY-LENT: X-Y bitext enhanced Language ENcodings using Transformers which not only achieves state-of-the-art performance over 5 cross-lingual tasks within all model size bands, is also competitive across bands. Our XY-LENT XL variant outperforms XLM-R XXL and exhibits competitive performance with mT5 XXL while being 5x and 6x smaller respectively. We then show that our proposed method helps ameliorate the curse of multilinguality, with the XY-LENT XL achieving 99.3{\%} GLUE performance and 98.5{\%} SQuAD 2.0 performance compared to a SoTA English only model in the same size band. We then analyze our models performance on extremely low resource languages and posit that scaling alone may not be sufficient for improving the performance in this scenario
# Beyond English-Centric Bitexts For Better Multilingual Language Representation Learning Barun Patra∗, Saksham Singhal∗**, Shaohan Huang**∗, Zewen Chi, Li Dong, Furu Wei, Vishrav Chaudhary, **Xia Song** Microsoft {bapatra, saksingh, shaohanh, v-zewenchi, lidong1, fuwei, vchaudhary, xiaso}@microsoft.com ## Abstract In this paper, we elaborate upon recipes for building multilingual representation models that are not only competitive with existing stateof-the-art models but are also more parameter efficient, thereby promoting better adoption in resource-constrained scenarios and practical applications. We show that going beyond Englishcentric bitexts, coupled with a novel sampling strategy aimed at reducing under-utilization of training data, substantially boosts performance across model sizes for both Electra and MLM pre-training objectives. We introduce XY-LENT: X-Y bitext enhanced Language ENcodings using Transformers which not only achieves state-of-the-art performance over 5 cross-lingual tasks within all model size bands, is also competitive across bands. Our XYLENTXL variant outperforms XLM-RXXL and exhibits competitive performance with mT5XXL while being 5x and 6x smaller respectively. We then show that our proposed method helps ameliorate the curse of multilinguality, with the XYLENTXL achieving 99.3% GLUE performance and 98.5% SQuAD 2.0 performance compared to a SoTA English only model in the same size band. We then analyze our models performance on extremely low resource languages and posit that scaling alone may not be sufficient for improving the performance in this scenario. ## 1 Introduction Recent advancements in Natural Language Processing (NLP) have been a direct consequence of leveraging foundational models (Bommasani et al., 2021), pretrained on a large text corpora in a selfsupervised fashion. This has also been the case for multilingual NLP where pre-trained models like multilingual BERT (mBERT) (Devlin, 2018; Devlin et al., 2019), XLM (Conneau and Lample, 2019), XLM-Roberta (Conneau et al., 2020), XLMElectra (Chi et al., 2022) and mT5 (Xue et al., 2021) ∗Equal Contribution. ![0_image_0.png](0_image_0.png) ![0_image_1.png](0_image_1.png) XNLI Accuracy have all shown non-trivial performance gains, especially in the setup of zero-shot transfer, and have been the work-horse for a diverse number of multilingual tasks. Given their ubiquitous applicability in zero-shot downstream scenarios, improving the quality and enabling their usage in resourceconstrained applications is also an important vein of research which we explore in this paper. A source of improvement for these models has been leveraging bitext data for better representation learning (Conneau and Lample, 2019; Chi et al., 2022). Most prior work, however, has focused on leveraging English-centric (*EN-X*) bitext data. Contemporaneously, the related area of Massively Multilingual Machine Translation (a single model for translating between different pairs of languages, eg: Aharoni et al. (2019); Zhang et al. (2020); Fan 15354 et al. (2021)) has shown tremendous progress, with Fan et al. (2021) showing that a crucial aspect of this improvement has been moving beyond *EN-X* parallel corpora and leveraging web-based mined X-Y bitexts spanning 1000s of translation directions (Schwenk et al., 2021a; El-Kishky et al., 2020; Schwenk et al., 2021b). This makes a compelling case to explore if leveraging X-Y bitexts can also improve multilingual representation learning. In this work, we introduce **XY-LENT** (pronounced as "Excellent"): X-Y bitext enhanced Language ENcodings using Transformers. We first identify problems with using the commonly used sampling strategy proposed in Fan et al. (2021), showing that it induces sparse sampling distributions leading to under-utilization of data, and thus propose a novel strategy to mitigate this issue (§3.2). We then propose leveraging X-Y bitexts in conjunction with the improved sampling strategy, as well as a VoCAP (Zheng et al., 2021) style sentencepiece vocabulary re-construction for improving multilingual representation learning (§3.1). We show that our proposed method improves performance across all model size bands (§6). Furthermore, these performance gains hold for both Masked Language Models (MLM) and ELECTRA style models. Our approach results in an almost 12x speedup in training for MLM model training (§6.2). We systematically analyse the impact of model scaling with respect to the curse of multilinguality (Conneau et al., 2020) to observe that the gap between current English only SoTA models and multilingual models can be considerably reduced (§6.3). Our analysis reveals that XY-LENT improves performance across language families (§6.4) and helps reduce the cross-lingual transfer gap in multilingual tasks (§6.5). We then demonstrate that the training dynamics of such models can be used to better understand the underlying datasets and use it to find interesting defects in them (§6.6). Finally, we show some limitations of such multilingual representational models vis-à-vis extremely low resource languages, identifying potential shortcomings that are not addressed with scaling of such models, as well as issues around catastrophic forgetting in the way current models are used for domain adaptation. In doing so, we establish state of the art on 5 multilingual downstream tasks (XNLI, PAWS-X, TYDIQA, XQuAD and MLQA) within a model size band, and achieve competitive performance *across* size bands, thereby showing for the first time (to the best of our knowledge) an interesting notion of parameter efficiency: XY-LENTXL outperforms XLM-RXXL (Goyal et al., 2021) and performs competitively with mT5XXL (Xue et al., 2021), whilst being 5x and 6x smaller respectively (Figure 1). Furthermore, our proposed model reduces the gap for English specific tasks: XY-LENTXL achieves 99.3% GLUE performance and 98.5% SQuAD 2.0 performance compared to a SoTA English only model in the same size band. ## 2 Related Work Large scale self-supervised learning has emerged as a prominent way of building cross-lingual language models that can be adapted for numerous multilingual downstream applications. Especially for building multilingual encoder transformer (Vaswani et al., 2017) models, two popular paradigms have been Masked language modeling (MLM; Devlin et al. (2019); Conneau et al. (2020)) and pre-training encoders as discriminators (ELECTRA; Clark et al. (2020b); Chi et al. (2022)), with the latter showing considerable compute efficiency. These approaches can further be improved by leveraging parallel corpora in different ways: Conneau and Lample (2019) propose a Translation Language Modeling task (TLM) wherein the model predicts masked tokens in concatenated translation pairs, Chi et al. (2022) propose a Translation Replaced Token Detection (TRTD) task, an analogous task for Electra-style models. Other approaches include using bitexts to construct code-switched sequences as inputs during pre-training (ALM; Yang et al. (2020)) and for contrastive learning (InfoXLM; Chi et al. (2021a)), or using token-level alignments in parallel data to improve cross-lingual modeling (Hu et al., 2021; Chi et al., 2021b, *inter alia*). However, all the aforementioned works rely on Englishcentric bitexts. Fan et al. (2021) show that moving beyond *EN-X* bitexts for Massively Multilingual Machine Translation affords substantial improvements over approaches that rely solely on English-centric data (Aharoni et al., 2019; Zhang et al., 2020). The primary factor responsible for this improvement has been the curation of X-Y aligned bitext data, constructed by mining bitexts from publicly available web data (Schwenk et al., 2021a; El-Kishky et al., 2020; Schwenk et al., 2021b). The dataset construction either follows a local mining approach (first aligning documents using heuristics, and then mining parallel bitexts from the aligned documents; used in CCAligned (El-Kishky et al., 2020)), or a global mining approach (all bitexts are embedded in a common vector space, and then aligned candidates are found by looking at the normalized nearest neighbors; used in CCMatrix (Schwenk et al., 2021b)). Fan et al. (2021) also propose a sampling strategy for leveraging the X-Y bitexts, wherein the marginals are constrained to be similar to what is used for *En-X* bitexts, and show their proposed method improves over uniform sampling. However, as we show in (§3.2), their proposed strategy has the undesirable artefact of inducing extremely sparse solutions, thereby resulting in data wastage. ## 3 Leveraging Many-To-Many Bitexts 3.1 Dataset Prior representation learning works usually consider English-centric (*EN-X*) bitexts to improve model quality. Thus, given the emergence of mining based approaches for extracting parallel bitexts from large monolingual datasets that are approximate translations of each other and are multi-way aligned (the source and target languages are not restricted to be English only), in this work we explore leveraging these many-to-many (X-Y) bitext datasets for better representation learning. We consider two such publicly available datasets: CCMatrix and multiCCAligned. ## 3.2 Sampling Distribution A common method used for balancing training data for the *EN-X* framework is using a temperature based exponential sampling approach (Aharoni et al., 2019), wherein the probability of sampling a language is chosen from a temperature smoothed distribution to downsample high resource languages, whilst upsampling low resource languages. This work was extended by Fan et al. (2021), wherein the authors propose Sinkhorn Temperature sampling: given a joint probability matrix Q across L×L language pairs (L being the number of unique languages), and the marginal distribution p of the L languages, the authors estimate a sampling distribution P∗as: $$\operatorname*{max}_{\mathbf{P}}\operatorname{Tr}(\mathbf{P}\mathbb{Q})\mid\mathbf{P}1_{L}=\mathbf{p}^{\frac{1}{T}}=\mathbf{P}^{\top}1_{L}\qquad(1)$$ where Tr is the trace operator. The primary advantage of using this is that P∗can be efficiently estimated with the Sinkhorn-Knopp algorithm and also allows us to set the marginal to be the temperature sampled based distribution which we know works well in practice. The authors found this to work better than uniform sampling. However, in practice, we observed this to generate extremely sparse sampling distributions: Figure 2a show the sparsity induced by the naive application of Eq. 1. We note that one potential way of overcoming the above issue is by modifying the optimization problem to also maximize the entropy of P. Consequently, we propose the following modified optimization objective : $$\begin{array}{l}\mathbb{P}^{*}=\arg\min_{\mathbf{p}}\operatorname{Tr}\left(P\left(-\log\mathbb{Q}\right)\right)-\mathcal{H}\left(P\right)\\ \mid\mathbf{P1}_{L}=\mathbf{p}^{\frac{1}{T}}=\mathbf{P}^{\top}\mathbb{Q})\mid\mathbf{P1}_{L}=\mathbf{p}^{\frac{1}{T}}=\mathbf{P}^{\top}\mathbf{1}_{L}\\ \text{where}\mathcal{H}(P)\text{denotes the entropy of}P\text{and}\end{array}\tag{2}$$ where H(P) denotes the entropy of P and KL(P||Q) denotes the Kullback-Leibler divergence between P and Q. This can be solved by using the Sinkhorn-Knopp algorithm for the entropic regularized optimal transport problem (Cuturi, 2013), by setting the cost matrix to be − log(Q + ϵ) (in practice, since Q can have zero entries, ϵ is used for smoothing). Since the cost of assigning a non-zero probability value to a zero entry is extremely high (− log (ϵ)), we never observe any entry of P∗to be non-zero if it's corresponding entry in Q was zero. In addition, since Eq. 2 also maximizes the entropy of P, it encourages its entries to be non-sparse, thereby avoiding the problem present in the solution of Eq. 1. In practice, we did not see this losing out on any data: if Q was non-zero, then P∗ was also non-zero (Figure 2b). ## 3.3 Vocabulary Construction We construct our vocabulary using Sentence Piece Models (SPM) (Kudo and Richardson, 2018) which cater to language specific complexities (tokenization, accent removal, etc. ). We increase the vocabulary size to 500k tokens to better serve the varied scripts encountered while working in the multilingual setting. For this construction, we follow the VoCAP algorithm (Zheng et al., 2021) to quantify the vocabulary capacity for each language separately and account for varied corpora sizes across languages. Better capacity allocation leads to smaller representative sequences (especially for mid and low resource languages) which ![3_image_0.png](3_image_0.png) in-turn improves the computational efficiency of the model. Increasing the size of the vocabulary, however, comes at the cost of inflating the model parameters which is particularly observed in the case of XY-LENTBase and XY-LENTLarge where the embeddings layer constitute *80.5%* and *62.9%* of the total parameters respectively. ## 4 Pretraining Details We follow the XLM-E (Chi et al., 2022) pretraining approach and only introduce a few architectural changes to improve the overall performance of the model. We use the Transformer model (Vaswani et al., 2017) trained with ELECTRA (Clark et al., 2020b) style of replace token detection (RTD) on both monolingual (MRTD) and bitext (TRTD) data. In the current setup of training, we use two Transformer encoders in conjunction: a generator G and a discriminator D, where the generator G is trained with masked language modeling objective (MLM; Devlin et al. (2019)) and the discriminator is trained on replaced token detection objective (RTD; Clark et al. (2020b) on all the tokens passing through the generator. In addition to using the Gated Relative Position Bias introduced in Chi et al. (2022), we do not mask the [CLS] token and flip bitext language order with probability p = 0.5 for the TRTD task. ## 5 Experiments Baselines: We compare the cross-lingual performance of our proposed model against 3 popular cross-lingual models: XLM-R, mT5 and XLM-E (across all model size variations). Note that Chi et al. (2022) use a 250k vocabulary size for XLMEBase and 500k vocabulary for their large and XL variants. As a follow-up, we re-train XLM-EBase with the same vocabulary as used by XY-LENT for a fair comparison. Thus all references to XLMEBase refer to the re-trained model variant with a 500k vocabulary size.1 For our downstream English evaluation (§6.3), we compare against the SoTA English model METRO-LM(Bajaj et al., 2022). Note that Bajaj et al. (2022) also train the models in an ELECTRA style framework, thereby allowing for a fair comparison. Pretraining Data: For our monolingual data, we follow Chi et al. (2022) and use the CC-100 dataset2(Conneau et al., 2020; Wenzek et al., 2020) which contains texts in 100 languages collected from Common Crawl. As mentioned in (§3.1), we explore the utility of the CCMatrix and the multiCCAligned X-Y aligned bitext data. CCMatrix consists of 1015 language pairs (97 unique languages) 3; while the multiCCAligned dataset consists of 2959 language pairs (92 unique languages) 4. We contrast this against only using *EN-X* bitexts (CCAligned, El-Kishky et al. (2020)). 1We also ablate out the impact of the vocabulary change, with Table 2 showing that this yields a 1.5 pt gain on XNLI. 2http://data.statmt.org/cc-100/ 3For some language pairs that are present in CCAligned and not in CCMatrix, we combine the data from those languages. Since de-duplication is expensive, we don't merge language pairs common to both datasets. 4We filter out languages with less than 50k pairs Model Size Bands: While our *base* and *large* models have more parameters when compared with XLM-R, most of the additional parameters come from the increased vocabulary size (§3.3). Concretely, our *base* model has 12 layers and 768 hidden states, while the *large* model has 24 layer and 1024 hidden states, which is identical to XLMRBase and XLM-RLarge respectively. However, even with the increased parameter count, the computation cost on a text classification task is roughly the same within a model size family (since mapping tokens to an embedding is a lookup operation). Finally, it is noteworthy that even with the increased vocabulary size, the number of parameters for XYLENTXL is less compared to the XL and XXL variants of both XLM-R and mT5. Pretraining Setup: For the base model, we train for 125k steps with a batch size of 8192 for MRTD task and for the large model, we train the model for 500k steps with a batch size of 2048. Finally for the XL model, we train for 150k steps with a batch size of 8192. We use a dynamic batch size for TRTD task which is based on original length of the translated bi-text pair. Please refer Appendix A for additional details. We adopt the standard practice of using a linear warmup schedule for the learning rate and use the Adam (Kingma and Ba, 2014) optimizer for all the models. Following Meng et al. (2021), we do not apply any dropout to the generator. Cross-lingual Downstream Evaluation: For evaluating the cross-lingual understanding of the model, we consider 5 multilingual evaluation benchmarks. We consider 2 classification tasks and 3 question answering tasks. For classification, we evaluate on the cross-lingual Natural Language Inference dataset (XNLI; Conneau et al. (2018)) and the cross-lingual paraphrase adversaries from word scrambling dataset (PAWS-X; Yang et al. (2019)). For cross-lingual question answering, we consider MLQA (Lewis et al., 2019), XQuAD (Artetxe et al., 2019) and TyDiQA-GoldP (Clark et al., 2020a). For all the aforementioned tasks, we perform the evaluation in zero-shot setting, i.e. only using the English data for fine-tuning. To further assess the model's performance when translated data is available, we evaluate the model on the translate-train setup for the classification tasks. English Downstream Evaluation: To further assess XY-LENT's performance on English and see how the curse of multilinguality impacts the model, we also assess the model's performance on the commonly used GLUE benchmark (Wang et al., 2018), comprising of 8 tasks: MNLI (Williams et al., 2017), SST-2 (Socher et al., 2013), QNLI (Rajpurkar et al., 2018a), MRPC (Dolan and Brockett, 2005), CoLA (Warstadt et al., 2018), QQP , STS-B (Cer et al., 2017) and RTE. Additionally, we also evaluate the English performance of our model on a question answering task, using the SQuAD 2.0 dataset (Rajpurkar et al., 2018b). Please refer to Appendix B for additional details on the datasets. ## 6 Results And Analysis 6.1 Main Results Table 1 presents our proposed model's performance across different model sizes for zero-shot transfer on sentence classification as well as question answering tasks (detailed results for all languages and all tasks can be found in Appendix D). We see that XY-LENT outperforms the baselines of XLM-E, XLM-R and mT5 across all model sizes, establishing (to the best of our knowledge) the state-of-theart (SoTA) for all the 5 considered multilingual datasets within the model size bands: with XYLENTBase outperforming XLM-EBase by 3.1 pts, XY-LENTLarge outperforming XLM-ELarge by 1.8 pts and XY-LENTXL outperforming XLM-EXL by 0.9 pts (averaged across all 5 datasets). Another interesting observation is that XY-LENT is competitive across model size families: the XYLENTBase model out-performs XLM-RLarge and mT5Large variants on 4 out of 5 datasets, similarly the XY-LENTLarge outperforms the mT5XL model on 4 out of 5 datasets. Furthermore, the XY-LENTXL model outperforms XLM-RXXL and is competitive with mT5XXL while being 5x and 6x smaller respectively. A practical implication of these better performing smaller models is their easy usage in downstream tasks. This behaviour is also consistent in the *TranslateTrain* setting where the translated version of the training data is present across all languages for training. Table 1 presents XY-LENT's performance on this setup for sentence classification tasks. We see that even in this setting, XY-LENT outperforms other models with the same size band, and is competitive across model size bands. | Zero-Shot | Translate-Train | | | | | | | |--------------|-------------------|----------------|-------------|-------|------|-------|------| | Model | Question | Sentence | Sentence | | | | | | Answering | Classification | Classification | | | | | | | XQuAD | MLQA | TyDiQA | XNLI | PAWSX | XNLI | PAWSX | | | Metrics | F1/EM | F1/EM | F1/EM | Acc. | Acc. | Acc. | Acc. | | XLM-RBase | - | - | - | 76.2 | - | 79.1 | - | | mT5Base | 67.0 / 49.0 | 64.4 / 45.0 | 58.1 / 42.8 | 75.4 | 86.4 | 75.9 | 89.3 | | XLM-EBase | 74.3 / 59.2 | 68.7 / 50.5 | 62.7 / 46.2 | 78.1 | 87.0 | 81.7 | 91.1 | | XY-LENTBase | 76.8 / 62.1 | 71.3 / 53.2 | 67.1 / 51.5 | 80.5 | 89.7 | 84.9 | 92.4 | | XLM-RLarge | 76.6 / 60.8 | 65.1 / 45.0 | 71.6 / 53.2 | 80.9 | 86.4 | 83.6 | - | | mT5Large | 77.8 / 61.5 | 71.2 / 51.7 | 57.8 / 41.2 | 81.1 | 88.9 | 81.8 | 91.2 | | XLM-ELarge | 78.7 / 63.1 | 72.8 / 54.4 | 71.8 / 54.7 | 81.3 | 89.0 | 84.1 | 91.9 | | XY-LENTLarge | 79.7 / 64.9 | 74.3 / 55.7 | 74.0 / 57.5 | 83.0 | 90.4 | 84.9 | 92.4 | | XLM-RXL | 80.0 / 64.9 | 73.4 / 55.3 | - | 82.3 | - | 85.4 | - | | mT5XL | 79.5 / 63.6 | 73.5 / 54.4 | 77.4 / 61.5 | 82.9 | 89.6 | 84.8 | 91.0 | | XLM-EXL | 80.4 / 66.0 | 74.3 / 55.8 | 76.7 / 60.6 | 83.7 | 90.3 | 85.5 | 92.2 | | XY-LENTXL | 81.3 / 66.3 | 75.4 / 56.7 | 78.0 / 62.1 | 84.8 | 91.0 | 87.1 | 92.6 | | XLM-RXXL | 81.1 / 66.3 | 74.8 / 56.6 | - | 83.1 | - | 86.0 | - | | mT5XXL | 82.5 / 66.8 | 76.0 / 57.4 | 81.0 / 65.6 | 85.0 | 90.0 | 87.8 | 91.5 | | Parameter | Choice | XNLI (Avg) | |--------------------|----------------|--------------| | Vocabulary Size | 250K | 76.6 | | 500K | 78.1 | | | CCAligned | 78.1 | | | Bitext Data | multiCCAligned | 79.5 | | CCMatrix | 80.5 | | | Training Objective | Masked LM | 78.4 | | ELECTRA | 80.5 | | ## 6.2 Ablations Different Many-to-Many Datasets Table 2 shows the impact of moving from English-centric bitexts to X-Y bitext data. Using multiCCAligned dataset gives a +1.4 pt improvement on average XNLI performance over the baseline which uses only the CCAligned data, thereby showing that the utility of leveraging multi-way bitext data is not limited to CCMatrix dataset. However, we still see an additional improvement of 1.0 pt with usage of CCMatrix data and we hypothesize this gain to more diversity present in it which in-turn helps in improving the multilingual representations. Different Pretraining Objectives While the gains are more substantial with ELECTRA training objective, Table 2 shows that the benefits of having a better quality bitext data is not just restricted to the ELECTRA paradigm of training and can also be observed with the Masked LM objective. For the ablation experiment, we train a base model model with the MLM objective for 125k steps with a batch size of 8192. Comparing this with XLMRBase's setup, which uses only monolingual data with MLM objective and trains for 1.5M steps (i.e. 12 times longer), finally achieving an XNLI (Avg) of 76.2, we observe that introduction of X-Y data not only brings performance gains but also significantly improves the training efficiency of these models. ## 6.3 **On English Performance Of Multi-Lingual** Models Given the strong performance of multilingual models on the English subset of XNLI, one interesting question that arises is how does model scaling impact the performance on English centric downstream tasks. In order to evaluate that, we measure the performance of XY-LENT on the commonly used GLUE benchmark (Wang et al., 2018) and the SQuAD 2.0 benchmark. To compare the multilingual model performance on English, we also consider English specific encoder models trained in an Electra pre-training paradigm. Specifically, we consider the Base, Large, XL and XXL models | Model | GLUE DEV Single Task | SQuAD 2.0 | | | | | | | | | | |---------------------------------------------------------|------------------------|-------------|------|------|------|------|------|------|------|------|------| | MNLI-(m/mm) (Acc) QQP-(Acc/F1) QNLI SST-2 CoLA RTE MRPC | STS-B (SCC) AVG EM | F1 | | | | | | | | | | | (Acc) (Acc) (MCC) (Acc) (Acc) | | | | | | | | | | | | | METRO-LMBase | 90.3 / 90.2 | 92.4/- | 94.4 | 95.9 | 71.8 | 88.1 | 91.4 | 92.0 | 89.5 | 85.9 | 88.5 | | XLM-EBase | 86.1 / 86.3 | 91.5/88.7 | 92.8 | 94.0 | 67.4 | 77.8 | 90.2 | 91.4 | 86.4 | 82.3 | 85.3 | | XY-LENTBase | 87.3 / 87.6 | 91.9/89.2 | 93.3 | 94.4 | 66.6 | 85.2 | 90.7 | 91.7 | 87.6 | 83.5 | 86.3 | | ∆ | 3.0 / 2.6 | 1.3/- | 1.1 | 1.5 | 5.2 | 2.9 | 0.7 | 0.7 | 1.9 | 2.4 | 2.2 | | METRO-LMLarge | 91.7 / 91.7 | 92.9 | 95.8 | 96.3 | 75.2 | 93.1 | 92.2 | 92.8 | 91.4 | 88.5 | 91.1 | | XLM-RLarge | 88.9 / 89.0 | 92.3 | 93.8 | 95.0 | - | - | 89.5 | 91.2 | - | - | - | | XLM-ELarge | 89.9 / 90.1 | 92.9/90.4 | 94.5 | 96.8 | 73.9 | 85.7 | 92.1 | 92.5 | 89.8 | 85.6 | 88.7 | | XY-LENTLarge | 89.7 / 89.9 | 92.7/90.3 | 94.7 | 95.8 | 71.1 | 88.4 | 91.4 | 92.6 | 89.6 | 85.8 | 88.7 | | ∆ | 2.0 / 1.8 | 0.2/- | 1.1 | 0.5 | 4.1 | 4.7 | 0.8 | 0.2 | 1.8 | 2.8 | 2.4 | | METRO-LMXL | 92.2 / 92.0 | 93.2 | 96.3 | 97.3 | 76.0 | 93.5 | 91.7 | 93.0 | 91.8 | 89.4 | 92.1 | | XLM-RXL | 90.4/- | 92.5 | 94.9 | 96.6 | - | - | 90.4 | - | - | - | - | | XLM-EXL | 91.1 / 91.2 | 92.5/89.9 | 94.0 | 97.2 | 74.7 | 91.4 | 92.1 | 93.2 | 90.8 | 87.8 | 90.7 | | XY-LENTXL | 91.2 / 91.1 | 93.0/90.7 | 95.8 | 96.4 | 74.9 | 92.8 | 91.9 | 93.2 | 91.2 | 88.1 | 90.9 | | ∆ | 1.0 / 0.9 | 0.2/- | 0.5 | 0.9 | 1.1 | 0.7 | -0.2 | -0.2 | 0.6 | 1.3 | 1.2 | ## Presented In (Bajaj Et Al., 2022). Table 3 shows the performance of our proposed method against the SoTA monolingual as well as other multilingual baselines. As observed in the results, with an increase in the number of parameters, we see that the gap in the performance of an English centric model and a multilingual model decreases, with the XL model being just 0.6 points behind on GLUE and 1.3 points on SQuAD 2.0. We hypothesize that an increase in model capacity alleviates the issues caused by the curse of multilinguality (Conneau et al., 2020); and when that is the case, English performance actually benefits from the presence of other languages in the training data. It is noteworthy that the even for the English language performance, having an X-Y centric data is more beneficial compared to an *EN-X* data (XLM-E vs XY-LENT). Furthermore, our proposed method outperforms XLM-R on large and XL sizes. ## 6.4 Performance Across Language Families Figure 3 shows the performance the delta of performance between XLM-E and XY-LENT across different language families. Following Hu et al. (2020), we use the number of Wikipedia articles as a proxy for a language family being high or low resource. As can be seen, leveraging X-Y bitexts helps improves performance consistently across language families. ![6_image_0.png](6_image_0.png) | Model | XQuAD MLQA TyDiQA XNLI | PAWS-X | | | | |---------|--------------------------|----------|------|------|------| | MBERT | 25.0 | 27.5 | 22.2 | 16.5 | 14.1 | | XLM-R | 15.9 | 20.3 | 15.2 | 10.4 | 11.4 | | XLM-E | 14.9 | 19.2 | 13.1 | 11.2 | 8.8 | | XY-LENT | 15.3 | 19.9 | 8.6 | 7.8 | 6.8 | ## 6.5 Crosslingual Transfer Gap In order to further evaluate the cross-lingual transferrability of our model, we follow Hu et al. (2020) and evaluate the cross-lingual transfer gap (the difference between the performance on the English test set and the average test set performance for other languages) for XY-LENTBase. This score indicates how much end task knowledge is not transferred to other languages post fine-tuning, with a smaller gap indicating better transferrability. As seen in Table 4, XY-LENT achieves lower scores on 3 out of 5 tasks, thereby demonstrating strong transferrability. ## 6.6 Using Training Dynamics To Explore Dataset Quality So far we have seen that leveraging X-Y aligned bitexts improves model quality. In this section, we consider the inverse direction: whether training dynamics of representation learning models can be used to identify dataset artifacts. Given these bitext datasets span over 1000 language pairs, a manual inspection of these datasets is extremely hard. Thus an automated method for spot-checking the dataset quality is quite valuable. To do so, we first train a model in line with the methodology presented by Zhou et al. (2021) for Distributionally Robust Multilingual Machine Translation. Specifically, we train XY-LENT with the following modified objective: $$\begin{array}{c}{{\operatorname*{min}_{\theta_{D},\theta_{G}}\operatorname*{sup}_{{\bf P}:\chi^{2}({\bf P},\mathbb{Q})\leq\rho}\sum_{i}p_{i}({\mathcal{L}}_{D}({\bf x};\theta_{D})+}}\\ {{{}}}\\ {{{}}}\\ {{{}}}\\ {{{\lambda{\mathcal{L}}_{G}({\bf x};\theta_{G}))}}}\end{array}\tag{3}$$ Here LG and LD refer to the generator and discriminator losses respectively (§4), P is the joint distribution over the bitext language pairs that we want to estimate (i.e P = pi| 1 ≤ i ≤ L × L;Pi pi = 1); and Q is the original training distribution (i.e the probability distribution over the bitexts when the training starts, equal to P∗as estimated in §3.2). At a high level, the objective minimizes the training loss over a χ 2 ball around the original training distribution, with the supremum up-weighing language pairs with higher loss values, and down-weighing languages with lower loss values 5. We train a model with the Distributional Robustness Optimization objective (DRO) using Iterated Best Response strategy, as proposed by Zhou et al. (2021) and resample 10 times throughout the training. We hypothesize that the two extremities (i.e language pairs that are highly upsampled as well as those that are downsampled) would be bitext datasets of interest for spot-checking. 5Table 11 in the Appendix shows that such an approach achieves reasonable performance on XNLI. ![7_image_0.png](7_image_0.png) Figure 4 presents the top 10 upsampled and 10 downsampled languages between the initial and final language distributions. Manual inspection of these language pairs shows that our hypothesis indeed holds true: we observe that the translations for English and Xhosa (en - xh) are extremely noisy and aligned with non-sensical text, with multiple different English sentences being aligned to the same Xhosa sentence. This can potentially be a manifestation of the hubness issue for Nearest Neighbor lookups in high dimensional spaces (Radovanovic et al. ´ , 2010; Dinu and Baroni, 2014). Bitexts for Catalan and Spanish (ca - es) and Czech and Slovak (cs - sk) are near duplicates, since the language pairs are very similar. Both of these issues can cause the TRD task to be trivial, explaining the downsampling. Similarly, looking at languages that are up-sampled, we observe a lot of translation quality noise in bitexts for Spanish and Tamil (es - ta), Turkish and Urdu (tr - ur) and Sinhala and Turkish (si - tr). ## 7 Conclusion In this work, we introduced a family of models which achieve SoTA performance over 5 multilingual benchmarks compared to other models belonging to similar model size bands and are competitive across the bands. Our XY-LENTXL model outperforms XLM-RXXL and is competitive with mT5XXL being 5x and 6x smaller respectively. Furthermore, the XL model variant also achieves 99.3% and 98.5% of the current best performant models on GLUE and SQuAD 2.0 respectively, thereby aiding in reducing the curse of multilinguality. The performance gains are consistent across language families. ## 8 Limitations Even though XY-LENT paves the way towards better general-purpose multilingual representation foundation models, in this section, we highlight the limitations associated with this work. We first expound upon the limitations associated with selfsupervised learning on large web extracted corpora. Then we show that while XY-LENT achieves strong performance on multiple multilingual benchmarks, when the downstream task involves unseen (during pretraining) languages, the performance drops by a substantial margin. Finally, we show the potential limitation associated with a common methodology used for domain adaptation associated with leveraging these multilingual foundation models, illustrating how catastrophic forgetting exacerbates certail issues pertaining to low resource language performance. ## Training Data XY-LENT uses CC-100 which a static multilingual corpus extracted from Common Crawl for 100 languages. As noted by Wenzek et al. (2020), several data filtering strategies have been applied to remove duplicated documents, paragraphs with high ratio of punctuations, digits and profanities, the resultant data may still result in many potential biases requiring further analysis. Additionally, these issues might be aggravated for models that leverage bitext data, since the bitexts themselves are mined from web crawls, and thus potentially have all the associated biases, stereotypes and other associated harms. Furthermore, the raw data was compiled from static Common Crawl snapshots from January, 2020 to December, 2020 and hence may not include information about some of the recent events such as COVID-19. ## Performance On Unseen Languages Given the performance improvements observed with scaling, we investigate how it impacts extremely low resource languages which are not present in the pre-training data. In order to do so, we consider our model's performance on the AmericasNLI dataset (Ebrahimi et al., 2022) which extends the XNLI dataset to 10 Indigenous languages of the Americas. Table 5 presents the results on the AmericasNLI dataset. As can be seen, XY-LENT does outperform XLM-R, indicating that better representation learning also benefits these extremely low resource languages. However, we do not see an increase in performance while scaling our models. Specifically, the performance of XY-LENTBase and XYLENTXL model is nearly the same, and substantially worse that the performance observed on the XNLI dataset. This indicates that, while parameter scaling can help improve performance on languages that the model has seen during pre-training, it does not automatically improve performance in the extremely low-resource regime 6. Thus, while model scaling allows for improvements across numerous dimensions, it is far from a panacea, especially if not done in conjunction with data scaling efforts. To be able to improve performance for unseen languages, an intervention would need to be made at the data collection efforts during pretraining, which we aim to assess in future works. ## Continued Training For Domain Adaptation In Pre-Trained Encoders In recent years, continued training on domain specific corpora has been considered a viable approach for domain adaptation of MLM style pre-trained models (Gururangan et al., 2020; Yao et al., 2021) where the core idea is to continue train the pretrained model on domain specific corpora with the goal of improving in-domain downstream evaluation. We first show that this phenomenon can be extended to models pretrained with an ELECTRA style training objective. Concretely, we apply domain adaptation in the biomedical domain where we continue to train our XY-LENTBase as well as XY-LENTMLM + TLM model on the PubMed data presented in Yao et al. (2021), and evaluate it on the ChemProt task (which aims at extracting relations between chemicals and proteins) presented in Gururangan et al. (2020) as the in-domain downstream task. We observe that the continued training approach presented in Gururangan et al. (2020) for the ELECTRA style models, using the same peak learning rate as used during pre-training, results in divergence. Interestingly, this neither happens for the generator of the ELECTRA model nor for the 6Note that since the tokenizer is a sentencepiece tokenzier, there are extremely few UNK words in the low-resource languages. Consequently, the poor performance is not explained by excessive occurrences of UNK tokens | Model | Avg Avg w/o en | en | aym bzd | cni | gn | hch | nah | oto | quy | shp | tar | |------------------|------------------|--------------------------------------------------------|--------------------------------------------------------|-------|----------------|-------|--------------------------|-------|-------|-------|-------| | XLM-RBase | 39.4 | 38.5 | 85.8 36.1 39.7 37.9 39.5 37.2 42.6 37.8 37.2 40.5 36.4 | | | | | | | | | | XLM-EBase | 44.8 | 40.6 | 87.5 | 40 | 38.8 41.7 43.6 | 38 | 43.8 39.8 41.7 42.3 35.9 | | | | | | XY-LENTBase 45.5 | 41.6 | 84.4 40.7 40.7 42.9 42.5 38.9 45.5 40.9 42.1 43.9 37.6 | | | | | | | | | | | XY-LENTXL | 47.2 | 42.8 | 90.8 42.1 42.5 45.6 42.9 41.2 45.0 41.3 42.7 46.9 37.9 | | | | | | | | | Table 5: Performance of models on the AmericasNLI dataset. Note that model scaling does not seem to improve performance as much for these unseen languages. | Model | CT | Avg | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur | |-------------------------------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------|------|-----------------------------------------|------|------|----------------|------|------|------|------|------|------|------|------|------|------| | ✗ | 78.4 86.2 81.5 82.9 81.5 80.6 81.8 79.8 77.4 77.9 78.3 75.2 78.3 73.3 72.8 68.0 | | | | | | | | | | | | | | | | | | ✓ | 67.5 85.7 73.9 74.5 71.4 71.7 70.9 71.2 57.8 65.1 66.7 68.5 71.8 60.6 45.8 57.1 | | | | | | | | | | | | | | | | | | Relative ∆(%) | 13.9 | 0.6 | 9.3 | 10.1 12.4 11.0 13.3 10.8 25.3 16.4 14.8 | 8.9 | 8.3 | 17.3 37.1 16.0 | | | | | | | | | | | | ✓ w/ low LR 73.5 85.9 78.2 78.6 76.2 76.8 | 77 | 76 | 68 | 72.2 73.7 73.8 76.2 68.7 57.4 64.4 | | | | | | | | | | | | | | | Relative ∆(%) | 6.3 | 0.3 | 4.0 | 5.2 | 6.5 | 4.7 | 5.9 | 4.8 | 12.1 | 7.3 | 5.9 | 1.9 | 2.7 | 6.3 | 21.2 | 5.3 | | | XY-LENT MLM + TLM | ✗ | 80.3 87.9 83.4 84.4 82.9 82.6 83.1 81.1 79.5 79.5 80.0 77.7 80.1 76.4 75.3 71.3 | | | | | | | | | | | | | | | | | XY-LENTBase | ✓ | 75.6 87.5 80.5 81.6 78.2 79.3 79.4 76.8 72.4 74.2 76.4 75.3 78.7 70.3 58.8 65.1 | | | | | | | | | | | | | | | | | Relative ∆(%) | 5.9 | 0.5 | 3.5 | 3.3 | 5.7 | 4.0 | 4.5 | 5.3 | 8.9 | 6.7 | 4.5 | 3.1 | 1.7 | 8.0 | 21.9 | 8.7 | | Table 6: Drop in cross-lingual zero-shot performance before and after continued training (CT). For MLM, we show with original LR and lower LR. ∆ measured as a relative (%) drop compared to no CT | Model | Acc. | Acc. | |-----------------------------------|--------|--------| | (w/o Contd. Train) (Contd. Train) | | | | XY-LENT MLM + TLM | 82.0 | 86.0 | | XY-LENTBase | 81.6 | 86.2 | Table 7: Domain Specific Downstream task: Accuracy ![9_image_0.png](9_image_0.png) on Chemprot dataset Figure 5: Relative Zero-Shot performance drop with continued training for MLM and ELECTRA style models MLM style pre-trained model. Thus, for an ELECTRA style continued training setup, we posit reducing the peak learning rate to be a crucial change. Table 7 shows the performance on the downstream task post the continued training approach and unsurprisingly it helps with improving in-domain performance. However, given the multilingual nature of such models, we test the multilinguality of these models before and after continued training; using crosslingual zero-shot XNLI as a proxy for multilingual model quality. Table 6 shows the drop in performance across all languages pre and post continued training. We first note that this drop in performance is present for both MLM and ELECTRA style of models, and thus is not an artifact of the pre-training objective. We observe that the drop in performance is not uniform across all languages and the drop is worse for MLM style models (with using the same peak learning rate suffering more from this issue; Table 7). While we expect the drop in English performance to be relatively less, we do see that the drop is substantially more for the mid and low resource languages (especially Hindi, Turkish, Urdu and Swahili; see Fig. 5). While this can potentially be ameliorated by using techniques like Adapters (Houlsby et al., 2019) etc., we would like to draw attention towards the fact that general purpose continued training does suffer from this issue. ## References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874–3884, Minneapolis, Minnesota. Association for Computational Linguistics. Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of monolingual representations. *arXiv preprint* arXiv:1910.11856. Payal Bajaj, Chenyan Xiong, Guolin Ke, Xiaodong Liu, Di He, Saurabh Tiwary, Tie-Yan Liu, Paul Bennett, Xia Song, and Jianfeng Gao. 2022. Metro: Efficient denoising pretraining of large scale autoencoding language models with model generated signals. *arXiv* preprint arXiv:2204.06644. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. *arXiv preprint* arXiv:2108.07258. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055. Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021a. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3576–3588, Online. Association for Computational Linguistics. Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, XianLing Mao, Heyan Huang, and Furu Wei. 2021b. Improving pretrained cross-lingual language models via self-labeled word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3418–3430, Online. Association for Computational Linguistics. Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Bo Zheng, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, and Furu Wei. 2022. XLM-E: Cross-lingual language model pre-training via ELECTRA. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6170–6182, Dublin, Ireland. Association for Computational Linguistics. Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020a. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454– 470. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020b. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. *Advances in* neural information processing systems, 32. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics. Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. *Advances in Neural Information Processing Systems*, 26:2292–2300. Jacob Devlin. 2018. Multilingual BERT README document. https://github. com/google-research/bert/blob/ a9ba4b8d7704c1ae18d1b28c56c0430d41407eb1/ multilingual.md. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Georgiana Dinu and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem. volume abs/1412.6568. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir Meza Ruiz, Gustavo Giménez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando Coto-Solano, Thang Vu, and Katharina Kann. 2022. AmericasNLI: Evaluating zero-shot natural language understanding of pretrained multilingual models in truly low-resource languages. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6279–6299, Dublin, Ireland. Association for Computational Linguistics. Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP 2020), pages 5960–5969. Association for Computational Linguistics. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric multilingual machine translation. Journal of Machine Learning Research, 22(107):1–48. Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, and Alexis Conneau. 2021. Larger-scale transformers for multilingual masked language modeling. arXiv preprint arXiv:2105.00572. Suchin Gururangan, Ana Marasovic, Swabha ´ Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pages 2790–2799. PMLR. Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, and Graham Neubig. 2021. Explicit alignment objectives for multilingual bidirectional encoders. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3633–3643, Online. Association for Computational Linguistics. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In *International Conference on Machine Learning*, pages 4411–4421. PMLR. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian ˘ Riedel, and Holger Schwenk. 2019. Mlqa: Evaluating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475. Yu Meng, Chenyan Xiong, Payal Bajaj, Paul Bennett, Jiawei Han, Xia Song, et al. 2021. Coco-lm: Correcting and contrasting text sequences for language model pretraining. Advances in Neural Information Processing Systems, 34:23102–23114. Miloš Radovanovic, Alexandros Nanopoulos, and Mir- ´ jana Ivanovic. 2010. Hubs in space: Popular near- ´ est neighbors in high-dimensional data. volume 11, pages 2487–2531. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018a. Know what you don't know: Unanswerable questions for squad. ACL. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018b. Know what you don't know: Unanswerable questions for SQuAD. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2021a. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1351–1361, Online. Association for Computational Linguistics. Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021b. CCMatrix: Mining billions of high-quality parallel sentences on the web. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6490–6500, Online. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *EMNLP*, pages 1631–1642. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In *Proceedings of the 12th Language Resources and Evaluation Conference*, pages 4003–4012, Marseille, France. European Language Resources Association. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. Proceedings of the 2nd Workshop on Evaluating Vector-Space Representations for NLP. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Jian Yang, Shuming Ma, Dongdong Zhang, Shuangzhi Wu, Zhoujun Li, and Ming Zhou. 2020. Alternating language modeling for cross-lingual pre-training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9386–9393. Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687–3692, Hong Kong, China. Association for Computational Linguistics. Yunzhi Yao, Shaohan Huang, Wenhui Wang, Li Dong, and Furu Wei. 2021. Adapt-and-distill: Developing small, fast and effective pretrained language models for domains. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 460–470, Online. Association for Computational Linguistics. Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628– 1639, Online. Association for Computational Linguistics. Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298–1308, Minneapolis, Minnesota. Association for Computational Linguistics. Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, and Furu Wei. 2021. Allocating large vocabulary capacity for crosslingual language model pre-training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3203–3215, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Chunting Zhou, Daniel Levy, Xian Li, Marjan Ghazvininejad, and Graham Neubig. 2021. Distributionally robust multilingual machine translation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5664–5674, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. ## Appendix A Pre-Training And Model Hyperparameters | Hyperparameters | Base | Large | XL | |-----------------------|--------|---------|-------| | Layers | 12 | 24 | 48 | | Hidden size | 768 | 1,024 | 1,536 | | FFN inner hidden size | 3,072 | 4,096 | 6,144 | | Attention heads | 12 | 16 | 24 | Table 8: Model hyperparameters of XY-LENT discriminators across different sizes. | Hyperparameters | Base | Large | XL | |------------------------|-------------|-------------|-------------| | Training steps | 125K | 500K | 150K | | Batch tokens per task | 4M | 1M | 4M | | Adam ϵ | 1e-6 | 1e-6 | 1e-6 | | Adam β | (0.9, 0.98) | (0.9, 0.98) | (0.9, 0.98) | | Learning rate | 8e-4 | 2e-4 | 1e-4 | | Learning rate schedule | Linear | Linear | Linear | | Warmup steps | 10,000 | 10,000 | 10,000 | | Gradient clipping | 2.0 | 1.0 | 1.0 | | Weight decay | 0.01 | 0.01 | 0.01 | Table 9: Hyperparameters used for pre-training XYLENT. Table 8 shows the hyper-parameters of XYLENT across various model sizes. All the models are trained with a vocabulary size of 500K and we use batch size of 1M or 4M tokens based on model size as mentioned in Table 9. For multilingual replace token detection task we work with a fixed input sequence length of 512 and hence maintains a constant batch size. For translation replace token detection task, the input sequence length is dynamically set as the length of original translation pair and the max one is chosen across the batch. For the base and large models, we train on 128 Nvidia A100-40GB GPU cards, and for the XL model, we use 512 Nvidia A100-80GB GPU cards. ## B Downstream Performance For evaluating cross lingual understanding, we consider five multilingual evaluation benchmarks. We use XNLI (Cross-Lingual Natural Language Inference) and PAWS-X for classification and XQuAD, MLQA and TyDiQA-GP for question answering. Additionally, we use GLUE benchmark and SQuAD2.0 to evaluate the English performance of our model. XNLI The XNLI dataset (Conneau et al., 2018) comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other languages in two modes: (i)*zero-shot*: the model is fine-tuned only using the English training data and (ii) *translate-train-all*: the English training set is machine-translated to each language and we fine-tune a multilingual model on all training sets. For translations, we use the original XNLI data for consistency. PAWS-X The PAWS (Paraphrase Adversaries from Word Scrambling) dataset (Zhang et al., 2019) requires to determine whether two sentences are paraphrases. We use the subset of the PAWS dev and test sets translated to six other languages by professional translators, dubbed as PAWS-X (Yang et al., 2019) for evaluation, while using the PAWS set for training. XQuAD The English SQuAD v1.1(Rajpurkar et al., 2016) requires identifying the answer to a question as a span in the corresponding paragraph. In XQuAD(Artetxe et al., 2019), a subset of the English dev set was translated into ten other languages by professional translators which is then used for evaluation. MLQA The Multilingual Question Answering(Lewis et al., 2019) dataset is another crosslingual question answering dataset. In this dataset, the evaluation data for English and six other languages was obtained by automatically mining target language sentences that are parallel to sentences in English from Wikipedia, crowd-sourcing annotations in English, and translating the question and aligning the answer spans in the target languages. We use the SQuAD v1.1(Rajpurkar et al., 2016) training data for training and evaluate on the test data of the corresponding task. TyDiQA-GP We use the gold passage version of the Typologically Diverse Question Answering(Clark et al., 2020a) dataset, a benchmark for information-seeking question answering, which covers nine languages. The gold passage version is a simplified version of the primary task, which uses only the gold passage as context and excludes unanswerable questions. It is thus similar to XQuAD and MLQA, while being more challenging as questions have been written without seeing the answers, ![14_image_0.png](14_image_0.png) leading to 3× and 2× less lexical overlap compared to XQuAD and MLQA respectively. We use the English training data for training and evaluate on the test sets of the target languages. GLUE and SQuAD 2.0 We evaluate English performance of our model on the GLUE benchmark (Wang et al., 2018) which is a benchmark of eight diverse NLU tasks spanning over single-sentence tasks (CoLA, SST-2), similarity and paraphrase tasks (MRPC, STS-B, QQP) and inference tasks (RTE, MNLI, QNLI). The benchmark is also varied in terms of the training data sizes across tasks which makes it an effective benchmark for testing NLU capabilities of a pretrained model in a robust fashion. We also evaluate the English performance on SQuAD 2.0 (Rajpurkar et al., 2018b) task which is a collection of 100k crowdsourced question/answer pairs collected from Wikipedia where given a passage and a question, the task is to predict the answer span in the passage. The task also has the possibility that no answer exists, making the problem more grounded. ## C Sampling Sparsity Across All Language Pairs Figure 6 shows the sampling distribution as induced by the M2M sampling method and by our proposed method for all language pair directions. Our proposed method induces a much less sparse distribution, resulting in less data wastage. ## D **Detailed Performance On All Tasks And** Languages We present the detailed results associated with all tasks and languages in this section. ![14_image_1.png](14_image_1.png) | Model | # Params Avg | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------| | Cross-lingual zero-shot transfer (models fine-tune on English data only) Base mT5 580M 75.4 84.7 79.1 80.3 77.4 77.1 78.6 77.1 72.8 73.3 74.2 73.2 74.1 70.8 69.4 68.3 XLM-R 225M 76.2 85.8 79.7 80.7 78.7 77.5 79.6 78.1 74.2 73.8 76.5 74.6 76.7 72.4 66.5 68.3 XLM-E 477M 78.1 87.3 81.9 82.4 81 80.2 81.1 79.7 77.7 76.4 78.5 76.2 79.0 72.7 69.6 68.3 XY-LENT mCCA 477M 79.5 87.8 82.9 83.8 81.5 81.7 81.8 80.8 79.1 79.1 79.8 77.7 79.3 74.6 72.7 69.9 XY-LENT DRO + CCM 477M 79.7 87.3 82.2 83.7 82.6 82.0 82.5 80.2 78.5 79.0 80.1 77.0 80.3 74.4 74.9 70.2 XY-LENT CCM 477M 80.5 87.7 83.7 84.7 83.7 82 83 81.5 79.3 79.7 80.3 77.9 80.2 76.1 75.5 71.6 Large XLM-RLarge 550M 80.9 89.1 84.1 85.1 83.9 82.9 84 81.2 79.6 79.8 80.8 78.1 80.2 76.9 73.9 73.8 mT5Large 1.2B 81.1 89.4 84.1 84.2 83.4 83.2 84.1 81.5 80.1 79.8 81 79.4 80.3 77.6 75.4 73.5 XLM-ELarge 840M 81.3 89.4 84.7 85.5 84.4 83.5 84.1 81.9 81.3 80.7 81.2 79.2 81.5 76.5 74.1 72.4 XY-LENTLarge 814M 83 90.1 86 86.7 85.4 85.7 85.3 83.2 82.6 83.4 82.8 81.0 82.5 78.3 78.1 74.3 XL XLM-RXL 3.5B 82.3 90.7 85.5 86.5 84.6 84 85.2 82.7 81.7 81.6 82.4 79.4 81.7 78.5 75.3 74.3 mT5XL 3.7B 82.9 90.6 85.3 81.3 85.8 85.4 85.4 83.7 82 82.2 81.8 80.9 82.7 80.4 78.6 77.0 XLM-EXL 2.2B 83.7 91.3 86.8 87.4 86.7 85.8 85.9 84.2 83.4 82.7 83.4 80.9 83.1 80.2 77.6 75.7 XY-LENTXL 2.1B 84.8 92.2 87.4 88.7 87.3 87.2 87.3 83.8 84 84.6 85.1 81.9 83.9 81.6 80.5 77.0 XXL XLM-RXXL 10.7B 83.1 91.6 86.2 87.3 87 85.1 85.7 82.5 82 82.5 83 79.5 82.6 79.8 76.2 74.9 mT5XXL 13B 85.0 91.6 86.9 87.8 87.3 87.3 87.7 85.1 83.8 84.5 79.8 81.7 83.6 83.2 80.3 84.6 Translate-train (models fine-tune on English training data plus translations in all target languages) Base mT5Base 300M 75.9 82 77.9 79.1 77.7 78.1 78.5 76.5 74.8 74.4 74.5 75 76 72.2 71.5 70.4 XLM-RBase 225M 79.1 85.4 81.4 82.2 80.3 80.4 81.3 79.7 78.6 77.3 79.7 77.9 80.2 76.1 73.1 73.0 XLM-EBase 477M 81.7 88.2 83.8 84.7 83.9 83.5 84.1 82.6 81.6 81.1 82.6 81.0 82.5 77.8 75.2 73.7 XY-LENT mCCABase 477M 82.4 88.0 84.7 85.6 84.2 83.8 84.4 83.3 82.1 82.2 82.7 81.4 82.9 79.4 77.3 73.3 XY-LENT CCMBase 477M 82.9 88.7 85.6 86.1 85.3 85.2 85.8 83.1 83.1 82.9 83.3 81.0 83.7 79.6 78.1 72.7 Large mT5Large 1.2B 81.8 88.3 83.8 84.9 84.0 83.7 84.1 82.0 81.0 80.3 81.3 79.9 81.7 79.8 76.4 75.9 XLM-RLarge 550M 83.6 89.1 85.1 86.6 85.7 85.3 85.9 83.5 83.2 83.1 83.7 81.5 83.7 81.6 78 78.1 XLM-ELarge 840M 84.1 90.1 86.8 87.1 86.0 86.1 86.4 84.8 83.5 83.7 84.4 81.9 84.9 81.2 78.5 76.4 XY-LENTLarge 814M 84.9 90.2 87.4 87.9 86.7 87.0 87.4 85.0 84.7 84.8 85.0 83.4 85.0 82.0 80.9 75.9 XL mT5XL 3.7B 84.8 90.9 86.8 87.4 86.8 86.4 86.8 84.9 84.4 84.2 83.9 82.3 84 83.1 81.3 79.4 XLM-RXL 3.5B 85.4 91.1 87.2 88.1 87 87.4 87.8 85.3 85.2 85.3 86.2 83.8 85.3 83.1 79.8 78.2 XLM-EXL 2.2B 85.5 90.9 87.4 88.3 87.4 87.2 87.6 85.1 85.1 85.1 86.1 83.7 85.4 82.5 81.3 78.9 XY-LENTXL 2.1B 87.1 92.2 88.9 89.7 89.1 89.1 89.1 86.2 86.8 87.0 87.3 85.2 86.7 84.5 83.2 80.8 XXL XLM-RXXL 10.7B 86.0 91.5 87.6 88.7 87.8 87.4 88.2 85.6 85.1 85.8 86.3 83.9 85.6 84.6 81.7 80.6 mT5XXL 13B 87.8 92.7 89.1 90 89.8 89.5 89.4 87.6 87.1 87.2 87.5 85.6 86.5 86.5 84.3 83.8 Table 11: XNLI accuracy scores for each language | | | | | | | | | | | | | | | | | | Model | en | ar | de | es | hi | vi | zh | Avg | |-------------------------------------------------|----------------------------------------------------------------------------------|-------------------------------|-------------------------------|------|------|------|------|-------| | Base | | | | | | | | | | mT5 | 81.7/66.9 57.1/36.9 62.1/43.2 67.1/47.2 | 55.4/37.9 | 65.9/44.1 61.6/38.6 64.4/45.0 | | | | | | | XLM-E | 82.1/69.2 62.4/42.4 65.7/50.7 71.2/53.1 65.12/47.5 69.8/48.8 64.6/41.5 68.7/50.5 | | | | | | | | | XY-LENT 83.1/70.3 63.9/43.9 68.9/54.0 73.3/55.1 | 69.0/51.7 | 72.7/52.0 68.0/45.2 71.3/53.2 | | | | | | | | Large | | | | | | | | | | XLM-R | 80.6/67.8 63.1/43.5 68.5/53.6 74.1/56.0 | 69.2/51.6 | 71.3/50.9 68.0/45.4 70.7/52.7 | | | | | | | mT5 | 84.9/70.7 65.3/44.6 68.9/51.8 73.5/54.1 | 66.9/47.7 | 72.5/50.7 66.2/42.0 71.2/51.7 | | | | | | | XLM-E | 84.1/71.2 66.6/46.3 70.0/54.8 74.7/56.8 | 71.0/53.3 | 74.6/53.6 68.8/44.9 72.8/54.4 | | | | | | | XY-LENT 85.0/72.3 68.0/47.6 72.1/56.9 75.4/57.1 | 72.9/54.7 | 75.4/54.0 71.2/47.6 74.3/55.7 | | | | | | | | XL | | | | | | | | | | mT5 | 85.5/71.9 68.0/47.4 70.5/54.4 75.2/56.3 | 70.5/51.0 | 74.2/52.8 70.5/47.2 73.5/54.4 | | | | | | | XLM-R | 85.1/72.6 66.7/46.2 70.5/55.5 74.3/56.9 | 72.2/54.7 | 74.4/52.9 70.9/48.5 73.4/55.3 | | | | | | | XLM-E | 85.2/72.6 68.1/47.6 71.1/56.4 75.7/57.4 | 73.1/55.2 | 75.4/53.9 71.3/47.7 74.3/55.8 | | | | | | | XY-LENT 85.4/72.4 69.0/48.5 73.0/57.7 76.8/58.6 | 75.0/56.5 | 76.2/54.7 72.1/48.6 75.4/56.7 | | | | | | | | XXL | | | | | | | | | | XLM-R | 85.5/72.4 68.6/48.4 72.7/57.8 75.4/57.6 | 73.7/55.8 | 76.0/55.0 71.7/48.9 74.8/56.6 | | | | | | | mT5 | 86.7/73.5 70.7/50.4 74.0/57.8 76.8/58.4 | 75.6/57.3 | 76.4/56.0 71.8/48.8 76.0/57.4 | | | | | | Table 12: MLQA results (F1/EM) for each language. Model en ar bn fi id ko ru sw te Avg Base mT5 71.8/60.9 67.1/50.4 40.7/22.1 67.0/52.2 71.3/54.5 49.5/37.7 54.9/32.6 60.4/43.9 40.6/31.1 58.1/42.8 XLM-E 71.8/57.7 68.3/50.0 60.6/45.1 68.0/52.6 73.2/56.1 53.2/40.2 63.4/38.4 64.4/48.1 41.3/27.2 62.7/46.2 XY-LENT 73.4/59.1 71.6/54.1 63.7/51.3 66.5/52.3 77.0/63.4 57.2/43.5 68.0/49.0 67.3/51.1 59.4/39.3 67.1/51.5 Large XLM-R 71.5/56.8 67.6/40.4 64.0/47.8 70.5/53.2 77.4/61.9 31.9/10.9 67.0/42.1 66.1/48.1 70.1/43.6 65.1/45.0 mT5 71.6/58.9 60.5/40.4 42.0/23.9 64.6/48.8 67.0/49.2 47.6/37.3 58.9/36.8 65.7/45.3 41.9/29.7 57.8/41.2 XLM-E 74.7/62.0 75.2/57.1 72.9/56.6 69.9/54.9 78.9/66.7 61.4/47.8 68.0/44.9 72.2/56.7 72.8/45.6 71.8/54.7 XY-LENT 75.6/62.0 77.0/59.9 74.6/62.8 74.0/57.5 80.7/67.1 66.4/52.2 69.5/46.3 76.0/61.3 72.2/48.4 74.0/57.5 XL mT5 80.3/70.9 81.7/65.5 74.5/57.5 79.4/65.3 83.5/70.4 70.0/60.5 71.6/47.8 77.3/59.7 77.9/55.8 77.4/61.5 XLM-E 79.1/64.3 78.2/60.3 76.9/64.1 75.0/60.2 84.4/70.3 66.7/54.8 76.4/56.3 78.3/63.7 75.6/51.1 76.7/60.6 XY-LENT 78.2/64.1 79.3/60.8 78.8/67.3 77.7/63.2 84.9/70.6 68.5/56.2 77.0/57.5 79.9/66.3 77.7/53.2 78.0/62.1 XXL mT5 83.7/72.5 82.8/66.0 80.2/63.7 83.3/70.2 85.3/73.3 76.2/64.1 76.6/55.8 81.9/66.1 79.2/58.7 81.0/65.6 Table 13: TYDi QA GP results (F1/EM) for each language. Model en ar de el es hi ru th tr vi zh Avg Base mT5 84.6/71.7 63.8/44.3 73.8/54.5 59.6/35.6 74.8/56.1 60.3/43.4 57.8/34.7 57.6/45.7 67.9/48.2 70.7/50.3 66.1/54.1 67.0/49.0 XLM-E 84.9/72.9 70.5/54.3 78.9/63.2 75.6/57.8 78.4/60.8 71.2/54.5 75.9/59.7 68.7/58.8 71.6/55.4 75.9/56.4 65.5/56.9 74.3/59.2 XY-LENT 87.2/76.0 72.9/56.0 80.0/64.5 79.6/63.5 81.2/63.1 75.3/59.7 77.7/61.5 70.9/59.5 74.0/58.7 77.4/59.2 69.0/61. 76.8/62.1 Large XLM-R 86.5/75.7 68.6/49.0 80.4/63.4 79.8/61.7 82.0/63.9 76.7/59.7 80.1/64.3 74.2/62.8 75.9/59.3 79.1/59.0 59.3/50.0 76.6/60.8 mT5 88.4/77.3 75.2/56.7 80.0/62.9 77.5/57.6 81.8/64.2 73.4/56.6 74.7/56.9 73.4/62.0 76.5/56.3 79.4/60.3 75.9/65.5 77.8/61.5 XLM-E 87.1/75.5 75.1/58.1 82.1/66.0 80.9/64.0 82.5/64.3 77.5/61.3 80.3/63.7 73.4/59.4 76.8/60.8 79.2/59.0 70.5/61.6 78.7/63.1 XY-LENT 88.1/77.4 76.3/59.6 82.6/67.1 82.5/65.1 83.9/66.6 77.9/61.3 80.2/63.6 74.3/63.8 78.5/62.9 80.6/61.6 71.4/64.6 79.7/64.9 XL mT5 88.8/78.1 77.4/60.8 80.4/63.5 80.4/61.2 82.7/64.5 76.1/60.3 76.2/58.8 74.2/62.5 77.7/58.4 80.5/60.8 80.5/71.0 79.5/63.6 XLM-R 89.5/79.0 78.4/61.6 81.3/64.1 82.3/63.9 84.6/66.2 78.8/63.2 81.5/65.0 76.0/65.5 73.9/57.9 81.7/61.8 72.3/66.1 80.0/64.9 XLM-E 89.1/79.0 78.5/62.0 82.4/66.9 81.8/65.5 84.3/67.1 79.3/63.4 82.2/66.9 75.4/65.1 78.3/62.5 81.5/62.9 71.6/65.1 80.4/66.0 XY-LENT 89.4/79.2 79.2/62.0 84.1/68.3 83.5/66.1 84.9/66.6 80.4/64.5 82.9/67.1 75.0/61.7 79.5/64.5 83.2/64.1 72.7/65.0 81.3/66.3 XXL XLM-R 89.3/79.4 80.1/63.7 82.7/65.8 83.4/65.5 83.8/66.0 80.7/65.4 82.4/65.4 76.6/65.6 76.8/61.7 82.2/63.0 74.1/67.4 81.1/66.3 mT5 90.9/80.1 80.3/62.6 83.1/65.5 83.3/65.5 85.1/68.1 81.7/65.9 79.3/63.6 77.8/66.1 80.2/60.9 83.1/63.6 83.1/73.4 82.5/66.8 Table 14: XQuAD results (F1/EM) for each language. ## E Hyperparameters For Fine-Tuning In Table 15, we report the hyperparameters for fine-tuning XY-LENT on the downstream tasks. | XQuAD | MLQA | TyDiQA | XNLI | PAWS-X | | |---------------|------------|------------|------------|--------------|----------------| | Batch size | 32 | 32 | 32 | 32 | 32 | | Learning rate | {2,3,4}e-5 | {2,3,4}e-5 | {2,3,4}e-5 | {5,...,8}e-6 | {8,9,10,20}e-6 | | LR schedule | Linear | Linear | Linear | Linear | Linear | | Warmup | 10% | 10% | 10% | 12,500 steps | 10% | | Weight decay | 0 | 0 | 0 | 0 | 0 | | Epochs | 4 | {2,3,4} | {10,20,40} | 10 | 10 | Table 15: Hyperparameters used for fine-tuning on the downstream tasks. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 ✓ A2. Did you discuss any potential risks of your work? 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A and E ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhang-etal-2023-bridging-gap
Bridging The Gap: Entailment Fused-T5 for Open-retrieval Conversational Machine Reading Comprehension
https://aclanthology.org/2023.acl-long.857
Open-retrieval conversational machine reading comprehension (OCMRC) simulates real-life conversational interaction scenes. Machines are required to make a decision of {``}Yes/No/Inquire{''} or generate a follow-up question when the decision is {``}Inquire{''} based on retrieved rule texts, user scenario, user question and dialogue history. Recent studies try to reduce the information gap between decision-making and question generation, in order to improve the performance of generation. However, the information gap still persists because these methods are still limited in pipeline framework, where decision-making and question generation are performed separately, making it hard to share the entailment reasoning used in decision-making across all stages. To tackle the above problem, we propose a novel one-stage end-to-end framework, called Entailment Fused-T5 (EFT), to bridge the information gap between decision-making and question generation in a global understanding manner. The extensive experimental results demonstrate that our proposed framework achieves new state-of-the-art performance on the OR-ShARC benchmark. Our model and code are publicly available at an anonymous link.
## Bridging The Gap: Entailment Fused-T5 For Open-Retrieval Conversational Machine Reading Comprehension Xiao Zhang123**, Heyan Huang**123∗ , Zewen Chi123**, Xian-Ling Mao**123 1School of Computer Science and Technology, Beijing Institute of Technology 2Beijing Engineering Research Center of High Volume Language Information Processing and Cloud Computing Applications 3Southeast Academy of Information Technology, Beijing Institute of Technology {yotta,hhy63,czw,maoxl}@bit.edu.cn ## Abstract Open-retrieval conversational machine reading comprehension (OCMRC) simulates reallife conversational interaction scenes. Machines are required to make a decision of Yes/No/Inquire or generate a follow-up question when the decision is Inquire based on retrieved rule texts, user scenario, user question and dialogue history. Recent studies try to reduce the information gap between decision-making and question generation, in order to improve the performance of generation. However, the information gap still persists because these methods are still limited in pipeline framework, where decision-making and question generation are performed separately, making it hard to share the entailment reasoning used in decision-making across all stages. To tackle the above problem, we propose a novel one-stage end-to-end framework, called Entailment Fused-T5 (EFT), to bridge the information gap between decisionmaking and question generation in a global understanding manner. The extensive experimental results demonstrate that our proposed framework achieves new state-of-the-art performance on the OR-ShARC benchmark. Our model and code are publicly available1. ## 1 Introduction Open-retrieval conversational machine reading comprehension (OCMRC) (Gao et al., 2021) investigates real-life scenes, aiming to formulate multi-turn interactions between humans and machines in open-retrieval settings. As shown in Figure 1, given the user scenario and user question, machines are required to first retrieve related rule texts in the knowledge database, and then make a decision of Yes/No/Inquire or generate a follow-up question when the decision is Figure 1: An example in the OCMRC dataset. Given the user scenario and user question, machines are required to first retrieve related rule texts in the knowledge database, and then make a decision of Yes/No/Inquire or generate a follow-up question when the decision is Inquire based on retrieved rule texts, user scenario, user question and dialogue history. Inquire based on retrieved rule texts, user scenario, user question and dialogue history. Previous studies (Saeidi et al., 2018; Verma et al., 2020; Lawrence et al., 2019; Zhong and Zettlemoyer, 2019; Gao et al., 2020a,b; Ouyang et al., 2021; Gao et al., 2021; Zhang et al., 2021) typically adopt pipeline frameworks based on pretrained language models (PrLM) (Devlin et al., 2019; Clark et al., 2020; Lewis et al., 2020; Liu et al., 2020) , as shown in Figure 4, these frameworks usually consist of three stages, includ15374 ![1_image_0.png](1_image_0.png) ing decision-making, span extraction and question rephrasing. Different entailment reasoning strategies are utilized to improve the performance of decision-making. Span extraction and question rephrasing are conducted for question generation. These pipeline frameworks are either completely independent of the three stages (Zhong and Zettlemoyer, 2019; Gao et al., 2020a,b; Ouyang et al., 2021; Gao et al., 2021), or try to reduce the information gap between decision-making and question generation through representation-fused methods (Zhang et al., 2021) among three stages . However, the information gap still persists because these methods are still limited in pipeline framework, where decision-making and question generation are performed separately, making it hard to share the entailment reasoning used in decision-making across all stages. To tackle the above problem, we propose a novel one-stage end-to-end framework, called entailment fused-T5 (EFT) to bridge the information gap between decision-making and question generation in a global understanding manner. Specifically, our model consists of a universal encoder and a duplex decoder. The decoder consists of an activated entailment reasoning decoder and a fused answer generation decoder. The implicit reasoning chains of both decision-making and question generation in the multi-fused answer generation decoder are explicitly supervised by ac- ![1_image_1.png](1_image_1.png) tivated entailment reasoning through the shared entailment representation of our encoder. Moreover, a relevance-diversity fusion strategy is utilized to further improve the implicit reasoning abilities among the multiple retrieved rules for the fused answer generation decoder through the implicit ranking method. Thus, our model can reason in a global understanding manner. The extensive results, as illustrated in Figure 3, demonstrate that our proposed framework EFT achieves new stateof-the-art performance on the OR-ShARC benchmark. Our contributions are summarized as follows: $$\begin{array}{r l r l r l}{{\mathrm{~osc~}}}&{{\mathrm{~a~}}}&{{\mathrm{~r~}}}\end{array}$$ $$\mathbf{T}\mathbf{I}=0$$ - We propose a novel one-stage end-toend framework, called entailment fused-T5 (EFT) to bridge the information gap between decision-making and question generation through a global understanding manner. - We further investigate a relevance-diversity fusion strategy (RD strategy) to improve the implicit reasoning abilities of our model. - Extensive experiments demonstrate the effectiveness of our proposed framework on the OR-ShARC benchmark. ## 2 Related Work Conversation-Based Reading Comprehension Conversation-based reading comprehension (Saeidi et al., 2018; Sun et al., 2019; Reddy et al., 2019; Choi et al., 2018; Cui et al., 2020; Gao et al., 2021) aims to formulate human-like interactions. Compared to traditional reading comprehension, these tasks extend the reading comprehension scenarios with dialogue interactions. There are typically three main types of these tasks: span-based QA tasks (Choi et al., 2018; Reddy et al., 2019), multi-choice tasks (Sun et al., 2019; Cui et al., 2020), or hybrid-form tasks (Saeidi et al., 2018; Gao et al., 2021). Conversational Machine Reading Comprehension CMRC (Saeidi et al., 2018) is the hybrid form of conversation-based reading comprehension, which requires the machines to make a decision or generate a follow-up question based on rule text, user scenario, user question and dialogue history. In this paper, we focus on the open-retrieval conversational machine reading (OCMRC) task (Gao et al., 2021), which further extends the CMRC task into a real-life scenario. Machines are required to first retrieve related rule texts in a knowledge base based on user questions and user scenarios, then machines are required to make a decision of Yes/No/Inquire, or a follow-up question if the decision is Inquire based on the relevant rule texts, user scenario, user question and dialogue history. Due to the hybrid-form task, the previous methods (Zhong and Zettlemoyer, 2019; Gao et al., 2020a,b; Ouyang et al., 2021; Zhang et al., 2021) typically adopt pipeline architectures, including decision-making, span extraction and question phrasing. Various kinds of entailment reasoning strategies are proposed to improve the performance of decision-making. Despite the effectiveness of entailment reasoning, the performance is still limited because of the information gap between decision-making and question generation. Recent studies (Zhang et al., 2021, 2022) explored entailment reasoning sharing methods to reduce the gap between decision-making and question generation, but the performance is limited due to its frame flaws. In this paper, we propose a novel one-stage end-to-end model, called entailment fused-T5 (EFT), the details are written in the next sections. ## 3 Methods In open-retrieval CMRC, the machines are first required to retrieve related rule texts in a knowledge base, given user question and user scenario. Then machines are required to make decisions or generate follow-up questions based on retrieved rule texts, user scenario, user question and dialogue history. Thus, we conduct a retriever to first retrieve related rule texts from the knowledge database, and then generate the final answer through our end-to-end reader EFT. The training procedure is shown in Algorithm 1. ## 3.1 Retriever We first concatenate the user question and user scenario as the query to retrieve related rule texts in the knowledge base. The knowledge base is divided into the seen subset and the unseen subset. This is to simulate the real usage scenario: users will ask questions about rules they have already seen, or rules that are completely new. We only use seen rules in the training process. In this work, we utilize DPR (Karpukhin et al., 2020) to retrieve related rule texts. In contrast to previous approaches (Zhang et al., 2021) that employ TFIDF negatives as DRP hard negatives and restrict the scope of retrieved negatives to a limited data space, we adopt a different strategy. We randomly sample rule texts from the known knowledge base to serve as the negatives. Each step will randomly sample m numbers negatives in the training stage. We retrieve the top 20 relevant rule texts for each query, which is further used by our reader. ## 3.2 Reader: Eft In this stage, each item is formed as the tuple {*R, S, Q, D*}. R donates the rule text can15376 | Candidate 1: | User Scenario | User Question | Dialogue History | Rule1: EDU | Rule1: EDU | |----------------|-----------------|-----------------|--------------------|--------------|--------------| | Candidate 2: | User Scenario | User Question | Dialogue History | Rule2: EDU | Rule2: EDU | | Candidate 3: | User Scenario | User Question | Dialogue History | Rule3: EDU | Rule3: EDU | ![3_image_0.png](3_image_0.png) didates. {r1,r2, ...,rk}, where r do- R l nates the rule text item of R. S and Q represent user scenario and user questions, respec- D donates the dialogue history. tively. Given { R, S, Q, D }, EFT will directly generate a decision of Yes/No/Inquire or follow-up question when the decision is Inquire . EFT consists of a universal encoder and a duplex decoder. The Duplex decoder consists of an activated entailment reasoning decoder and a fused answer generation decoder. In this way, the whole implicit reasoning chains of the fused answer generation decoder will be fine-grained supervised by activated entailment reasoning with the shared entailment representation. Thus, the fused answer generation decoder will reason in a global understanding manner. The details are shown in Figure 4. ## Encoding 3.3 Given {R, S, Q, D}, we random sample k items in R , and concatenate each of them with S , Q , D as c , thus the item collection is formed as C l { C 1 , C 2 , ..., C k } . Specifically, each r in R is first parsed to elementary discourse unit (EDU) by a pre-trained model (Li et al., 2018). The final input format is shown in Figure 4. To prevent the leakage of location information in the fused answer generation decoder, and enhance the information extraction ability of the decoder, we utilize a relevance-diversity fusion (RD) strategy to randomly shuffle the order of items which are sampled from RD candidate rule texts, the details are written in Sec 3.4. Given C , we utilize T5 encoder as our backbone to get the rep- The presentation of special token resentation. are utilized as sentence-level representation H s = {h s 1 ,h s 2 , ...,h sk} for activated entailment reasoning decoding. The word-level representation H w = {hw1,hw2,...,hwk} are utilized for fused answer generation decoding. ## 3.4 Decoding We utilize duplex decoding to explicitly supervise our answer generation stage, which introduced the explicit entailment reasoning information in implicit answer generation reasoning. The answer generation will either generate a decision of Yes/No/Inquire or a follow-up question when the decision is Inquire . The activated entailment reasoning decoder will reason the entailment states of the EDUs. The duplex decoder is trained in a multi-task form. And the activated entailment reasoning only activates in training stage. Activated Entailment Reasoning Each EDU will be classified into one of three entailment states, including ENTAILMENT, CONTRADICTION and NEUTRAL. To get the noisy supervision signals of entailment states, we adopt a heuristic approach2. This is proposed to simulate fulfillment prediction of conditions in multi-sentence entailment reasoning, which can explicitly supervise the implicit reasoning chains of the answer generation. Previous studies typically introduce entailment reasoning in all rule text segmented EDUs. This will greatly increase the proportion of NEUTRAL labels and affect the model effect, because nearly all of the entailment states of EDUs in retrieved irrelevant rules are NEUTRAL, and introducing more noise in the training stage. In our method, entailment reasoning will only activate for the golden rule text. Utilizing this setting, one benefit is to balance the categories of entailment reasoning, and the other is to supervise the implicit reasoning of the fused decoder, which can help the fused decoder infer correct rule text from multiple retrieved rule texts. Given the sentence-level representation Hs, we utilize inter-attention reasoning to fully interact with various information in r, including EDUs, user question, user scenario and dialogue history. We utilize an inter-sentence Transformer (Devlin et al., 2019; Vaswani et al., 2017) to get the interacted sentence-level representation Gs. Then, we use a linear transformation to track the entailment reasoning states of each EDU in activated rule text. $$e_{i}=W_{c}\tilde{h}_{e_{i}}+b_{c}\in\mathcal{R}^{3},$$ ei = Wch˜ei + bc ∈ R3, (1) where the Wc is trainable parameters, eiis the predicted score for the three labels of the i-th states. Fused Answer Generation Given the wordlevel representation Hw = {hw1, hw2*, ..., h*wk} of R, we concatenate the Hw as the fused representation fw. In this manner, our answer generation decoder can reason through all the k items through an implicit ranking mechanism. It is worth mentioning that each item of Hw is first fully interacted among rule text, user question, user scenarios and dialogue history without other multi-rule noise through our encoder. To improve the information implicit reasoning abilities of our model, we further investigate 2The noisy supervision signal is a heuristic label obtained by the minimum edit distance. Algorithm 1 Training procedure of EFT Input: Contextualized context C, learning rate τ , Output: Final answer A, activated entailment reasoning state E, EFT encoder parameters θe, EFT fused answer generation decoder parameters θa, EFT activated entailment reasoning decoder parameters θd 1: Initialize θe,θa,θd 2: **while** not converged do e not converged do or $i=1,2,\ldots,N$do $h_{si},h_{wi}=f(c_{i},\theta_{e})$ s.t. $\forall c\in\mathcal{C}$ $e_{i}=f(h_{si},\theta_{d})$ $a_{i}=f(h_{wi},\theta_{a})$ and for $\leftarrow\nabla\theta\mathcal{L}$ $\leftarrow\theta$ -- $\pi\alpha$ 5: ei = f(hsi, θd) 6: ai = f(hwi, θa) 7: **end for** 8: g ← ∇θL 9: θe ← θe − τg 10: θd ← θd − τg 11: θa ← θa − τg 12: **end while** the relevance-diversity fusion strategy (RD fusion strategy), which consists of relevance-diversity candidate rule texts, order information protection and fused answer generation. The rule text candidates are consists of top k relevant rule texts and randomly sampled rule texts, which are called RD candidate rule texts. Thus, the candidates are full-filled with relevant and diverse rule texts. On the premise of ensuring relevance among the rule texts, the diversity of learnable information sampling combinations is further improved. Moreover, the order of items fused in fw may lead to information leakage and affect the reasoning ability of the decoder in the training stage, so as we mentioned in the last section, we will randomly shuffle the order of items when inputting to the encoder to protect the order information. In the evaluation stage, only the top 5 unshuffled relevant rule texts will be utilized for answer generation. The fused answer generation is utilized to generate either the decision or the follow-up question. We employ T5 decoder as our answer generation decoder. Given encoder fused representation fw, and the final answer a, including decision or follow-up question, the answer is composed of the variable-length tokens xi, the probabilities over the tokens are shown in the blow: $$p(a)=\prod_{1}^{m}p(x_{i}|x_{<i},f_{e};\theta),\qquad\qquad(2)$$ where θ donates the trainable parameters of our decoder. ## 3.5 Training Objective Activated Entailment Reasoning The activated entailment reasoning is supervised by crossentropy loss, by given the entailment stages ci: $${\mathcal{L}}_{e n a t i l}=-\frac{1}{N}\sum_{i=1}^{N}l o g\,\mathrm{softmax}(c_{i})_{r},$$ where r is the ground truth of entailment state. Fused Answer Generation The fused answer generation training objective is computed as illustrated in below: $${\mathcal{L}}_{a n s w e r}=-\sum_{i=1}^{M}l o g\,p(x_{i}|x_{<i},f_{w};\theta),$$ The overall loss function is: $\frac{1}{2}$ $${\mathcal{L}}={\mathcal{L}}_{a n s u e r}+\lambda{\mathcal{L}}_{e n t a i l}.$$ $\mathbf{A}$ ## 4 Experiment And Analysis 4.1 Data Dataset The experiment dataset is OR-ShARC (Gao et al., 2021), the current OCMRC benchmark. The corpus is crawled from the government website. There is a total of 651 rule texts collected in the knowledge base. For the validation and test set, the golden rule texts are split into unseen or seen. This is to simulate the real usage scenario: users will ask questions about rules they have already seen, or rules that are completely new. The train, dev and test size is 17,936, 1,105 and 2,373, respectively. Each item consists of utterance id, tree id, golden rule document id, user question, user scenario, dialog history, evidence and the decision. ## 4.2 Setup Evaluation The evaluation consists of two parts: decision-making and question generation. We utilize Micro-Acc and Macro-Acc for the results of decision-making, and use F1BLEU (Gao et al., 2021) for question generation. The F1BLEU is conducted to evaluate the question generation performance when the predicted decision is Inquire. $precision_{\text{BLEU}}=\frac{\sum_{i=0}^{M}\text{BLEU}(y_{i},\hat{y}_{i})}{M},$ (6) Where M is the total number of Inquire decisions made by our model. yiis the predicted question, yˆiis the corresponding ground truth prediction. The recall of BLEU is computed in a similar way. $$r e c a l_{\mathrm{BLEU}}={\frac{\sum_{i=0}^{N}\mathrm{BLEU}(y_{i},{\hat{y}}_{i})}{N}},\quad\quad(7)$$ $$(3)$$ where N is the total number of Inquire decision from the ground truth annotation, The calculation of F1BLEU is shown below: $$\mathrm{F1}_{\mathrm{BLEU}}={\frac{2\times{p r e c i s i o n_{\mathrm{BLEU}}}\times{r e c a l l_{\mathrm{BLEU}}}}{p r e c i s i o n_{\mathrm{BLEU}}+r e c a l l_{\mathrm{BLEU}}}}.\tag{8}$$ $${\mathrm{(4)}}$$ $$(5)$$ Implementation Details We utilize the T5-base (Raffel et al., 2020) as our reader backbone, and additionally add an activated entailment reasoning decoder, whose parameters are randomly initialized. We utilize BERT (Devlin et al., 2019) as our retriever backbone , whose parameters are initialized from DPR (Karpukhin et al., 2020). For the RD strategy, we use the top-20 retrieved rule texts and 30 randomly sampled rule texts as our fused candidates in the training stage, every step we will randomly select 5 samples from the candidates. We only use seen rule texts in the knowledge base for the training stage. And we only use top 5 retrieved rule text for the inference stage. The fused number k is set as 5 for fused answer generation for both training and inference. We use AdamW (Loshchilov and Hutter, 2018) to finetune our model. The learning rate is hierarchically designed, the learning rate of T5 is 2e-4, and the learning rate of activated entailment decoder is 2e5. We tried from 0.1 to 1.0 for λ, and find 0.9 is the best hyper-parameter. The beam-size is set as 5 for the answer generation. ## 4.3 Results All results on the OR-ShARC benchmark are illustrated in Table 1, including dev and test set with metrics for both decision-making and question generation. Experimental results demonstrate that our proposed methods achieve new SOTA on the ORShARC benchmark. EFT outperforms OSCAR by 3.6% in Micro-Acc, 3.6% in Macro-Acc for decision-making on the dev set, and outperforms OSCAR by 2.6% in Micro-Acc, 2.7% in MacroAcc for decision-making on the test set. In particular, our proposed EFT achieves considerable | Dev Set | Test Set | | | | | | | | |------------|-----------------|---------------|-----------------|---------------|----------|----------|----------|----------| | Model | Decision Making | Question Gen. | Decision Making | Question Gen. | | | | | | Micro | Macro | F1BLEU1 | F1BLEU4 | Micro | Macro | F1BLEU1 | F1BLEU4 | | | E 3 | 61.8±0.9 | 62.3±1.0 | 29.0±1.2 | 18.1±1.0 | 61.4±2.2 | 61.7±1.9 | 31.7±0.8 | 22.2±1.1 | | EMT | 65.6±1.6 | 66.5±1.5 | 36.8±1.1 | 32.9±1.1 | 64.3±0.5 | 64.8±0.4 | 38.5±0.5 | 30.6±0.4 | | DISCERN | 66.0±1.6 | 66.7±1.8 | 36.3±1.9 | 28.4±2.1 | 66.7±1.1 | 67.1±1.2 | 36.7±1.4 | 28.6±1.2 | | MP-RoBERTa | 73.0±1.7 | 73.1±1.6 | 45.9±1.1 | 40.0±0.9 | 70.4±1.5 | 70.1±1.4 | 40.1±1.6 | 34.3±1.5 | | MUDERN | 78.4±0.5 | 78.8±0.6 | 49.9±0.8 | 42.7±0.8 | 75.2±1.0 | 75.3±0.9 | 47.1±1.7 | 40.4±1.8 | | OSCAR | 80.5±0.5 | 80.9±0.6 | 51.3±0.8 | 43.1±0.8 | 76.5±0.5 | 76.4±0.4 | 49.1±1.1 | 41.9±1.8 | | EFT | 83.4±0.5 | 83.8±0.5 | 65.5±1.9 | 59.0±2.0 | 78.5±0.7 | 78.5±0.7 | 59.3±0.8 | 53.0±0.8 | Table 1: Results on the dev and test set of OR-ShARC. The average results with standard deviation on 5 random seeds are reported. | Model | Micro | Macro | F1BLEU1 | F1BLEU4 | |--------------|----------|----------|-----------|-----------| | EFT | 83.4±0.5 | 83.8±0.5 | 65.5±1.9 | 59.0±2.0 | | -w/o s | 82.9±0.6 | 83.4±0.5 | 63.8±1.6 | 57.0±1.8 | | -w/o s+a | 80.7±0.8 | 81.1±0.9 | 62.4±2.3 | 56.3±2.3 | | -w/o s+a+i | 80.2±0.5 | 80.5±0.6 | 61.2±1.4 | 55.0±1.6 | | -w/o s+a+i+f | 71.0±1.2 | 71.6±0.9 | 49.2±0.8 | 43.8±0.8 | Table 2: Ablation study of EFT on the dev set of ORShARC. The average results with standard deviation on 5 random seeds are reported. improvement in BLEU scores. EFT outperforms OSCAR by 27.7% in F1BLEU1, 36.9% in F1BLEU4 for the question generation on the dev set, and outperforms OSCAR by 20.8% in F1BLEU1, 26.5% in F1BLEU4 for the question generation on the test set. We further to investigate the classwise accuracy performance of EFT, as shown in Table 4. Experiments show that the accuracy of each category in OR-ShARC is improved by conducting EFT framework, compared with reported baselines. To further investigate the performance for our proposed EFT in seen and unseen settings, the performance of the split subset 3is illustrated in Table 3. Compared with OSCAR, the seen subset performance are greatly improved through our framework EFT. EFT greatly outperforms OSCAR by 29.1% in F1BLEU1, 36.4% in F1BLEU4 for the question generation on the seen test set. In addition, compared with the previous pipeline architectures utilized in MURDEN and OSCAR, our model not only improves the performance, but also makes the framework of OCMRC more lightweight. We reduce the number of model parameters from 330M to 220M, which is decreased by 33.3%. The performance on the seen subset of 3Only BLEU scores are reported in OSCAR. EFT is 35.0% higher in micro-acc than seen subset, 35.3% higher in macro-acc than unseen subset. Our retrieval results are illustrated in Table 6 and Table 5. The details are illustrated in Appendix A and Appendix B, respectively. ## 4.4 Ablation Studies The ablation studies of EFT on the dev set of ORShARC benchmark are shown in Table 2. There are four settings of our EFT is considered: - **EFT-wo/s** trains the model without relevance-diversity (RD) candidate rule texts. Only top-5 randomly shuffled relevant rule texts are considered in the training stage. - **EFT-wo/s+a** trains this model additionally remove activated entailment reasoning. - **EFT-wo/s+a+i** trains this model further cancels random shuffle in the training stage. - **EFT-wo/s+a+i+f** trains this model without multi rule fused answer generation, only top1 retrieved rule text is considered. Analysis of RD Candidate We investigate the necessity of the RD candidate rule texts. This strategy is utilized to improve the implicit reasoning abilities of our decoder by improving the learning space of fused candidates. On the premise of ensuring the relevance among the rule texts, the diversity of learnable information sampling combinations is further improved. As shown in Table 2, compared with EFT, the performance of both decision-making and question generation decline when RD candidate rule texts are removed, highlighting the effectiveness of RD candidate rule | Model | Seen | Unseen | Parameters | | | | | | | |---------|--------|----------|--------------|-------|-------|---------|---------|------|------| | Micro | Macro | F1BLEU1 | F1BLEU4 | Micro | Macro | F1BLEU1 | F1BLEU4 | | | | MUDERN | 88.1 | 88.1 | 62.6 | 57.8 | 66.4 | 66.0 | 33.1 | 24.3 | 330M | | OSCAR | - | - | 64.6 | 59.6 | - | - | 34.9 | 25.1 | 330M | | EFT | 92.4 | 92.3 | 83.4 | 81.3 | 68.4 | 68.2 | 34.9 | 24.0 | 220M | texts in enhancing the information-seeking abilities of our fused answer generation decoder. By removing RD candidate rule texts, the micro-acc is decreased by 0.5, the macro-acc is decrease by 0.4, the F1BLEU1 is decreased by 1.7, and the F1BLEU4 is decreased by 2.0. The above results emphasize the indispensability of RD candidate rule texts. Analysis of Activated Entailment Reasoning EFT-wo/s+a trains this model additionally remove activated entailment reasoning. As illustrated in Table 2, compared with EFT-wo/s+a, the performance of both decision-making and question generation are dropped without activated entailment reasoning, the micro-acc is decreased by 2.2, the macro-acc is decrease by 2.3, the F1BLEU1 is decreased by 1.4, and the F1BLEU4 is decreased by 0.7. The above results suggest that the implicit reasoning of conversational machine reading comprehension could be enhanced by introducing explicit fine-grained supervised signal in a global understanding manner. Analysis of Order Information Protection The order of fused representation used in fused answer generation decoder may lead to information leakage and affect the reasoning ability of the decoder. In order to avoid the problem of poor information seeking ability caused by excessive learning of position information of the model, we randomly shuffle the order of fused representation to protect the order information in the training stage. As illustrated in Table 2, compared with EFT-wo/s+a, EFT-wo/s+a+i decrease the performance of both decision-making and question generation without order information protection, the micro-acc is decreased by 0.5 , the macro-acc is decrease by 0.6, the F1BLEU1 is decreased by 1.2, and the F1BLEU4 is decreased by 1.3. The above results indicates the importance of order information protection. Analysis of Fused Generation Fused Generation is utilized to introduce the ability to pro- Model Yes No Inquire dev test dev test dev test E 3 58.5 58.5 61.8 60.1 66.5 66.4 EMT 56.9 55.4 68.6 65.6 74.0 73.6 DISCERN 61.7 65.8 61.1 61.8 77.3 73.6 MP-RoBERTa 68.9 72.6 80.8 74.2 69.5 63.4 MUDERN 73.9 76.4 80.8 72.2 81.7 77.4 EFT **80.1 81.2 83.2 75.6 88.2 78.7** cess multiple rule contextualized information. The multiple rule contextualized information are fused as a single fused information. EFT-wo/s+a+i+f trains this model without multi rule fused answer generation, only top1 retrieved rule text is considered. In this manner, the performance is limited with the retrieval performance. Compared with EFT-wo/s+a+i+f, the performance of both decision-making and question generation of EFTwo/s+a+i are significantly improved by introducing fused answer generation strategy, the microacc is increased by 9.2 , the macro-acc is increased by 8.9, the F1BLEU1 is increased by 12.0, and the F1BLEU4 is increased by 11.2. The above results suggests the necessity of fused answer generation strategy. ## 5 Conclusion In this paper, we propose a novel end-to-end framework, called EFT, to bridge the information gap between decision-making and question generation through the shared entailment representation in a global understanding manner. Extensive experimental results on the OR-ShARC benchmark demonstrate the effectiveness of our proposed framework EFT. In our analysis, the implicit reasoning ability of both decision-making and question generation is significantly improved by sharing external explicit entailment knowledge through our novel framework EFT. ## Limitations As shown in Table 3, the results demonstrate the effectiveness of our proposed EFT, but the performance of the unseen subset is still limited by comparing it with the performance of seen subset, which suggests plenty of room for improvement. Data augmentation or generalization methods based on semi-supervised methods could be effective to solve the problem in the future. ## Acknowledgements The work is supported by National Key R&D Plan (No. 2020AAA0106600), National Natural Science Foundation of China (No.62172039, U21B2009 and 62276110). ## References Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, Brussels, Belgium. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pretraining text encoders as discriminators rather than generators. In *International Conference on Learning Representations*. Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang, and Ming Zhou. 2020. MuTual: A dataset for multi-turn dialogue reasoning. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota. Association for Computational Linguistics. Yifan Gao, Jingjing Li, Michael R Lyu, and Irwin King. 2021. Open-retrieval conversational machine reading. *arXiv preprint arXiv:2102.08633*. Yifan Gao, Chien-Sheng Wu, Shafiq Joty, Caiming Xiong, Richard Socher, Irwin King, Michael Lyu, and Steven C.H. Hoi. 2020a. Explicit memory tracker with coarse-to-fine reasoning for conversational machine reading. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics. Yifan Gao, Chien-Sheng Wu, Jingjing Li, Shafiq Joty, Steven C.H. Hoi, Caiming Xiong, Irwin King, and Michael Lyu. 2020b. Discern: Discourse-aware entailment reasoning network for conversational machine reading. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2439–2449. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online. Association for Computational Linguistics. Carolin Lawrence, Bhushan Kotnis, and Mathias Niepert. 2019. Attending to future tokens for bidirectional sequence generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics. Jing Li, Aixin Sun, and Shafiq Joty. 2018. Segbot: a generic neural text segmentation model with pointer network. In *Proceedings of the 27th International* Joint Conference on Artificial Intelligence, pages 4166–4172. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Ro{bert}a: A robustly optimized {bert} pretraining approach. Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam. In *International Conference on Learning Representations*. Siru Ouyang, Zhuosheng Zhang, and Hai Zhao. 2021. Dialogue graph modeling for conversational machine reading. In *Findings of the Association for* Computational Linguistics: ACL-IJCNLP 2021, Online. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140). Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. *Transactions of the Association for Computational Linguistics*, 7:249–266. Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge data set and models for dialogue-based reading comprehension. *Transactions of the Association for Computational Linguistics*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*. Nikhil Verma, Abhishek Sharma, Dhiraj Madan, Danish Contractor, Harshit Kumar, and Sachindra Joshi. 2020. Neural conversational QA: Learning to reason vs exploiting patterns. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online. Association for Computational Linguistics. Xiao Zhang, Heyan Huang, Zewen Chi, and Xian-Ling Mao. 2022. ET5: A novel end-to-end framework for conversational machine reading comprehension. In Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Zhuosheng Zhang, Siru Ouyang, Hai Zhao, Masao Utiyama, and Eiichiro Sumita. 2021. Smoothing dialogue states for open conversational machine reading. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*. Victor Zhong and Luke Zettlemoyer. 2019. E3: Entailment-driven extracting and editing for conversational machine reading. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics. ## Appendix DPR-R Top1 Top5 Top10 Top20 Dev 54.5 93.4 99.2 99.5 Seen Only 96.8 100.0 100.0 100.0 Unseen Only 19.5 87.9 98.5 99.0 Test 77.5 93.5 96.3 98,8 Seen Only 97.6 100.0 100.0 100.0 Unseen Only 62.8 88.8 93.7 97.9 Table 5: Retrieval Results of DPR-R. ## A The Performance Of Retriever (Dpr-R) Table 6 presents the detailed performance of DPR-R, including the performance of Top-k on dev set and test set. Different from previous methods (Zhang et al., 2021) that utilize DRP but only use TF-IDF retrieved negatives, we use random negatives sampled from seen rule texts in knowledge base. Experimental results illustrate that DPR-R outperforms DPR by 16.5% in top5 accuracy on test set, and reaches competitive results with TF-IDF+DPR. ## B The Performance Of Retriever (Dpr-R**) On Subset** We further analyzed the performance of DPR-R on the seen and unseen subset. As shown in Table 5, experimental results demonstrate the effectiveness of DPR-R on seen sets, the top1 accuracy reached 97.6 on the seen subset of test set. But the performance still have a large latent space of improvement on unseen sets. ## C Additional Experiment Details We implement EFT with the PyTorch4library and using pre-trained Transformers from the Hugging Face5repositories. The retriever DPR-R is based on the DPR6repositories. The data of OR-ShARC are from the OR-ShARC7repository. The above repositories provide the data, models and licenses. The whole training process takes about several hours on eight Nvidia A100 GPUs. | Model | Dev Set | Test Set | | | | | | | |--------------|-----------|------------|-------|------|------|-------|-------|------| | Top1 | Top5 | Top10 | Top20 | Top1 | Top5 | Top10 | Top20 | | | TF-IDF | 53.8 | 83.4 | 94.0 | 96.6 | 66.9 | 90.3 | 94.0 | 96.6 | | DPR | 48.1 | 74.6 | 84.9 | 90.5 | 52.4 | 80.3 | 88.9 | 92.6 | | TF-IDF + DPR | 66.3 | 90.0 | 92.4 | 94.5 | 79.8 | 95.4 | 97.1 | 97.5 | | DPR-R(ours) | 54.5 | 93.4 | 99.2 | 99.5 | 77.5 | 93.5 | 96.3 | 98.8 | Table 6: Comparison of the open-retrieval methods. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations section after section 5 ✓ A2. Did you discuss any potential risks of your work? Limitations section after section 5 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and section1 ✓ A4. Have you used AI writing assistants when working on this paper? We use chatGPT to verify the grammar of our abstract. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix.C ✓ B1. Did you cite the creators of artifacts you used? Appendix.C ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix.C B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.1 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix.C and Section 4.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Table 1, Table2 in Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix.C and Section 4.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
gao-etal-2023-livechat
{L}ive{C}hat: A Large-Scale Personalized Dialogue Dataset Automatically Constructed from Live Streaming
https://aclanthology.org/2023.acl-long.858
Open-domain dialogue systems have made promising progress in recent years. While the state-of-the-art dialogue agents are built upon large-scale social media data and large pre-trained models, there is no guarantee these agents could also perform well in fast-growing scenarios, such as live streaming, due to the bounded transferability of pre-trained models and biased distributions of public datasets from Reddit and Weibo, etc. To improve the essential capability of responding and establish a benchmark in the live open-domain scenario, we introduce the LiveChat dataset, composed of 1.33 million real-life Chinese dialogues with almost 3800 average sessions across 351 personas and fine-grained profiles for each persona. LiveChat is automatically constructed by processing numerous live videos on the Internet and naturally falls within the scope of multi-party conversations, where the issues of Who says What to Whom should be considered. Therefore, we target two critical tasks of response modeling and addressee recognition and propose retrieval-based baselines grounded on advanced techniques. Experimental results have validated the positive effects of leveraging persona profiles and larger average sessions per persona. In addition, we also benchmark the transferability of advanced generation-based models on LiveChat and pose some future directions for current challenges.
# Livechat: A Large-Scale Personalized Dialogue Dataset Automatically Constructed From Live Streaming Jingsheng Gao1,2∗ , Yixin Lian2, Ziyi Zhou2**, Yuzhuo Fu**1† , Baoyuan Wang2† 1 School of SEIEE, Shanghai Jiao Tong University, China 2 Xiaobing.AI {gaojingsheng, yzfu}@sjtu.edu.cn {lianyixin, zhouziyi, wangbaoyuan}@xiaobing.ai ## Abstract Open-domain dialogue systems have made promising progress in recent years. While the state-of-the-art dialogue agents are built upon large-scale text-based social media data and large pre-trained models, there is no guarantee these agents could also perform well in fast-growing scenarios, such as live streaming, due to the bounded transferability of pretrained models and biased distributions of public datasets from Reddit and Weibo, etc. To improve the essential capability of responding and establish a benchmark in the live opendomain scenario, we introduce the LiveChat dataset, composed of 1.33 million real-life Chinese dialogues with almost 3800 average sessions across 351 personas and fine-grained profiles for each persona. LiveChat is automatically constructed by processing numerous live videos on the Internet and naturally falls within the scope of multi-party conversations, where the issues of Who says What to Whom should be considered. Therefore, we target two critical tasks of response modeling and addressee recognition and propose retrieval-based baselines grounded on advanced techniques. Experimental results have validated the positive effects of leveraging persona profiles and larger average sessions per persona. In addition, we also benchmark the transferability of advanced generation-based models on LiveChat and pose some future directions for current challenges. 1 ## 1 Introduction Building dialogue systems to converse naturally with humans has been one of the longest-running goals in artificial intelligence (Zhou et al.; Roller et al., 2021). To usher that chatbot response properly in diverse scenarios, it is desirable to train a conversational agent based on massive large-scale ∗Work done during an internship at Xiaobing.AI †Corresponding Author 1The code and dataset will be publicly available at https://github.com/gaojingsheng/LiveChat. ![0_image_0.png](0_image_0.png) Figure 1: A session example of LiveChat. A streamer will respond to one audience's comment from the comments area. datasets with multiple domains. Current dialogue datasets mainly leverage online forum posts to build reply-to relationships between users, such as Reddit (Mazaré et al., 2018; Zhong et al., 2020) and Weibo (Zheng et al., 2019; Qian et al., 2021). Despite the scalability and diversity of current dialogue corpora, dialogue models pre-trained on these conversation datasets can not perform effectively when applied to a completely new domain, such as live streaming. The reason lies in the intrinsic domain gap between online-post constructed data and those required in downstream conversational tasks. Even recent state-of-the-art (SOTA) dialogue models built upon large pre-trained language models (PLMs) like LaMDA (Thoppilan et al., 2022) and ChatGPT2 heavily rely on publicly available text-only data. These large pre-trained models' distributions remain different across domains (Zeng et al., 2022) and are distinct from those of models learning the information contained in other modalities, video as an example. Video is also an important dialogue data source in the wild with great diversity. As a form of popular video-based conversations, streaming is a broadcasting scenario that transcribes and broadcasts at | Dataset | Data Source | Dialogues | Persona | Addressee | Avg. Sessions | Language | |---------------------------------------|-----------------------|-------------|-----------|-------------|-----------------|------------| | PersonaChat (Zhang et al., 2018b) | Crowdsourced | 10,907 | " | % | 8.69 | English | | PCR (Mazaré et al., 2018) | Online Posts | 700,000,000 | " | % | 53.0 | English | | PersonalDialog (Zheng et al., 2019) | Online Posts | 20,830,000 | " | % | 6.64 | Chinese | | PEC (Zhong et al., 2020) | Online Posts | 355,000 | " | % | 26.0 | English | | PchatBot (Qian et al., 2021) | Online Posts | 198,875,796 | " | % | 7.58 | Chinese | | MSC (Xu et al., 2022b) | Crowdsourced | 5,001 | " | % | 42.9 | English | | DuLemon (Xu et al., 2022c) | Crowdsourced | 27,501 | " | % | 16.3 | Chinese | | Linux-IRC (Elsner and Charniak, 2008) | Online Chatroom | 2,500 | % | " | - | English | | Ubuntu-IRC (Kummerfeld et al., 2019) | Online Chatroom | 77,563 | % | " | - | English | | INTERVIEW (Majumder et al., 2020) | Interview Transcripts | 105,000 | % | % | - | English | | RealMedDial∗ (Xu et al., 2022a) | Short Videos | 2,637 | " | % | 44.7 | Chinese | | LiveChat (ours) | Live Videos | 1,332,073 | " | " | 3795 | Chinese | the same time, which involves entertainment, lifesharing, education and so on (Wongkitrungrueng and Assarut, 2020). Such video-based conversations are one of the main ways human beings spread and exchange information efficiently in their daily lives and are naturally in line with the way people communicate. They are also the desired sources of dialogue datasets that are vitally significant in training large-scale dialogue models for homologous downstream virtual human scenarios, such as Virtual YouTubers, Virtual Employees, and Virtual Celebrities. Nevertheless, works that extract data from online videos do not receive enough attention although video-sourced dialogues are more life-oriented and naturally abundant. Current video-sourced spoken corpora can be separated into two main categories (Mahajan and Shaikh, 2021): scripted and unscripted. The former refers to planned dialogues such as movie and TV scripts (Danescu and Lee, 2011; Li et al., 2016). The latter means spontaneous conversations in real situations, for instance, the interview dataset of Majumder et al. (2020). However, these previous video-sourced dialogues can not meet the scale of training a satisfied chatbot, owing to the trouble of continuously obtaining and processing various kinds of videos, and troubles of extracting valid dialogue sessions from them. For example, it is challenging to build valid dialogue sessions automatically from movies without human annotators. Thus, a large-scale video-sourced dialogue dataset in live streaming is essential for facilitating research in this area. The live broadcast is a typical one-tomany chat scene, which generally involves one streamer and multiple audiences. The challenge of building such a dataset lies in retrieving the reply-to relationships between the streamers and audiences. Unlike post-based social media with clear links between posts and replies, the streamer's responses in the live scene have no explicit reply-to relationships with audiences' comments. To tackle the aforementioned problems, in this paper, we propose a novel and automatic videosourced dialogue-constructing method and build a large-scale personalized dialogue dataset from the live streaming domain, named **LiveChat**. It is a non-trivial work since this dataset originates from a video-based source, distinct from most previous text-sourced data. Meanwhile, as far as we know, this is almost the only work that can effectively and endlessly extract dialogue sessions from videos. As illustrated in Huang et al. (2020), one of the main challenges of existing open-domain chatbots is lacking a consistent personality as these agents are trained over different dialogues each with no or limited speaker information, while LiveChat naturally contains distinctive persona features (especially for streamers). To promote research in this field, we collect publicly available information for each streamer and add manual annotations to create the persona profiles, with individual information anonymized for privacy concerns. Compared to the previous personalized dialogue datasets (Zhang et al., 2018b; Mazaré et al., 2018; Zheng et al., 2019; Zhong et al., 2020; Qian et al., 2021; Xu et al., 2022c), our dataset provides more fine-grained persona profiles, and more importantly, the average session number of each speaker exceeds previous ones extraordinarily, as shown in Table 1. This proves to be beneficial for personalized dialogue modeling. Moreover, live streaming is also a multi-party conversation (MPC) scene involving more than two interlocutors. An example of LiveChat is illustrated in Figure 1. During the streaming process, a streamer naturally has to recognize which audience to reply to. We collect public live videos and process the streamer's responses and all audiences' comments to form multiple sessions of dialogues where each session contains a streamer's response and multiple candidates of addressee comments. A reply-to-whom matching method is brought forward to accurately find the correct candidate for a streamer's response. In this way, we can leverage the reply-to-whom relationship to build datasets for two classical tasks: response modeling and addressee recognition. Our proposed two classical dialogue tasks in LiveChat can help solve the MPC problem in a unified dataset, essential for building a practical dialogue agent in live streaming. To sum up, our main contributions are as follows: - We propose a large-scale personalized dialogue dataset LiveChat with a unique automatic dialogue-constructing method for countless live streams in the wild. To the best of our knowledge, our LiveChat is not only the largest video-sourced dialogue dataset, which contains detailed persona profiles and the largest average sessions per persona, but also the largest MPC dataset for addressee recognition released to the community. - Sufficient experiments on two benchmark tasks: Response Modeling and Addressee Recognition, prove that our persona selection method is beneficial and larger average sessions per persona do help the modeling of the dialogue. We design retrieval baselines with considerable performance on both tasks to facilitate further research and build more genuine live-domain dialogue systems. - We further investigate transfer learning of generation models and illustrate that pre-trained dialogue models perform poorly under the video-sourced data after fine-tuning, while large PLMs exhibit richer informativeness but worse relevance under few-shot settings. This arouses the interest in exploring domain adaptation with large PLMs in such video-sourced datasets. ## 2 Related Work Dialogue Datasets A qualified open-domain dialogue model is usually trained on sufficient supervised datasets. Due to the accessibility and characteristics of social media, the current largescale open-domain dialogue datasets are mainly constructed from text-based social media, such as Reddit (Mazaré et al., 2018; Zhong et al., 2020), Douban (Wu et al., 2017), and Weibo (Qian et al., 2021). Besides, a large-scale dataset with persona annotations is essential in building a personalized dialogue system. The persona profiles utilized in current persona datasets can be generally classified into two categories: basic profiles and text profiles. The basic profiles in Zheng et al. (2019) and Qian et al. (2021) are composed of personality traits like age, gender, and location. The text profiles are mainly composed of crowdsourced (Zhang et al., 2018b; Xu et al., 2022c) or automatically collected (Mazaré et al., 2018; Zhong et al., 2020) descriptive persona sentences. In LiveChat, we collect more fine-grained basic profiles and text profiles, with extraordinarily larger average sessions per persona than in previous works. Furthermore, multi-party dialogue datasets are crucial when occurring conversations consisting of more than two speakers. However, most existing MPC datasets (Danescu and Lee, 2011; Lowe et al., 2015; Firdaus et al., 2020) have no explicit reply-to-whom annotations, and thus can not be leveraged in addressee recognition. Elsner and Charniak (2008) manually group sentences of disentangled conversations into separated sessions in Linux IRC. Kummerfeld et al. (2019) propose a larger MPC dataset manually annotated with replyto structure from the Ubuntu IRC channel, which extremely prompts the research in MPC problems. Our LiveChat naturally originates from a multiparty scenario, whose size also remarkably exceeds previous ones, credit to the automatically reply-towhom matching method. As for those spoken dialogue corpora (Xu et al., 2022a; Majumder et al., 2020; Li et al., 2016; Danescu and Lee, 2011), most are pre-scripted or manually transcribed, intrinsically difficult to scale up because of the restricted video- or audio-based sources where people can effortlessly extract valid dialogue sessions. Personalized Response Modeling Early works use explicit persona profiles from predefined information or implicit persona vectors from dialogue history to generate personality-coherent responses. Explicit models use persona descriptions, attributes, or extracted profiles to learn personalized response modeling. Kim et al. (2014) leverages a persona knowledge base to extract predefined triples and entities in a retrieval-based dialogue system. Qian et al. (2018) propose an explicit persona model to generate personalized responses based on a prespecified user profile. Song et al. (2019) propose a memory-augmented architecture to exploit persona information from context to generate diverse and sustainable conversations. On the other hand, implicit methods like Zhang et al. (2019) generate consistent responses by maintaining certain features related to topics and personas, while Li et al. (2021) encodes all the dialogue history of a speaker into the implicit persona. Zhong et al. (2022) design a personality selecting module to obtain abundant and accurate persona information from the user dialogue history. In LiveChat, we leverage explicit persona information to maintain persona consistency. Addressee Recognition Addressee recognition which is also named explicit addressee modeling aims at understanding who speaks to whom in a multi-party conversation. Previous works mainly focus on predicting the targeting addressee of the last utterance in one conversation (Ouchi and Tsuboi, 2016; Zhang et al., 2018a). Later on, a who-to-whom model for predicting all the missing addressees to understand the whole conversation was introduced by Le et al. (2019a). Gu et al. (2021) further leverages a pre-trained language model for learning this problem in a unified manner. We follow this learning paradigm, and furthermore, are able to investigate personalized addressee recognition in LiveChat attributed to the available persona profiles. ## 3 Dataset Construction 3.1 Dataset Overview The raw data constructed in LiveChat are collected from Douyin3(Chinese Tiktok), one of the largest Chinese live streaming and short video platform 3https://www.douyin.com Algorithm 1 Dialogue construction through replyto-whom matching method. Input: The streamer responses R and audience comments C; each sentence is accompanied with timestamp T; max response time interval ∆t; length ratio threshold τ ; matching function F. Output: Matched dialogues D. 1: **Step 1:** ci ← C ✄ *Traverse all comments* 2: rj ← R where 0 ≤ Trj − Tci ≤ ∆t ✄ Traverse the responses during time interval 3: ci → Mj if F(ci, rj ) = 1 ✄ Record all matched comments of response j in a set Mj 4: **Step 2:** rm ← R ✄ *Traverse all responses* 5: if Mm ̸= ⊘, cn ← Mm **then** ✄ Traverse matched comments in reverse order. 6: if rm[−1] = . or ? **then** ✄ Detect if the response with an ending punctuation 7: if len(rm) len(cn) > τ **then** 8: (cn, rm) → D, break ✄ Add matched dialogue pairs. 9: **else** rm → rm+1 ✄ *Merge current response sentence into next one* 10: **else** rm → rm+1 ✄ *Merge current response sentence into next one* 11: **return** D with over 10 million streamers and around 800 million users. We selected 351 representative streamers that interact and chat with the audiences frequently. By capturing the publicly available streamers' live videos and the audiences' comments in the broadcast room for a long time, we retrieved massive video clips with a huge amount of comments. The whole dialogue construction process is shown in Figure 2, consisting of three steps. The first two steps are to construct dialogue sessions by processing videos and matching audience comments with streamer responses, and the last step is to enrich the dataset with fine-grained persona profiles, including basic profiles and text profiles. ## 3.2 Dialogue Construction Firstly we have to collect the raw spoken texts of the streamers. Since the original data are in the form of video clips, we need to transcribe them into text utterances. A video format converter is utilized to extract the voice content. Then we leverage an automatic speech recognition (ASR) model4 to transcribe these voice clips into texts with times- ![4_image_0.png](4_image_0.png) tamps, and this model is fine-tuned on a large-scale pan-entertainment dataset. Consequently, the raw data is transcribed into the streamer's spoken texts. Details of ASR are illustrated in Appendix A. Secondly, we collect the raw audience comments and propose a reply-to-whom matching method to retrieve the reply-to relationships between streamers and audiences. Our proposed matching method is mainly based on the observations particularly apt to the streaming scenario: the streamer will reply to one audience in the comments area after that audience sent the message for a while. And usually, the streamer will repeat or summarize the audience's comment before responding to it, which helps the rest of the audiences understand what the streamer is talking about. We simply focus on extracting valid dialogue sessions based on the above observations and filter out others that are not satisfied. On this basis, the pseudocode of the whole matching process is illustrated in Algorithm 1. For each audience comment, we go through all the transcribed spoken utterances by the streamer within one minute. If there exists a repetition or summarization of this comment in the transcribed streamer's utterance, they will be recorded as a matched pair. Note that we apply a combination of BOW (bag of words) and pre-trained Chinese BERT (Cui et al., 2021) as the matching function. After retrieving the matched pairs, we iteratively concatenate the transcribed streamer's utterances to meet the ending punctuation and satisfy the required threshold τ for sufficient length, because the transcribed response from the ASR tool can sometimes be a broken sentence from what the streamer originally expresses. In addition, if a response matches several comments, we choose the closest one in time. For each constructed dialogue pair, the response will repeat the comment. To prevent models from overfitting in this kind of manner, we remove the repetition prefix of each response. Besides, considering the specificity of this scenario, we filter out noisy pairs such as "谢谢**(Thanks to **)" or "欢迎**(Welcome **)" which miss valuable dialogue information. Finally, we can construct the dataset based on such matched pairs. ## 3.3 Persona Extraction The last step is to construct detailed persona profiles in LiveChat, which are composed of basic profiles and text profiles. Following the work of PersonalDialog (Zheng et al., 2019) and Pchatbot (Qian et al., 2021), the basic profiles contain age, gender, and location. Except these, the basic profile in LiveChat also includes streamer characters and live room information such as live time, fans number, live streaming style, and so on. Part of this information can be retrieved from the live room or the streamers' homepages, besides, we crowdsource a set of questions and each annotator is required to label those missing contents by watching these streamers' streaming videos. Details about data privacy and annotators are elaborated in Ethical Consideration and Appendix A. The text profile is composed of several sentences which describe the streamer's personal habits or characteristics. Sentences in the text profile are extracted in two ways: rules-based and classifierbased. Similar to Mazaré et al. (2018) and Zhong et al. (2020), we collect persona sentences from all history spoken utterances and posts the streamer spoke or wrote on Douyin by rules. The final selected sentences must satisfy the following requirements: 1) between 4 and 20 words; 2) the contents include "我(I)"; 3) at least one verb; 4) at least one noun or adjective. Besides this, we train an additional persona classifier to further refine the text profiles. In detail, the classifier-based method means to discriminate if a single sentence contains persona facts by a learned classifier, which in our case is trained from DuLemon (Xu et al., 2022c). ## 3.4 Livechat We combine each pair of audience comments and streamer responses along with each streamer's corresponding persona to create LiveChat, the first large-scale personalized dialogue dataset from the live streaming domain. It is worth noting that each session in LiveChat contains not only the pairs of comments and responses but also several comments candidates within the same period, details illustrated in the appendix A. Although the LiveChat we discussed in this paper consists of single-turnonly dialogues, the multi-turn dialogues can be easily built by continuously tracing the interaction between the streamer and the same audience in a range of time. Data privacy in LiveChat including persona profiles is assured by carrying out the transformation, deletion, and anonymization of personal information as illustrated in Ethical Consideration. With LiveChat, we propose that two benchmark tasks should be considered: (1) Response Modeling; (2) Addressee Recognition. The matched dialogue pairs can be directly leveraged in response modeling, while the other candidates of comments can be grouped together for training the addressee recognition task. ## 4 Models 4.1 Task Definition Response Modeling Suppose we have a dialogue dataset D = {(Ci, Ri, Pi)} n i=1, where ∀i ∈ 1*, ..., n*, Ciis the input dialogue context, Riis the response, and Piis the corresponding persona profile for the respondent of Ci. The goal is to learn a dialogue model g from D, where for any new input context Cj , g can generate a response Rj based on its given persona Pj . Previous works chiefly include retrieval-based and generation-based methods. To study the quantitative influence of our proposed persona profiles, we apply the retrieval-based architecture for the main experiments. As for the study of the transferable performance of advanced models in LiveChat, most generation-based ones are investigated. Addressee Recognition Given a streamer Si with persona profile Pi, a response Ri, and a set of comments Ci1, Ci2*, ..., C*im, where ∀j ∈ 1*, ..., m*, each comment Cij is associated with an audience Aj . The goal is to recognize which Cij (or Aj ) the Ri targets. Note that the purpose of this task is to identify the appropriate addressee comment instead of the appropriate streamer reply in response modeling. Dataset details about the settings of candidate comments can be seen in Appendix A. ## 4.2 Architecture To investigate how existing dialogue baseline models can be leveraged in LiveChat, we build three retrieval-based models for response modeling and addressee recognition. Besides, five generationbased pre-trained language models (PLMs) are taken into account to study transfer learning on LiveChat. Details of our utilized models in this paper are described below. ## 4.2.1 Retrieval-Based Models CoBERT The overall architecture of our retrievalbased persona model is depicted in Figure 3, which is inspired by Zhong et al. (2020). ![5_image_0.png](5_image_0.png) We encode context, response, and text profile by separated BERT (Devlin et al., 2019). Given an input user context, we leverage the basic profile as the streamer's initialized embedding, and a [SEP] token is added between the basic profile and context. During our experiments, we only use the streamer ID information instead of all annotations. As for the multiple text profile sentences, we concatenate them with [SEP] to meet the length of maximum input tokens. After retrieving three individual representations, two co-attention modules (Zhong et al., 2020) are implemented for better feature fusion. Finally, we obtain context embedding and candidate response embedding, then apply dot product to compute the matching score and calculate crossentropy loss to optimize the full network. TwinBERT Current advanced retrievalbased models can be generally classified into context-response matching double-stream frameworks (Humeau et al., 2019; Lu et al., 2020) and PLMs-based single-stream frameworks (Gu et al., 2020). To keep the bi-encoder model consistent with CoBERT, we also adopt the attention module into TwinBERT (Lu et al., 2020), but without extra inputs of persona profiles to compare the effects of personal information. BERT BERT (Devlin et al., 2019) is a typical single-stream network. The interaction and aggregation operations can be performed in a unified way by feeding the concatenation of the context and the response candidate into the model. During the inference stage, we can sort the output scores between the context and all response candidates to finally obtain the matched response. Note that in experiments of CoBERT, TwinBERT, and BERT, we use the pre-trained BERT checkpoint of the Chinese version. ## 4.2.2 Generation-Based Models BART (Shao et al., 2021) is a denoising autoencoder for pre-training sequence-to-sequence model and pre-trained by reconstructing the original text from the arbitrary corrupting text, which has been a universal transformer-based baseline PLM. CDialGPT Wang et al. (2020) proposed a Chinese GPT pre-trained from a large version of the opendomain dialogue dataset. The dataset sources originate from Chinese online forums, such as Weibo and Douban. EVA2.0 is an encoder-decoder PLM for opendomain dialogue modeling (Gu et al., 2022), whose architecture is similar to BART. This model is pre-trained on a 60GB high-quality dialogue dataset, which is composed of WDC-Dialogue (Zhou et al., 2021) and some extra copra, like movie scripts or crowdsourcing datasets. WDCDialogue is sourced from Chinese social media and is the main training dataset of EVA2.0. GLM (Du et al., 2022) is a large-scale model based on autoregressive blank infilling to unify all language tasks. The original Chinese GLM owns 10 billion parameters pre-trained on a Chinese corpus. GPT3 (Brown et al., 2020) is an autoregressive language model with 175 billion parameters, which has shown engaging performance on many NLP tasks and exhibits powerful abilities in multilingual zero-shot, one-shot, and few-shot settings. ## 5 Experiments We train retrieval baselines for two tasks as described in Section 4.1: response modeling and addressee recognition. We also investigate transfer learning of current popular generation-based models on LiveChat. Experimental settings including training details and evaluation metrics can be found in Section B. ## 5.1 Results Of Response Modeling In this session, we fully investigate the influence of our persona profiles, the extraction methods for text profiles, and the impact of larger average sessions per persona. The main architecture follows the work of CoBERT (Zhong et al., 2020). Note that CoBERT without extra persona profile input is equal to TwinBERT (Lu et al., 2020). Impact of Personas The test performance of retrieval-based response modeling is shown in Table 2. Obviously, CoBERT with text profile and basic profile achieves the best performance in our experimental settings, indicating both text profile and basic profile will facilitate the modeling of response. We attribute this to the fact that the basic profile is significant in denoting the corresponding speaker, and the text profiles include detailed personal descriptions which may have correlations with the candidate responses. An exclusive text profile achieves a higher score than a single basic profile, that is, detailed persona features of text profiles retrieve a more essential influence on model performance. Impact of Average Sessions To study the influence of the length of average sessions per persona on the model performance, we conduct experiments on different settings of data scales and the number of persona IDs based on CoBERT along with complete persona profiles. Since the data scale is equal | Model | Recall@1 | Recall@2 | MRR | |------------------------|------------|------------|-------| | CoBERT | 68.72 | 75.58 | 76.25 | | + text profile | 70.04 | 77.43 | 77.66 | | + basic profile | 69.43 | 76.58 | 77.06 | | + text & basic profile | 72.18 | 79.58 | 79.63 | | Data Scale | ID Num | Recall@1 | Recall@2 | MRR | |--------------|----------|------------|------------|-------| | 400k | 150 | 69.39 | 77.87 | 77.67 | | 100k | 150 | 67.86 | 74.99 | 75.63 | | 100k | 50 | 67.65 | 75.95 | 75.95 | | 100k | 15 | 68.78 | 77.25 | 77.09 | | 40k | 150 | 64.01 | 71.57 | 72.50 | Table 2: Comparison of automatic evaluation metric results (%) among different retrieval-based settings. | Persona Selection | Length | Recall@1 | MRR | |---------------------|----------|------------|-------| | - | 0 | 69.43 | 77.06 | | rules + classifier | 256 | 71.09 | 78.49 | | random from user | 512 | 69.49 | 77.27 | | random from dataset | 512 | 69.46 | 76.92 | | rules | 512 | 71.07 | 78.55 | | classifier | 512 | 71.19 | 78.61 | | rules + classifier | 512 | 72.18 | 79.63 | to the persona ID number times the average session number by person, and the same number of persona IDs with larger data scales and the same data scales with fewer IDs both indicate that there are more average sessions per persona. To reduce the influence of different scales of training data and make a fair comparison, we also keep the same data scale (100k) while decreasing the number of IDs from 150 to 15 as shown in Table 3. We make sure the persona IDs of the test set are all seen before. Consequently, all of our testing persona IDs are incorporated into the training settings. Experimental results demonstrate: (1) Obviously, more average sessions with the same number of IDs will enhance the model to capture the speaker's personalized response. (2) The average number of sessions is more significant than the number of IDs for response modeling. The priority of the number of sessions per persona also proves the superiority of our proposed dataset to other existing ones since LiveChat exceeds others extraordinarily in this indicator. Influence of Text Profiles For the extraction of Table 4: Test performance (in %) among different persona selection methods. our text profiles, we empirically analyze the effect of different extraction methods as illustrated in Table 4. The *random from user* means we randomly select sentences by the streamer as his or her text profiles, and *random from dataset* refers to randomly selected in the whole dataset. The *Length* represents the maximum truncation length for all concatenated text profiles. We can see that the rules and classifier both improve the model performance, indicating rules can filter the noisy sentences to some extent and persona definition in DuLemon is effective for training a classifier to further refine text profiles. Besides, the increase in persona sentence length will also enrich persona profiles and improve the results. ## 5.2 Results Of Addressee Recognition Previous works (Gu et al., 2021; Le et al., 2019b) adopt BERT to classify the relationship between the streamer response and multiple user comments, and we adopt a similar approach with a step further to explore the benefits of persona profiles. TwinBERT, compared with BERT, is utilized to study the difference between single-stream and double-stream architecture, and CoBERT is for investigating the influence of our collected persona profiles. Table 5 presents the results of addressee recognition. It shows that single-stream BERT outperforms double-stream TwinBERT. The reason is that by feeding the concatenation of the context and the response into a unified BERT, the interaction and aggregation operations can be performed through the attention mechanism sufficiently. Besides, CoBERT retrieves a better performance than TwinBERT, demonstrating our persona profiles are also beneficial to addressee recognition. | Model | Recall@1 | Recall@2 | MRR | |----------|------------|------------|-------| | BERT | 62.29 | 75.38 | 74.59 | | TwinBERT | 58.76 | 72.52 | 71.92 | | CoBERT | 59.27 | 73.04 | 72.43 | ## 6 Transfer Learning To further investigate the performance of the pretrained dialogue model on our LiveChat, we finetune BART, Chinese CDialGPT, and EVA2.0 to study whether pre-trained dialogue corpora can contribute to the learning of our case. The latter two are trained on dialogue data from text-based | Pre-trained model | Parameters | ROUGE1 | ROUGE-L | BLEU1 | BLEU4 | +2 | +1 | +0 | Score | | |---------------------|--------------|----------|-----------|---------|---------|-------|-------|-------|---------|-------| | BART | 220M | 31.64 | 29.95 | 35.02 | 12.46 | 3.2% | 81.4% | 15.4% | 0.878 | | | Fine-tuning | EVA2.0 | 300M | 25.18 | 23.29 | 31.60 | 8.25 | 1.5% | 67.6% | 30.9% | 0.706 | | CDialGPT | 104M | 18.98 | 17.42 | 28.54 | 7.42 | 2.9% | 38.5% | 58.6% | 0.443 | | | 1-Shot | GLM | 10B | 18.44 | 16.99 | 29.48 | 7.26 | 12.6% | 61.7% | 25.7% | 0.868 | | GPT3 | 175B | 13.87 | 12.10 | 23.98 | 5.84 | 11.4% | 56.3% | 32.3% | 0.791 | | | 8-Shot | GLM | 10B | 20.72 | 19.22 | 28.78 | 7.70 | 14.9% | 65.0% | 20.1% | 0.949 | | GPT3 | 175B | 18.87 | 16.80 | 29.05 | 7.69 | 10.8% | 66.3% | 22.8% | 0.880 | | social media. Furthermore, we conduct in-context learning on GLM and GPT3 to explore the few-shot transferability of large language models (LLMs) on this video-sourced dataset. The data utilized in Table 6 and Figure 4 are dissimilar, and the details of the training data as well as our in-context templates are expounded upon in Appendix B.1. Table 6 shows the results. First, the performance of BART is better than EVA2.0 and Chinese DialGPT. It confirms that the domain of our LiveChat is far away from the domains of those dialogue datasets utilized in existing pre-trained dialogue models. Therefore, it is challenging work to directly transfer from models trained on other dialogue domains. LLMs, nevertheless, offer a solution to this problem due to their great ability to generalization. Although the automatic evaluation results of fine-tuned models are better than LLMs by the reason that fine-tuning enables the models to learn the intrinsic distribution of LiveChat. We discover that the percentage of score 2 in human evaluation results of LLMs is dramatically larger than fine-tuned ones, which means better performance in terms of rich informativeness. We attribute this to the massive knowledge contained in LLMs and the few-shot demonstrations to elicit such knowledge. Yet despite this, we see a performance gap in score 1 with BART, which indicates a large room to increase contextual coherence through ways like parameters-efficient domain adaptation of LLMs to LiveChat, simultaneously maintaining their original powerful capabilities. As a supplement, we also have performed a series of experiments of in-context learning on different shots to study the influence of demonstrations. The ROUGE1 and BLEU1 results are depicted in Figure 4. The performances keep growing as the shots gradually increase. However, when the number of demonstrations exceeds 8 shots, the performances of the LLMs slightly decrease due to the ![8_image_0.png](8_image_0.png) GLM GPT3 ## 7 Conclusion In this paper, we propose LiveChat, a Chinese video-sourced and personalized dialogue dataset from the live streaming domain with detailed persona profiles. It maintains the largest average sessions per persona and is also the largest MPC dataset for addressee recognition since live streaming is a natural MPC scenario. This is achieved owing to the reply-to-whom matching method that enables automatically extracting dialogue sessions from live videos, while most video extraction methods can not. Experimental results on two benchmark tasks show that the selected persona profiles and the larger number of average sessions per persona are advantageous in learning the speaker's personalized response and addressee decision. In addition, the comparisons between BART with other pre-trained dialogue models and LLMs have unveiled the distinctiveness of this video-sourced dialogue domain and we expect further research on parameters-efficient transfer learning of LLMs for LiveChat. ## Limitations There exist some limitations in our work. LiveChat is a Chinese-originated dataset involving unique cultures and abundant replying styles. However, this intensifies the difficulty of fully understanding the content of this dataset. Fortunately, the same data construction pipeline can be applied to streaming platforms of other languages, like TikTok. And currently, our LiveChat is only sourced from 351 streamers on Douyin, not sufficient to train a general chatbot. We believe that LiveChat helps get one's foot in the door to the wonderful and diversified live scenarios and a dialogue model pre-trained on the considerable amount of videosourced dialogue data among cross-platforms is promising. Besides, LiveChat contains some noisy spoken language segments that are not easy to read after transcribing from the ASR tool. The upper bound data quality is limited by such third-party tools. The future work to concatenate such text segments to restore the content of the original expression by streamers is highly anticipated. As for the dialogue-matching method, we simply implement a combination of BOW and BERT for semantic matching, which needs further optimization. Other limitations from the training perspective can also be highlighted. For example, contextual background information is not considered in our modeling. That includes history dialogues in multiturn settings and information from other modalities, like the streamer eating in front of the camera. In addition, we have not explored enough of our annotated basic profiles. In our primary experiments, we found that directly adding basic information such as age, gender, location, and other room information has limited influence on the model performance. We account for the fact that these basic profiles have limited connections with reply styles and contents in LiveChat. Also, note that we remove the repetition part of a streamer's response before training, while it is useful to maintain this pattern in practical application. ## Ethical Consideration This work presents LiveChat, a free and open Chinese dataset for the research community to study personalized open-domain dialogue generation and addressee recognition. Our dataset contains wellprocessed dialogues, and annotations (basic profiles and text profiles). Data Privacy The original live-streaming clips and streamers' profiles of LiveChat are collected from Douyin, one of the largest Chinese livebroadcasting platforms. Similar to previous dialogue data from Reddit (Mazaré et al., 2018) and Weibo (Qian et al., 2021), LiveChat is an opendomain dialogue dataset that crossover multiple topics and users. Since all streamers must comply with platform rules during their online live streaming under the strict supervision of the Chinese government, their topics do not contain any pornographic, violent, reactionary, or discriminatory statements. Besides, due to the property of streaming, historically broadcast videos are no longer available when finished. Therefore it is not traceable from LiveChat to the identity of real streamers. Moreover, we clean the raw data with transformation, anonymization, and deletion to ensure there is no disclosure of private information and the identity of the streamers or audiences can not be inferred from it. Thus, all the collected data (including persona profiles) is publicly available and does not contain any private information of streamers and audiences, such as emails, phone numbers, and real user names. Although we collect the Age and Location information, in our basic profile, the Age is expressed as an interval range that doesn't represent the real age of the streamers, and the Location only contains the province's information. Besides, all the attributes of our basic profiles are re-indexed as numbers in the final released dataset. Thus, both our raw data and persona profiles do not create additional ethical risks. Moreover, we are sure that all the collected data is consistent with the platform usage rules and protocols. LiveChat will only be allowed to be used for academic research. At last, our construction of LiveChat was approved by an internal review board (IRB). Annotators In terms of basic profile annotation and manual evaluation, all the annotators are Chinese undergraduates specifically responsible for annotation work in our institution. They are informed of the ongoing research and well known the way the curated data will be used. All the annotated information and evaluation results do not contain any private information. ## References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:3504–3514. Cristian Danescu and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. arXiv preprint arXiv:1106.3077. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM: General language model pretraining with autoregressive blank infilling. In *Proceedings of the 60th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 320–335, Dublin, Ireland. Association for Computational Linguistics. Micha Elsner and Eugene Charniak. 2008. You talking to me? a corpus and algorithm for conversation disentanglement. In *Proceedings of ACL-08: HLT*, pages 834–842, Columbus, Ohio. Association for Computational Linguistics. Mauajama Firdaus, Hardik Chauhan, Asif Ekbal, and Pushpak Bhattacharyya. 2020. MEISD: A multimodal multi-label emotion, intensity and sentiment dialogue dataset for emotion recognition and sentiment analysis in conversations. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4441–4453, Barcelona, Spain (Online). International Committee on Computational Linguistics. Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020. Speaker-aware BERT for multi-turn response selection in retrieval-based chatbots. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 2041–2044. ACM. Jia-Chen Gu, Chongyang Tao, Zhenhua Ling, Can Xu, Xiubo Geng, and Daxin Jiang. 2021. MPC-BERT: A pre-trained language model for multi-party conversation understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3682–3692, Online. Association for Computational Linguistics. Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Xiaoyan Zhu, Jie Tang, et al. 2022. Eva2. 0: Investigating open-domain chinese dialogue systems with largescale pre-training. *arXiv preprint arXiv:2203.09313*. Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. ACM Transactions on Information Systems (TOIS), 38(3):1–32. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. *arXiv* preprint arXiv:1905.01969. Yonghee Kim, Jeesoo Bang, Junhwi Choi, Seonghan Ryu, Sangjun Koo, and Gary Geunbae Lee. 2014. Acquisition and use of long-term memory for personalized dialog systems. In International workshop on multimodal analyses enabling artificial agents in human-machine interaction, pages 78–87. Springer. Jonathan K. Kummerfeld, Sai R. Gouravajhala, Joseph J. Peper, Vignesh Athreya, Chulaka Gunasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros C Polymenakos, and Walter Lasecki. 2019. A large-scale corpus for conversation disentanglement. In *Proceedings of the 57th Annual Meeting of the Association for* Computational Linguistics, pages 3846–3856, Florence, Italy. Association for Computational Linguistics. Ran Le, Wenpeng Hu, Mingyue Shang, Zhenjun You, Lidong Bing, Dongyan Zhao, and Rui Yan. 2019a. Who is speaking to whom? learning to identify utterance addressee in multi-party conversations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1909– 1919, Hong Kong, China. Association for Computational Linguistics. Ran Le, Wenpeng Hu, Mingyue Shang, Zhenjun You, Lidong Bing, Dongyan Zhao, and Rui Yan. 2019b. Who is speaking to whom? learning to identify utterance addressee in multi-party conversations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1909– 1919, Hong Kong, China. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 994–1003, Berlin, Germany. Association for Computational Linguistics. Intelligence, IJCAI-18, pages 4279–4285. International Joint Conferences on Artificial Intelligence Organization. Juntao Li, Chang Liu, Chongyang Tao, Zhangming Chan, Dongyan Zhao, Min Zhang, and Rui Yan. 2021. Dialogue history matters! personalized response selection in multi-turn retrieval-based chatbots. ACM Transactions on Information Systems (TOIS), 39(4):1– 25. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294, Prague, Czech Republic. Association for Computational Linguistics. Wenhao Lu, Jian Jiao, and Ruofei Zhang. 2020. Twinbert: Distilling knowledge to twin-structured compressed bert models for large-scale retrieval. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2645–2652. Khyati Mahajan and Samira Shaikh. 2021. On the need for thoughtful data collection for multi-party dialogue: A survey of available corpora and collection methods. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 338–352, Singapore and Online. Association for Computational Linguistics. Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, and Julian McAuley. 2020. Interview: A large-scale open-source corpus of media dialog. arXiv preprint arXiv:2004.03090. Pierre-Emmanuel Mazaré, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779, Brussels, Belgium. Association for Computational Linguistics. Hiroki Ouchi and Yuta Tsuboi. 2016. Addressee and response selection for multi-party conversation. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 2133– 2143, Austin, Texas. Association for Computational Linguistics. Hongjin Qian, Xiaohe Li, Hanxun Zhong, Yu Guo, Yueyuan Ma, Yutao Zhu, Zhanliang Liu, Zhicheng Dou, and Ji-Rong Wen. 2021. Pchatbot: A largescale dataset for personalized chatbot. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2470–2477. Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Assigning personality/profile to a chatting machine for coherent conversation generation. In Proceedings of the TwentySeventh International Joint Conference on Artificial Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325, Online. Association for Computational Linguistics. Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Hang Yan, Fei Yang, Li Zhe, Hujun Bao, and Xipeng Qiu. 2021. Cpt: A pre-trained unbalanced transformer for both chinese language understanding and generation. arXiv preprint arXiv:2109.05729. Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019. Exploiting persona information for diverse generation of conversational responses. In *Proceedings of the Twenty-Eighth International* Joint Conference on Artificial Intelligence, IJCAI-19, pages 5190–5196. International Joint Conferences on Artificial Intelligence Organization. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam M. Shazeer, Apoorv Kulshreshtha, HengTze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, Yaguang Li, Hongrae Lee, Huaixiu Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Yanqi Zhou, Chung-Ching Chang, I. A. Krivokon, Willard James Rusch, Marc Pickett, Kathleen S. Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Hartz Søraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Díaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin HoffmanJohn, Josh Lee, Lora Aroyo, Ravindran Rajakumar, Alena Butryna, Matthew Lamm, V. O. Kuzmina, Joseph Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications. *ArXiv*, abs/2201.08239. Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, and Minlie Huang. 2020. A large-scale chinese short-text conversation dataset. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 91–103. Springer. Apiradee Wongkitrungrueng and Nuttapol Assarut. 2020. The role of live streaming in building consumer trust and engagement with social commerce sellers. *Journal of Business Research*, 117:543–556. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrievalbased chatbots. In *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496–505, Vancouver, Canada. Association for Computational Linguistics. Bo Xu, Hongtong Zhang, Jian Wang, Xiaokun Zhang, Dezhi Hao, Linlin Zong, Hongfei Lin, and Fenglong Ma. 2022a. RealMedDial: A real telemedical dialogue dataset collected from online Chinese short-video clips. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 3342–3352, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Jing Xu, Arthur Szlam, and Jason Weston. 2022b. Beyond goldfish memory: Long-term open-domain conversation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5180–5197, Dublin, Ireland. Association for Computational Linguistics. Xinchao Xu, Zhibin Gou, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Haifeng Wang, and Shihang Wang. 2022c. Long time no see! open-domain conversation with long-term persona memory. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2639–2650, Dublin, Ireland. Association for Computational Linguistics. Andy Zeng, Adrian S. Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael S. Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, and Peter R. Florence. 2022. Socratic models: Composing zero-shot multimodal reasoning with language. *ArXiv*, abs/2204.00598. Rui Zhang, Honglak Lee, Lazaros Polymenakos, and Dragomir Radev. 2018a. Addressee and response selection in multi-party conversations with speaker interaction rnns. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018b. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics. Yizhe Zhang, Xiang Gao, Sungjin Lee, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Consistent dialogue generation with self-supervised feature learning. *arXiv preprint arXiv:1903.05759*. Yinhe Zheng, Guanyi Chen, Minlie Huang, Song Liu, and Xuan Zhu. 2019. Personalized dialogue generation with diversified traits. Hanxun Zhong, Zhicheng Dou, Yutao Zhu, Hongjin Qian, and Ji-Rong Wen. 2022. Less is more: Learning to refine dialogue history for personalized dialogue generation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5808–5820, Seattle, United States. Association for Computational Linguistics. Peixiang Zhong, Chen Zhang, Hao Wang, Yong Liu, and Chunyan Miao. 2020. Towards persona-based empathetic conversational models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6556–6566, Online. Association for Computational Linguistics. Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, and Jie Tang. 2021. Eva: An opendomain chinese dialogue system with large-scale generative pre-training. Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. *Computational Linguistics*, 46(1):53–93. ## A Dataset Construction Details Our constructed dataset are composed of 1332073 dialogues, and each dialogue consists of one streamer response and several audience comments. The overall statistics of the LiveChat and raw data are illustrated in Table 7. Details of Automatic Speech Recognition Our HuoShan ASR tool is from Chinese company ByteDance. The ASR is pretrained on a large entertainment dataset that includes domains such as fashion, food, games, and singing. After testing on a 64k Chinese video-based recognition dataset from various domains, the ASR achieved a Character Error Rate (CER) of 3.17%. Dialogue samples in response modeling In response modeling, we select all the matched dialogue pairs from our raw conversation dataset. Several constructed dialogue cases are shown in Figure 5. Each audience comment is associated with a streamer response. During our retrieval-based response modeling experiments, given an audience comment, all the responses in one batch are negative responses. Persona Annotations Our persona annotations include the basic profile and text profile, and a persona profile sample of one streamer is shown in Figure 6. Text profiles are collected from the history posts and dialogues based on the rules and a persona classifier, and basic profiles are collected and annotated by crowdworkers who are native Chinese speakers and familiar with live streaming. Apart from the basic information on the streamer's | Category | Size | |---------------------------------|------------| | Raw Audiences Comments | 13,745,577 | | Raw Total Video Num. | 182,943 | | Raw Total Videos Hours | 30,248 | | Raw Streamer Sentences | 35,475,979 | | Dialogues | 1,332,073 | | Utterances | 9,416,573 | | Streamer Num. | 351 | | Audience Num. | 1,058,595 | | Avg. Sessions per Streamer | 3,795 | | Avg. Length of Utterances | 10.22 | | Avg. Sentences of Text Profiles | 69 | homepage, the crowdworkers are required to label some extra information that may have an influence on the streamer's speaking style. We present our annotation interface in Figure 7. For each streamer, the annotator is required to answer these questions based on the provided live streaming videos. Selection of candidate audiences A streamer in LiveChat will respond to one audience selectively, and the segmentation of all audience comments is shown in Figure 8. We noted the timestamp of the matched comments and responses among all the comments. The comments between matched (i−1)-th comment and i-th comment are the candidate comments of the streamer's i-th response. In addressee recognition, the streamer aims to retrieve which comment among these candidates to respond to. ![13_image_0.png](13_image_0.png) Basic Profile Age: 18-24 Gender: Female Location: Guangdong Character: Active, Warm Skill: Sing Live Streaming Time: Forenoon Audiences number: Less than 1000 Text Profile 1. 我长得像高中生。 I look like a high school student. 2.我觉得紫色好看。 I think purple is beautiful. 3.我是一个晚婚的人。 I am a late married person. 4.我喜欢吃手抓饼。 I like to eat finger biscuits. 5.我是广东人在广州。 I am Cantonese in Guangzhou. 6.我以前领养过一只小猫是朋友家猫妈妈生的。 I used to adopt a kitten from a friend's cat. … Figure 6: The annotated basic profile and collected text profile of one streamer. Note that in the final released dataset, all basic profiles are re-indexed as numbers for privacy concerns. Q1: Do you think the streamer is active? Yes No Not Sure Q2: Do you think the streamer is warm? ![13_image_1.png](13_image_1.png) Yes No Not Sure Q3: Do you think the streamer is humor? Yes No Not Sure Q4: Do you think the streamer is confident? Yes No Not Sure Q5: What skill **does the anchor process?** None ________ Q6: When **is the streamer start live streaming?** Forenoon Afternoon Night Q7: How many audiences are there usually in the streaming room. Less than 100 More than 100, less than 1000 More than 1000 Figure 7: Annotation User Interface. ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) ## B Training And Evaluation Details B.1 Training Details Retrieval-based models Figure 9 provides the distribution of session length for each persona. There exist some persona IDs without enough sessions, thus we filter those IDs with more than 2400 sessions to study the influence of the average session number and persona profiles in a more clear setting. In this way, we retrieve 150 persona IDs in total. During our training process, we use 400k dialogues for training and 10k dialogues for testing in all retrieval-based dialogue experiments if there is no declaration before. The batch size is set to 24, which also means the size of the dynamic searching library of response modeling is 24. In addressee recognition, the number of candidate comments ranges from one to hundreds. Thus, we process each session into one response and 10 candidate comments. If comments are too many, we select the last 10 comments, where the final sentence is the corresponding comment. And if the number of comments in one session is less than 10, we add comments in the front sessions to keep the total comment number to 10 in each session. The batch size we set here is also 24. During training, we set the max input length and output length as 64, the max text profiles length as 512, and the epoch number and learning rate are set to 30 and 1e-5. All the experiments in the above two dialogue tasks are conducted on Nvidia Tesla V100s. Generation-based models During the process of fine-tuning the pre-trained language models, we keep the most original experimental settings from their initial training parameters, and the utilized GPT3 version is text-davinci-002. In Table 6, the training dataset for fine-tuning is 400k, and the test dataset is 10k. Due to the cost of the GPT3 API, we only evaluate 1k samples for each experiment of GPT3 in Figure 4. In order to keep in line with GPT3, all data utilized in GLM is the same as GPT3. Thus, the results in Table 6 are inconsistent with those in Figure 4. As for the in-context learning of GLM and GPT3, the template of n-shots is formulated as "我是一名线上直播间的主播,爱好是唱 歌 、 与 粉 丝 聊 天 等 。 以 下 是 我 在 直 播 间 和 粉 丝 的 互 动 。 粉 丝 说 :[CONTEXT-1]。 我说:[RESPONSE-1]。...粉丝说:[CONTEXT-N]。我说:[RESPONSE-N]。以下是另一段我 在直播间和粉丝的互动。粉丝说:[CONTEXT-TEST]。 我 说 :[RESPONSE-TEST]" ("I am a streamer of an online live room, hobbies are singing, chatting with fans and so on. Followings are my interactions with fans in the live room. One fan says: [CONTEXT-1] I say: [RESPONSE-1] ... One fan says: [CONTEXT-N] I say: [RESPONSE-N]. Here is another interaction I have with my fans in the live room. One fan says: [CONTEXT-TEST] I say:[RESPONSE-TEST]). The [CONTEXT-K] and [RESPONSE-K] (0 < k <= n) is the n-shot cases provided for LLMs. The [CONTEXT-TEST] and [RESPONSE-TEST] are the two utterances of one test dialogue pair, where the LLMs are required to return the [RESPONSETEST]. ## B.2 Metrics Retrival-based Recall@k is a commonly used metric for evaluating whether the correct response exists among the top k candidates out of all the candidate responses. MRR (Mean Reciprocal Rank) is a statistic measure for evaluating any process that produces a list of possible responses to a sample of queries and is formulated as the average of the reciprocal ranks of results. Generation-based BLEU-n measures the ratios of the co-occurrences of n-grams between the generated and real text. **ROUGE-n** measures the text quality by counting the overlapping n-grams between the generated and real text, and **ROUGEL** means leveraging the longest common subsequence. Human Evaluation We employ crowd workers to evaluate the responses generated by different models, and 1000 samples for each model. Our evaluation schema scores each sentence according to the following rules, inspired by Wang et al. (2020): 1. **Relevance** If a fluent response is logically consistent and relevant to the content of the comment, it will get 1. Otherwise, it will get 0. 2. **Informativeness** If a response has achieved the requirements of 1 score and is additionally rich in content, it will get 2. ## C Case Study To concretely demonstrate response ability in generative-based models, we further provide several sample cases (between BART, C-DialGPT, EVA2.0, GLM, and GPT3) in Table 8 and Table 9. BART, C-DialGPT, and EVA2.0 are finetuned on our LiveChat. LLMs present the results of 1-shot and 8-shot in-context learning. Relevance Seen in these cases, all the responses generated by BART have strong connections with input contexts. Compared to EVA and C-DialGPT, BART maintains a higher relevance. For example, in Case 2-"走错直播间了?(In the wrong live streaming room?)" from Table 9, we can find the response of C-DialGPT is not logically relevant to the comment, and the response of EVA2.0 is also not reasonable. Informativeness Pre-trained models generally contain knowledge inside themselves. We can see that LLMs reply with more informative content in some cases, which means the richness and abundant knowledge of LLMs will be leveraged in dialogue generation. | Context | 搞笑女来了。 | 我一个月的工资都刷给你了。 | |---------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|----------------------------------------| | The comedian girl has arrived. | I have spent all of my salaries for the month on you. | | | 我不是搞笑女我是颜值主播。 | 你一个月工资多少啊? | | | BART | I am not a comedian, | How much do you make a month? | | I am a good-looking anchor. | | | | EVA2.0 | 搞笑女孩我来了。 | 你这个工资是不是很高啊。 | | The funny girl is here. | Your salary is not very high. | | | C-DialGPT | 可是我很少活的很有感情。 | 你为什么可以刷到9,000? | | But I rarely live with much feeling. | Why can you achieve 9,000? | | | GLM (1-shot) | 对呀,我是搞笑女。 | 你刷得有点多哦。 | | Yeah, I am a comedian girl. | You are doing a little too much. | | | GPT3 (1-shot) | 是谁来了? | 你辛苦了,我会努力争取更多粉丝。 | | Who is there? | Thank you. I'll try to get more fans. | | | 搞笑女来了,快进来,快进来。 | 刷?你不许刷。 | | | GLM (8-shot) | The comedian girl is here, | Spend? You are not allowed to do that. | | come in quickly, come in quickly. 你要搞笑,我可不搞笑. | | | | GPT3 (8-shot) | You want to be funny, | | | I do not want to be funny. | 你一个月工资都刷给我了?你不用上班了? You spent your entire month's salary on me? You don't have to work anymore? | | | Expected Response | 不不皮了。 | 是听到我们的号召了吗? | | No, do not joke. | Is that for hearing the call of us? | | Table 8: Generated responses by different pre-trained language models. | Context | 跳个舞蹈吧。 | 走错直播间了? | |---------------------------------------------------------------------------------------------------|---------------------------------------------------|-----------------------------------------------| | Have a dance. | In the wrong live streaming room? | | | BART | 你们想看什么舞蹈? | 没有走错直播间。 | | What dance do you want to see? | Haven't gone to the wrong live streaming room. | | | EVA2.0 | 你给我跳个跳舞吧。 | 没有啊,他们都说我在直播。 | | I want you to dance for me. | No, they all say that I am live streaming. | | | C-DialGPT | 感谢落落谢谢无心球。 | 你好,仙人掌花开放26度。 | | Thank Luoluo, thank Wuxin Ball. | Hello, the cactus is blooming at 26 degrees. | | | GLM (1-shot) | 好呀好呀,不过呢,现在得先唱歌。 | 没有。 | | Great, however, now we have to sing first. | No. | | | 你没有走错,这里是我的直播间, 你回去收看我的直播吧。 | | | | You did not go wrong, this is my live streaming room, please go back and watch my live streaming. | | | | GLM (8-shot) | 我现在心情不是很好。 | 走错直播间了吗? | | I am not feeling well now. | Have you gone into the wrong live streaming room? | | | GPT3 (1-shot) | 不行,我不会跳舞。 | | | No, I don't know how to dance. | 怎么你问这个,走错我的直播间了吗? | | | GPT3 (8-shot) | 跳个舞蹈要不然? | Why are you asking this, did you accidentally | | Why not have a dance? | go to the wrong live streaming room of mine? | | | Expected Response | 我不跳要不你给我跳一个看看。 | 你没有走错,这是在下。 | | I don't dance and you can dance for me. | You haven't taken a wrong turn, this is me. | | | Table 9: Generated responses by different pre-trained language models. | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In Section: Limitations ✓ A2. Did you discuss any potential risks of your work? In Section: Ethical Consideration ✓ A3. Do the abstract and introduction summarize the paper's main claims? In Abstract and Section 1: Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** In Sections 5 & 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In Sections 5 & 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In Sections 5 & 6 & Appendix D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** In Section 3 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? In Appendix ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? In Appendix ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? In Appendix ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? In Ethical Consideration ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? In Appendix
vilar-etal-2023-prompting
Prompting {P}a{LM} for Translation: Assessing Strategies and Performance
https://aclanthology.org/2023.acl-long.859
Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages. We probe this ability in an in-depth study of the pathways language model (PaLM), which has demonstrated the strongest machine translation (MT) performance among similarly-trained LLMs to date. We investigate various strategies for choosing translation examples for few-shot prompting, concluding that example quality is the most important factor. Using optimized prompts, we revisit previous assessments of PaLM{'}s MT capabilities with more recent test sets, modern MT metrics, and human evaluation, and find that its performance, while impressive, still lags that of state-of-the-art supervised systems. We conclude by providing an analysis of PaLM{'}s MT output which reveals some interesting properties and prospects for future work.
# Prompting Palm For Translation: Assessing Strategies And Performance David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, George Foster Google {vilar, freitag, colincherry, jmluo, vratnakar, fosterg}@google.com ## Abstract Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages. We probe this ability in an indepth study of the pathways language model (PaLM), which has demonstrated the strongest machine translation (MT) performance among similarly-trained LLMs to date. We investigate various strategies for choosing translation examples for few-shot prompting, concluding that example quality is the most important factor. Using optimized prompts, we revisit previous assessments of PaLM's MT capabilities with more recent test sets, modern MT metrics, and human evaluation, and find that its performance, while impressive, still lags that of stateof-the-art supervised systems. We conclude by providing an analysis of PaLM's MT output which reveals some interesting properties and prospects for future work. ## 1 Introduction Large language models (LLMs) trained to predict the next token from a lengthy context have demonstrated impressive machine translation capabilities, despite being trained on corpora that are overwhelmingly English, with no intentionallyincluded parallel text. In this paper, we carry out an in-depth investigation into the translation capabilities of LLMs, testing different prompting strategies and carefully assessing the resulting performance. We study the recently-introduced PaLM model (Chowdhery et al., 2022), a 540B-parameter decoder-only language model trained on a heavily English-centric, multilingual corpus. It has achieved the strongest MT results among LLMs trained on non-parallel multilingual corpora. To ensure a fair assessment of PaLM's MT capability, we begin with an exploration of example selection methods for use with fixed prompt templates. We vary both the pool from which examples are chosen and the method for choosing ![0_image_0.png](0_image_0.png) them, comparing standard random selection to knearest-neighbour (kNN) selection that customizes prompts for specific inputs. Figure 1 highlights the importance of example selection by showing that two randomly-selected sets of examples can result in significantly different distributions of sentencelevel BLEURT scores. Although Chowdhery et al. (2022) report interesting results on low-resource and non-English language pairs, their most striking findings concern high-resource pairs. Accordingly, we limit our investigation to French, German, and Chinese translation to and from English. We evaluate sentencelevel translation quality using recommended practices for high-quality MT, specifically: (i) we use recent WMT test sets to guard against train/test data leakage, and to facilitate comparison with state-of-the-art (SOTA) MT systems; (ii) we use a SOTA automatic metric (BLEURT) instead of BLEU which has been demonstrated to be suboptimal for high-quality translations (Kocmi et al., 15406 2021; Freitag et al., 2021b); and (iii) we conduct an expert-based human evaluation with detailed categories to characterize the error patterns of the automatically generated translations. Our contributions are as follows: - We carry out the first systematic study of LLM prompting for MT, exploring both the example candidate pool and the selection strategy. We find that the *quality* of examples matters more than the domain from which they are drawn or their lexico-semantic proximity to the current input. - We evaluate the translation capability of LLMs with the procedure currently recommended by the MT community. We find that, although impressive, the sentence-level translation capacity of LLMs still lags behind SOTA MT. ## 2 Related Work Inspired by the findings of Radford et al. (2019); Brown et al. (2020), prompting strategies for LLMs have become a topic of intense interest, generating work across a broad spectrum of methods and applications (Liu et al., 2021). A basic distinction can be made between *hard* (explicit text) prompting such as we use, and *soft* prompting that seeks to learn embeddings (Lester et al., 2021), activations (Li and Liang, 2021; Hambardzumyan et al., 2021), or attention weights (Liu et al., 2022a) that condition the model to perform a desired task. The latter approach is more expressive and more efficient at inference time, but performance can be sensitive to initialization (Hou et al., 2022), and some techniques require modifications to the model. Hard prompts have the advantage of being easy to interpret and modify. Work in this area includes tools to facilitate development of handcrafted prompts (Strobelt et al., 2022; Bach et al., 2022); algorithms to find optimal prompts through gradient-guided search (Shin et al., 2020) or exhaustive search through labels (Schick and Schütze, 2021) or both labels and templates (Gao et al., 2021); as well as studies on the effect of example order (Kumar and Talukdar, 2021; Lu et al., 2022). Hard prompts have also been used to analyze model capabilities (Garg et al., 2022; Li et al., 2022a), the role of data (Singh et al., 2022), and the nature of prompting itself (Min et al., 2022; Wei et al., 2022). With few exceptions, e.g. (Li et al., 2022b; Liu et al., 2022b; Valvoda et al., 2022), early approaches to hard prompting tended to condition on the task rather than the specific input. Our kNN approach for conditioning on the input was pioneered by Liu et al. (2022b), who used RoBERTa embeddings to identify relevant GPT-3 prompts for sentiment, table-to-text, and QA tasks. They found that kNN works better than a random-selection baseline, and that the advantage grows as the size of the (domain-controlled) example pool increases. Work on prompting LLMs for MT began with the GPT-3 and PaLM papers (Brown et al., 2020; Chowdhery et al., 2022), which adopted similar approaches, comparing 0, 1, and n-shot1random selection of independent sentence pairs from WMT training corpora, and testing on older French, German, and Romanian WMT test sets traditionally used in ML, augmented in PaLM with French→German and Kazakh. For both models, performance increased with number of shots, and n-shot BLEU scores were found to be competitive with previous unsupervised SOTA, and in some settings—particularly into English—supervised SOTA as well. In other early MT work, Reynolds and McDonell (2021) experimented with prompt templates for GPT-3, and found that 0-shot prompts with carefully-chosen templates can outperform n-shot prompts with sub-optimal templates. Garcia and Firat (2022) explored using prompts with mT5 (Xue et al., 2021) to control output attributes such as formality, and also examine the effect of using promptlike natural-language tags during fine-tuning. Patel et al. (2022) proposed autoregressive prompting: concatenating only the first predicted word to a prompt and output prefix at each step. ## Après Nous, Le Déluge Since our paper appeared on arXiv in November 2022, there has been a flood of work on using LLMs for MT, which we summarize briefly for completeness. A number of papers (Agrawal et al., 2022; Zhang et al., 2023; Jiao et al., 2023; Hendy et al., 2023) investigate prompt quality and source proximity using methods similar to ours but with different LLMs, notably GPT-3.5, GPT-4 and their instruction-tuned counterparts. Their findings are in line with ours, with the exception of Agrawal et al. (2022), who achieve significant gains using 1Where n is 64 for GPT-3 and 5 for PaLM. lexical matching augmented with a diversity mechanism to select prompts. Apart from differences in model and setting, a potentially salient discrepancy is their emphasis on BLEU rather than neural metrics to measure performance. Other interesting work that conditions prompts on source segments uses dictionaries to supply translations in lowresource settings (Ghazvininejad et al., 2023; Lu et al., 2023), or chain-of-thought inspired prompts that elicit keywords, topic, and related examples from the model itself (He et al., 2023). Further recent work looks at the role of data, attributing LLM MT capabilities to the presence of incidental bilingual examples (Briakou et al., 2023), or showing that parallel data (Schioppa et al., 2023), dictionaries (Jones et al., 2023), or restriction to bilingual settings (Garcia et al., 2023) can boost performance in smaller LMs. Another popular line aims at controlling various properties of translations such as formality or use of specified terminology, either statically (Garcia et al., 2023; Moslem et al., 2023) or with human interaction (Pilault et al., 2023). Finally, there is extensive work on analyzing the translation output of LLMs, generally finding that it is more fluent than accurate (Hendy et al., 2023; Anonymous, 2023), good at handling document context (Wang et al., 2023; Karpinska and Iyyer, 2023) but also prone to problems such as hallucination (Zhang et al., 2023; Guerreiro et al., 2023), and frequently sub-par in low-resource settings (Zhu et al., 2023; Bawden and Yvon, 2023) ## 3 Prompting For Machine Translation For a general task, prompting an LLM to generate a desired output y from an input x can involve many steps (Liu et al., 2021), including template generation, slot filling, answer search, and answer mapping. In MT, the answer search and mapping processes are simplified because the answers generated by the LLM can be used directly; we simplify further by using a fixed template. What we explore in depth is the slot filling portion; in particular, we test a variety of methods to select few-shot examples for the prompt. In initial experiments we determined that for few-shot prompting the exact form of the template is unimportant, see Appendix A for details. Following this observation, we decided to adopt simple templates where each example if preprended by the corresponding language name. These results in prompts of the form (for n-shot prompting): $$\begin{array}{l l}{{\mathrm{[source]:}}}&{{[\,X_{1}\,]}}\\ {{\mathrm{[target]:}}}&{{[\,Y_{1}\,]}}\end{array}$$ ... [source]: [Xn] [target]: [Yn ] [source]: [ X ] [target]: where [source] and [target] are instantiated with the names in English of the source and target languages, e.g. English and German. Note that this scheme has been found to be present in the training data as a marker for multilingual content (Briakou et al., 2023). Each slot pair (Xi, Yi) is filled with a translation example for these languages, and the final slot X is filled with the current source text. Our algorithm for n-shot translation from a source text x to a target text y is: 1. Choose translation example pairs (x1, y1) ... (xn, yn). In general, these can depend on x. 2. Plug the example pairs and x into the template. Condition PaLM on the resulting string. 3. Perform a greedy search,2stopping when the model outputs a newline. 4. Output the predicted suffix verbatim as y. Example selection operates in two phases: first choose a pool containing parallel text, then choose examples from the pool. Choosing the pool lets us control global attributes of examples such as domain and average quality. Our baseline method for choosing examples is to select them randomly from the pool. We also experiment with selecting examples that are "closest" to the source text, on the hypothesis that such examples will help guide the model to produce similar translations. To find relevant examples, we use k-nearest neighbor (kNN) search on the source side of our parallel pool, inspired by Khandelwal et al. (2021). We carry out the search itself using the method of Guo et al. (2020) 3, and investigate two possible representations of the sentences, with associated distance measures: 2We found that using a sampling temperature other than 0 tended to degrade translation quality. 3Available at https://github.com/google-research/ google-research/tree/master/scann. LP Year #sents Ref en → de 2021 1002 C de → en 2021 1000 B en → zh 2021 1002 A zh → en 2021 1948 A en → fr 2014 3003 N/A fr → en 2014 3003 N/A Bag-of-words (BOW): Each sentence is represented by a (sparse) vector of counts associated with words in the vocabulary. As the associated distance measure we use cosine distance. This representation focuses on the surface form of the words, and thus favors lexical similarity between the examples. Ro**BERT**a: Sentences are represented as embeddings in the space defined by RoBERTa (Liu et al., 2019), a multilingual transformer-based model, with Euclidean distance used for retrieval. We expect these embeddings to reflect the semantics of the sentence, and thus retrieve prompts that are relevant to their subject matter.4 ## 4 Data We experiment with translation into and out of English for Chinese, French and German. After English (78.0%), German (3.5%) and French (3.3%) are the two largest languages in PaLM's 780B token training corpus; Chinese (0.4%) is the 15th largest, and it also represents an inherently more difficult translation task. To facilitate comparisons with recent SOTA systems, and to minimize the chance of overlap with PaLM's training corpus, we test on news data from the WMT 2021 evaluation campaign (Akhbardeh et al., 2021). Since French was not included in WMT21, we use data from WMT14; apart from being older, these test sets are not purely source-original (Freitag et al., 2019) like the more recent ones. Table 1 shows statistics for our test data. 4Note that it would be conceivable to use PaLM itself as embedding model, which would provide a representation (and associated similarity measure) closer to the application that we are targeting. However, due to the high computational cost and large amounts of data (for some experiments we embed the totality of the WMT training data) we decided to use a smaller model. For prompt selection, we use three distinct pools: the full WMT training corpus for each language pair (WMT-full), the corresponding WMT development sets (WMT-dev), and a manually-curated "high-end" pool. Sizes are shown in Table 2. The WMT-full pool is largest and offers the highest probability of close kNN matches, but it is crawled text drawn from sources of varying quality. The WMT-dev pool has generally better quality, and is a closer domain match to our test set; to encourage PaLM to produce more natural text, we included only target-original texts.5 For German ↔ English and Chinese ↔ English we include all the news test sets from 2010 to 2020. As English ↔ French was discontinued after 2015, we used sets from 2010 to 2013, augmented with newsdiscussion2015. The high-end pool comes from websites containing bilingual articles that we judged to be professionally edited, with native or near-native quality in both languages. The articles are drawn from various domains (biography, business, commentary, culture, fashion, food, news, and obituary), with the news domain of the test sets comprising less than 50% for each language. We treat these articles as symmetrical, and use them as prompt sources in both translation directions. Due to the non-literal nature of the translations, there is frequently no 1-1 correspondence between sentence pairs, so we extract aligned paragraphs for prompting. More detailed information about the high-end pool is provided in Appendix B. | Size | | | | |----------|-----------|--------|--------| | LP | Pool | en → X | X → en | | WMT-full | 96M | | | | de ↔ en | WMT-dev | 11 732 | 13 060 | | high-end | 152 para. | | | | WMT-full | 55M | | | | zh ↔ en | WMT-dev | 7 481 | 5 916 | | high-end | 170 para. | | | | WMT-full | 40M | | | | fr ↔ en | WMT-dev | 2 886 | 2 957 | | high-end | 98 para. | | | ## 5 Experiments For compatibility with Chowdhery et al. (2022), we ran all experiments at the sentence level, translating each test sentence individually and in isolation from its context. This deprives PaLM of the ability to exploit the longer contexts it was exposed to during training, but it matches the operating mode of our baselines (including SOTA baselines), and facilitates evaluation.6 We leave an exploration of potential gains from conditioning on longer histories to future work. In preliminary experiments, we varied the number of shots from 0 to 10, and found clear performance gains as we increased the number of shots, with diminishing returns after 5 sentence pairs (see Appendix A). Accordingly we report all results on the WMT pools in the 5-shot setting, where each shot is a single sentence pair, matching the configuration in Chowdhery et al. (2022). For the high-end pool, lacking 1-1 sentence alignments, we use 1-shot examples, where each shot is a single paragraph pair. This provides roughly the same quantity of text as 5-shot with sentences, although it creates a stylistic mismatch with our test setup, as we still translate on a sentence-by-sentene basis, as in the other conditions. When randomly selecting examples, we observed that there is little variability in automatic scores when selecting different samples7(see Appendix C). For the results reported in this section, we let PaLM produce translations with 5 different seeds and we selected the run with the median BLEURT score. Translation time was some orders of magnitude longer than a dedicated translation system. Following recent recommendations (Kocmi et al., 2021; Freitag et al., 2021a) we favour neural metrics (BLEURT in our case) over BLEU, although we also report BLEU scores for completeness. We use a cased version of BLEURT (Sellam et al., 2020) that is based on RemBERT (Chung et al., 2020). We use BLEU as implemented in SACREBLEU8(Post, 2018), with zh tokenization for English-Chinese, and 13a tokenization for all other languages. 6Evaluation of document-level translations is complicated by potentially non 1-1 sentence correspondences, resulting in long translation units that are truncated by BLEURT and can be difficult for humans to rate reliably. 7Note that this holds for document level scores. The effect on single sentences can still be very important, cf. Figure 1. 8SACREBLEU signature: nrefs:1|case:mixed|eff:no| tok:TOK|smooth:exp|version:2.1.0, where TOK is 13a or zh. To perform human evaluation, we hired professional translators (7 for En→De, 5 for De→En, 4 for Zh→En, and 4 for En→Zh) and measure translation quality with a document-context version of MQM (Lommel et al., 2014) which mimics the setup proposed in Freitag et al. (2021a). This includes using the same error categories, severity levels and error weighting schema. As suggested in the study, we weight each major error with 5 and each minor error with 1, except for minor punctuation errors which get a score of 0.1. We depart from Freitag et al. (2021a) in using only a single annotator per segment, and in not imposing a limit of 5 errors per sentence. Additionally, due to technical restrictions on the length of an evaluation session, we limited the MQM evaluation to the first 12 segments per document. ## 5.1 Selection Strategies And Pools We warm up by comparing example selection strategies on the two WMT pools, using automatic metrics to evaluate quality on English↔German. Results are shown in Table 3. The main observation is that the choice of pool is much more important than the selection method: the results for WMT-dev are notably higher than those for WMT-full across all settings. When comparing kNN selection methods, RoBERTa is more effective than BOW, but it does not provide a consistent advantage over random selection. We conjecture that the quality of an example is more important than its proximity to the current source sentence. The larger size of the full WMT pool means that the kNN approaches will in general be able to find examples that are closer to each source sentence than those from the dev pool, but any resulting gain is offset by the greater risk that an example from the full pool will be a poor translation (since we match only on the source side). Interestingly, had we relied only on BLEU, we would have concluded that the choice of pool is unimportant, and that random selection consistently outperforms kNN. ## 5.2 Results On All Language Pairs Table 4 contains our main results, for German ↔ English, Chinese ↔ English, and French ↔ English. For each language pair, we ran PaLM with random selection on all three pools and with kNN RoBERTa on the WMT-full pool. We compared these systems to output from the best performing system in the 2021 WMT evaluation campaign for | LP | Pool | Selection | BLEURT | BLEU | |-------------|--------|-------------|----------|--------| | random | 71.8 | 32.9 | | | | kNN BOW | 71.7 | 32.4 | | | | kNN RoBERTa | 73.0 | 32.5 | | | | → | | | | | | de | full | | | | | en | dev | random | 74.8 | 32.8 | | kNN RoBERTa | 74.8 | 32.3 | | | | random | 74.8 | 38.4 | | | | kNN BOW | 72.7 | 36.9 | | | | kNN RoBERTa | 73.8 | 35.4 | | | | → | | | | | | en | full | | | | | de | dev | random | 75.9 | 38.0 | | kNN RoBERTa | 75.8 | 37.2 | | | German and Chinese, and for off-the-shelf Google Translate for all six language pairs. We evaluate with BLEU and BLEURT as in the previous section, augmented with human MQM assessments for German and Chinese. French is a special case, as its evaluation set is eight years old, and it is difficult to ensure that any of the MT systems we evaluate have not been exposed to it during training. We include it mostly for the purposes of comparison to Chowdhery et al. (2022), and do not provide SOTA results or perform human evaluation. Comparing PaLM results for German and Chinese, the pattern from the previous section holds up: random selection from the WMT-dev pool outperforms selection from the full pool. MQM scores correlate well with BLEURT for these results. Despite domain and style mismatch, results for the highend pool are very similar to those for WMT-devcloser than any results on the full pool—adding support to the hypothesis that example quality is the main determinant of PaLM's output quality. The French results reverse the general pattern. For this language pair, random selection from the WMT-full pool does best, although the results for all methods are fairly similar, with a difference of approximately 0.5 BLEURT between the best and worst. One potential explanation is the age and quality of newstest2014, as WMT test-set creation has dramatically improved since then. Turning to a comparison between PaLM and conventional MT systems, the specialized SOTA systems have a substantial advantage of between 1 and 3 BLEURT points over the best PaLM results, a gap that is reflected in their much lower MQM scores. The difference is narrower for the general-purpose Google Translate system: less than 1 BLEURT except for Chinese→English (1.8), with French→English at parity. PaLM's performance relative to the best MT system for each language pair is generally better when translating into English, where it is lower by 1.0, 2.3, and 0.0 BLEURT for German, Chinese, and French, compared to drops of 2.1, 2.5, and 0.6 in the reverse direction. The MQM results show some interesting characteristics of translations produced by PaLM. In all language pairs evaluated, fluency MQM scores for PaLM are generally similar to those for SOTA systems, while accuracy scores are lower. The accuracy gap is dominated by Major Accuracy/Omission errors, followed by inconsistent patterns of other Accuracy/* errors across language pairs. In some languages, the best-performing PaLM systems make fewer Style/Awkward errors than SOTA. Table 5 shows a selection of MQM error counts for PaLM WMT-dev random and SOTA systems; full details are provided in Appendix D. ## 5.3 Comparison To Previous Results Our only results that are directly comparable to the few-shot results from Chowdhery et al. (2022) are the WMT-full BLEU scores in table 4c (WMT14 French test-set). Our result for French→English matches theirs exactly, but our score for English→French is lower by 1.7 (42.3 versus 44.0). We attribute this discrepancy to their use of the SACREBLEU intl tokenizer; when we evaluate our output using this version, we obtain matching scores. Our general finding that PaLM's into-English performance is better than the reverse direction matches the conclusion from Chowdhery et al. (2022), while our comparison with recent SOTA systems on current test sets contrasts with their results indicating that PaLM can rival supervised performance in older settings. ## 6 Analysis In this section we delve further into various aspects of PaLM's MT performance. ## 6.1 K**Nn Versus Random Prompts** To understand the performance difference between kNN RoBERTa and randomly-selected examples, we performed a qualitative analysis, choosing sen- | LP | System | MQM ↓ | BLEURT ↑ | BLEU ↑ | |----------------------------------------------------|------------------------------------------------------------------------------------------|---------|------------|----------| | WMT21 Facebook Submission (Tran et al., 2021) | 1.18† | 76.9 | 42.0 | | | Google Trans. | 1.59 | 75.7 | 39.8 | | | en → de | WMT-full random | 1.90 | 73.7 | 32.9 | | WMT-full kNN | 1.93 | 73.0 | 32.5 | | | WMT-dev random | 1.58 | 74.8 | 32.8 | | | high-end random | 1.67 | 74.7 | 32.9 | | | PaLM WMT21 Facebook Submission (Tran et al., 2021) | 1.31† | 76.9 | 41.9 | | | Google Trans. | 1.71 | 76.4 | 40.9 | | | WMT-full random | 2.38 | 74.7 | 38.3 | | | WMT-full kNN | 3.03 | 73.8 | 35.4 | | | WMT-dev random | 1.92 | 75.9 | 38.0 | | | high-end random | 1.89 | 75.8 | 38.8 | | | de → en | PaLM | | | | | (a) German→English (nt2021). | All MQM results labelled with † are significantly better than all other systems based on | | | | (a) German→English (nt2021). All MQM results labelled with † are significantly better than all other systems based on PERM-BOTH pair-wise significance testing (Koehn, 2004) with p = 0.05. (b) Chinese→English (nt2021). All MQM results labelled with † are significantly better than all other systems based on PERM-BOTH pair-wise significance testing (Koehn, 2004) with p=0.05. | LP | System | MQM ↓ | BLEURT ↑ | BLEU ↑ | |---------------------------------------------|------------------------------------------------------|---------|------------|----------| | WMT21 WeChat Submission (Zeng et al., 2021) | 2.47† | 66.6 | 36.9 | | | Google Trans. | 3.23 | 65.0 | 36.2 | | | WMT-full random | 4.35 | 62.2 | 28.6 | | | WMT-full kNN | 5.06 | 60.7 | 28.5 | | | WMT-dev random | 3.24 | 64.1 | 29.2 | | | high-end random | 3.70 | 63.9 | 29.6 | | | en → zh | PaLM WMT21 Borderline Submission (Wang et al., 2021) | 3.11 | 70.0 | 33.4 | | Google Trans. | 3.12 | 69.5 | 32.2 | | | zh → en | WMT-full random | 3.95 | 67.2 | 25.8 | | WMT-full kNN | 4.06 | 65.8 | 23.8 | | | WMT-dev random | 3.60 | 67.5 | 25.3 | | | high-end random | 3.89 | 67.7 | 25.1 | | | PaLM | | | | | Table 4: Translation results for all language pairs. Values for random selection are the BLEURT median of 5 runs. | LP | System | BLEURT ↑ | BLEU ↑ | |-----------------|--------------------|------------------------------|----------| | Google Trans. | 76.5 | 45.7 | | | WMT-full random | 75.9 | 42.3 | | | WMT-full kNN | 75.3 | 41.8 | | | WMT-dev random | 75.4 | 41.9 | | | high-end random | 75.2 | 38.6 | | | en → fr | PaLM Google Trans. | 77.7 | 43.2 | | WMT-full random | 77.7 | 42.7 | | | WMT-full kNN | 77.3 | 41.2 | | | WMT-dev random | 77.2 | 42.1 | | | high-end random | 77.6 | 40.4 | | | fr → en | PaLM | (c) French→English (nt2014). | | LP Sev. Category PaLM SOTA de → en Major Omission 51 19 en → de Major Omission 26 7 zh → en Major Omission 109 42 en → zh Major Omission 80 46 de → en Minor Awkward 73 81 en → de Minor Awkward 166 144 zh → en Minor Awkward 205 284 en → zh Minor Awkward 115 142 tences with the largest BLEURT difference between the two systems. Table 14a in Appendix F shows an example where the kNN system correctly retrieves relevant translation examples in the football domain, guiding PaLM to produce a better translation than the random selection system. This contrasts with the example in Table 14b, where the retrieved source sentences are also from the relevant domain, but all have alignment errors, causing PaLM to generate hallucinated output. In general, random selection is also prone to landing on alignment errors, but as each prompt is selected independently, the odds that all examples will be errors are low. An informal analysis of kNN examples indicates that if one non-parallel prompt is selected, the others also tend to be of poor quality, perhaps due to corpus alignment errors that are concentrated in particular documents or topics. Since kNN matches only on the source side, it is not robust to this noise. ## 6.2 Example Translations Example translations comparing PaLM and SOTA systems for German→English and English→Chinese are given in Appendix 6.2, in Table 15 and Table 16, respectively. We compared the translations of both systems and chose examples that are short, but include the most frequent patterns that we observed also in longer translations. In general, PaLM's translations are less literal when compared to supervised NMT systems. Even though this is one of the strengths of PaLM, it occasionally misses some important information in the source or hallucinates facts not present in the source sentence. The supervised models on the other hand are faithful to the source; this reduces the risk of omission and addition errors, but occasionally leads to translations that are not natural in the target language (e.g. translating street names or using the wrong time format). These findings are in line with the MQM results presented in section 5.2. | 2021 | |--------| | Year | LP | % Clean | |---------|---------|-----------| | 2014 | fr → en | 69.2 | | en → fr | 93.6 | | | 2016 | de → en | 80.3 | | en → de | 97.3 | | | en → de | 99.6 | | | en → zh | 99.7 | | | de → en | 97.9 | | | zh → en | 98.1 | | ## 6.3 Overlap Of Test And Training Data One major change with respect to Chowdhery et al. (2022) is our use of more recent WMT test sets, which are unlikely to overlap with PaLM's training data.9 We test this hypothesis using the technique from Chowdhery et al. (2022), which involves measuring high-order n-gram matches; specifically, we measure 15-gram overlap as tokenized by the mBERT tokenizer (Devlin et al., 2019).10 For test sequences with fewer than 15 tokens, we consider them overlapping if the complete sequence is found as a subsequence of a training example. We report the degree of overlap by showing the percentage of original test examples that survive in the clean test set after removing overlap in Table 6. This confirms that the older French→English and German→English sets have substantial overlap with PaLM's training data, while the newer test sets, whether into or out of English, have much smaller overlapping portions. Chowdhery et al. (2022) also measure the effect of test-set overlap on translation quality, comparing scores on the original test set to the clean set with overlapping examples removed. In section H we 9Here we measure target-side overlap only; we assume there is no substantial parallel data in PaLM's training corpus, and therefore no substantial parallel overlap. 10We selected the mBERT tokenizer, as opposed to the PaLM's sentence-piece tokenizer, because it decouples the measurement of overlap from the model under test. report similar scores for the older test sets, and extend the analysis to calibrate the effect of overlap on MT evaluation, by comparing to an overlap-free off-the-shelf system. ## 7 Conclusion We perform a careful assessment of the sentencelevel MT capabilities of PaLM, which we compare to SOTA and a current off the shelf (COTS) MT system for three high-resource languages—German, Chinese, and French—into and out of English, using the latest test sets from WMT. We chose to focus on a small set of high-resource language pairs in order to test the claims of the original PaLM paper, which are most striking for these pairs. The time and expense of performing high-quality human evaluations precluded a broader investigation. Comparing kNN and random strategies for selecting 5-shot translation examples to instantiate fixed prompt templates, we find that kNN's potential advantage in identifying examples relevant to the source sentence is outweighed by its susceptibility to corpus noise. Choosing examples randomly from small, high-quality pools works well, and performance appears to be independent of the domain and translation style of the pool, suggesting that example quality is the most important factor. Using both the BLEURT metric and MQM human evaluations, we show that PaLM's performance, while very impressive for a system never deliberately exposed to parallel text, still significantly lags that of competition-grade SOTA systems on recent WMT test sets, and to a lesser extent the performance of COTS systems as well. This contrasts with some of the findings of Chowdhery et al. (2022). As in that work, we find that performance into English is somewhat better than the reverse direction. Finally, we perform an extensive analysis of the characteristics of PaLM's MT output, notably finding that in all languages we tested it tends to be creative and fluent but prone to omissions and other accuracy errors; broadly speaking, it matches the fluency but lags the accuracy of conventional NMT. In future work we look forward to testing PaLM on document-level translation tasks, unleashing its formidable capacity for leveraging long contexts. We would also like to explore prompt tuning methods that are more sophisticated than the hard-prompt setting we adopted for this paper, particularly to see if these might offer a way to tighten up PaLM's MT accuracy without destroying its impressive ability to generate highly-fluent text. ## Limitations As we use only a small number of language pairs, it is not clear how general our conclusions are; in particular, they pertain only to languages that are well represented in PaLM's training corpus, and only to translation into and out of English. Our restriction to independent sentence-level translations may have caused us to underestimate PaLM's true capabilities, since some of the accuracy problems we observed might be considered less severe in the context of whole-document translation where less literal translations are more typical. Our exploration of prompting barely scratches the surface of the many methods that have been proposed for adapting LLMs to particular tasks, and we may have missed a technique that produces higher-quality translations than we observed. Finally, the human evaluation we rely on to provide our most accurate results is necessarily subjective, and if we were to have carried out the evaluation with different raters and a different methodology, our conclusions might well have been different. ## Ethical Considerations Working with large language models comes with many ethical concerns that are discussed in detail in Brown et al. (2020) and Chowdhery et al. (2022). There, MT is often one task of many, while we focus on the question of proper example selection for few-shot prompting of MT, which adds a few specific concerns. Our conclusion that prompt quality is important could lead one to build a system with prompts drawn from a small set of trusted sources; indeed, our high-end set is one such example of this. In such a scenario, this small source will have an outsized impact on the output of the translation system, and one must be careful to manage issues of attribution and intellectual property. Furthermore, an editorial choice defining high-quality language can potentially reduce quality for groups and topics not typically discussed in this style (Gururangan et al., 2022). Finally, by highlighting the power of few-shot examples, one might be tempted to turn example selection over to the users of a system. There, special steps must be taken to avoid exposing users to biased or toxic outputs, which may be triggered by unconstrained prompting (Gehman et al., 2020; Costa-jussà et al., 2022). ## References Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, and Marjan Ghazvininejad. 2022. Incontext examples selection for machine translation. arXiv preprint arXiv:2212.02437. Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondˇrej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, and Marcos Zampieri. 2021. Findings of the 2021 conference on machine translation (WMT21). In *Proceedings of the Sixth Conference on Machine* Translation, pages 1–88, Online. Association for Computational Linguistics. Anonymous. 2023. Does gpt-3 produces less literal translations? Anonymous preprint under review. Stephen H Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, Nihal V Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, et al. 2022. Promptsource: An integrated development environment and repository for natural language prompts. arXiv preprint arXiv:2202.01279. Rachel Bawden and François Yvon. 2023. Investigating the translation performance of a large multilingual language model: the case of bloom. arXiv preprint arXiv:2303.01911. Eleftheria Briakou, Colin Cherry, and George Foster. 2023. Searching for needles in a haystack: On the role of incidental bilingualism in palm's translation capability. *arXiv preprint arXiv:2305.10266*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, and Sebastian Ruder. 2020. Rethinking embedding coupling in pre-trained language models. *arXiv preprint:2010.12821*. Marta R. Costa-jussà, Eric Smith, Christophe Ropers, Daniel Licht, Javier Ferrando, and Carlos Escolano. 2022. Toxicity in multilingual machine translation at scale. *arXiv preprint arXiv:2210.03070*. Daniel Deutsch, Rotem Dror, and Dan Roth. 2021. A statistical analysis of summarization evaluation metrics using resampling methods. *Transactions of the* Association for Computational Linguistics, 9:1132– 1146. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Markus Freitag, Isaac Caswell, and Scott Roy. 2019. APE at scale and its implications on MT evaluation biases. In *Proceedings of the Fourth Conference on* Machine Translation (Volume 1: Research Papers), pages 34–44, Florence, Italy. Association for Computational Linguistics. Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021a. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474. Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondˇrej Bojar. 2021b. Results of the WMT21 metrics shared task: Evaluating metrics with expertbased human evaluations on TED and news domain. In *Proceedings of the Sixth Conference on Machine* Translation, pages 733–774, Online. Association for Computational Linguistics. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830. Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Fangxiaoyu Feng, Melvin Johnson, and Orhan Firat. 2023. The unreasonable effectiveness of few-shot learning for machine translation. *arXiv preprint arXiv:2302.01398*. Xavier Garcia and Orhan Firat. 2022. Using natural language prompts for machine translation. *arXiv* preprint arXiv:2202.11822. Shivam Garg, Dimitris Tsipras, Percy Liang, and Gregory Valiant. 2022. What can transformers learn in-context? a case study of simple function classes. arXiv preprint arXiv:2208.01066. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 3356–3369, Online. Association for Computational Linguistics. Marjan Ghazvininejad, Hila Gonen, and Luke Zettlemoyer. 2023. Dictionary-based phrase-level prompting of large language models for machine translation. arXiv preprint arXiv:2302.07856. Nuno M Guerreiro, Duarte Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, Pierre Colombo, and André FT Martins. 2023. Hallucinations in large multilingual translation models. arXiv preprint arXiv:2303.16104. Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020. Accelerating large-scale inference with anisotropic vector quantization. In International Conference on Machine Learning. Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, and Noah A. Smith. 2022. Whose language counts as high quality? measuring language ideologies in text data selection. *arXiv preprint* arXiv:2201.10474. Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. 2021. WARP: Word-level Adversarial ReProgramming. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4921–4933, Online. Association for Computational Linguistics. Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, and Xing Wang. 2023. Exploring humanlike translation strategy with large language models. arXiv preprint arXiv:2305.04118. Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, and Hany Hassan Awadalla. 2023. How good are gpt models at machine translation? a comprehensive evaluation. arXiv preprint arXiv:2302.09210. Yutai Hou, Hongyuan Dong, Xinghao Wang, Bohan Li, and Wanxiang Che. 2022. Metaprompting: Learning to learn better prompts. arXiv preprint arXiv:2209.11486. Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. 2023. Is chatgpt a good translator? a preliminary study. *arXiv preprint* arXiv:2301.08745. Alex Jones, Isaac Caswell, Ishank Saxena, and Orhan Firat. 2023. Bilex rx: Lexical data augmentation for massively multilingual machine translation. arXiv preprint arXiv:2303.15265. Marzena Karpinska and Mohit Iyyer. 2023. Large language models effectively leverage document-level context for literary translation, but critical errors persist. *arXiv preprint arXiv:2304.03245*. Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In *9th International Conference on Learning Representations, ICLR 2021,* Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In *Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing*, pages 388– 395, Barcelona, Spain. Association for Computational Linguistics. Sawan Kumar and Partha Talukdar. 2021. Reordering examples helps during priming-based few-shot learning. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 4507–4518. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2022a. Probing via prompting. *arXiv preprint* arXiv:2207.01736. Junyi Li, Tianyi Tang, Jian-Yun Nie, Ji-Rong Wen, and Xin Zhao. 2022b. Learning to transfer prompts for text generation. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human* Language Technologies, pages 3506–3518, Seattle, United States. Association for Computational Linguistics. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–4597, Online. Association for Computational Linguistics. Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022a. Few-shot parameter-efficient finetuning is better and cheaper than in-context learning. arXiv preprint arXiv:2205.05638. Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022b. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Arle Lommel, Hans Uszkoreit, and Aljoscha Burchardt. 2014. Multidimensional Quality Metrics (MQM) : A Framework for Declaring and Describing Translation Quality Metrics. *Tradumàtica*, pages 0455– 463. Hongyuan Lu, Haoyang Huang, Dongdong Zhang, Haoran Yang, Wai Lam, and Furu Wei. 2023. Chainof-dictionary prompting elicits translation in large language models. *arXiv preprint arXiv:2305.06575*. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland. Association for Computational Linguistics. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? *arXiv* preprint arXiv:2202.12837. Yasmin Moslem, Rejwanul Haque, and Andy Way. 2023. Adaptive machine translation with large language models. *arXiv preprint arXiv:2301.13294*. Ajay Patel, Bryan Li, Mohammad Sadegh Rasooli, Noah Constant, Colin Raffel, and Chris Callison-Burch. 2022. Bidirectional language models are also few-shot learners. *arXiv preprint* arXiv:2209.14500. Jonathan Pilault, Xavier Garcia, Arthur Bražinskas, and Orhan Firat. 2023. Interactive-chainprompting: Ambiguity resolution for crosslingual conditional generation with interaction. *arXiv* preprint arXiv:2301.10309. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In *Extended Abstracts of the* 2021 CHI Conference on Human Factors in Computing Systems, pages 1–7. Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269. Andrea Schioppa, Xavier Garcia, and Orhan Firat. 2023. Cross-lingual supervision improves large language models pre-training. arXiv preprint arXiv:2305.11778. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7881–7892, Online. Association for Computational Linguistics. Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222– 4235. Chandan Singh, John X Morris, Jyoti Aneja, Alexander M Rush, and Jianfeng Gao. 2022. Explaining patterns in data with language models via interpretable autoprompting. arXiv preprint arXiv:2210.01848. Hendrik Strobelt, Albert Webson, Victor Sanh, Benjamin Hoover, Johanna Beyer, Hanspeter Pfister, and Alexander M Rush. 2022. Interactive and visual prompt engineering for ad-hoc task adaptation with large language models. *IEEE transactions on visualization and computer graphics*. Chau Tran, Shruti Bhosale, James Cross, Philipp Koehn, Sergey Edunov, and Angela Fan. 2021. Facebook AI's WMT21 news translation task submission. In Proceedings of the Sixth Conference on Machine Translation, pages 205–215, Online. Association for Computational Linguistics. Josef Valvoda, Yimai Fang, and David Vandyke. 2022. Prompting for a conversation: How to control a dialog model? *arXiv preprint arXiv:2209.11068*. Longyue Wang, Mu Li, Fangxu Liu, Shuming Shi, Zhaopeng Tu, Xing Wang, Shuangzhi Wu, Jiali Zeng, and Wen Zhang. 2021. Tencent translation system for the WMT21 news translation task. In Proceedings of the Sixth Conference on Machine Translation, pages 216–224, Online. Association for Computational Linguistics. Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. 2023. Document-level machine translation with large language models. *arXiv preprint* arXiv:2304.02210. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Xianfeng Zeng, Yijin Liu, Ernan Li, Qiu Ran, Fandong Meng, Peng Li, Jinan Xu, and Jie Zhou. 2021. WeChat neural machine translation systems for WMT21. In Proceedings of the Sixth Conference on Machine Translation, pages 243–254, Online. Association for Computational Linguistics. Biao Zhang, Barry Haddow, and Alexandra Birch. 2023. Prompting large language model for machine translation: A case study. arXiv preprint arXiv:2301.07069. Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Lingpeng Kong, Jiajun Chen, Lei Li, and Shujian Huang. 2023. Multilingual machine translation with large language models: Empirical results and analysis. *arXiv preprint arXiv:2304.04675*. ## Appendices A Prompt Exploration As preliminary experiments we tried different prompting templates: Language This is the prompt template used in the paper (see Section 3). It prepends the examples with the corresponding language name in English. | # shots | | | | | | |-----------|------|------|------|------|------| | Prompt | 0 | 1 | 2 | 5 | 10 | | Language | 63.9 | 69.1 | 71.7 | 73.6 | 74.4 | | Codes | 59.0 | 68.5 | 71.2 | 73.4 | 74.1 | | Header | 72.4 | 69.1 | 70.7 | 73.4 | 74.1 | | Textual | 36.9 | 67.5 | 71.8 | 73.0 | 73.7 | | Deutsch | 72.6 | 70.8 | 71.9 | 73.5 | 74.1 | | None | 3.2 | 38.5 | 59.6 | 73.0 | 74.1 | Table 7: BLEURT results with different prompt templates and number of prompts for the English → German translation directions. The prompt examples were randomly selected. The median of 5 runs are shown. Codes Like "Language", but instead of full English names, two-letter languages codes are used (e.g. "en", "de"). Header Like "Language", but the header "Translate following sentences:" is added. Textual A textual request for translating a sentence: "Translate Xn from English into German: Yn", where Xn and Yn are the translation examples, as in Section 3. The source sentence X is given with the same template, but without specifying any translation. Deutsch Like "Language", but the language names are given in German ("Englisch", "Deutsch"). None No added text. Source and target examples are just input one after the other. As shown in Table 7, the choice of a prompting strategy has a crucial impact when the number of shots is low, but the effect is reduced when we increase the number of examples shown. The number of examples also has a significant impact on translation quality. We chose to work with 5 examples, as there are diminishing returns when increasing the number of prompts, and choosing a higher number has additional practical implications (e.g. possibly exceeding the maximum input length). ## B High-End Pool Table 9 describes the high-end pool. All listed articles were manually downloaded in June–August 2022, and semi-automatically divided into bilingual paragraphs. Our high-end pool consists of all paragraphs from all articles. The domain breakdown for each language pair is shown in Table 8. | Proportion | | | | |--------------|---------|---------|---------| | Genre | en ↔ de | en ↔ fr | en ↔ zh | | biography | 31% | 20% | - | | business | - | - | 15% | | commentary | 25% | 10% | 16% | | culture | - | 44% | 14% | | fashion | 16% | - | - | | food | - | 8% | - | | news | 4% | 18% | 43% | | obituary | 24% | - | 13% | Table 8: Genre distributions for the high-end pool. ## C Variability Of Random Runs Table 10 shows the automatic scores for random runs for the German→English language pair. It can be observed that the range of scores is quite small, less than 0.5 BLEURT points for all language directions. For both directions, the use of WMT-dev, as opposed to WMT-full, for the random pool reduces the observed range in BLEURT by at least 0.1. ## D Detailed Mqm **Scores** Table 11 presents MQM scores for PaLM WMTdev random and SOTA systems in the four language pairs evaluated, along with the breakdown of the scores into their Accuracy and Fluency components. Table 12 presents detailed MQM error counts for PaLM WMT-dev random and SOTA systems in en→de and de→en. ## E Significance Numbers We calculate pairwise significance numbers based on PERM-BOTH pair-wise significance testing (Koehn, 2004; Deutsch et al., 2021). Results can be seen in Table 13. ## F Example Prompts Tables 14a and 14b show prompt examples where kNN and random selection do better, respectively, as described in section 6.1. ## G Example Translations Tables 15 and 16 show example translations for German→English and English→Chinese as described in section 6.2. ## H Overlap Analysis Chowdhery et al. (2022) show BLEU differences between clean and original test sets, and provide some evidence that differences are not due to memorization, but it still isn't clear how much overlap actually inflates a model's score. We directly quantify the effect of train-test overlap on decision making by comparing 5-shot PaLM to Google Translate (GT)11 on our two sets with substantial overlap, testing under original, clean and ¬clean (including only overlapping examples) scenarios. BLEU and BLEURT scores for the two systems and three test sets are shown in Table 17. We can see that directly comparing original and clean results for a single system conflates differences from overlap with those from the increased difficulty of the clean subset. For example, for de→en BLEU, comparing PaLM's original and clean scores gives an overlap gap of 2.6-BLEU, in line with the gaps reported by Chowdhery et al. (2022). However, the non-overlapping GT system also has lower scores on the clean set, indicating that it may simply be more difficult.12 It's more useful to see that the original test indicated a 1.5-BLEU difference between the two systems, while the clean test indicates a 2.0-BLEU difference, meaning PaLM benefited from overlap by 0.5 BLEU in this comparison. The fully overlapping ¬clean further distorts the difference between the two systems: the true (clean) delta of 2.0 BLEU shrinks to only 0.4. Trends for fr→en are similar: though PaLM and GT are very close according to the original test set, the clean set reveals a delta of 0.8 BLEU. Interestingly, BLEURT may be less sensitive to overlap, with the original-versus-clean deltas hovering around 0 for fr→en regardless of the test subset, and de→en showing that PaLM benefits from an overlap bonus of only 0.3 BLEURT. In summary, overlap between the target side of the test data and the LLM training data can have an impact on both BLEU and BLEURT scores, altering the delta between two systems where one benefits from overlap and another does not by up to 0.7 11We chose Google Translate for comparison because it is non-trivial to build a SOTA baseline for older WMT scenarios. Through personal communication, we understand that Google Translate has no overlap with WMT test sets. 12The difference in difficulty between Clean and ¬Clean for systems without overlap is not easily explained. A common difficulty indicator is sentence length, but average lengths, as measured by number of SACREBLEU tokens per sentence, are similar between Clean and ¬Clean for both de→en (23.8 versus 23.0) and fr→en (21.1 versus 22.7). 15419 en ↔ de en ↔ zh en ↔ fr | LP | paras | words | URL | |------|---------|---------------------------------------------------------------------------------------------------------------------------|-------| | 4 | 255 | www.deutschland.de/en/news/new-supercomputer-in-operation | | | 4 | 208 | www.deutschland.de/en/news/patents-germany-ranks-second | | | 11 | 609 | www.deutschland.de/en/news/syrian-swimmer-yusra-mardini-provides-message-of-hope-at-olympics | | | 24 | 1787 | www.zeit.de/kultur/2019-12/schoenheit-fotografie-aesthetik-rankin-mitch-epstein-roger-ballen-english | | | 28 | 2817 | www.zeit.de/kultur/2020-07/ desinformation-peter-pomerantsev-social-media-regulation-democracy/komplettansicht | | | 60 | 2961 | www.zeit.de/politik/ausland/2020-11/ polarization-us-elections-democrats-republicans-donald-trump-family-division-english | | | 21 | 2757 | www.zeit.de/politik/deutschland/2015-11/helmut-schmidt-obituary-english/komplettansicht | | | 30 | 1323 | cn.nytimes.com/asia-pacific/20220509/taiwan-china-covid/dual | | | 31 | 1317 | cn.nytimes.com/china/20220427/brownface-barrack-okarma-1968-hong-kong/dual | | | 6 | 780 | cn.nytimes.com/china/20220401/china-cheng-lei-australia/dual | | | 13 | 609 | cn.nytimes.com/china/20220421/china-eastern-crash-report/dual | | | 23 | 1520 | cn.nytimes.com/china/20220412/china-russia-propaganda/dual | | | 22 | 1373 | cn.nytimes.com/business/20220621/china-housing-real-estate-economy/dual | | | 13 | 478 | cn.nytimes.com/china/20220415/shanghais-food-crisis-prompts-residents-in-beijing-to-stockpile-supplies/dual | | | 26 | 1202 | cn.nytimes.com/obits/20220418/peng-ming-min-dead | | | 6 | 843 | https://cn.nytimes.com/world/20220330/solomon-islands-china/dual | | | 6 | 846 | france-amerique.com/a-france-of-many-colors | | | 10 | 1177 | france-amerique.com/alice-guy-cinema-forgotten-pioneer | | | 10 | 1237 | france-amerique.com/americanization-is-back-did-it-ever-go-away | | | 8 | 666 | france-amerique.com/a-propos-a-hard-hitting-french-american-podcast | | | 3 | 457 | france-amerique.com/camille-laurens-a-womans-life | | | 8 | 970 | france-amerique.com/football-and-soccer | | | 6 | 377 | france-amerique.com/france-united-states-naval-battle-and-diplomatic-crisis | | | 7 | 615 | france-amerique.com/jeanne-damas-all-the-women-in-her-city | | | 6 | 631 | france-amerique.com/guedelon-building-a-castle-by-hand | | | 11 | 811 | france-amerique.com/raphael-francois-culinary-director | | | 12 | 874 | france-amerique.com/thierry-mugler-provocateur | | | 7 | 934 | france-amerique.com/winds-of-change-over-democracy | | | 4 | 255 | www.deutschland.de/en/news/new-supercomputer-in-operation | | Table 9: Sizes and provenance for articles in the high-end prompt pool. The *words* column contains the number of English words (whitespace-separated character sequences) in each article. | BLEURT | BLEU | | | | | | | | | | | |----------|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | LP | Pool | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | | en → de | full | 71.9 | 71.9 | 71.6 | 71.8 | 71.9 | 32.4 | 32.8 | 32.1 | 32.9 | 32.9 | | dev | 74.7 | 74.7 | 74.7 | 74.9 | 74.8 | 32.7 | 32.6 | 32.6 | 32.6 | 32.8 | | | de → en | full | 74.8 | 75.0 | 74.8 | 74.5 | 74.7 | 38.4 | 38.5 | 38.2 | 38.0 | 38.3 | | dev | 75.9 | 75.9 | 76.0 | 75.7 | 75.9 | 38.0 | 38.0 | 38.0 | 38.3 | 38.2 | | Table 10: Results for random runs for the German→English translation direction. | PaLM | SOTA | | | | | | |---------|-----------|----------|-------|-----------|----------|------| | MQM ↓ | Accuracy↓ | Fluency↓ | MQM ↓ | Accuracy↓ | Fluency↓ | | | en → de | 1.58 | 1.12 | 0.46 | 1.18 | 0.81 | 0.37 | | en → zh | 3.24 | 2.69 | 0.52 | 2.47 | 1.96 | 0.48 | | de → en | 1.92 | 1.43 | 0.48 | 1.31 | 0.88 | 0.43 | | zh → en | 3.60 | 2.97 | 0.62 | 3.11 | 2.43 | 0.68 | Table 11: MQM scores for PaLM WMT-dev random and SOTA systems, split into Accuracy and Fluency. Accuracy scores include "Accuracy/*," "Terminology/*," and "Non-translation!" error categories. Fluency scores include "Fluency/*," "Style/*," and "Locale/*" categories. The "Other" error category is not included in Accuracy or Fluency scores. | en → de | de → en | | | | | | | | |------------------|-----------|-------|-------|-------|-------|-------|-------|-----| | PaLM | SOTA | PaLM | SOTA | | | | | | | Major | minor | Major | minor | Major | minor | Major | minor | | | Non-translation! | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | | Acc/Mistrans. | 103 | 89 | 79 | 67 | 73 | 41 | 61 | 49 | | Acc/Omission | 26 | 6 | 7 | 3 | 51 | 33 | 19 | 11 | | Acc/Addition | 1 | 6 | 3 | 1 | 10 | 2 | 0 | 3 | | Acc/Untranslated | 12 | 4 | 14 | 0 | 6 | 7 | 5 | 8 | | Ter/Inappr | 0 | 7 | 0 | 7 | 17 | 21 | 12 | 15 | | Ter/Incons | 0 | 4 | 0 | 4 | 1 | 5 | 1 | 7 | | Fl/Grammar | 0 | 133 | 0 | 100 | 18 | 41 | 5 | 38 | | Fl/Register | 0 | 2 | 0 | 3 | 0 | 0 | 0 | 0 | | Fl/Inconsistency | 0 | 2 | 0 | 5 | 0 | 2 | 0 | 2 | | Fl/Punctuation | 0 | 260 | 0 | 31 | 1 | 38 | 2 | 29 | | Fl/Spelling | 0 | 12 | 0 | 16 | 0 | 16 | 0 | 17 | | Fl/Encoding | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | | St/Awkward | 0 | 166 | 0 | 144 | 13 | 73 | 16 | 81 | | Locale/Date | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 | | Locale/Name | 0 | 0 | 0 | 0 | 2 | 8 | 2 | 5 | | Locale/Time | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 5 | | Source Error | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | | Other | 0 | 0 | 1 | 0 | 0 | 3 | 0 | 3 | | Total Errors | 142 | 673 | 102 | 362 | 189 | 296 | 123 | 281 | Table 12: MQM error counts for PaLM WMT-dev random and SOTA systems for en→de and de→en. Abbreviations are as follows: "Acc": Accuracy, "Fl": Fluency, "St": Style, "Ter": Terminology, "Inappr": Inappropriate for context, "Incons": Inconsistent. | SOTA | GTrans. | WMT-dev | high-end | WMT-full | | | | |-----------------|-----------|-----------|------------|------------|-------|-------|------| | random | random | random | kNN | | | | | | MQM | 1.31 | 1.71 | 1.92 | 1.89 | 2.38 | 3.03 | | | SOTA | - | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | | Google Trans. | - | - | 0.073 | 0.124 | 0.0 | 0.0 | | | WMT-dev random | - | - | - | 0.588 | 0.001 | 0.0 | | | high-end random | - | - | - | - | 0.001 | 0.0 | | | WMT-full random | - | - | - | - | - | 0.001 | | | de→en | MQM | 1.18 | 1.59 | 1.58 | 1.67 | 1.90 | 1.93 | | SOTA | - | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | | Google Trans. | - | - | 0.512 | 0.225 | 0.003 | 0.003 | | | WMT-dev random | - | - | - | 0.175 | 0.001 | 0.0 | | | high-end random | - | - | - | - | 0.021 | 0.01 | | | WMT-full random | - | - | - | - | - | 0.372 | | | en→de | MQM | 3.11 | 3.12 | 3.60 | 3.89 | 3.95 | 4.06 | | SOTA | - | 0.447 | 0.0 | 0.0 | 0.0 | 0.0 | | | Google Trans. | - | - | 0.002 | 0.0 | 0.0 | 0.0 | | | WMT-dev random | - | - | - | 0.022 | 0.006 | 0.003 | | | high-end random | - | - | - | - | 0.343 | 0.168 | | | WMT-full random | - | - | - | - | - | 0.281 | | | zh→en | MQM | 2.47 | 3.23 | 3.24 | 3.70 | 4.35 | 5.06 | | SOTA | - | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | | Google Trans. | - | - | 0.488 | 0.004 | 0.0 | 0.0 | | | WMT-dev random | - | - | - | 0.002 | 0.0 | 0.0 | | | high-end random | - | - | - | - | 0.0 | 0.0 | | | WMT-full random | - | - | - | - | - | 0.0 | | | en→zh | | | | | | | | BLEU or 0.3 BLEURT for a 20-30%-overlap. However, we should emphasize that the differences due to overlap are small overall, and certainly much smaller than expected if one looked only at the difference between original and clean scores. ## I Fixed Versus Random Prompts The results from section 5.2 indicate that random selection from small, high-quality prompt pools can work better than trying to customize prompts for specific inputs. In this section we investigate the effect of using a *single* high-quality prompt for all inputs, chosen using a maximum-likelihood criterion. For convenience, we carried out experiments on the high-end pool with 1-shot paragraph prompts. For each prompt in the pool, we computed the probability of a set of held-out high-end paragraphs when PaLM was conditioned on that prompt. We select the prompt that resulted in the highest probability for each language pair. Table 18 compares this method to random selection from the high-end pool. For all language pairs except Chinese→English, the fixed prompt does as well or better than the average performance over 5 random runs where a different prompt is selected for each input during each run. In Chinese→English, the prompt that ranked 5th according to the probability criterion also outperformed the random average, suggesting problems with our held-out set for that language pair. We conclude that using a single high-quality prompt can be a safer strategy than choosing a fresh randomly-selected prompt for each input. Model probability appears to be a reasonable criterion for judging quality, but we look forward to refining this heuristic in future work. | Source | "Wir haben die Pflichtaufgaben mit Meisterschaft und Pokal einfach hervorragend gemeistert. | | |-------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------| | Reference | "Quite simply, we have excellently mastered the necessary tasks for the Championship and the Cup. | | | Hyp | "We have simply mastered the tasks of the championship and the cup excellently. | | | Prompt 1 | German: Mit einer verstärkten Mannschaft holte die Mannschaft das Double aus Meisterschaft und Pokal. English: The decision paid off as the team achieved a league and cup double. | | | Prompt 2 | German: Darüber hinaus haben wir uns wichtige Meisterschaftspunkte im Kampf um den Vizetitel gesichert." English: We have furthermore secured some important championship points in the fight about the vice champion's title." | | | Prompt 3 | German: Bring deine Mannschaft durch alle Spiele der Europameisterschaft und gewinne den Pokal! English: Take your team all the way through the Euro Cup stages and lift the trophy! | | | Prompt 4 | German: So konnte er die französische Meisterschaft, den nationalen Pokal sowie den Supercup gewinnen. English: He helped the club to win the national championship and the Supercup. | | | Prompt 5 | German: Roter Stern gewinnt in jener Saison das Double von Meisterschaft und Pokal. English: Red Star won their sixth double in this season. | | | kNN RoBERTa | Hyp | "We have the compulsory tasks with mastery and cup simply superbly mastered. | | Prompt 1 | German: Im November letzten Jahres war ein einzelner Steinadler und auch Bussarde im Blick der Kamera. English: In November last year a solitary golden eagle and buzzards too appeared in web camera view. | | | Prompt 2 | German: Teile: Modesto-14th Street, Stanislaus, California, Kalifornien-Luft-Qualitätsindex English: | | | Random | | Share: Modesto-14th Street, Stanislaus, California, California Air Quality Index | | Prompt 3 | German: So viel kostet ein Hotel in Chester English: How much does a hotel in Chester cost? | | | Prompt 4 | German: "... alle Mitarbeiter, die alles tun, um hilfsbereit zu sein und sehr freundlich zu sein; köstliche Margaritas; Kolibris und Granatäpfel im Garten (sowie eine sehr freundliche Katze); Ein echtes Gefühl von Zuhause. " Aktionsangebot English: "... all staff, who go out of their way to be helpful and are extremely welcoming; delicious margaritas; hummingbirds and pomegranates in the garden (as well as a very friendly cat); a real home-from-home feeling. " | | | Prompt 5 | German: Gansevoort Land zum Verkauf English: Gansevoort Land for Sale (a) Example where kNN outperforms random selection. | | | Source | Frei von Drogen veröffentlichte Green mit der Peter Green Splinter Group einige Alben, bis sich die Band 2004 auflöste. | | | Reference | Free of drugs, Green and the Peter Green Splinter Group released various albums before the band split up in 2004. | | | Hyp | The band released their debut album, The Last of the Great Pretenders, in 2003. | | | Prompt 1 | German: Ab 1990 war er Sänger der Gruppe Talisman, mit der er sieben Studioalben veröffentlichte, bis sich die Band 2007 auflöste. English: From 1998 until his departure in 2007, he was the lead singer of the group Lonestar, which recorded seven studio albums on BNA Records during his tenure as lead vocalist. | | | Prompt 2 | German: 2001 veröffentlichte die Band unter dem Namen Glass die rockige Single Out Of Nowhere, verpasste die Charts und löste sich im Anschluss auf. English: Around this time he wrote and presented the ITV Network productions The Rock that Doesn't Roll and The Rock That Rolled Away. | | | Prompt 3 | German: Mit ihrer Band Ex Cops veröffentlichte sie zwei Alben, bevor sich die Band 2015 auflöste. English: Their new band released two EPs before signing to Apparition Records in 2011. | | | Prompt 4 | German: In seiner Jugend gründete David Haering die Punk-Band Side Effect, mit der er drei Alben und eine EP veröffentlichte. English: Peter Hajba and Alexander Brandon used OpenMPT to compose the soundtracks for Bejeweled 2, Bejeweled 3 and other PopCap games. | | | Prompt 5 | German: Nach der Veröffentlichung des Live-Albums Beast from the East 1988 trennten sich die Wege der Musiker, als Don Dokken die Band auflöste. English: In 1988, after the Monsters of Rock Tour and a further platinum album, Don Dokken decided to break up the band and they went on their separate ways. | | | kNN RoBERTa | Hyp | Free from drugs, Green released several albums with the Peter Green Splinter Group, until the band broke up in 2004. | | Prompt 1 | German: 250 gr/m2: eine Reihe merino intermedia, vielseitigkeit und schutz garantiert. English: 250 gr/m2: Range merino intermediate, versatility and guaranteed protection. | | | Prompt 2 | German: 127 Moo.3, Choeng Thale, Thalang, Phuket, Strand Bang Tao, Thailand (Karte anzeigen) | | | Random | | English: 127 Moo.3, Choeng Thale, Thalang, Phuket, Bang Tao Beach (Phuket), Thailand (Show map) | | Prompt 3 | German: Ich bin stolz, sagen zu können, dass Ihr Produkt mir die Größe verliehen hat, von der ich jahrelang geträumt habe. English: I am proud to say that your product has given me the size that I sought for all these years. | | | Prompt 4 | German: In der NBA Saison 2003–2004 schien es endlich besser für Garnett und die Timberwolves zu laufen. English: The 2002–03 season seemed to look up for the Timberwolves. | | | Prompt 5 | German: APC-Fhrancis Oliver Lopez extern natürliche Links English: APC-Fhrancis Oliver Lopez External Passing Juice (b) Example where random selection outperforms kNN. | | Table 14: Example prompts for random and kNN RoBERTa selection for the German→English translation direction. | Source | Wegen eines ersten offiziellen Corona-Verdachtsfalls im Land hat Nordkorea aus Furcht vor einer Katastrophe eine Großstadt komplett abgeriegelt. | |--------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------| | Reference | Due to a first official suspected case of coronavirus in the country, North Korea has completely shut down a major city out of fear of a disaster. | | SOTA | Due to a first official suspected case of coronavirus in the country, North Korea has completely sealed off a major city for fear of a catastrophe. | | PaLM | North Korea has completely sealed off a city of 300,000 people because of a first suspected case of coronavirus in the country, fearing a catastrophe. | | (a) PaLM generated two errors: (i) Makes the assumption that the major city has 300,000 citizens, (ii) Forgot to translate the | | | Source | Für mindestens drei Menschen endet die Tour mit dem Motorrad tödlich. | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------| | Reference | For at least three people, their bike ride ended in death. | | SOTA | The motorcycle tour ends fatally for at least three people. | | PaLM | At least three people die in motorcycle accidents. | | (b) The source mentions a single accident happening on a bike tour. PaLM refers to multiple accidents happening independently. Source Ein Zeuge hörte gegen 3.40 Uhr Geräusche in der Talstraße und lief in Richtung des Imbisses. Reference One witness heard noises on Talstraße around 3:40 am and ran in the direction of the snack stand. SOTA A witness heard noises in the valley road at around 3.40 a.m. and ran towards the snack bar. PaLM A witness heard noises in Talstraße at around 3:40 a.m. and ran towards the snack bar. (c) SOTA generates an overly-literal translation, resulting in copying the street name (Talstrasse) and using the wrong time | | Table 15: Example translations from newstest2021 German→English. PaLM translations are generated with the high-end prompt pool. These are typical of error patterns observed in the translation output. We also observed the same pattern when using WMT-dev as the prompt pool. In general, SOTA is more faithful to the source while PaLM generates less literal translations that occasionally miss some information from the source. (a) PaLM produces two errors: (i) translates a wrong player's name; (ii) adds extra information that the player received a raise in the swap deal. SOTA produces a perfect translation, but is much more literal than PaLM. | Source | French World Cup winner Dembele, who has struggled for game time at the Camp Nou, was recently linked with a move to PSG in a swap deal with Neymar. | |-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------| | Reference | 在诺坎普球场冲锋陷阵的法国世界杯冠军得主Dembele 最近通过与Neymar 交换转投PSG。 | | SOTA | 法国世界杯冠军登贝莱在诺坎普一直在为比赛时间而挣扎,最近他与内马尔交换转会巴黎圣日尔 曼。 | | PaLM | 法国世界杯冠军德容,在诺坎普的出场时间一直不多,最近被传与内马尔进行交换加钱转会 到PSG。 | (b) The source phrase "*September 11*" is translated literally by SOTA into a date, whereas PaLM produces a more appropriate translation by describing it as a terrorist attack. | Source | . . . in the wake of September 11, ASIO was given power to compulsorily question people for up to seven days in relation to terrorism offences. | |-----------|---------------------------------------------------------------------------------------------------------------------------------------------------| | Reference | . . .在911 事件之后,澳安全情报局有权对牵涉恐怖主义行为的人员进行为期最高7 天的强制性询 问。 | | SOTA | . . .在9月11日之后,安全情报组织被授权对与恐怖主义罪行有关的人进行长达7天的强制性讯问。 | | PaLM | . . .澳大利亚安全情报局在9·11恐怖袭击之后获得了强制询问人员的权力,可以在7天内就恐怖主义 罪行进行询问。 | Table 16: Example translations from newstest2021 English→Chinese. PaLM translations are generated with the WMT-dev prompt pool. We find SOTA to generate more literal translations than PaLM, but PaLM suffers from more omissions and mistranslations. | BLEU | BLEURT | | | | | | | | |---------------|----------|-----------------|------|-------|--------|------|-------|--------| | Data | %Clean | Method | Orig | Clean | ¬Clean | Orig | Clean | ¬Clean | | Google Trans. | 47.6 | 45.5 | 55.3 | 78.4 | 77.7 | 81.3 | | | | de → en 2016 | 80.3 | WMT-full Random | 46.1 | 43.5 | 54.9 | 77.3 | 76.3 | 81.5 | | Diff | 1.5 | 2.0 | 0.4 | 1.1 | 1.4 | -0.2 | | | | Google Trans. | 43.1 | 42.1 | 44.8 | 77.7 | 76.8 | 79.6 | | | | fr → en 2014 | 69.2 | WMT-dev Random | 43.0 | 41.3 | 45.4 | 77.7 | 76.9 | 79.5 | | Diff | 0.1 | 0.8 | -0.6 | 0.0 | -0.1 | 0.1 | | | Table 17: Comparison between Google Translate and 5-shot PaLM using three test sets: Orig. (original), Clean (overlapping examples removed) and ¬Clean (including only overlapping examples). We use Random instead of WMT-dev Random for de→en to avoid using the WMT 2021 development sets to prompt for the WMT 2016 test ("sampling from the future"). | BLEURT | | | | | |----------|-----------|------|------|------| | LP | Selection | min | avg | max | | en → de | fixed | 74.7 | | | | random | 74.5 | 74.7 | 75.0 | | | fixed | 76.3 | | | | | de → en | random | 75.6 | 75.8 | 75.9 | | en → zh | fixed | 64.7 | | | | random | 63.7 | 63.9 | 64.0 | | | fixed | 67.0 | | | | | zh → en | random | 67.3 | 67.5 | 67.7 | | en → fr | fixed | 75.5 | | | | random | 75.2 | 75.2 | 75.3 | | | fixed | 77.9 | | | | | fr → en | random | 77.4 | 77.6 | 77.6 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Unnumbered Limitations section immediately after Conclusion. ✓ A2. Did you discuss any potential risks of your work? Unnumbered Ethical Considerations section immediately after Limitations. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Sections 4, 5 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We did not discuss licenses in the paper, but we verified that our use of materials was permitted. We are not distributing artifacts. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We partially address this question in our Ethical Considerations section; as mentioned above, in general we ensured that our use of materials was permitted. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Sections 4, Appendix ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Sections 4, Appendix ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We provide partial answers to this question in sections 1 and 5. We are not authorized to provide full details. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5, Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5, 6, Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5, 6, Appendix ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 5, 6, Appendix ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We provide a cite to a paper that reports these instructions (Freitag et al, Experts, Errors, and Context, TACL 2021) ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? We use a contractor, and this information is not available to us. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
chen-etal-2023-exploring
Exploring Lottery Prompts for Pre-trained Language Models
https://aclanthology.org/2023.acl-long.860
Consistently scaling pre-trained language models (PLMs) imposes substantial burdens on model adaptation, necessitating more efficient alternatives to conventional fine-tuning. Given the advantage of prompting in the zero-shot setting and the observed performance fluctuation among different prompts, we explore the instance-level prompt and their generalizability.By searching through the prompt space, we first validate the assumption that for every instance, there is almost always a lottery prompt that induces the correct prediction from the PLM, and such prompt can be obtained at a low cost thanks to the inherent ability of PLMs.Meanwhile, it is shown that some strong lottery prompts have high performance over the whole training set, and they are equipped with distinguishable linguistic features. Lastly, we attempt to generalize the searched strong lottery prompts to unseen data with prompt ensembling method. Experiments are conducted on various types of NLP classification tasks and demonstrate that the proposed method can achieve comparable results with other gradient-free and optimization-free baselines.
## Exploring Lottery Prompts For Pre-Trained Language Models Yulin Chen1∗, Ning Ding1,2∗**, Xiaobin Wang**3, Shengding Hu2, Hai-Tao Zheng1,4†, Zhiyuan Liu2,5,6†, **Pengjun Xie**3 1Shenzhen International Graduate School, Tsinghua University 2DCST, Tsinghua University 3Alibaba Group, 4Pengcheng Laboratory, Shenzhen, 5BNRIST, IAI, Tsinghua University 6Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou {yl-chen21, dingn18}@mails.tsinghua.edu.cn {zheng.haitao}@sz.tsinghua.edu.cn, {liuzy}@tsinghua.edu.cn ## Abstract Consistently scaling pre-trained language models (PLMs) imposes substantial burdens on model adaptation, necessitating more efficient alternatives to conventional fine-tuning. Given the advantage of prompting in the zero-shot setting and the observed performance fluctuation among different prompts, we explore the instance-level prompt and their generalizability. By searching through the prompt space, we first validate the assumption that for every instance, there is almost *always* a lottery prompt that induces the correct prediction from the PLM, and such prompt can be obtained at a low cost thanks to the inherent ability of PLMs. Meanwhile, we find that some strong lottery prompts have high performance over the whole training set, and they are equipped with distinguishable linguistic features. Lastly, we attempt to generalize the searched strong lottery prompts to unseen data with prompt ensembling method without any parameter tuning. Experiments are conducted on various types of NLP classification tasks and demonstrate that the proposed method can achieve comparable results with other gradient-free and optimization-free baselines. ## 1 Introduction Since pre-trained language models (PLMs) became the de-facto standard in modern NLP researches (Devlin et al., 2019; Liu et al., 2019; Han et al., 2021a), the pretraining-finetuning paradigm has been prevailing until recent years when models keep scaling (Radford et al., 2019; Brown et al., 2020; Rae et al., 2021) and become too expensive to be optimized. To this end, researchers are actively seeking more effective strategies that require little or even no optimization to harness PLMs. Among these exploratory studies of advanced model adaptation, prompting (Brown et al., ∗equal contributions †corresponding authors 2020; Schick et al., 2020; Schick and Schütze, 2021a; Gao et al., 2021) is gaining popularity in the community, which uses additional context (prompts) to wrap input instances and trigger desired output tokens. Note that in this paper, the term "prompt" technically refers to the template that wraps the original input. In classification tasks, these tokens are further mapped to particular labels by a verbalizer. Such a paradigm is verified to be effective in a variety of downstream tasks, even when annotations are insufficient. Particularly, empirical evidence shows that coincidental prompts could achieve extraordinary performance in the zero-shot setting, i.e., no training examples are presented. For example, simple manual prompt can achieve an F1 score of over 60% on 46-class entity typing dataset (Ding et al., 2021a) and reaches 73% accuracy on DBpedia with 14 classes (Hu et al., 2021) in the zero-shot setting. Despite the promising performance of prompting, it is often accompanied by drastic fluctuations among different prompts (Zhao et al., 2021). Given the observed sensitivity and context-dependent nature of the prompting method, it is intuitive to assign distinct prompts to each instance to trigger the desired output. Intrigued by this intuition, we explore a bold hypothesis: Is it possible to find at least one instancelevel prompt that induces correct output for every data point (lottery prompt) in classification tasks without any optimization? We empirically show that after building an automatic searching procedure with reasonable searching space on 13 representative classification datasets of up to 66 classes, **the existence of such** lottery prompts can be validated (§ 2) . That is, the combination of just a few discrete tokens can make a PLM output correct labels for almost any classification data. This finding updates our 15428 recognition of the limit of prompted knowledge in PLMs and demonstrates a promising upper bound of the PLMs' inference capability. With the hypothesis verified, we conduct further analysis on the internal mechanisms and properties of lottery prompts to explore how the lottery prompts relate to model capability and how lottery prompts generalize to unseen data without any optimization. (1) We first find that the search cost of lottery prompts is low for most datasets (under 30 API calls), and could reflect task difficulty and model capacity (§ 3.1). Search success rate increases and search cost decreases for larger PLMs and PLMs pre-trained for more steps, demonstrating that lottery prompts are a unique consequence of the expanded model capacity, rather than a mere stroke of luck. (2) Among these lottery prompts, we also find that there are a number of "strong prompts" that perform non-trivially on the whole training set, and interpretable linguistic features can be identified among them (§ 3.2). Strong prompts demonstrate considerable potential to be generalized to unseen data, i.e., test dataset, of the current task. We develop a mutual-information-based prompt ensembling method and show that strong prompts could be effectively generalized to unseen data in an optimization-free manner (§ 4). Without any parameter update, the ensembling of strong prompts could achieve on-par or better performance with many competitive baselines. In summary, we validate the existence of lottery prompts and conduct an in-depth analysis of the properties of lottery prompts. We also show that by directly ensembling the strong prompts, prominent performance can be achieved on test data without any optimization. Our study points to the great potential of PLMs and is hoped to inspire future works in more efficient ways in searching and ensembling lottery prompts as an optimization-free adaptation of PLMs. ## 2 The Existence Of Lottery Prompts For Every Data Point Considering the extraordinary performance observed on zero-shot classification and the large variance brought by the prompt selection, we make an assumption as follows: Given a pre-trained language model and a classification dataset, for each instance, at least one lottery prompt exists that can induce the desired label from the PLM, without the ## Need To Update The Plm Parameters. To validate the assumption, we conduct experiments that attempt to find the lottery prompt for every data point on 13 classification tasks. Note that for different instances, the prompt may be different, and our goal is to verify the existence of such prompts in this experiment. ## 2.1 Overview And Setup Particularly, for every input instance in a classification task, we attempt to search through the prompt space and find a textual prompt that can make PLMs produce desired label words. We choose 13 datasets of various NLP tasks for assumption validation. Most of them come from GLUE benchmark (Wang et al., 2018), and others include Yelp Polarity (Zhang et al., 2015), SNLI (Bowman et al., 2015), AG's News (Zhang et al., 2015), DBpedia (Zhang et al., 2015), and Few-NERD (Ding et al., 2021b). SST-2 (Socher et al., 2013) and Yelp Polarity are datasets for binary sentiment classification. CoLA (Warstadt et al., 2019) is for acceptibility judgment of single sentence. SNLI, RTE (Wang et al., 2018), QNLI (Wang et al., 2018), WNLI (Levesque, 2011) and MNLI (Williams et al., 2018) target at language inference detection given a sentence pair. QQP (Iyer et al., 2017) and MRPC (Schick et al., 2020) are for paraphrase judgment. AG's News and DBpedia are used for text theme classification. Few-NERD is an entity typing dataset. As for prompt search space, 200 words with top frequency in English1are gathered and grouped according to part-of-speech tag with NLTK package (Loper and Bird, 2002) into nouns, verbs, prepositions, adjectives and adverbs. The designed prompt search space is the Cartesian product of three word sets T = NOUNS×VERBS×(PREP∪ ADJ∪ADV)× {<MASK>}, and *|T |* = 76725. The major concerns of such designing is to restrict the prompt space and to fit with common syntactic order of words to ensure prompt plausibility to some extent. As for verbalizers, we follow the standard design of previous works (Sun et al., 2022). We use RoBERTa-large (Liu et al., 2019) and GPT-2 (Radford et al., 2019) as the backbones. The specific prompt format and verbalizers used are shown in Appendix C. ## 2.2 The Searching Process For each dataset, we randomly sample 1000 instances from the training set as Xtrain = {(xi, yi)} and apply each prompt T ∈ T to each instance and use the PLM M to produce the prediction. Specifically, a prompt T composed of a noun, a verb and an adjective may be *"it was really"*. Applying it to an instance x:*"A fun movie."* will yeild the input text T(x):"A fun movie. it was really *<MASK>*". For each of such pair T(x) ∈ Xtrain × T , the score for each class can be obtained as $$o(x;T,{\mathcal{M}})={\mathrm{Softmax}}(\mathbf{V}({\mathcal{M}}(T(x)))),\quad(1)$$ where V denotes the projection from output logits over PLM vocabulary to the class label set. Specifically, to reduce the impact from the prompt, we use calibration (Zhao et al., 2021) to rescale the scores before making the final prediction. $$\begin{array}{l}q(T;{\cal M})=\mathrm{Softmax}({\sf V}({\cal M}(T(\cdot)))),\\ p(x;T,{\cal M})=\mathrm{Normal}\,iz\epsilon(\frac{o(x;T,{\cal M})}{q(T;{\cal M})}).\end{array}\tag{2}$$ T(·) means a wrapped input with empty string and q is the corresponding output probability over the label words. p is the final calibrated probability over the class labels. For every (x, y) ∈ Xtrain, we enumerate over each T ∈ T and see if the output yˆ = arg max p will give the correct prediction y. ## 2.3 Verification Of The Assumption Table 1 reports the basic searching results. Each instance x is considered correctly predicted if there exists T ∈ T such that y = arg max p. It is shown that for all datasets, a lottery prompt that induces the correct prediction from M exists for almost all 1000 instances. The assumption is thus validated, that is, in a finite search space composed of textual tokens, we can almost always find at least one combination of common words as a prompt to make the prediction correct. While it may not be surprising to see a success on binary classification tasks, achieving 100% coverage on Few-NERD, a 66-class dataset for entity typing, is worth noting. It indicates that the particular semantics distributed in PLM can be triggered by certain contexts even without any further fine-tuning. Naturally, the phenomenon is not observed when the model is not pre-trained. We conduct the same searching process for Few-NERD on a randomly initialized RoBERTa-large, and only 33.1% instances could find the corresponding lottery prompts. The effect of pre-training will be further explored in Section 3.1, demonstrating that lottery prompts are a unique and consequent effect along with language model pre-training. | Datasets | #Classes | RoBERTa-large | GPT-2 | |------------|------------|-----------------|---------| | SST-2 | 2 | 100.00 | 100.00 | | Yelp P. | 2 | 100.00 | 100.00 | | SNLI | 3 | 100.00 | 99.90 | | RTE | 2 | 100.00 | 100.00 | | MRPC | 2 | 100.00 | 100.00 | | CoLA | 2 | 100.00 | 100.00 | | MNLI | 3 | 99.90 | 99.90 | | QNLI | 2 | 100.00 | 100.00 | | QQP | 2 | 100.00 | 100.00 | | WNLI | 2 | 100.00 | 100.00 | | AG's News | 4 | 100.00 | 100.00 | | DBpedia | 14 | 100.00 | 100.00 | | Few-NERD | 66 | 100.00 | 99.70 | ## 3 Empirical Analysis Since we have verified the existence lottery prompts, in this section, we conduct further analysis on search cost and the searched lottery prompts. ## 3.1 Search Cost Analysis As aforementioned, the searching space in our experiment is *|T |* = 76725, however, the practical cost to find a lottery prompt for one data point is significantly lower than the budget. As shown in Figure 2, the average search cost for each instance does not exceed 30 API calls on most datasets for both PLMs. In this section, we show that search cost correlates with data difficulty and model capacity with further analysis. Task Difficulty. As shown in Figure 2, searching for a lottery prompt for a multi-class classification problem is more costly. The 66-class typing dataset Few-NERD requires a significantly higher search budget than the rest of the datasets, most of which only contain 2 or 3 classes. Another reasonable observation is that single sentence classification tasks are generally easier than tasks involving sentence pairs. As mentioned in the next part, it may be attributed to the designing of prompt format and label words. Meanwhile, NLI tasks with mixed domains are probably the most difficult ![3_image_1.png](3_image_1.png) ![3_image_0.png](3_image_0.png) ![3_image_2.png](3_image_2.png) sentence-pair tasks, given that MNLI, RTE, and SNLI are more costly than paraphrase tasks and other domain-specific NLI datasets. Comparing across models, the auto-regressive model (GPT2) generally takes more searches than the autoencoding model (RoBERTa-large). Despite the differences in individual datasets, they show similar trends, which can **roughly reflect how difficult** the dataset is for PLMs. Hard Instances. Beyond task difficulty, we are also interested in some of the hard instances, i.e. instances that require a significant number of searches or fail to match any lottery prompt in the given search space. We gather the 5 instances that require the most searches or ultimately observe a failure in searching from both PLMs. The examples from 3 datasets are presented in Appendix Table 8. It can be seen that for SST-2, the presented cases are intuitively difficult, as many of them involve vague expressions and complex reasoning that can be misleading. On the other hand, the hard cases in MNLI and SNLI seem more counterintuitive. Most "entailment" cases have considerable vocabulary overlap between the premise and hypothesis statements. The three failed cases are short sentences with almost identical expressions. We believe it is the negative effect from prompt template and label word chosen. For MNLI, both the high-lighted cases contain negation auxiliaries that rarely follow a "Yes" statement. This tendency drives the PLMs to always favor the choice of "No", which leads to erroneous prediction. The effect of negation has also been studied with standard PLM finetuning and proved to be a challenge (Hossain et al., 2022; Hosseini et al., 2021). The analysis shows that although for most instances, the lottery prompts can be easily found, **the prompting method is still disadvantaged when it comes** to complex text that requires advanced understanding ability. Also, prompting method is sensitive to verbalizer designs and **can be easily influenced by statistical correlation between label** words and input texts. Impact of Model Size and Pre-training. To explore the effect of model capacity on the easiness to search for lottery prompts, we conduct the same searching process as described in § 2 on AG's News and FewNERD with PLMs of different scales and pre-training status. Specifically, we use GPT2, GPT-2-medium, GPT-2-large and GPT-2-xl for model size ablation and RoBERTa-base pre-trained for 5000∼100000 steps for pre-training ablation, respectively. Figure 1 shows the variation of search success rate and average search cost per instance. For models of different scale, the success rate is similar but the search cost consistently decreases as models scale up, which shows that large PLMs generally have a larger feasible solution space for specific instances. Meanwhile, finding lottery prompts for PLM at their early pre-training stage is much harder. As the pre-training progresses, a significant reduction in search cost and increase in success rate follow. This indicates that **the existence of** lottery prompts is not merely a stroke of luck, but a consequence of pre-training that expands model capacity and can be further strengthened as PLMs scale up. ![4_image_0.png](4_image_0.png) ## 3.2 The Strong Prompts After searching for lottery prompts for all instances, we are interested in if there are "strong prompts" among them, i.e., prompts that perform well on the whole Xtrain. We measure the performance of each prompt over Xtrain with standard metrics on some representative datasets from each task category. The metric statistics and variation of all prompts are shown in Figure 3. **It could be concluded that** for all datasets, there are a handful of "strong prompts" that can perform satisfactorily on the dataset. Note that despite having altogether 66 classes, the best-performing prompt almost achieves an accuracy of 0.4 on Few-NERD. Meanwhile, different tasks show distinct patterns. Text classification tasks with single sentence are more sensitive to prompt choice and often observe larger performance variation over the prompt space. For SST-2, while the best-performing prompt reaches an accuracy of 0.8, the worst prompts can barely get to 0.3. For NLI tasks, prompt performance is more stable however mediocre. To inspect into the linguistic characteristics of the strong prompts, we present the top-5 prompts for some of the representative datasets and their corresponding metrics on the training set of 1000 instances in Table 2. While many prompts may not seem syntactical on the whole, certain linguistic characteristics can still be identified, which fit with our language intuition, both syntactically and semantically, and reveal some of the most contributive words in prompts for distinctive datasets. For example, the top prompts for the sentiment analysis task are compatible with chosen label words. Adverbs that enhance the statement (e.g. just, really, very) appear frequently in sentiment analysis tasks. For topic classification, the words like "other" and "such" naturally lead to nominal label words like "sports" and "artist". As for natural language inference task, although language entailment is subjective, it is common that personal pronouns are often involved when we express our opinions on entailment, like "I think it means", "Do you think", etc. Therefore appearance pf pronouns is in top prompts is reasonable. Meanwhile, we do observe that good prompts are not always interpretable. It may imply that the PLM's internal language ability and understanding deviates from human beings, which is why prompt engineering is important. Above all, we see that **"strong prompts" do exist and they** are equipped with distinct linguistic features depending on label words and task type. ## 4 Explore The Generalizability Of Strong Prompts In § 2 and § 3, we have empirically verified that conditioned on a pre-trained model and a classification task, it is possible to find a lottery prompt for almost every data point, and that there are a handful of strong prompts that perform non-trivially on the training set. In this section, we first describe the ensembling method and then present the generalization results. ## 4.1 Prompt Ensembling Method We gather a set of feasible prompts T∗ with the searching result on Xtrain and conduct inference with designed prompt ensembling method for each instance in Xtest. Since the choice of T∗is solely based on inference results on Xtrain, the process uses no validation set. Formally, given the selected prompts T∗ = {T1, T2, ..., Tt*} ⊂ T* , the prediction for each data point x ∈ Xtest is presented as p(x; T ∗,M) = Φ(p1, p2*, ..., p*t), (3) where pk = p(x; Tk,M) and is calculated as equation 2, and Φ is the ensembling method used. × × × × = Searching Space | even … | × √ √ √ × × | | | | |---------------------|---------------------------------------------------------------------------------------|------------------------------|--------------------------------------------------------------------------------------------------------------------------------|--------| | A fun ride. … | company, … | love, … | Too bad. He finds very [MASK] A fun ride. I make even [MASK] A fun ride. I find really [MASK] A fun ride. I find good [MASK] … | | | Dataset | Top-5 Prompts | Metrics | | | | SST-2 | he work just, I find very, I find really, help are for, she work just | 85.9, 85.6, 85.2, 84.6, 84.0 | | | | Yelp P. | look place really, you place also, look was also, I were very, they place also | 92.0, 91.3, 91.3, 91.2, 91.2 | | | | SNLI | I get really, I like through, I said always, keep love through, you found that | 56.9, 56.0, 55.8, 55.8, 55.7 | | | | RTE | keep like always, way think such, life think same, end think such, end like always | 60.0, 59.7, 59.6, 59.6, 59.4 | | | | MRPC | money had very, something had very, I been very, help had very, life had very | 70.9, 70.5, 70.4, 70.4, 70.2 | | | | AG's News | lot say on, I said other, time think other, state say on, you think other | 79.7, 78.8, 78.1, 78.0, 77.3 | | | | DBpedia | you said such, something know then, life make of, home said such, information is that | 87.5, 86.6, 86.0, 85.9, 85.8 | | | | Textual Input | [NOUN] | [VERB] | [PREP|ADJ|ADV] | [MASK] | | Searching Positions | | | | | really, good, even … [MASK] Too bad. The film is strictly routine. A fun ride. … I, he, company, … make, find, love, … Table 2: An example of Top-5 prompts over 1000 training instances for each dataset and their individual performance on training sets. The model used is RoBERTa-large. √ ![5_image_0.png](5_image_0.png) Acc. on Prompt ID With the assumption that strong prompts over Xtrain are also expected to perform well on Xtest, these best-performing prompts are regarded as the most reliable for predicting the unseen data. So we take the top-k best-performing prompts over the training set as T∗. In the experiments, we empirically use k = 10. A naive ensembling method is to take the average output as the final prediction by simple voting, where Φ(p1, p2*, ..., p*t) = 1 t Ptk=1 pk. While a more sophisticated strategy that echoes the spirit of "lottery prompt" is to select one most "reliable" prompt for each instance x ∈ Xtest. Intuitively, the more reliable a prompt T is, the more confident the model M will be about instance x. Inspired by Sorensen et al. (2022), we measure the confidence with the mutual information between x and y, T, which is defined by the reduction in entropy of predicted probability brought by x, $$I(x;T_{k},{\cal M})=H(q|T_{k}(\cdot))-H(p|T_{k}(x))$$ $$=-\sum_{i}q_{i}(T_{k};{\cal M})\log q_{i}(T_{k};{\cal M})+\tag{4}$$ $$\sum_{i}p_{i}(x;T_{k},{\cal M})\log p_{i}(x;T_{k},{\cal M}),$$ where q and p are the predicted probability vectors as in Equation 2. So the overall objective is $$\begin{array}{c}{{T^{*}=\arg\operatorname*{max}_{T\in{\mathcal{T}}^{*}}I(x;T,{\mathcal{M}}),}}\\ {{\Phi(p_{1},p_{2},...,p_{t})=p(x;T^{*},{\mathcal{M}}).}}\end{array}\qquad(5)$$ Specifically, maximization of mutual information entails that a good prompt itself should contain no bias towards the label set, so q should be close to a uniform distribution. On the other hand, a suitable prompt for a specific instance should induce an almost certain prediction on the desired class, corresponding to a near one-hot vector p. Experiments show that under few-shot settings, our mutual-information-based ensembling strategy is more advantageous than direct simple voting (§ 4.2). The complete searching and ensembling process is shown in Figure 4. ## 4.2 In-Dataset Generalization Experimental Setup. We comprehensively evaluate the generalizability of strong prompts on 8 representative datasets. Following previous works (Sun et al., 2022), we conduct experiments under few-shot settings. Specifically, we choose ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) the top-10 prompts as T∗and obtain the final prediction and test metrics with mutual-informationbased ensembling as Φ on the test set. We keep the verbalizers aligned with Sun et al. (2022) for fair comparison. The description of experimental details and baselines can be found in Appendix A. Overall Results. Figure 5 shows the in-dataset generalization results on each dataset. Overall, our method performs comparably to the existing baselines and requires the fewest trainable parameters. For some datasets, the searched strong prompts are shown to be more effective than baselines. It points to the fact that with a reasonable prompt search space and a few training instances, strong prompts can be identified and generalized effectively to unseen data. Best prompt on 32-shot data surprisingly overtakes many baselines. This, jointly with the mediocre performance of manual prompts, indicate that a human-comprehensible prompt may not always be the best choice for PLMs and may fail to probe a considerable amount of intrinsic knowledge in PLMs. Meanwhile, the success of MI over best prompt shows that ensembling a set of strong prompts is beneficial. Comparing across datasets, our method is more advantageous for harder tasks, including natural language inference (SNLI and RTE) and paraphrasing (MRPC). For single-sentence classification tasks, the improvement is minor. This finding fits with our intuition, as tasks involving two sentences often require more abstract abilities like reasoning and the contexts are more diverse across instances. Designing or optimizing for one unified prompt for such datasets is admittedly harder. Above all, it is exciting that ensembling a proper set of prompts composed of textual tokens may surpass network optimization on a dataset in an era of pre-trained language models and points to the values of mining and tapping into an optimal usage of plain textual prompt. Impact of Training Data Size. To further explore the property of our method, experiments are conducted under few-shot settings ranging from 8 shots to 256 shots with both simple voting and mutualinformation-based ensembling. As shown in Figure 6, we can see that performance varies a lot when different instances are sampled as the training set under low-shot settings. It suggests the importance of choosing the proper training data for our method. When more shots are provided, metrics get higher and variance gets smaller. As the data volume climbs up to 128 shots and 256 shots, the increase in metrics becomes minor for most datasets. It can also be concluded that for low-shot settings, mutual-information-based ensembling method yields higher results than simple voting. But as more training data are available, the gap is narrowed and the two ensembling strategies converge to similar levels. ![7_image_1.png](7_image_1.png) ![7_image_0.png](7_image_0.png) | Task | Setting | Metrics | |----------------------------|------------------|-----------------| | Sentiment | SST-2 → Yelp P. | 90.27 ( 1.58 ↓) | | Analysis | Yelp P. → SST-2 | 84.15 ( 5.37 ↓) | | RTE → SNLI | 40.48 ( 10.13 ↓) | | | SNLI → RTE | 54.51 (4.19 ↓) | | | MNLI → SNLI | 47.96 (2.65 ↓) | | | MNLI → RTE | 55.81 (2.89 ↓) | | | Natural Language Inference | | | ## 4.3 Cross-Dataset Generalization We test the prompt transferbility across datasets with similar tasks under 32-shot setting. Experiments are conducted on sentiment analysis and language inference tasks. We also use MNLI as the source dataset as many previous works do. Table 3 shows that prompts chosen by our method can be transferable. While SST-2 and Yelp observe mutual transferability, transfering RTE to SNLI is relatively hard, which can be attributed to the inconsistency in class number. MNLI is shown to be a robust dataset for NLI task and the searched prompts perform satisfactorily on both RTE and SNLI. It is also in line with previous research findings that using prompts pretrained on MNLI could greatly boost performance on other NLI datasets. Above all, the results demonstrate that our proposed strategy can effectively extract representative prompts for a specific kind of task, which can be further utilized to reduce search cost. ## 5 Related Work Prompting, as an alternative to standard finetuning, is originally inspired by GPT-3 (Brown et al., 2020) and knowledge probing (Petroni et al., 2019; Jiang et al., 2020). With a similar form to pre-training tasks, it stimulates the intrinsic knowledge in PLMs more efficiently. Following several of the earliest works (Schick and Schütze, 2021a,b), prompting has been applied in various NLP tasks (Han et al., 2021b; Li and Liang, 2021; Sainz et al., 2021; Ding et al., 2021a). It is also discovered that the specific prompt used has a great impact on task performance. Therefore, efforts have been devoted to prompt engineering and automatic prompt generation. Optimizing for a good prompt has been conducted at both discrete token level (Shin et al., 2020; Gao et al., 2021) and continuous embedding level (Li and Liang, 2021; Zhang et al., 2021; Liu et al., 2021b; Li et al., 2022). Some also focus on the choice and representation of label words (Schick et al., 2020; Hu et al., 2021; Zhang et al., 2021). Experiments show that a well-optimized or pre-trained (Gu et al., 2022) prompt can be beneficial. Given the striking performance of prompting under few-shot settings especially, recently, more works are focusing on the more efficient tuning of PLMs based on prompts. Prompt tuning (Lester et al., 2021) tunes the pre-pended token embedding only. Other works enhance PLMs' zero-shot learning ability with prompts. Studies show that large PLMs with proper prompts (Wei et al., 2021) and training with diverse prompts (Sanh et al., 2021) can advance zero-shot performance. This line of work emphasizes the efficient tuning and steering process of large PLMs. Black-box tuning (Sun et al., 2022) optimizes the pre-pended continuous prompt in a projected low-dimensional space without PLM gradient information. This work is among the first few efforts (Jin et al., 2022; Wu et al., 2022) in mining instancelevel prompts, and is the first to propose and prove the existence of a lottery prompt composed of a few textual tokens for each instance. In contrast to tuning a small number of parameters or tuning without gradients, an optimization-free method is proposed to generalize the searched prompts to the test sets. ## 6 Conclusion In this work, we explore the existence of lottery prompts for every single instance and the adaptation of them for various classification tasks in an optimization-free manner. We propose a large prompt space composed of common words as the search space to verify the assumption. We also identify the searched strong prompts and the relation between model capacity and search cost and demonstrate the effectiveness of ensembling the strong prompts on the test set. Our proposed optimizationfree method achieves satisfactory results on various NLP tasks under few-shot settings. Above all, this work illuminates the fact that the great potential of PLMs can be successfully harnessed and prompted by plain textual prompts mined from PLM vocabulary without parameter optimization and thus points to the need for future efforts in more efficient ways in mining and utilizing lottery prompts. ## Acknowledgements This research is supported by the National Natural Science Foundation of China (Grant No.62276154 and No.62236004), Research Center for Computer Network (Shenzhen) Ministry of Education, Beijing Academy of Artificial Intelligence (BAAI), the Natural Science Foundation of Guangdong Province (Grant No. 2023A1515012914), Basic Research Fund of Shenzhen City (Grant No. JCYJ20210324120012033 and JSGG20210802154402007), the Major Key Project of PCL for Experiments and Applications (PCL2021A06), Overseas Cooperation Research Fund of Tsinghua Shenzhen International Graduate School (HW2021008), Major Project of the National Social Science Foundation of China (No. 22&ZD298), and Institute Guo Qiang at Tsinghua University. Finally, we thank Xiaozhi for providing valuable comments. ## Limitations The current method works with a large prompt search space T , which means a tremendous number of inference API calls are required. Though Figure 2 shows that the average cost of finding a lottery prompt for each instance is low, the searching process is highly randomized and there is no guarantee of the performance of searched prompts over the test dataset. Therefore, finding strong prompts over the training set can still be laborious. How to use PLM inference calls more efficiently and leverage the generalization ability of T∗ within a reasonable cost is of future research interest. Our acceleration strategy can be found in Appendix B. Another aspect is that not all strong prompts are interpretable as presented in 2. While recently emerged larger models like ChatGPT demonstrate superb language understanding ability and can almost always answer yes or no questions correctly given a human-interpretable prompt. This gap observed between small PLMs like RoBERTa and large language models like ChatGPT is yet another interesting research topic. ## Ethical Considerations This work shows that with proper plain textual prompts, instance-level desired results can be prompted from PLMs. This inherent feature of PLMs means attacks can be launched to produce rude or discriminated words. On the other hand, however, we believe it can also be a technique used for debiasing a PLM. Overall, this effect depends on the intention of the users and the pre-training corpus of the corresponding PLMs. The analysis of this study can be used to facilitate the community to develop more specifications for the rational use of PLMs (especially the super-large ones), and more approaches to effectively prevent potential ethical issues. For example, we can use this technique to analyze which outputs that may have ethical issues are easily triggered by which contexts (prompts) and develop a set of intervention methods to make these tokens unavailable for output. ## References Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of EMNLP*, pages 632–642. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Proceedings of NeurIPS*, volume 33, pages 1877–1901. Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, and Zhiting Hu. 2022. Rlprompt: Optimizing discrete text prompts with reinforcement learning. arXiv preprint, abs/2205.12548. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL*, pages 4171– 4186. Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, and Hong-Gee Kim. 2021a. Prompt-learning for finegrained entity typing. *arXiv preprint*, 2108.10604. Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Haitao Zheng, and Maosong Sun. 2022. OpenPrompt: An open-source framework for promptlearning. In *Proceedings ACL*, pages 105–113. Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Haitao Zheng, and Zhiyuan Liu. 2021b. Few-NERD: A few-shot named entity recognition dataset. In *Proceedings of ACL*, pages 3198–3213. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In *Proceedings of ACL/IJCNLP*, pages 3816–3830. Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang. 2022. PPT: Pre-trained prompt tuning for few-shot learning. In *Proceedings of ACL*, pages 8410–8423. Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, and Jun Zhu. 2021a. Pre-trained models: Past, present and future. *AI Open*. Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021b. Ptr: Prompt tuning with rules for text classification. *arXiv preprint*, 2105.11259. Md Mosharaf Hossain, Dhivya Chinnappa, and Eduardo Blanco. 2022. An analysis of negation in natural language understanding corpora. In Proceedings of ACL, pages 716–723. Arian Hosseini, Siva Reddy, Dzmitry Bahdanau, R. Devon Hjelm, Alessandro Sordoni, and Aaron C. Courville. 2021. Understanding by understanding not: Modeling negation in language models. In *Proceedings of NAACL-HLT*, pages 1301–1312. Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juanzi Li, and Maosong Sun. 2021. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. *arXiv* preprint, 2108.02035. Shankar Iyer, Nikhil Dandekar, , and Kornel Csernai. 2017. First quora dataset release: Question pairs. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? *TACL*, 8:423–438. Feihu Jin, Jinliang Lu, Jiajun Zhang, and Chengqing Zong. 2022. Instance-aware prompt learning for language understanding and generation. *arXiv*, abs/2201.07126. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. *arXiv preprint*, abs/2104.08691. Hector J. Levesque. 2011. The winograd schema challenge. In *Logical Formalizations of Commonsense* Reasoning, Papers from the 2011 AAAI Spring Symposium. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of ACL, pages 4582–4597. Yiyuan Li, Tong Che, Yezhen Wang, Zhengbao Jiang, Caiming Xiong, and Snigdha Chaturvedi. 2022. SPE: symmetrical prompt enhancement for fact probing. In *Proceedings of EMNLP*, pages 11689–11698. Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021a. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. *arXiv preprint*, abs/2110.07602. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. Gpt understands, too. *arXiv preprint arXiv:2103.10385*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *arXiv preprint*, abs/1907.11692. Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1, page 63–70. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *Advances in* neural information processing systems, 32. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of EMNLP-IJCNLP*, pages 2463–2473. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. Oscar Sainz, Oier Lopez de Lacalle, Gorka Labaka, Ander Barrena, and Eneko Agirre. 2021. Label verbalization and entailment for effective zeroand few-shot relation extraction. *arXiv preprint*, abs/2109.03659. Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M. Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush. 2021. Multitask prompted training enables zero-shot task generalization. *arXiv preprint*, abs/2110.08207. Timo Schick, Helmut Schmid, and Hinrich Schütze. 2020. Automatically identifying words that can serve as labels for few-shot text classification. In *Proceedings of COLING*, pages 5569–5578. Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of EACL*, pages 255–269. Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In *Proceedings of NAACL*, pages 2339–2352. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In *Proceedings* of EMNLP, pages 4222–4235. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of EMNLP*, pages 1631–1642. Taylor Sorensen, Joshua Robinson, Christopher Rytting, Alexander Shaw, Kyle Rogers, Alexia Delorey, Mahmoud Khalil, Nancy Fulda, and David Wingate. 2022. An information-theoretic approach to prompt engineering without ground truth labels. In *Proceedings* of ACL, pages 819–862. Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, and Xipeng Qiu. 2022. Black-box tuning for language-model-as-a-service. In *Proceedings of* ICML, pages 20841–20855. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. TACL, 7:625–641. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners. *arXiv preprint*, abs/2109.01652. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of NAACL*, pages 1112–1122. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of EMNLP: System Demonstrations*, pages 38–45. Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. G. Vinod Vydiswaran, and Hao Ma. 2022. IDPG: an instance-dependent prompt generation method. *arXiv preprint*, abs/2204.04497. Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, and Huajun Chen. 2021. Differentiable prompt makes pre-trained language models better few-shot learners. arXiv preprint, abs/2108.13161. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In *Proceedings of NeurIPS*, page 649–657. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In Proceedings of ICML, pages 12697–12706. PMLR. ## A Experimental Details A.1 Experimental Settings We conduct experiments under few-shot settings on 8 datasets: SST-2, Yelp P., AG's News, DBpedia, MRPC, SNLI, RTE and MNLI. We experiment under both 16-shot and 32-shot data as the training set as our method requires no validation set. The total seen labeled data number does not exceed 32shot across all methods. The original validation set is used as the test set following (Sun et al., 2022). The detailed training and test set statistics for experiments in Figure 5 are shown in Table 4. All datasets are distributed under either CC BY license or CC BY-SA license, or subject to specified term of use. We have read and complied to the terms during experiments. The label words used follow (Sun et al., 2022) and are the same across all methods. | Datasets | Classes | |Xtest| | |------------|-----------|-----------| | SST-2 | 2 | 872 | | Yelp P. | 2 | 38000 | | AG's News | 4 | 7600 | | DBpedia | 14 | 70000 | | MRPC | 2 | 1725 | | SNLI | 3 | 10000 | | RTE | 2 | 277 | | MNLI | 3 | 9815 | Table 4: Statistics of the training and test set for experiments in Figure 5. ## A.2 Baselines We choose comparable baselines that do not update the PLM parameters, including 1) Gradient-based methods: Prompt Tuning and P Tuning v2; 2) Optimization-based methods: Feature MLP, Blackbox Tuning, and RLPrompt; and 3) Optimizationfree methods: Manual Prompt, and In-Context learning. The details are as follows: **Prompt Tuning** (Lester et al., 2021) optimizes the continuous prompt at the input level. **P-Tuning v2** (Liu et al., 2021a) is a variant of prompt tuning that pre-pends trainable parameters to each layer of the PLM and optimizes them in a multi-task setting. **FeatureMLP** uses pre-trained features output by PLMs and train a lightweight classifier offline. **Black-Box** Tuning (Sun et al., 2022) is a gradient-free method that optimizes the projected extra 500 parameters at the input layer with Covariance Matrix Adaptation Evolution Strategy. **RLPrompt** (Deng et al., 2022) optimizes for discrete prompts with reinforcement | Method | Gradients | Tuning | #Tunable Param. | |----------------------|-------------|----------|-------------------| | Prompt Tuning | Yes | Yes | 50K | | P-Tuning v2 | Yes | Yes | 1.2M | | Feature-MLP | No | Yes | 1M | | Black-Box Tuning | No | Yes | 500 | | RLPrompt | No | Yes | 3.1M | | Manual Prompt‡ | No | No | 0 | | In-Context Learning† | No | No | 0 | | Best Prompt† | No | No | 0 | | Ours | No | No | 0 | learning. **Manual Prompt** is a zero-shot method that directly uses a hand-crafted textual prompt for each dataset. **In-Context Learning** (Brown et al., 2020) is an optimization-free method that uses a few samples as demonstrations prepended to the test sample. Table 5 lists the features and trainable parameter number of baselines and our method. ## A.3 Implementation Details RoBERTa-large contains 354 million parameters and GPT-2 has 1.5 billion parameters. There is no extra parameter added in our method. For each dataset, the experiments are run with 5 different random seeds, and the mean metrics are reported. Most baseline results are taken from Sun et al. (2022) and Deng et al. (2022), while we re-run RLPrompt for MRPC and all baselines for MNLI with original code. All experiments are conducted on NVIDIA A100 and GeForce RTX 3090 GPUs with CUDA. The search process in § 4.2 with 32shot data takes about 2 hours with 40 GB maximum memory. The test process takes 5∼30 minutes depending on the size of T∗and Xtest. Our method is developed by OpenPrompt (Ding et al., 2022), an open-source prompt-learning framework based on PyTorch (Paszke et al., 2019). The models are obtained from the Huggingface Transformers library (Wolf et al., 2020). ## B Efficiency Analysis The results reported in § 4.2 all search through the whole prompt space T∗, i.e. every combination of an instance and a prompt is covered. Since it would require up to 4 hours with a single NVIDIA A100, we seek to optimize the process by pruning the search space. Our strategy is as follows: (1) randomly sample a batch of *valid prompts* (in our experiments we use batch size 16) from T∗ and apply them to the whole training set Xtrain; (2) record the performance of each prompt word, i.e. if a prompt is *"it was really"* and achieves 0.8 accuracy on Xtrain, then for each word in the prompt (*"it", "was", "really"*) 0.8 is recorded; (3) update the set of *valid prompts*; (4) repeat until there is no remaining *valid prompt*. A *valid prompt* means the average score of the three words is over a predefined threshold. In our experiments, the threshold is set to 0.7 on SST-2 dataset and achieves a mean test accuracy of 87.6%. As shown in Table 6, with the pruning strategy, the average time cost can be greatly reduced to 10 minutes with a still satisfying performance on test data. In our experiments, prompts are randomly sampled and grouped into batches. We believe a better-designed and heuristically informed batching strategy will further boost the searching efficiency and test performance. | Method | Time Cost | |------------------|-------------| | Prompt Tuning | 15.9 mins | | Feature-MLP | 7.0 mins | | Black-box Tuning | 10.1 mins | | Ours | 10.3 mins | at each position, we can conclude that words more adjacent to the "<mask>" token has a larger impact on the prediction, which fits with our intuition. In addition, GPT-2 demonstrates better fluency and interpretability compared to RoBERTa-large, as some high-frequency words found for RoBERTalarge like "without" are hard to comprehend. RoBERTa-large GPT-2 ## C Details Of Prompts And Label Words Table 7 displays the specific prompt format and label words used for searching for lottery prompt for each dataset with RoBERTa-large. For autoregressive PLMs like GPT-2, the "<mask>" token are removed and the prediction of the next token by PLM will be extracted. ## D Visualization Of Words In Strong Prompts We also get the 100 best prompts out of T for SST2, and visualize the frequent words at each position, as shown in Figure 7. From the variation of words | Dataset | Prompt | Label words | |-----------|-----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | SST-2 | <Text> [Prompt] <mask> | great, bad | | Yelp P. | <Text> [Prompt] <mask> | great, bad | | CoLA | <Text> [Prompt] <mask> | reasonable, unreasonable | | SNLI | <Text1> [Prompt]? <mask>, <Text2> | Yes, Maybe, No | | RTE | <Text1> [Prompt]? <mask>, <Text2> | Yes, No | | MNLI | <Text1> [Prompt]? <mask>, <Text2> | Yes, Maybe, No | | QNLI | <Text1> [Prompt]? <mask>, <Text2> | Yes, No | | WNLI | <Text1> [Prompt]? <mask>, <Text2> | Yes, No | | MRPC | <Text1> [Prompt]? <mask>, <Text2> | Yes, No | | QQP | <Text1> [Prompt]? <mask>, <Text2> | Yes, No | | AG's News | <Text> [Prompt] <mask> | world, sports, business, technology company, school, artist, athlete, politics, transportation, building, river, | | DBpedia | <Text> [Prompt] <mask> | village, animal, plant, album, film, book water, law, broadcast/program, media/newspaper, restaurant, artist/author, film, award, park, event, government/agency, person, educational/degree, education, director, game, sports/facility, protest, car, language, airport, organization, building, location, athlete, show/organization, sports/league, geopolitical, scholar/scientist, library, hotel, road/railway/highway/transit, painting, hospital, election, written/art, religion, company, train, ship, attack/battle/war/military/conflict, sports/event, disaster, currency, weapon, living, sports/team, politician, god, political/party, music, art, actor, theater, biology, software, island, medical, disease, chemical, product, airplane, food, mountain, astronomy, soldier | | Few-NERD | <Text> <Entity> [Prompt] <mask> | | Table 7: The prompt format and label words used for each dataset. [Prompt] represents the sequence of "[NOUN] [VERB] [PREP|ADJ|ADV]". For GPT-2, "<Text1> [Prompt]? <mask>, <Text2> " is changed into "<Text1> <Text2> [Prompt]? <mask>". | Datasets | Instance Text | Label | | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------|---------------| | it falls far short of poetry , but | negative | | | | will be best appreciated by those willing to endure its extremely languorous rhythms , waiting for happiness | negative | | | | expiration date | negative | | | | gut-wrenching , frightening war scenes since " saving private ryan " | positive | | | | sit through - despite some first-rate performances | positive | | | | largely flat and uncreative | negative | | | | all of dean 's mannerisms and self-indulgence , but | negative | | | | if oscar had a category called best bad film you thought was going to be really awful but was n't | positive | | | | SST-2 | It would be nice if more of the newcomers were artists, artisans, and producers, rather than lawyers and lobbyists, but head for head, I'll stack up Washington's intellectual capital against any competitor's. | It would be nice if there were more lawyers instead of artistic people. | contradiction | | MNLI | i just couldn't watch that much TV | I couldn't watch that much TV | entailment | | yeah uh well we did well we did you know we really did i mean i just don't understand these people that think taking an RV and parking it and sitting inside and watching TV and having your microwave it's not camping | I don't think it's camping if you hang out in an RV. | entailment | | | Of course | Maybe. | contradiction | | | I think not! | I do not think so. | entailment | | | Exhibit 10 Adjustment Factors Used to Account for Projected Real Income Growth through 2010 and 2020 | See Exhibit 10 for Adjustment Factors Used to Account for Projected Real Income Growth | neutral | | | through 2010 and 2020 | | | | | In the dark of night, their aim must be true. | Their aim must be accurate in the dark, or else they will not succeed. | neutral | | | now we quit that about two years ago no three years ago | We stopped doing that three years ago, after we got everyone China mugs. | entailment | | | when we got China mugs for everybody Two men are playing a game of chess, one is standing and the other is sitting. | A crowd watches a concert. | contradiction | | | A green jeep with men who are manning guns, with a crowd in the background on the street. | Video game fans in cosplay outfits. | contradiction | | | A man has a pink ribbon around his arm. | A guy with a strip of cloth around his bicep. | entailment | | | Large amounts of people walk around near a large, silver, reflective display. | People are singing. | contradiction | | | Man playing the accordion on a sidewalk during the day. | The Pope speed dials. | contradiction | | | SNLI | People walk and bike in front of a box office. | People are carrying about their business nearby a box office | entailment | | Three naked little boys are playing in a river and are covered in mud; | the boys had no clothes on in the river | entailment | | | one is standing up. A person wearing a dark blue covered up attire from head to toe, with a mask and vest, | Someone with a sword | entailment | | | holding a thin sword. Four children are in an industrial kitchen looking at a recipe with the ingredients | Four people are in the kitchen | entailment | | | on the table in front of them. Two guys getting a drink at a store counter. | two guys get a drink | entailment | | | Table 8: The most difficult instances for RoBERTa-large and GPT-2, measured by number of searches required | | | | Table 8: The most difficult instances for RoBERTa-large and GPT-2, measured by number of searches required to get the lottery prompt out of T . Instances in purple indicate failure to find a lottery prompt for GPT-2, and instances in blue are failure instances for RoBERTa-large. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In section "Limitations" at the end of the paper ✓ A2. Did you discuss any potential risks of your work? In section "Ethical Considerations" at the end of the paper ✓ A3. Do the abstract and introduction summarize the paper's main claims? See Abstract and Section 1 Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2, 3 And 4 ✓ B1. Did you cite the creators of artifacts you used? Section 2.1 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix A B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A ## C ✓ **Did You Run Computational Experiments?** Left Blank. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.2 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 2.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zheng-etal-2023-facial
A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations
https://aclanthology.org/2023.acl-long.861
Multimodal Emotion Recognition in Multiparty Conversations (MERMC) has recently attracted considerable attention. Due to the complexity of visual scenes in multi-party conversations, most previous MERMC studies mainly focus on text and audio modalities while ignoring visual information. Recently, several works proposed to extract face sequences as visual features and have shown the importance of visual information in MERMC. However, given an utterance, the face sequence extracted by previous methods may contain multiple people{'}s faces, which will inevitably introduce noise to the emotion prediction of the real speaker. To tackle this issue, we propose a two-stage framework named Facial expressionaware Multimodal Multi-Task learning (FacialMMT). Specifically, a pipeline method is first designed to extract the face sequence of the real speaker of each utterance, which consists of multimodal face recognition, unsupervised face clustering, and face matching. With the extracted face sequences, we propose a multimodal facial expression-aware emotion recognition model, which leverages the frame-level facial emotion distributions to help improve utterance-level emotion recognition based on multi-task learning. Experiments demonstrate the effectiveness of the proposed FacialMMT framework on the benchmark MELD dataset. The source code is publicly released at \url{https://github.com/NUSTM/FacialMMT}.
# A Facial Expression-Aware Multimodal Multi-Task Learning Framework For Emotion Recognition In Multi-Party Conversations Wenjie Zheng1, Jianfei Yu1∗**, Rui Xia**1∗ , and Shijin Wang2,3 1School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China 2iFLYTEK AI Research (Central China) 3State Key Laboratory of Cognitive Intelligence, Hefei, China 1{wjzheng, jfyu, rxia}@njust.edu.cn, 23sjwang3@iflytek.com ## Abstract Multimodal Emotion Recognition in Multiparty Conversations (MERMC) has recently attracted considerable attention. Due to the complexity of visual scenes in multi-party conversations, most previous MERMC studies mainly focus on text and audio modalities while ignoring visual information. Recently, several works proposed to extract face sequences as visual features and have shown the importance of visual information in MERMC. However, given an utterance, the face sequence extracted by previous methods may contain multiple people's faces, which will inevitably introduce noise to the emotion prediction of the real speaker. To tackle this issue, we propose a two-stage framework named **Facial** expressionaware Multimodal Multi-Task learning (FacialMMT). Specifically, a pipeline method is first designed to extract the face sequence of the real speaker of each utterance, which consists of multimodal face recognition, unsupervised face clustering, and face matching. With the extracted face sequences, we propose a multimodal facial expression-aware emotion recognition model, which leverages the frame-level facial emotion distributions to help improve utterance-level emotion recognition based on multi-task learning. Experiments demonstrate the effectiveness of the proposed FacialMMT framework on the benchmark MELD dataset. The source code is publicly released at https: //github.com/NUSTM/FacialMMT. ## 1 Introduction Multimodal Emotion Recognition in Multi-party Conversations (MERMC) is a challenging task in the field of multimodal research. The complexity of the task arises from the dynamic and spontaneous nature of human communication in multi-party conversations, which often involves multiple people expressing a variety of emotions simultaneously. In this task, the use of multiple modalities (e.g., ∗Corresponding authors. | Methods | #Col | #Seg | #Rec | |---------------------------|--------|--------|--------| | MELD (Poria et al., 2019) | ✗ | ✗ | ✗ | | UniMSE (Hu et al., 2022b) | ✗ | ✗ | ✗ | | MMGCN (Hu et al., 2021) | ✓ | ✗ | ✗ | | MESM (Dai et al., 2021) | ✓ | ✗ | ✗ | | M3ED (Zhao et al., 2022b) | ✓ | ✗ | ✗ | | FacialMMT (Ours) | ✓ | ✓ | ✓ | Table 1: Comparison between different models for face sequence extraction in the MERMC task. \#Col represents collection of all possible speakers' face sequences; \#Seg represents speaker segmentation, aiming to distinguish speaker sequences; \#Rec represents speaker recognition, aiming to identify the real speaker. text, audio, and vision) is essential as it allows for a more comprehensive understanding of the emotions being expressed. Among different modalities, visual information usually plays a crucial role as it often provides direct clues for emotion prediction. For example, in Figure 1, without the information from the visual modality, it is hard to determine the anger emotion of the real speaker, i.e., *Chandler*. In the literature, most previous MERMC studies primarily focus on the text and audio modalities (Poria et al., 2019; Liang et al., 2020; Mao et al., 2021; Chen et al., 2021), because the visual context in MERMC often involves many people and complex environmental scenes, which may bring much noise to emotion recognition of the real speaker. Owing to the indispensable role of visual modalities, a number of studies explored the potential of visual information in MERMC (Mai et al., 2019; Wang et al., 2022a; Li et al., 2022b; Hu et al., 2022a), which employ 3D-CNNs (Ji et al., 2010; Tran et al., 2015) to extract video features and model the interaction and dependency between consecutive video frames. However, the visual information extracted by these methods still contains much noise from environmental scenes. To alleviate the visual noise from environmental scenes, several recent studies (Dai et al., 2021; Liang et al., 2021; Hu et al., 2021; Zhao et al., ![1_image_0.png](1_image_0.png) 2022b) propose to detect all the faces in an utterance based on face detection tools such as MTCNN (Zhang et al., 2016), OpenFace (Baltrušaitis et al., 2016) or pre-trained active speaker detection models (Tao et al., 2021). However, given an utterance, the face sequence extracted by these methods may still contain multiple people, which may mislead the emotion prediction of the real speaker. For example, in Figure 1, there are two persons, *Joey* and *Chandler*, with distinct facial expressions, i.e., *disgust* and *anger*. Previous methods use the face sequence containing both persons' faces as visual features, which will inevitably have a negative impact on predicting *Chandler*'s emotion. Therefore, to fully leverage the visual modality for emotion recognition, it is crucial to extract the face sequence of the real speaker of each utterance. To this end, we propose a two-stage multimodal multi-task learning framework named **FacialMMT** for the MERMC task. In the first stage, we design a pipeline solution to obtain the face sequence of the real speaker, which contains three steps: 1) Extract the face sequence containing all possible speakers based on the combination of multimodal rules and an active speaker detection model (Tao et al., 2021); 2) Identify the number of face clusters in the face sequence based on an unsupervised clustering algorithm named InfoMap (Rosvall and Bergstrom, 2008); 3) Perform face matching and choose the face sequence with the highest confidence as the face sequence of the real speaker. Table 1 illustrates the differences between our method and previous ## Methods. Based on the extracted face sequence, in the second stage, we further propose a Multimodal facial expression-aware multi-task learning model named MARIO. MARIO first resorts to an auxiliary framelevel facial expression recognition task to obtain the emotion distribution of each frame in the face sequence. Next, the emotion-aware visual representation is then integrated with textual and acoustic representations via Cross-Modal Transformer (Tsai et al., 2019) for utterance-level emotion recognition. Our main contributions can be summarized as follows: - To obtain the face sequence of the real speaker in an utterance, we propose a face sequence extraction method, which consists of three steps, i.e., multimodal face recognition, unsupervised face clustering, and face matching. - We propose a Multimodal facial expressionaware multi-task learning model named MARIO for the MERMC task, which leverages an auxiliary frame-level facial expression recognition task to obtain the frame-level emotion distribution to help utterance-level emotion recogntion. - Experimental results on a benchmark dataset MELD demonstrate the superiority of our proposed FacialMMT framework over the SOTA systems. Moreover, FacialMMT outperforms a number of SOTA systems with a significant margin when only visual modality is used. ![2_image_0.png](2_image_0.png) ## 2 Method 2.1 Task Formulation Given an MERMC corpus D, let us use {X1, X2, . . . , X|D|} to denote a set of samples in the corpus. Each sample contains a multimodal dialogue with n utterances d = {u1, u2*, . . . , u*n}, in which each utterance ui = {uil, uia, uiv} contains information from three modalities, i.e., text, audio, and vision, denoted by {*l, a, v*}. The goal of the MERMC task is to classify each utterance uiinto one of C pre-defined emotion types yi, and predict a label sequence y = {y1, y2*, . . . , y*n} for d. Note that each utterance is only annotated with one speaker's identity (real speaker) and his/her emotion is annotated as the emotion of the current utterance. ## 2.2 Framework Overview As shown in Figure 2, our FacialMMT framework contains two stages. To obtain the face sequence of the real speaker in each utterance, the first stage introduces a pipeline method to perform multimodal face recognition and unsupervised clustering, followed by face matching. With the extracted face sequences, the second stage resorts to an auxiliary frame-level facial expression recognition task to generate the emotion distribution for each frame in the face sequence, and then employs Cross-Modal Transformer to integrate the emotion-aware visual representations with text and acoustic representations for multimodal emotion recognition. We will present the details of the two stages in the following two subsections. ## 2.3 Face Sequence Extraction As shown in the left side of Figure 2, the first stage extracts the face sequence of the real speaker based on the following three steps: Multimodal Face Recognition. First, we propose to combine multimodal rules and an active speaker detection (ASD) model to extract face sequences of all possible speakers. Specifically, given a video utterance, we use a pre-trained ASD model TalkNet (Tao et al., 2021) to combine visual and audio information for speaker detection. However, TalkNet often fails to identify speakers for videos with short duration or complex scenes (e.g., for a video with multiple people, someone is laughing or making noise instead of speaking). To obtain the face sequence in these videos, we further design several multimodal rules, including the opening and closing frequency of mouth, movement of different people's mouths between video frames, and the alignment between the mouth movement and audio signals. The details of these multimodal rules are described in Appendix A.1. Unsupervised Clustering. Based on the raw face sequence, we apply an unsupervised clustering algorithm InfoMap (Rosvall and Bergstrom, 2008) to identify the number of face clusters in the sequence as follows: - We first employ the K-Nearest Neighbors algorithm to construct a graph of all potential speakers' faces, and then calculate the similarity between faces, followed by using the normalized as the weight of edges. - Random walks are then conducted on the graph to generate different face sequences. - Lastly, we hierarchically encode the face sequences, and minimize the minimum average encoding length to obtain the clustering result. The minimization process includes minimizing the average encoding length of classes, as well as the average encoding length of each class's in-class objects. The formulation is defined as follows: $$\begin{split}&\arg\min_{K,Y}\mathcal{L}\left(P,K,Y\right)=q_{\cap}\left(-\sum_{i=1}^{K}\frac{q_{i\cap}}{q_{\cap}}\log\frac{q_{i\cap}}{q_{\cap}}\right)\\ &+\sum_{i=1}^{K}p_{i\cap}\left(-\frac{q_{i\cap}}{p_{i\cap}}\log\frac{q_{i\cap}}{p_{i\cap}}-\sum_{\alpha\in i}\frac{p_{\alpha}}{p_{i\cap}}\log\frac{p_{\alpha}}{p_{i\cap}}\right)\end{split}\tag{1}$$ where Y is the predicted face sequence category, K represents the number of face sequences, qi↷ represents the probability of the occurrence of category i, q↷ =PK i=1 qi↷, pα represents the probability of the occurrence of a face image α, and pi⟲ = qi↷ +PK α∈i pα. Face Matching. Finally, we construct a face library to determine the face sequence of the real speaker. Because the benchmark dataset for the MERMC task, i.e., MELD (Poria et al., 2019), contains six leading roles occurring frequently in the dataset, we manually select 20 different face images for each leading role based on the raw face sequence extracted in Multimodal Face Recognition and regard these 120 images as the face library. Next, we use a ResNet-50 model (He et al., 2016) pre-trained on a face recognition dataset MS-Celeb1M (Guo et al., 2016) to extract visual features for the images in the library and in different face clusters. As each utterance provides the real speaker's identity, who is either one of six leading roles or a passerby, we match the images in each face cluster with six leading roles' images in the library by calculating the cosine similarity between their visual representations. Specifically, if the identity of the real speaker is one of the six leading roles, the face sequence with the highest similarity is regarded as the real speaker's face sequence; otherwise, we regard the face sequence with the lowest similarity as the real speaker's face sequence. ## 2.4 A Multimodal Facial Expression-Aware Multi-Task Learning Model After obtaining the real speaker's face sequence in each utterance, we further propose a Multimodal facial expression-aware multi-task learning model (MARIO), as shown in the right side of Figure 2. Next, we introduce the details of MARIO, including unimodal feature extraction, emotion-aware visual representation, and multimodal fusion. ## 2.4.1 Unimodal Feature Extraction In the MERMC task, given an utterance ui, we extract unimodal features from three modalities {uil, uia, uiv} to obtain the text, audio, and visual representations as follows: - **Text**: To efficiently utilize the dialogue context and speaker's emotional dynamics, we concatenate the input utterance and all its contextual utterances as input, and feed it into a pre-trained language model (e.g., BERT) for fine-tuning. We then take out the hidden representation of the first token as the text representation El ∈ R dl, where dl = 512 is the size of text features. - **Audio**: We obtain the word-level audio representation based on the Wav2vec2.0 model (Baevski et al., 2020) pre-trained on the Librispeech-960h dataset (Panayotov et al., 2015), denoted by Ea ∈ R da, where da = 768 is the dimension of audio features. - **Vision**: Given the real speaker's face sequence of the input utterance, we use an InceptionResNetv1 model (Szegedy et al., 2017) pre-trained on the CASIA-WebFace dataset (Yi et al., 2014) to obtain the frame-level visual representation Ev ∈ R L×dv, where L is the face sequence length and dv = 512 is the size of visual features. ## 2.4.2 Emotion-Aware Visual Representation Because the goal of MERMC is to predict the emotion of all the utterances in a dialogue, we propose to enhance the frame-level visual representation with the emotion distribution of each frame. To achieve this, we introduce an auxiliary framelevel facial expression recognition task, which is known as Dynamic Facial Expression Recognition (DFER) in the computer vision community (Li and Deng, 2020). Formally, let D s be another set of samples for the DFER task. Each sample is a face sequence containing m faces, denoted by s = {s1, s2*, . . . , s*m}. The goal of DFER is to predict the label sequence z = {z1, z2*, . . . , z*m}, where each label zi belongs to one of C pre-defined facial expressions (i.e., emotion categories). Auxiliary DFER Module. As shown in the top right of Figure 2, we employ a well-known SwinTransformer model (Liu et al., 2021) pre-trained on the Ms-Celeb-1M dataset (Guo et al., 2016) to obtain the representation of each frame in the face sequence as follows: $\mathbf{H}^{s}=\{\mathbf{h}_{1}^{s},\cdots,\mathbf{h}_{m}^{s}\}=$ Swin-Transformer($s$) (2) where Hs ∈ R m×du is the generated facial features. Next, we feed Hsinto a multi-layer perceptron (MLP) layer for facial expression recognition. During the training stage, we use cross-entropy loss to optimize the parameters for the DFEK task: $$p(z_{i})=\text{softmax}(\text{MLP}(\mathbf{h}_{i}^{s}))\tag{3}$$ $$\mathcal{L}^{DFER}=-\frac{1}{M}\sum_{i=1}^{M}\sum_{j=1}^{m}\log p(z_{ij})\tag{4}$$ where $M$ is the number of samples in $\mathbb{D}^{s}$. **Facial Expression Perception for MERMC.** Based on the auxiliary DFER module, a direct solution to obtain the emotion-aware visual representation is to convert the predicted emotion of each frame to an one-hot vector and concatenate it with its original representation as the frame-level visual representation. However, as we all know, the one-hot vector derived from the *argmax* function is not differentiable, which will affect the parameter optimization in our multi-task learning framework. To tackle this issue, we apply Gumbel Softmax Jang et al. (2017), which has a continuous relaxed categorical distribution, to obtain an approximated emotion distribution for each frame. By using softmax as the differentiable approximation of *argmax* and adding a temperature function τ , it can achieve gradient updates during backpropagation: $$\mathbf{g}_{i}=\mathrm{softmax}((g+\mathbf{h}_{i}^{s})//\tau)\qquad\qquad(5)$$ $=\;-1-(-1-(-1))$ 7. where gi ∈ R C, g = − log(− log(u)) is a noise sampled from the Gumbel distribution, and u ∼ Uniform(0, 1). As τ → 0, the softmax computation smoothly approaches the argmax, and the sample vectors approximate one-hot vectors. Moreover, if the emotion distribution of the i-th frame in the face sequence concentrates on certain emotion, it shows that this frame reflects the clear emotion; otherwise if the emotion distribution is uniform, it implies the emotion in this frame is blurred and may bring noise to our MERMC task. To alleviate the noise from the emotion-blurred frames, we design a gating mechanism to dynamically control the contribution of each frame in the face sequence for the MERMC task. Specifically, the emotion clarity of the i-th frame can be computed as δi = gi· g⊤ i , where · denotes the dot product. Based on this, we can obtain the emotion clarity of all the frames in the face sequence: $$\delta=\{\delta_{1},\delta_{2},\cdots,\delta_{m}\}$$ $$(6)$$ We then apply δ to the original visual representation Ev to filter out the emotion-blurred frames, in which δiis less than a predetermined threshold. Finally, we concatenate the filtered visual representation E ′ v and the emotion distributions of all the frames Ee to obtain the emotion-aware visual representation as follows: $$\hat{\bf E}_{v}={\bf E}_{v}^{\prime}\oplus{\bf E}_{e},\quad{\bf E}_{e}=\{{\bf g}_{1},\cdots,{\bf g}_{m^{\prime}}\}\tag{7}$$ where $\hat{\bf E}_{v}\in{\mathbb{R}}^{m^{\prime}\times(d_{v}+C)}$, $m^{\prime}$ is the number of $$\mathbf{\Pi}^{0}$$ filtered frames, and ⊕ is the concatenation operator. ## 2.4.3 Multimodal Fusion Intra-Modal Interactions. We feed Ea and Eˆv to two separate self-attention Transformer layers (Vaswani et al., 2017) to model the intra-modal interactions within audio features and visual features as follows: Ha = Transformer(Ea), Hv = Transformer(Eˆv) Inter-Modal Interactions. To achieve interactions between different modalities, we apply the Cross-Model Transformer (CMT) layer (Tsai et al., 2019). Firstly, we fuse the text and audio modalities, alternating the two modalities as the query vector, then concatenating them to obtain the textaudio fused representation Hl−a. Similarly, Hl−a is then fused with visual modality to obtain the utterance-level text-audio-visual fused representation Hl−a−v below: > $\mathbf{H}_{l-a}=\mathrm{CM}$-Transformer($\mathbf{E}_{l},\mathbf{H}_{a}$) > $\mathbf{H}_{l-a-v}=\mathrm{CM}$-Transformer($\mathbf{H}_{l-a},\mathbf{H}_{v}$) Finally, $\mathbf{H}_{l-a-v}$ is fed to a row-form. Finally, Hl−a−v is fed to a softmax layer for emotion classification: $$q(y)=\mathrm{softmax}(\mathbf{W}^{\top}\mathbf{H}_{l-a-v}+\mathbf{b})\tag{10}$$ The standard cross-entropy loss is used to optimize the parameters for the MERMC task: $${\mathcal{L}}^{M E R M C}=-{\frac{1}{N}}\sum_{i=1}^{N}\log q(y_{i})\qquad(11)$$ $\text{number of utterance samples}$ . $$(8)$$ $$(9)$$ where N is the number of utterance samples. The pseudocode for training the MARIO model is provided in Appendix A.2. ## 3 Experiments And Analysis 3.1 Dataset To verify the effectiveness of our FacialMMT framework, we conduct experiments with two datasets. One is the dataset for the main MERMC task, and the other is the dataset for the auxiliary DFER task. The descriptions are as follows: Dataset for MERMC: We use the MELD dataset (Poria et al., 2019), which is a publicly available dataset for MERMC. MELD contains 13,707 video clips extracted from the sitcom *Friends*, which contain information such as utterance, audio, video, and speaker identity. It also provides emotion annotations on each utterance with seven classes, including neutral, surprise, fear, *sadness*, joy, *disgust*, and *anger*. Dataset for DFER: For the auxiliary DFER task, we use the Aff-Wild2 dataset (Kollias and Zafeiriou, 2019; Kollias, 2022), which contains 548 video clips collected from YouTube in realworld environments. Each clip has several frames of aligned faces and each frame is annotated with a facial expression. It has eight classes of emotions (six basic emotions, *neutral*, and *other*). Because the goal is to leverage Aff-Wild2 to guide the emotion prediction on our main dataset, we removed samples annotated with the *other* emotion. ## 3.2 Compared Systems We compare FacialMMT against the following systems: **DialogueRNN** (Majumder et al., 2019) models the speaker identity, historical conversation, and emotions of previous utterances with RNNs. **ConGCN** (Zhang et al., 2019) proposes a Graph Convolutional Network (GCN)-based model, which constructs a heterogeneous graph based on context-sensitive and speaker-sensitive dependencies. **MMGCN** (Hu et al., 2021) builds both long-distance dependency and dependencies between speakers with GCNs. **DialogueTRM** (Hu et al., 2021) proposes to consider the temporal and spatial dependencies and models local and global context information. **DAG-ERC** (Shen et al., 2021) models the information flow between the conversation background and its surrounding context. **MMDFN** (Hu et al., 2022a) introduces a dynamic fusion module to fuse multimodal context features. EmoCaps (Li et al., 2022b) extracts the emotional tendency and fuses modalities through an emotion capsule. **UniMSE** (Hu et al., 2022b) unifies multimodal sentiment analysis and ERC tasks with a unified framework based on T5 (Raffel et al., 2020). GA2MIF (Li et al., 2023) proposes a graph and attention based two-stage multi-source multimodal fusion approach. ## 3.3 Implementation For our FacialMMT framework, we employ either BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019) as the textual encoder and use tiny version of Swin Transformer 1. The maximum length of input text is set to 512. The intercept operation is to remove the last word of the longest utterance in a dialogue and loop until the condition is met. The maximum length of visual and audio is set to the average plus 3 times the standard deviation. The batch sizes for MERMC task and DFER task is set to 1 and 150, respectively. The size of hidden layers is 768. The number of heads in self-attention layers and cross-modal transformer layers is 12, and the learning rates for MERMC and DFER is set to 7e-6 and 5e-5, respectively. The dropout rate is 0.1. The threshold for filter out the emotion-blurred frames is set to 0.2. Following previous works, we use the weighted average F1-score as the evaluation metric for the MERMC task. For the DFER task, the macro F1score on the validation set is reported. Our model is trained on a GeForce RTX 3090Ti GPU and parameters are optimized through an AdamW optimizer. ## 3.4 Main Results On The Mermc Task We report the results of different methods on the MERMC task in Table 2 and Table 4. The results of baselines are retrieved from previous studies. First, we compare the multimodal emotion recognition results of each method. As shown in Table 2, **FacialMMT-RoBERTa** outperforms all the compared systems with a significant margin, indicating the effectiveness of our proposed approach. Additionally, we find that using BERT instead of RoBERTa as the text encoder leads to a slight decrease in performance. Although it performs relatively than the T5-based **UniMSE** model, it still outperforms all the other baseline systems that either use BERT or RoBERTa as the text encoder. Moreover, we compare the emotion recognition results in a single visual modality. As shown in Table 4, previous methods such as **EmoCaps** and MM-DFN directly employ 3D-CNN to extract the visual features, which introduce environmental 1https://github.com/JDAI-CV/FaceX-Zoo Models Neutral Surprise Fear Sadness Joy Disgust Anger F1 DialogueRNN (Majumder et al., 2019) 73.50 49.40 1.20 23.80 50.70 1.70 41.50 57.03 ConGCN (Zhang et al., 2019) 76.70 50.30 8.70 28.50 53.10 10.60 46.80 59.40 MMGCN (Hu et al., 2021) - - - - - - - 58.65 DialogueTRM∗(Hu et al., 2021) - - - - - - - 63.50 DAG-ERC∗(Shen et al., 2021) - - - - - - - 63.65 MM-DFN (Hu et al., 2022a) 77.76 50.69 - 22.94 54.78 - 47.82 59.46 EmoCaps⋆(Li et al., 2022b) 77.12 **63.19** 3.03 **42.52** 57.50 7.69 **57.54** 64.00 UniMSE▲(Hu et al., 2022b) - - - - - - - 65.51 GA2MIF (Li et al., 2023) 76.92 49.08 - 27.18 51.87 - 48.52 58.94 FacialMMT-BERT 78.55 58.17 13.04 38.51 61.10 **30.30** 53.66 64.69 FacialMMT-RoBERTa **80.13** 59.63 **19.18** 41.99 **64.88** 18.18 56.00 **66.58** Table 2: Comparison results of the MERMC task on the MELD dataset. The baselines with italics only use textual modality. ▲ indicates the model uses T5 (Raffel et al., 2020) as the textual encoder. The baselines tagged with ⋆and ∗respectively use BERT and RoBERTa as textual encoders. The best results are marked in bold. Table 3: Results of the DFER task based on F1 score. noise and thus obtain relatively poor emotion recognition results. By extracting the face sequence of all possible speakers in a video, **MMGCN** achieves the best performance among the baseline systems. Moreover, we can observe our FacailMMT framework significantly outperforms all the compared systems, mainly due to the accurate extraction of the real speaker's face sequence. Lastly, we conduct ablation studies of Face Sequence Extraction in Section 2.3. First, after removing unsupervised clustering (UC) and face matching (FM), the emotion recognition result decreases by 2.12%, which demonstrates the usefulness of the two modules. Furthermore, if all the three steps are removed, meaning that video frames are directly used as visual features, the performance drops significantly. ## 3.5 Results On The Dfer Task | Savchenko (2022) | FacialMMT | | |--------------------|-------------|-------| | F1 | 40.67 | 42.19 | Table 3 shows the comparison of our method and one of the state-of-the-art methods (Savchenko, 2022) on the DFER task. For a fair comparison, we re-implement the compared system and run experiments based on the same setting as ours. In Table 3, we can clearly observe that our framework outperforms the compared system by 1.52 absolute percentage points on the Aff-Wild2 dataset, which demonstrates the effectiveness of our model on the auxiliary DFER task. Table 4: Comparison of single visual modality emotion recognition results. MFR represents multimodal face recognition, UC represents unsupervised clustering, and FM represents face matching. ## 3.6 Ablation Study We conduct ablation studies of FacialMMT, and show the results in Table 5. It is obvious that any removal of one or two modality leads to a performance drop, indicating that any modality plays an essential role in emotion recognition. Specifically, we can infer that the visual modality plays a more important role than the audio modality, which differs from the observations in previous studies. This suggests that enhancing multimodal emotion recognition from the perspective of visual representation is effective. Moreover, removing the auxiliary DFER module also drops the performance, indicating that introducing frame-level facial expression supervision signals can indeed provide important clues for utterance-level emotion recognition. | Models | Composition of visual information | F1 | |------------------------------|-------------------------------------|-------| | EmoCaps | Video frames | 31.26 | | MM-DFN | Video frames | 32.34 | | MMGCN | Possible speakers' face sequences | 33.27 | | Real speaker's face sequence | 36.48 | | | FacialMMT | - w/o UC, FM | 34.36 | | - w/o MFR, UC, FM | 32.27 | | ## 3.7 Case Study To better understand the two main contributions of our work, we present two examples in Figure 3. As our work is the first to use enhanced visual representations to help the MERMC task, we only compare FacialMMT with its variants: 1) **Toolkitbased FacialMMT** represents using face detection ![7_image_0.png](7_image_0.png) | FacialMMT | 66.58 | |-----------------------------|---------| | - w/o Audio | 66.20 | | - w/o Vision | 65.55 | | - w/o Audio, Vision | 63.98 | | - w/o Text, Vision | 38.02 | | - w/o Auxiliary DFER Module | 66.08 | toolkits to detect the face sequence, as in previous methods; 2) **FacialMMT -w/o UC, FM** represents face sequences extracted by multimodal face recognition. As shown in Figure 3, the face sequences extracted by two variants of our model contain much noise, which may mislead the emotion prediction of the input utterance. In contrast, our **FacialMMT** model correctly extracts the face sequence of the real speaker in both cases, and leverages the framelevel emotion distribution to help correctly predict the utterance-level emotions. ## 4 Related Work 4.1 Emotion Recognition In Conversations Recently, Emotion Recognition in Conversations (ERC) has gradually become a hot topic in the field of emotion analysis. According to the input form, ERC is classified into text-based ERC and multimodal ERC. Text-based ERC mainly focuses on research in modeling context, modeling speaker relationships, and incorporating commonsense knowledge (Majumder et al., 2019; Li et al., 2020; Shen et al., 2021; Liu et al., 2022c; Li et al., 2022a; Ong et al., 2022). To better mimic the way of human thinking, multimodal ERC has rapidly developed in recent years. Multimodal ERC mainly focuses on multimodal feature extraction, interaction, and fusion. First, some studies (Mao et al., 2021; Joshi et al., 2022; Li et al., 2022a) consider context information in conversations and utilize pre-trained language models such as BERT (Devlin et al., 2019) and BART (Lewis et al., 2020) to obtain dialogue-level text representations. Some works (Dai et al., 2021; Liang et al., 2021; Hu et al., 2021; Zhao et al., 2022b) also extract facial representations using various tools, such as MTCNN (Zhang et al., 2016). For multimodal interactions, exiting studies (Tsai et al., 2019; Lv et al., 2021) propose a Cross-Modal Transformer model and a progressive modality reinforcement approach for unaligned multimodal sequences. For modality fusion, Jin et al. (2020) propose a localness and speaker aware transformer to capture local context and emotional inertia. Li et al. (2022b) design an emotion capsule to fuse sentence vectors through multimodal representations, and Zou et al. (2022) propose to use a main modal Transformer to improve the effectiveness of multimodal fusion. In this work, due to the specific nature of multi-party conversations, we extract the face sequence of the real speaker from a video, and use frame-level facial expressions to help utterancelevel emotion recognition. ## 4.2 Dynamic Facial Expression Recognition The value of understanding facial expressions lies in collecting direct impressions from others during a conversation. Thus, there has been a significant amount of research conducted on the Dynamic Facial Expression Recognition (DFER) task. Early DFER datasets were mainly collected from laboratory environments, such as CK+ (Lucey et al., 2010), MMI (Valstar et al., 2010), Oulu-CASIA (Zhao et al., 2011). Since 2013, Emotion Recognition in the Wild (EmotiW) competition has been held, researchers have begun to shift their focus from laboratory-controlled environments to more realistic and complex wild scenarios. Some works (Sümer et al., 2021; Delgado et al., 2021; Mehta et al., 2022) focus on predicting student engagement, while others focus on mental health issues (Yoon et al., 2022; Amiriparian et al., 2022; Liu et al., 2022a). Moreover, there are also several studies proposing new datasets or methods for facial expression recognition of characters in movies and TV shows (Jiang et al., 2020; Zhao and Liu, 2021; Toisoul et al., 2021; Wang et al., 2022b; Liu et al., 2022b). ## 5 Conclusion In this paper, we proposed a two-stage framework named **Facial** expression-aware Multimodal Multi-Task learning (FacialMMT) for the MERMC task. FacialMMT first extracts the real speaker's face sequence from the video, and then leverages an auxiliary frame-level facial expression recognition task to obtain the emotion-aware visual representation through multi-task learning, followed by multimodal fusion for the MERMC task. Experiments on the MELD dataset show the effectiveness of FacialMMT. ## Limitations Our work has the following limitations. First, our proposed FacialMMT approach is a two-stage framework that is not fully end-to-end. We plan to propose an end-to-end framework in the future, which integrates face sequence extraction and multimodal emotion recognition in a joint learning manner. Second, this work primarily focuses on the visual modality, and has not yet delved into other aspects of the MERMC task. Therefore, in the future, we plan to leverage the extracted face sequences to explore better cross-modal alignment and multimodal fusion mechanisms to improve the performance of the MERMC task. ## Ethics Statement We would like to thank Poria et al. (2019) and Kollias and Zafeiriou (2019) for their valuable work in constructing and sharing the MELD and AffWild2 datasets. MELD is licensed under the GNU General Public License v3.02. For the Aff-Wild2, we have signed an End User License Agreement3. Since MELD is built on the sitcom Friends, we manually annotate 20 different face images occurring in the sitcom for each of the six leading roles without accessing to any personal information. We do not share personal information and do not release sensitive content that can be harmful to any individual or community. If applying our framework to real-world scenarios in the future, it could potentially involve some ethical issues such as user privacy and ethical biases, as pointed out by Stark and Hoey (2021) and Stark and Hutson (2021). While the ethical issues faced in emotion recognition are common, we will engage with the concerns raised about emotion recognition in the references and strictly comply with relevant regulations and ethical standards. Specifically, our work is based on publicly available datasets, and if we construct new MERMC datasets in the future, we will carefully consider user privacy issues, anonymize or obfuscate facial data, and ensure that the framework is only used in contexts where explicit consent for facial data processing has been obtained. Moreover, we will refer to the recommendations in Stark and Hoey (2021) and develop a comprehensive ethical framework that guides our research process. We are also committed to being transparent about our research methods, data sources, and potential limitations. Regarding potential biases, we plan to evaluate our framework on more diverse datasets in the future, and propose appropriate solutions to alleviate the bias issues. ## Acknowledgements The authors would like to thank the anonymous reviewers for their insightful comments. This work was supported by the Natural Science Foundation of China (62076133 and 62006117), and the Natural Science Foundation of Jiangsu Province for Young Scholars (BK20200463) and Distinguished Young Scholars (BK20200018). ## References Shahin Amiriparian, Lukas Christ, Andreas König, EvaMaria Meßner, Alan Cowen, Erik Cambria, and Björn W Schuller. 2022. Muse 2022 challenge: Multimodal humour, emotional reactions, and stress. In Proceedings of ACM MM. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Proceedings of NeurIPS. Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency. 2016. Openface: an open source facial behavior analysis toolkit. In *Proceedings of WACV*. Feiyu Chen, Zhengxiao Sun, Deqiang Ouyang, Xueliang Liu, and Jie Shao. 2021. Learning what and when to drop: Adaptive multimodal and contextual dynamics for emotion recognition in conversation. In Proceedings of ACM MM. Wenliang Dai, Samuel Cahyawijaya, Zihan Liu, and Pascale Fung. 2021. Multimodal end-to-end sparse model for emotion recognition. In *Proceedings of* NAACL-HLT. Kevin Delgado, Juan Manuel Origgi, Tania Hasanpoor, Hao Yu, Danielle Allessio, Ivon Arroyo, William Lee, Margrit Betke, Beverly Woolf, and Sarah Adel Bargal. 2021. Student engagement dataset. In Proceedings of ICCV. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL*. Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao. 2016. Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In Proceedings of ECCV. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *Proceedings of CVPR*. Dou Hu, Xiaolong Hou, Lingwei Wei, Lianxin Jiang, and Yang Mo. 2022a. Mm-dfn: Multimodal dynamic fusion network for emotion recognition in conversations. In *Proceedings of ICASSP*. Guimin Hu, Ting-En Lin, Yi Zhao, Guangming Lu, Yuchuan Wu, and Yongbin Li. 2022b. Unimse: Towards unified multimodal sentiment analysis and emotion recognition. In *Proceedings of EMNLP*. Jingwen Hu, Yuchen Liu, Jinming Zhao, and Qin Jin. 2021. Mmgcn: Multimodal fusion via deep graph convolution network for emotion recognition in conversation. In *Proceedings of ACL*. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparametrization with gumble-softmax. In Proceedings of ICLR. Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 2010. 3d convolutional neural networks for human action recognition. *Proceedings of ICML*. Xingxun Jiang, Yuan Zong, Wenming Zheng, Chuangao Tang, Wanchuang Xia, Cheng Lu, and Jiateng Liu. 2020. Dfew: A large-scale database for recognizing dynamic facial expressions in the wild. In Proceedings of ACM MM. Xiao Jin, Jianfei Yu, Zixiang Ding, Rui Xia, Xiangsheng Zhou, and Yaofeng Tu. 2020. Hierarchical multimodal transformer with localness and speaker aware attention for emotion recognition in conversations. In *Proceedings of NLPCC*. Abhinav Joshi, Ashwani Bhat, Ayush Jain, Atin Vikram Singh, and Ashutosh Modi. 2022. Cogmen: Contextualized gnn based multimodal emotion recognition. In *Proceedings of NAACL-HLT*. Dimitrios Kollias. 2022. Abaw: Valence-arousal estimation, expression recognition, action unit detection & multi-task learning challenges. In Proceedings of CVPR. Dimitrios Kollias and Stefanos Zafeiriou. 2019. Expression, affect, action unit recognition: Aff-wild2, multi-task learning and arcface. *arXiv preprint* arXiv:1910.04855. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of ACL*. Jiang Li, Xiaoping Wang, Guoqing Lv, and Zhigang Zeng. 2023. Ga2mif: Graph and attention based two-stage multi-source information fusion for conversational emotion detection. IEEE Trans. Affect. Comput. Jingye Li, Donghong Ji, Fei Li, Meishan Zhang, and Yijiang Liu. 2020. Hitrans: A transformer-based context-and speaker-sensitive model for emotion detection in conversations. In *Proceedings of COLING*. Shan Li and Weihong Deng. 2020. Deep facial expression recognition: A survey. *IEEE Trans. Affect.* Comput. Shimin Li, Hang Yan, and Xipeng Qiu. 2022a. Contrast and generation make bart a good dialogue emotion recognizer. In *Proceedings of AAAI*. Zaijing Li, Fengxiao Tang, Ming Zhao, and Yusen Zhu. 2022b. Emocaps: Emotion capsule based model for conversational emotion recognition. In Proceedings of ACL (Findings). Jingjun Liang, Ruichen Li, and Qin Jin. 2020. Semisupervised multi-modal emotion recognition with cross-modal distribution matching. In *Proceedings* of ACM MM. Yunlong Liang, Fandong Meng, Ying Zhang, Yufeng Chen, Jinan Xu, and Jie Zhou. 2021. Infusing multisource knowledge with heterogeneous graph neural network for emotional conversation generation. In Proceedings of AAAI. Feng Liu, Han-Yang Wang, Si-Yuan Shen, Xun Jia, Jing-Yi Hu, Jia-Hao Zhang, Xi-Yi Wang, Ying Lei, Ai-Min Zhou, Jia-Yin Qi, et al. 2022a. Opo-fcm: A computational affection based occ-pad-ocean federation cognitive modeling approach. IEEE Trans. Comput. Soc. Syst. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yuanyuan Liu, Wei Dai, Chuanxu Feng, Wenbin Wang, Guanghao Yin, Jiabei Zeng, and Shiguang Shan. 2022b. Mafw: A large-scale, multi-modal, compound affective database for dynamic facial expression recognition in the wild. In Proceedings of ACM MM. Yuchen Liu, Jinming Zhao, Jingwen Hu, Ruichen Li, and Qin Jin. 2022c. Dialogueein: Emotion interaction network for dialogue affective analysis. In Proceedings of COLING. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In *Proceedings of ICCV*. Patrick Lucey, Jeffrey F Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, and Iain Matthews. 2010. The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression. In *Proceedings of CVPR*. Fengmao Lv, Xiang Chen, Yanyong Huang, Lixin Duan, and Guosheng Lin. 2021. Progressive modality reinforcement for human multimodal emotion recognition from unaligned multimodal sequences. In *Proceedings of CVPR*. Sijie Mai, Haifeng Hu, and Songlong Xing. 2019. Divide, conquer and combine: Hierarchical feature fusion network with local and global perspectives for multimodal affective computing. In *Proceedings of* ACL. Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. Dialoguernn: An attentive rnn for emotion detection in conversations. In Proceedings of AAAI. Yuzhao Mao, Guang Liu, Xiaojie Wang, Weiguo Gao, and Xuan Li. 2021. Dialoguetrm: Exploring multimodal emotional dynamics in a conversation. In Proceedings of EMNLP (Findings). Naval Kishore Mehta, Shyam Sunder Prasad, Sumeet Saurav, Ravi Saini, and Sanjay Singh. 2022. Threedimensional densenet self-attention neural network for automatic detection of student's engagement. Appl. Intell. Donovan Ong, Jian Su, Bin Chen, Anh Tuan Luu, Ashok Narendranath, Yue Li, Shuqi Sun, Yingzhan Lin, and Haifeng Wang. 2022. Is discourse role important for emotion recognition in conversation? In *Proceedings* of AAAI. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In Proceedings of ICASSP. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. Meld: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of ACL. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.* Martin Rosvall and Carl T Bergstrom. 2008. Maps of random walks on complex networks reveal community structure. Proceedings of the national academy of sciences. Andrey V Savchenko. 2022. Video-based frame-level facial analysis of affective behavior on mobile devices using efficientnets. In *Proceedings of CVPR*. Weizhou Shen, Siyue Wu, Yunyi Yang, and Xiaojun Quan. 2021. Directed acyclic graph network for conversational emotion recognition. In *Proceedings* of ACL. Luke Stark and Jesse Hoey. 2021. The ethics of emotion in artificial intelligence systems. In *Proceedings of* ACM FAccT. Luke Stark and Jevan Hutson. 2021. Physiognomic artificial intelligence. Fordham Intell. Prop. Media & Ent. LJ. Ömer Sümer, Patricia Goldberg, Sidney D'Mello, Peter Gerjets, Ulrich Trautwein, and Enkelejda Kasneci. 2021. Multimodal engagement analysis from facial videos in the classroom. *IEEE Trans. Affect. Comput.* Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. 2017. Inception-v4, inception-resnet and the impact of residual connections on learning. In *Proceedings of AAAI*. Ruijie Tao, Zexu Pan, Rohan Kumar Das, Xinyuan Qian, Mike Zheng Shou, and Haizhou Li. 2021. Is someone speaking? exploring long-term temporal features for audio-visual active speaker detection. In *Proceedings* of ACM MM. Antoine Toisoul, Jean Kossaifi, Adrian Bulat, Georgios Tzimiropoulos, and Maja Pantic. 2021. Estimation of continuous valence and arousal levels from faces in naturalistic conditions. *Nat. Mach. Intell.* Jinming Zhao, Tenggan Zhang, Jingwen Hu, Yuchen Liu, Qin Jin, Xinchao Wang, and Haizhou Li. 2022b. M3ed: Multi-modal multi-scene multi-label emotional dialogue database. In *Proceedings of ACL*. Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. 2015. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of ICCV. Zengqun Zhao and Qingshan Liu. 2021. Former-dfer: Dynamic facial expression recognition transformer. In *Proceedings of ACM MM*. Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In *Proceedings of ACL*. Michel Valstar, Maja Pantic, et al. 2010. Induced disgust, happiness and surprise: an addition to the mmi facial expression database. In *Proceedings of LREC*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Proceedings of NeurIPS*. Fanfan Wang, Zixiang Ding, Rui Xia, Zhaoyu Li, and Jianfei Yu. 2022a. Multimodal emotion-cause pair extraction in conversations. *IEEE Trans. Affect. Comput.* Yan Wang, Yixuan Sun, Yiwen Huang, Zhongying Liu, Shuyong Gao, Wei Zhang, Weifeng Ge, and Wenqiang Zhang. 2022b. Ferv39k: A large-scale multiscene dataset for facial expression recognition in videos. In *Proceedings of CVPR*. Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z Li. 2014. Learning face representation from scratch. arXiv preprint arXiv:1411.7923. Jeewoo Yoon, Chaewon Kang, Seungbae Kim, and Jinyoung Han. 2022. D-vlog: Multimodal vlog dataset for depression detection. In *Proceedings of AAAI*. Dong Zhang, Liangqing Wu, Changlong Sun, Shoushan Li, Qiaoming Zhu, and Guodong Zhou. 2019. Modeling both context-and speaker-sensitive dependence for emotion detection in multi-speaker conversations. In *Proceedings of IJCAI*. Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process. Lett. Guoying Zhao, Xiaohua Huang, Matti Taini, Stan Z Li, and Matti PietikäInen. 2011. Facial expression recognition from near-infrared videos. *Image Vis.* Comput. Jinming Zhao, Ruichen Li, Qin Jin, Xinchao Wang, and Haizhou Li. 2022a. Memobert: Pre-training model with prompt-based learning for multimodal emotion recognition. In *ICASSP*. ## A Appendix A.1 Multimodal Rules ShiHao Zou, Xianying Huang, XuDong Shen, and Hankai Liu. 2022. Improving multimodal fusion with main modal transformer for emotion recognition in conversation. *Knowl. Based Syst.* We design several multimodal rules to obtain possible speakers' face sequences from a video. The detailed steps are as follows: 1) using the FFmpeg 4tool to sample frame-level images from a video; 2) using the OpenFace 5library to detect all the people in the frame-level images, and obtain different FaceID, confidence, 68 feature landmarks, and aligned facial images; 3) using the FFmpeg tool to extract the audio from the video; 4) determining the number of possible speakers in the current video based on three rules as follows: - "Mouth open-close" count. For different FaceID candidates, count their mouth open and close times respectively. If the sum of the distance between the upper and lower lips of a person is greater than a certain threshold at a certain time, we will record that the mouth of this FaceID is open at the current time. - Mouth movement. Count which person's mouth moves the most during the time period by considering the movement of the lips between two consecutive frames of facial images, the difference in width between the inner corners of the mouth between these two frames, and the difference in height between the upper and lower inner lips between these two frames. - Voice Activity Detection algorithm. Following Zhao et al. (2022a), we identify which frames in the current video have sound by considering whether the visual movement of the lips matches the audio signal. The better the matching is, the higher the probability that it is the speaker. 4https://ffmpeg.org/ 5https://github.com/TadasBaltrusaitis/OpenFace ## A.2 Pseudo-Code Of Mario We provide the pseudocode for training the proposed MARIO model, where θSwin, θT , θself−*attn*, and θCMT represent the parameters of SwinTransformer, the text encoder, self-attention Transformer, and Cross-Modal Transformer, respectively. ## Algorithm 1: Multitask Training Procedure ![12_Image_0.Png](12_Image_0.Png) Of Mario Input: DFER dataset; MERMC dataset. Output: θSwin; θT ; θself−attn; θCMT . 1 **repeat** 2 for *all batches in the DFER dataset* do 3 Forward face sequences through Swin-Transformer ; 4 Compute loss L DF ER ; 5 Finetune θ*Swin* using ∇L*DF ER* 6 for *all batches in the MERMC dataset* do 7 Forward text through text encoder ; 8 Forward face sequences through Swin-Transformer ; 9 Obtain facial expression-aware visual representation ; 10 Audio and vision are sent to self-attention Transformer layer respectively ; 11 Conduct cross-modal fusion of text and audio ; 12 Conduct cross-modal fusion of text-audio and vision ; 13 Compute loss L*MERMC* ; 14 Update θself−*attn* and θCMT and finetune θ*Swin* and θT using ∇L*MERMC* ; 15 **until** *epoch reaches its maximum*; ![12_image_1.png](12_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations ✓ A2. Did you discuss any potential risks of your work? Section Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section Abstract and Section Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We use several pre-training language models, which are referenced and briefly introduced in Section Method ✓ B1. Did you cite the creators of artifacts you used? Section Method ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section Ethics Statement ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? For the scientific artifacts we use, we use them as expected; The scientific artifact we have created is described in the Section Introduction and its intended use. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section Ethics Statement ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We analyze our FacialMMT framework in Section Experiments and Analysis. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We show the details of the datasets we used in section Experiments and Analysis. ## C ✓ **Did You Run Computational Experiments?** Section Experiments And Analysis. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We describe them in the experimental setting of Section 3. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? We describe them in the experimental setting of Section 3. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We describe them in Section 3. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? we report them in Section 3. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section Method ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We ourselves annotated 20 different face images for each of the six leading occurring in the sitcom ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? We ourselves annotated 20 different face images for each of the six leading occurring in the sitcom ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? The data we use is publicly available ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? We use publicly available datasets and have added a response to ethics review in the camare-ready submission and look forward to it being approved. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? We ourselves annotated data
li-etal-2023-teast
{T}e{AST}: Temporal Knowledge Graph Embedding via Archimedean Spiral Timeline
https://aclanthology.org/2023.acl-long.862
Temporal knowledge graph embedding (TKGE) models are commonly utilized to infer the missing facts and facilitate reasoning and decision-making in temporal knowledge graph based systems. However, existing methods fuse temporal information into entities, potentially leading to the evolution of entity information and limiting the link prediction performance of TKG. Meanwhile, current TKGE models often lack the ability to simultaneously model important relation patterns and provide interpretability, which hinders their effectiveness and potential applications. To address these limitations, we propose a novel TKGE model which encodes \textbf{T}emporal knowledge graph \textbf{e}mbeddings via \textbf{A}rchimedean \textbf{S}piral \textbf{T}imeline (TeAST), which maps relations onto the corresponding Archimedean spiral timeline and transforms the quadruples completion to 3th-order tensor completion problem. Specifically, the Archimedean spiral timeline ensures that relations that occur simultaneously are placed on the same timeline, and all relations evolve over time. Meanwhile, we present a novel temporal spiral regularizer to make the spiral timeline orderly. In addition, we provide mathematical proofs to demonstrate the ability of TeAST to encode various relation patterns. Experimental results show that our proposed model significantly outperforms existing TKGE methods. Our code is available at \url{https://github.com/IMU-MachineLearningSXD/TeAST}.
## Teast: Temporal Knowledge Graph Embedding Via Archimedean Spiral Timeline Jiang Li1,2, Xiangdong Su1,2 ∗**, Guanglai Gao**1,2 1 College of Computer Science, Inner Mongolia University, Hohhot, China 2 National & Local Joint Engineering Research Center of Intelligent Information Processing Technology for Mongolian, Hohhot, China lijiangimu@gmail.com, cssxd@imu.edu.cn, csggl@imu.edu.cn ## Abstract ![0_Image_0.Png](0_Image_0.Png) Temporal knowledge graph embedding (TKGE) models are commonly utilized to infer the missing facts and facilitate reasoning and decision-making in temporal knowledge graph based systems. However, existing methods fuse temporal information into entities, potentially leading to the evolution of entity information and limiting the link prediction performance of TKG. Meanwhile, current TKGE models often lack the ability to simultaneously model important relation patterns and provide interpretability, which hinders their effectiveness and potential applications. To address these limitations, we propose a novel TKGE model which encodes Temporal knowledge graph embeddings via Archimedean Spiral Timeline (TeAST), which maps relations onto the corresponding Archimedean spiral timeline and transforms the quadruples completion to 3th-order tensor completion problem. Specifically, the Archimedean spiral timeline ensures that relations that occur simultaneously are placed on the same timeline, and all relations evolve over time. Meanwhile, we present a novel temporal spiral regularizer to make the spiral timeline orderly. In addition, we provide mathematical proofs to demonstrate the ability of TeAST to encode various relation patterns. Experimental results show that our proposed model significantly outperforms existing TKGE methods. Our code is available at https://github.com/ IMU-MachineLearningSXD/TeAST. ## 1 Introduction Knowledge graph (KG) expresses the relations of real-world entities and allows for reasoning new facts, which enables a wide range of applications in natural language processing (Chen et al., 2019; Junior et al., 2020; Hu et al., 2021). It stores a vast amount of knowledge in the form of triplets. These triplets are typically denoted as (*s, r, o*), ∗Corresponding Author where s, r and o represent the subject, the relation, and the object. Since knowledge changes over time, researchers introduced timestamps into knowledge graphs to create temporal knowledge graphs (TKGs). In TKGs, each knowledge fact is represented as a quadruple (*s, r, o, τ* ), where τ denotes the timestamp at which the fact was true. This allows for more precise representation and querying of information in knowledge graphs, enabling applications that require an understanding of the evolution of knowledge over time. Given the inherent incompleteness of most KGs and TKGs, knowledge graph embedding (KGE) and temporal knowledge graph embedding (TKGE) have been widely investigated to infer the missing facts using the existing ones. In particular, TKGE has gained significant attention for its ability to represent and analyze knowledge over time. This work focuses on TKGE. With the advancement of deep learning, researchers have proposed a number of KGE approaches. These approaches typically involve learning low-dimensional embeddings of entities and relations, and then using a score function to measure 15460 the plausibility of triplets (Ji et al., 2022). While existing KGE approaches have been shown to be effective on static knowledge graphs, they cannot be directly applied to TKGs due to the fact that real-world knowledge is dynamic and changes over time. To address this issue, researchers have designed TKGE models that are capable of capturing the temporal information and dynamic nature of real-world facts. Recent TKGE models (Lacroix et al., 2020; Xu et al., 2020a, 2021; Chen et al., 2022) have shown very impressive completion performance on TKGs. Nevertheless, there are two problems with these TKGE models. Firstly, the fusion of temporal information into entities led to a potential evolution of entity information, thus limiting the link prediction performance on TKG. In fact, the meaning of entities in quadruples does not change over time, whereas the relations between connected entities do. Secondly, existing TKGE models are not capable of simultaneously encoding important relation patterns and providing interpretability, which hinders their effectiveness and potential applications. To tackle these issues, we draw inspiration from the Archimedean spiral and design Temporal knowledge graph embeddings via Archimedean Spiral Timeline (TeAST). Specifically, we first map relations onto the corresponding Archimedean spiral timeline and form a unified representation for the timestamp and the relation. As shown in Figure 1, we expect relations at the same time to be on the same timeline and relations evolve over time. That is, we simplify the quadruples (*s, r, o, τ* ) to a triplet (s, r } *τ, o*), where } denotes Archimedean spiral operation. As a result, we transform the TKG embedding as 3th-order tensor completion problem in the complex space. Next, we optimize the graph embeddings through tensor factorization. In addition, we propose a new temporal spiral regularizer to constrain the time representation and make the spiral timeline orderly. We further provide mathematical proofs to demonstrate the ability of TeAST to encode various relation patterns. Experiments show that our method significantly outperforms the existing methods on TKGE benchmarks. Different from the existing TKGE models, we map relations onto the Archimedean spiral timeline and avoid incorporating temporal information into the entities. It ensures that the relations can evolve over time and the entities remain unchanged in TKGs. This is consistent with real-world facts. ## 2 Related Work 2.1 Static Knowledge Graph Embedding Motivated by the translation invariance principle in word2vec (Mikolov et al., 2013), TransE defines the distance between es + er and eo with the l1 or l2 norm constraint, where es, eo denote entity embedding vectors and er denote relation embedding vectors. The score function of TransE is defined as φ(*s, r, o*) = ||es + er − eo||p. Following TransE, TransH (Wang et al., 2014), TransR (Lin et al., 2015) and TransD (Ji et al., 2015) employ different projection strategies to adjust graph embeddings. Different from the above distance based models, RESCAL (Nickel et al., 2011), DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016) and SimplE (Kazemi and Poole, 2018) employ tensor factorization based to model knowledge graphs, in which each relation r is mapped into a latent semantic matrix Mr. In addition, RotatE (Sun et al., 2019) and QuatE (Zhang et al., 2019) treat each relation as a rotation in complex space and in the quaternion space, respectively. ## 2.2 Temporal Knowledge Graph Embedding Analogously to KGE models, TKGE models add the temporal information and calculates the score function for the quadruples to evaluate its reasonableness. Therefore, most TKGE models are based on existing KGE models. TTransE (Leblay and Chekol, 2018) extends TransE and encodes time stamps τ as translations same as relations. Hence, the score function of TTransE is denoted as φ(*s, r, o, τ* ) = ||es + er + eτ − eo||p. Furthermore, TA-TransE (García-Durán et al., 2018) and TA-DistMult (García-Durán et al., 2018) encode timestamps based on TransE and DistMult, respectively. TComplEx (Lacroix et al., 2020) and TNTComplEx (Lacroix et al., 2020) build on ComplEx and perform a 4th-order tensor decomposition of a TKG. DE-SimplE (Goel et al., 2020) adds a diachronic entity (DE) embedding function to learn the temporal entities. ChronoR (Sadeghian et al., 2021) is based on RotatE and learns a kdimensional rotation transformation parametrized by relation-time pairs. Next, each subject entity is transformed with the rotation. TeLM (Xu et al., 2021) performs more expressive multivector representations to encode a temporal KG and utilizes the asymmetric geometric product. In addition, RotateQVS (Chen et al., 2022) builds on QuatE and encodes both entities and relations as quaternion em- ![2_image_0.png](2_image_0.png) beddings, in which the temporal entity embeddings are represented as rotations in the quaternion space. Recently, BoxTE (Messner et al., 2022) models the TKGE based on a box embedding model BoxE (Abboud et al., 2020). ## 3 Background And Notation 3.1 Archimedean Spiral As mentioned, we expect the relations with the same timestamp to be on the same timeline and all relations evolve over time. We choose the Archimedean spiral to model TKGs in the proposed method. Through the angle of rotation around the origin, Archimedean spiral provides the possibility of distinguishing the relations on the same timeline. In mathematics, Archimedean spiral (also known as the arithmetic spiral) was named in honor of the Greek mathematician Archimedes. As shown in Figure 2, it is the locus corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line that rotates with constant angular velocity. Equivalently, in polar coordinates (*ξ, θ*) it can be described by the equation: $$\xi=a+b\cdot\theta,$$ ξ = a + b · θ, (1) where a controls the distance from the starting point of the spiral to the origin, b controls the distance between loops, and θ is the angle of rotation of the spiral. The distance between each loop is 2πb. ## 3.2 Relation Patterns Let E denote the set of entities, R denote the set of relations, and T denote the set of the timestamp. Given a temporal knowledge graph G, it can be defined as a collection of quadruples (*s, r, o, τ* ), where s ∈ E, r ∈ R, o ∈ E and τ ∈ T denote the subject entity, relation, object entity and timestamp, respectively. As previous studies (Sun et al., 2019; Chen et al., 2022) highlighted, TKGE has focused on several key relations patterns, including: Definition 1. A relation r is symmetric, if ∀*s, o, τ* , r(s, o, τ ) ∧ r(o, s, τ ) *holds True.* Definition 2. A relation r is asymmetric, if ∀*s, o, τ* , r(s, o, τ ) ∧ ¬r(o, s, τ ) *holds True.* Definition 3. Relation r1 is the inverse of r2*, if* ∀s, o, τ , r1(s, o, τ ) ∧ r2(o, s, τ ) *holds True.* Definition 4. Relation r1 and r2 are evolving over time from timestamp τ1 to timestamp τ2, if ∀*s, o, τ* , r1(s, o, τ1) ∧ r2(s, o, τ2) *holds True.* ## 4 Methodology 4.1 Teast Model In this section, we introduce the novel TeAST model, which represents the relations on Archimedean spiral timelines. Since many previous works (Trouillon et al., 2016; Sun et al., 2019; Lacroix et al., 2020; Xu et al., 2020a) have demonstrated that encoding knowledge graphs in complex space can better capture potential links between entities, we also model TKGs in the complex space. For a quadruple (*s, r, o, τ* ), we also use es, er, eo and eτ to denote the subject embedding, relation embedding, object embedding and timestamp embedding respectively in the complex space. We have $$\begin{array}{l}{{\mathbf{e}_{s}=R e(s)+i I m(s),\mathbf{e}_{r}=R e(r)+i I m(r),}}\\ {{\mathbf{e}_{o}=R e(o)+i I m(o),\mathbf{e}_{\tau}=R e(\tau)+i I m(\tau),}}\\ {{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(2)}}\end{array}$$ $$(1)$$ where es, er, eo, eτ ∈ C k, and Re(∗) is the real vector component and Im(∗) is an imaginary vector component. We first map relations onto the corresponding Archimedean spiral timeline. Specifically, we regard each relation as different the angle of rotation θ in Eq. 1, and regard each timestamp as distance control parameter b in Eq. 1. Therefore, the range of embedding values for each relation is er ∈ (0, 2π). To prevent crossover between spirals, we set the starting point of all spirals to the origin. That is, we set a = 0 for TeAST in Eq. 1. On this basis, we map all relations to the matching spiral timeline, denoted as: $$\mathbf{\xi}_{(\tau,r)}=\mathbf{e}_{\tau}\circ\mathbf{e}_{r},$$ ξ(τ,r) = eτ ◦ er, (3) where ◦ denotes the Hadamard product. Since TeAST is modeled in complex space, we employ the Hadamard product to do spiral timeline mapping for the relations accordingly. Further, we have $$\begin{array}{c}{{\begin{array}{l}{{\xi_{(\tau,r)}=R e(\tau)R e(r)-I m(\tau)I m(r)}}\\ {{\qquad+i R e(\tau)I m(r)+i I m(\tau)R e(r),}}\end{array}}}\end{array}\tag{4}$$ where Re(r) ∈ (0, 2π) and Im(r) ∈ (0, 2π). All relation embeddings are all constrained between 0 and 2π. This ensures that the relations can be effectively mapped to the corresponding spiral timelines. Following previous tensor factorization models (Trouillon et al., 2016; Lacroix et al., 2020), the score function of TeAST is denoted as: $$\phi(s,r,o,\tau)=R e(<\mathbf{e}_{s},\mathbf{\xi}_{(\tau,r)},\bar{\mathbf{e}}_{o}>).\quad(5)$$ Then, we optimize the graph embeddings through the score function. Furthermore, since Archimedean spiral is based on the polar coordinate system, we can regard ξ(τ,r) as a modulus part. During the model training process, we note that there are inevitably equal modulus cases on different spiral timelines, leading to confusion between semantic relations. Therefore, we employ timestamp phase information e0τ = Re(τ0) + iIm(τ0) to avoid the bad cases, where Re(τ0)*, Im*(τ0) ∈ R k 2 . Additionally, we use absolute values to constrain the temporal phase information to be isotropic over time. This is done to enforce consistency and avoid any directional bias. As phases have periodic characteristics, we employ a sine function to measure the timestamp phase embeddings similar to HAKE (Zhang et al., 2020). Combining the modulus part and the phase part, we get $$\begin{array}{c}{{\xi_{(\tau,r)}^{\prime}=(R e(\tau)R e(r)+\sin(R e(\tau^{\prime}))}}\\ {{\qquad-(I m(\tau)I m(r)+\sin(I m(\tau^{\prime}))}}\\ {{\qquad+i(R e(\tau)I m(r)+\sin(R e(\tau^{\prime}))}}\\ {{\qquad+i(I m(\tau)R e(r)+\sin(I m(\tau^{\prime})).}}\end{array}$$ $$\quad(6)$$ $$({\mathfrak{I}})$$ It is worth noting that the number of parameters of TeAST increases linearly with embedding dimension k. Hence, the space complexity of TeAST model is O(k), similar to TNTComplEx (Lacroix et al., 2020). In addition, we calculate the score function of TeAST with Hadamard product between k-dimensional complex vector embeddings as TNTComplEx. The time complexity of TeAST and TNTComplEx equals to O(k). ## 4.2 Loss Function Following TNTComplEx (Lacroix et al., 2020) and TeLM (Xu et al., 2021), we use reciprocal learning to simplify the training process, and the loss function is defined as follows: $$\mathcal{L}_{\mu}=-\log(\frac{\exp(\phi(s,r,o,\tau))}{\sum_{s^{\prime}\in\mathcal{E}}\exp(\phi(s^{\prime},r,o,\tau))})$$ $$-\log(\frac{\exp(\phi(o,r^{-1},s,\tau))}{\sum_{o^{\prime}\in\mathcal{E}}\exp(\phi(o^{\prime},r^{-1},s,\tau))})\tag{8}$$ $$+\lambda_{\mu}\sum_{i=1}^{k}(||\mathbf{e}_{s}||_{3}^{3}+||\mathbf{\xi}_{(\tau,r)}^{\prime}||_{3}^{3}+||\mathbf{e}_{o}||_{3}^{3}),$$ where λµ denotes N3 regularization weight and r−1 is the inverse relation. According to several studies, N3 regularization improves the performance of the KGE models (Lacroix et al., 2018; Xu et al., 2020b) and TKGE models (Lacroix et al., 2020; Xu et al., 2021) based on tensor factorization. ## 4.3 Temporal Regularization The temporal regularization can constrain the temporal embedding information and thus better model TKGs. TNTComplEx (Lacroix et al., 2020) expects neighboring timestamps to have close representations. Hence, the smoothing temporal regularizer is defined as: $$\Lambda^{3}=\frac{1}{N_{\tau}-1}\sum_{i=1}^{N_{\tau}-1}\|\mathbf{e}_{\tau(i+1)}-\mathbf{e}_{\tau(i)}\|_{3}^{3},\tag{9}$$ where $N_{\tau}$ is the number of time steps. Recently, TeLM (Xu et al., 2021) introduces the linear temporal regularizer by adding a bias component between the neighboring temporal embeddings, which can be defined as: $$+t(Im(\tau)Re(\tau)+\sin(Im(\tau)).$$ We improved score function of TAST is given $$\Omega^{3}=\frac{1}{N_{\tau}-1}\sum_{i=1}^{N_{\tau}-1}\|e_{\tau(i+1)}-e_{\tau(i)}-e_{b}\|_{3}^{3},$$ $$\phi(s,r,o,\tau)=Re(<e_{s},\boldsymbol{\xi}_{(\tau,r)}^{t},\bar{e}_{o}>).\tag{7}$$ $$\Omega^{3}=\frac{1}{N_{\tau}-1}\sum_{i=1}^{N_{\tau}-1}\|e_{\tau(i+1)}-e_{\tau(i)}-e_{b}\|_{3}^{3},\tag{10}$$ by where eb denotes the randomly initialized biased embedding, which is then learned from the training process. In this work, we employ the Archimedean spiral to model TKGs. The previous temporal regularization methods expect the adjacent timestamps to be close to each other. For our model TeAST, this leads to the spiral timeline overlapping scenarios. To avoid these bad scenarios, we develop a novel temporal spiral regularizer by adding the phase timestamp embedding e0τ to the smoothing temporal regularizer. The temporal regularization function is defined as: $$\begin{split}{\mathcal{L}_{\tau}}^{3}=\frac{1}{N_{\tau}-1}\sum_{i=1}^{N_{\tau}-1}\|(\mathbf{e}_{\tau(i+1)}-\mathbf{e}_{\tau(i)})\\ +(\mathbf{e}_{\tau(i+1)}^{\prime}-\mathbf{e}_{\tau(i)}^{\prime})\|_{3}^{3}.\end{split}\tag{11}$$ The total loss function of TeAST is defined as: $${\mathcal{L}}={\mathcal{L}}_{\mu}+\lambda_{\tau}{\mathcal{L}}_{\tau}{}^{3},\qquad\qquad(12)$$ $\mathbf{a}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{b}$. ere $\lambda_\tau$ is. where λτ is the weight of the temporal regularizer. ## 4.4 Modeling Various Relation Patterns TeAST can model important relation patterns, including symmetric, asymmetric, inverse and temporal evolution patterns. We list all the propositions here and provide the proofs in Appendix. Proposition 1. *TeAST can model the symmetric* relation pattern. (See proof in Appendix A) Proposition 2. *TeAST can model the asymmetric* relation pattern. (See proof in Appendix B) Proposition 3. *TeAST can model the inverse relation pattern. (See proof in Appendix* C) Proposition 4. *TeAST can model the temporal evolution pattern. (See proof in Appendix* D) ## 5 Experiments 5.1 Datasets We evaluate TeAST on three TKGE benchmark datasets. **ICEWS14** and **ICEWS05-15** (GarcíaDurán et al., 2018) are both extracted from the Integrated Crisis Early Warning System (ICEWS) dataset (Lautenschlager et al., 2015), which consists of temporal sociopolitical facts starting from 1995. ICEWS14 consists of sociopolitical events in 2014 and ICEWS05-15 involves events occurring from 2005 to 2015. **GDELT** is a subset of the larger *Global Database of Events, Language, and* | ICEWS14 | ICEWS05-15 | GDELT | | |-------------|--------------|----------|-----------| | E | 7,128 | 10,488 | 500 | | R | 230 | 251 | 20 | | T | 365 | 4017 | 366 | | #Train | 72,826 | 386,962 | 2,735,685 | | #Vaild | 8,963 | 46,092 | 341,961 | | #Test | 8,941 | 46,275 | 341,961 | | Timespan | 1 year | 11 years | 1 year | | Granularity | Daily | Daily | Daily | Table 1: Statistics of TKGE datasets in the experiment. Tone (GDELT) TKG dataset (Leetaru and Schrodt, 2013). The GDELT contains facts with daily timestamps between April 1, 2015 and March 31, 2016, and only contains 500 most common entities and 20 most frequent relations. It is worth noting that GDELT holds a large number of quadruples (2M) but does not describe enough entities (500). Hence, The GDELT requires a strong temporal inductive capacity. ## 5.2 Evaluation Protocol In this paper, we evaluate our TKGE model using the benchmarks mentioned above. Following the strong baselines (Lacroix et al., 2020; Xu et al., 2021; Chen et al., 2022), the quality of the ranking of each test triplet is evaluated by calculating all possible substitutions of subject entity and object entity: (s0*, r, o, τ* ) and (s, r, o0, τ ), where s0, o0 ∈ E. And then, we sort the score of candidate quadruples under the timewise filtered settings (Lacroix et al., 2020; Xu et al., 2021; Chen et al., 2022). The performance is evaluated using standard evaluation metrics, including Mean Reciprocal Rank (MRR) and Hits@n. Hits@n measures the percentage of correct entities in the top n predictions. Higher values of MRR and Hits@n indicate better performance. Hits ratio with cut-off values n = 1, 3, 10. In this paper, we utilize H@n to denote Hits@n for convenience. ## 5.3 Baselines We compare our model with the state-of-theart TKGE models, including TTransE (Leblay and Chekol, 2018), DE-SimplE (Goel et al., 2020), TA-DistMult (García-Durán et al., 2018), ChronoR (Sadeghian et al., 2021), TComplEx (Lacroix et al., 2020), TNTComplEx (Lacroix et al., 2020), TeLM (Xu et al., 2021), BoxTE (Messner et al., 2022) and RotateQVS (Chen et al., 2022). MRR H@1 H@3 H@10 MRR H@1 H@3 H@10 MRR H@1 H@3 H@10 TTransE 0.255 0.074 - 0.601 0.271 0.084 - 0.616 0.115 0.0 0.160 0.318 DE-SimplE 0.526 0.418 0.592 0.725 0.513 0.392 0.578 0.748 0.230 0.141 0.248 0.403 TA-DistMult 0.477 0.363 - 0.686 0.474 0.346 - 0.728 0.206 0.124 0.219 0.365 ChronoR ♥ 0.625 0.547 0.669 0.773 0.675 0.596 0.723 0.820 - - - - TComplEx ♥ 0.610 0.530 0.660 0.770 0.660 0.590 0.710 0.800 0.340 0.249 0.361 0.498 TNTComplEx ♥ 0.620 0.520 0.660 0.760 0.670 0.590 0.710 0.810 0.349 0.258 0.373 0.502 TeLM 0.625 0.545 0.673 0.774 0.678 0.599 0.728 0.823 0.350 0.261 0.375 0.504 BoxTE ♥ 0.613 0.528 0.664 0.763 0.667 0.582 0.719 0.820 0.352 0.269 0.377 0.511 RotateQVS 0.591 0.507 0.642 0.754 0.633 0.529 0.709 0.813 0.270 0.175 0.293 0.458 TeAST(ours) **0.637 0.560 0.682 0.782 0.683 0.604 0.732 0.829 0.371 0.283 0.401 0.544** ICEWS14 ICEWS05-15 GDELT Table 2: Link prediction results on ICEWS14, ICEWS05-15 and GDELT. All results are taken from the original papers. Results of ♥ are the best results reported in the original papers. They are ChronoR (k=2), TComplEx (x10), TNTComplEx (x10) and BoxTE (k=5), respectively. Dashes: results are not reported in the responding literature. Note that TComplEx and TNTComplEx are also based on tensor factorization TKGE methods in the complex space, and thus we consider TComplEx and TNTComplEx as the main baselines. Furthermore, TeLM performs multivector tensor factorization for a TKG. Hence, TeLM has twice the space complexity of TeAST, TComplEx and TNTComplEx. Among the existing TKGE methods, TeLM obtains SOTA results on ICEWS14 and ICEWS0515 and BoxTE achieves SOTA results on GDELT dataset. ## 5.4 Experimental Setup We implement our proposed model TeAST via pytorch based on TNTComplEx (Lacroix et al., 2020) training framework1. All experiments are trained on a single NVIDIA Tesla V100 with 32GB memory. We use Adagrad (Duchi et al., 2011) optimizer and employ grid search to find the best hyperparameters based on the performance on the validation datasets. The learning rate is set to 0.1 and the embedding dimension k is set to 2000 in all cases. The best models are selected by early stopping on the validation datasets, and the max epoch is 200. The optimal hyperparameters for TeAST are as follows: $$025,\lambda_{\tau}=0.0$$ - **ICEWS14:** λµ = 0.0025, λτ = 0.01 - ICEWS05-15: $\lambda_\mu=0.002,\lambda_\tau=0.1$. - **GDELT:** λµ = 0.003, λτ = 0.003 We report the average results on the test set for five runs. We omit the variance as it is gen-1https://github.com/facebookresearch/tkbc erally low. The training processes of TeAST on ICEWS14, ICEWS05-15 and GDELT cost less than half an hour, less than an hour and five hours, respectively. ## 6 Results And Analysis 6.1 Main Results The link prediction results on ICEWS14, ICEWS05-15 and GDELT are shown in Table 2. We observe that TeAST surpasses all baselines on ICEWS14, ICEWS05-15 and GDELT regarding all metrics. Since TeAST employs the temporal Archimedean spiral to encode relation embeddings, this allows relations that occur at the same moment to be mapped onto the same spiral timeline and all relations evolve over time. It builds a close connection between the relation and timestamp and avoids incorporating temporal information into the entities for TKG. It proves that mapping the relations to Archimedean spiral timeline is an effective way to learn graph embeddings. TeAST can better encode temporal knowledge graphs and captures the latent information between subject entities and object entities. Meanwhile, the temporal spiral regularizer in TeAST avoids spiral timeline overlapping scenarios and further improves the performance. BoxTE (Messner et al., 2022) has shown that GDELT requires a high level of temporal inductive capacity for effective encoding. This is because GDELT exhibits a significant degree of temporal variability, with some facts lasting across multiple consecutive time stamps while others are momentary and sparse. ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) In comparison to the SOTA method BoxTE on GDELT, TeAST achieves superior results on all metrics. ## 6.2 Effect Of Temporal Regularizer We study the effect of temporal regularization on ICEWS14, and compare the performance of TeAST with the previously proposed temporal regularizers, including the smoothing temporal regularizer Λ 3in Eq. 9, the linear temporal regularizer Ω3in Eq. 10 and our proposed temporal spiral regularizer Lτ 3in Eq. 11. We set the temporal regularization weight λτ ∈ {0.0001, 0.001, 0.005, 0.01, 0.1}. Detailed results of the effect of temporal regularization on ICEWS14 are given in Figure 3. The blue line denotes the temporal spiral regularizer. Compared with the previously proposed temporal regularizers, the temporal spiral regularizer improved MRR by 0.8 points, Hits@10 by 0.3 points, and Hits@1 by 1.2 points, respectively. Since the temporal spiral regularizer adds a phase timestamp embedding to avoid the overlap of Archimedean spiral timelines and thus can better discriminate timestamp infor- ## Mation. Furthermore, we utilize t-SNE (Van der Maaten and Hinton, 2008) to visualize the trained timestamp embeddings of TeAST, which with and without the temporal spiral regularizer. The visualization results are shown in Figure 4. We observe that the distribution of adjacent temporal embeddings of TeAST without temporal spiral regularization trained is scattered. There are only a few months that come together, such as January, October and November. In addition, we observe some overlapping scenarios of the learned time embeddings, suggesting that the learned time embedding is not inaccurate. It will further hinder the effectiveness of learning the facts associated with a specific timestamp. On the contrary, using the temporal spiral regularizer in TeAST can learn time embedding information effectively, resulting in orderly time clusters. This demonstrates the effectiveness of the temporal spiral regularizer in improving the ability of the model to accurately capture and retain information about specific timestamps. In addition, ![7_image_0.png](7_image_0.png) | ICEWS14 | ICEWS05-15 | | | | | | | | | | |----------------|------------------|-------|-------|-------|-------|-------|-------|-------|-------|------| | Mapping Entity | Mapping Relation | Phase | MRR | H@1 | H@3 | H@10 | MRR | H@1 | H@3 | H@10 | | ✔ | 0.598 | 0.531 | 0.649 | 0.749 | 0.639 | 0.542 | 0.710 | 0.798 | | | | ✔ | ✔ | 0.611 | 0.542 | 0.658 | 0.752 | 0.651 | 0.556 | 0.727 | 0.800 | | | ✔ | 0.621 | 0.545 | 0.665 | 0.763 | 0.671 | 0.589 | 0.722 | 0.812 | | | | ✔ | ✔ | 0.637 | 0.560 | 0.682 | 0.782 | 0.683 | 0.604 | 0.732 | 0.829 | | we notice a very interesting phenomenon: TeAST also learned deep information about the order between months with the temporal spiral regularizer and the temporal embedding of the same month presented on the same line. The results further suggest a good fit with our initial motivation that each relation should be mapped onto a temporal spiral and the relations with the same timestamp should be on the same timeline. ## 6.3 Analysis On Relation Embeddings As for TeAST, we employ the Archimedean spiral to map relations into the polar coordinate system. Therefore, we map the learned relation embedding of the same time to the corresponding timeline in the polar coordinate system. The results are shown in Figure 5. The mapping algorithm is based on the implementation of Eq. 3. The Figure 5 shows the relation embedding projection for four different times. We can see that the relation embeddings of the same timestamp are fitted as an Archimedean spiral timeline. This is further evidence that TeAST can effectively encode relations onto the corresponding spiral timeline. ## 6.4 Ablation Studies In this part, we conduct ablation studies on mapping entities and mapping relations of TeAST and the phase item. Table 3 shows the results on ICEWS14 and ICEWS05-15 benchmark datasets. The results of the comparison of mapping entities and mapping relations on the spiral timeline indicate that mapping relations on the spiral timeline is more effective than mapping entities on the spiral timeline for TeAST. This is further proof that the design motivation of TeAST is the meanings of the entities in quadruples do not change as time evolves, while the relations between entities change in TKGs. In addition, we also observe that TeAST achieves better link prediction results with phase vectors, because it can well distinguish relations at the same level of semantic hierarchy. It is worth noting that TeAST also obtains better or more competitive results without phase vectors than TComplEx and TNTComplEx on ICEWS14 and ICEWS05-15. The results show that TeAST maps relations on the corresponding Archimedean spiral timelines, which can effectively model temporal knowledge graphs. ## 7 Conclusion This paper proposes a novel and interesting TKGE method TeAST, which maps relations onto the corresponding Archimedean spiral timeline. The experimental results fully illustrate that TeAST can better model TKG than previous methods and learn the relation information over time. We also provide formal mathematical proofs to demonstrate that TeAST can encode the key relation patterns. In addition, the temporal spiral regularizer learns the latent information about the order between months better and improves the link prediction performances. This work will hopefully stimulate further research on TKGE models and provide a novel perspective on the subject. ## Limitations As previously mentioned, TeAST maps relations onto the corresponding Archimedean spiral timeline and transforms the quadruples completion to 3th-order tensor factorization. It is required to store the values and this slightly increase the space requirement and training time in the embedding learning process. Among all the baselines, TComplEx, TNTComplEx and TeLM are all tensor factorization based models. Table 4 compares training time and space requirement between our model and baselines on ICEWS14. TComplEx is the smallest model and takes the minimum training time. Compared with TComplEx, our model is about 4.6% bigger than TComplEx, and takes 21.4% more training time. | ICEWS14 | | | | |-------------|------------|-------------|-------| | Method | #Params(M) | #Train-time | MRR | | TComplEx | 31.81 | 14 min | 0.610 | | TNTComplEx | 32.65 | 16 min | 0.620 | | TeLM | 63.63 | 19 min | 0.625 | | TeAST(ours) | 33.28 | 17 min | 0.637 | Table 4: Comparison with existing TKGE models based on tensor factorisation. All experiments are trained on a single NVIDIA Tesla V100 with 32GB memory. ## Acknowledgement This work was funded by National Natural Science Foundation of China (Grant No. 61762069), Key Technology Research Program of Inner Mongolia Autonomous Region (Grant No. 2021GG0165), Key R&D and Achievement Transformation Program of Inner Mongolia Autonomous Region (Grant No. 2022YFHH0077), The Central Government Fund for Promoting Local Scientific and Technological Development (Grant No. 2022ZY0198), Big Data Lab of Inner Mongolia Discipline Inspection and Supervision Committee (Grant No. 215005206043). ## References Ralph Abboud, ˙Ismail ˙Ilkan Ceylan, Thomas Lukasiewicz, and Tommaso Salvatori. 2020. Boxe: A box embedding model for knowledge base completion. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Kai Chen, Ye Wang, Yitong Li, and Aiping Li. 2022. Rotateqvs: Representing temporal information as rotations in quaternion vector space for temporal knowledge graph completion. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5843– 5857. Association for Computational Linguistics. Yu Chen, Lingfei Wu, and Mohammed J. Zaki. 2019. Bidirectional attentive memory networks for question answering over knowledge bases. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2913–2923. Association for Computational Linguistics. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12:2121–2159. Alberto García-Durán, Sebastijan Dumancic, and Mathias Niepert. 2018. Learning sequence encoders for temporal knowledge graph completion. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4816–4821. Association for Computational Linguistics. Rishab Goel, Seyed Mehran Kazemi, Marcus A. Brubaker, and Pascal Poupart. 2020. Diachronic embedding for temporal knowledge graph completion. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 3988–3995. AAAI Press. Zikun Hu, Yixin Cao, Lifu Huang, and Tat-Seng Chua. 2021. How knowledge graph and attention help? A qualitative analysis into bag-level relation extraction. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4662–4671. Association for Computational Linguistics. Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In *Proceedings of the 53rd* Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 687–696. The Association for Computer Linguistics. Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. 2022. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems, 33(2):494–514. Ademar Crotti Junior, Fabrizio Orlandi, Damien Graux, Murhaf Hossari, Declan O'Sullivan, Christian Hartz, and Christian Dirschl. 2020. Knowledge graph-based legal search over german court cases. In The Semantic Web: ESWC 2020 Satellite Events - ESWC 2020 Satellite Events, Heraklion, Crete, Greece, May 31 - June 4, 2020, Revised Selected Papers, volume 12124 of *Lecture Notes in Computer Science*, pages 293– 297. Springer. Seyed Mehran Kazemi and David Poole. 2018. Simple embedding for link prediction in knowledge graphs. In *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information* Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 4289–4300. Timothée Lacroix, Guillaume Obozinski, and Nicolas Usunier. 2020. Tensor decompositions for temporal knowledge base completion. In *8th International* Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2869–2878. PMLR. Jennifer Lautenschlager, Steve Shellman, and Michael Ward. 2015. Icews event aggregations. Julien Leblay and Melisachew Wudage Chekol. 2018. Deriving validity time in knowledge graph. In Companion of the The Web Conference 2018 on The Web Conference 2018, WWW 2018, Lyon , France, April 23-27, 2018, pages 1771–1776. ACM. Kalev Leetaru and Philip A Schrodt. 2013. Gdelt: Global data on events, location, and tone, 1979–2012. In *ISA annual convention*, volume 2, pages 1–49. Citeseer. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pages 2181–2187. AAAI Press. Johannes Messner, Ralph Abboud, and ˙Ismail ˙Ilkan Ceylan. 2022. Temporal knowledge graph completion using box embeddings. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 7779–7787. AAAI Press. Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In *Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural* Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 3111–3119. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 809–816. Omnipress. Ali Sadeghian, Mohammadreza Armandpour, Anthony Colas, and Daisy Zhe Wang. 2021. Chronor: Rotation based temporal knowledge graph embedding. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 6471–6479. AAAI Press. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In *7th International Conference on Learning Representations,* ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In *Proceedings of the 33nd International Conference on* Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of *JMLR Workshop and Conference Proceedings*, pages 2071–2080. JMLR.org. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(11). Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In *Proceedings of the TwentyEighth AAAI Conference on Artificial Intelligence,* July 27 -31, 2014, Québec City, Québec, Canada, pages 1112–1119. AAAI Press. Chengjin Xu, Yung-Yu Chen, Mojtaba Nayyeri, and Jens Lehmann. 2021. Temporal knowledge graph completion using a linear temporal regularizer and multivector embeddings. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2569–2578. Association for Computational Linguistics. Chengjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi, and Jens Lehmann. 2020a. Tero: A time-aware knowledge graph embedding via temporal rotation. In *Proceedings of the 28th* International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 1583–1593. International Committee on Computational Linguistics. Chengjin Xu, Mojtaba Nayyeri, Yung-Yu Chen, and Jens Lehmann. 2020b. Knowledge graph embeddings in geometric algebras. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 530–544. International Committee on Computational Linguistics. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019. Quaternion knowledge graph embeddings. In *Advances in Neural Information Processing Systems 32:* Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 2731–2741. Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and Jie Wang. 2020. Learning hierarchy-aware knowledge graph embeddings for link prediction. In The ThirtyFourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 3065–3072. AAAI Press. The score function of TeAST is defined as: $\phi(s,r,o,\tau)=Re(<\mathbf{e}_{s},\mathbf{\xi}^{\prime}_{(\tau,r)},\bar{\mathbf{e}}_{o}>)$ $$=Re(\sum_{k=1}^{K}\mathbf{e}_{sk}\mathbf{\xi}^{\prime}_{(\tau,r)k}\bar{\mathbf{e}}_{ok})$$ $$=<Re(\mathbf{e}_{s}),Re(\mathbf{\xi}^{\prime}_{(\tau,r)}),Re(\mathbf{e}_{o})>$$ $$+<Im(\mathbf{e}_{s}),Re(\mathbf{\xi}^{\prime}_{(\tau,r)}),Im(\mathbf{e}_{o})>$$ $$+<Re(\mathbf{e}_{s}),Im(\mathbf{\xi}^{\prime}_{(\tau,r)}),Im(\mathbf{e}_{o})>$$ $$-<Im(\mathbf{e}_{s}),Im(\mathbf{\xi}^{\prime}_{(\tau,r)}),Re(\mathbf{e}_{o})>.$$ Following ComplEx (Trouillon et al., 2016), we employ the standard componentwise multilinear dot product *< a, b, c >*:= Pk akbkck in Eq. 13. For symmetric pattern, we have r(*s, o, τ* )∧ r(*o, s, τ* ) according to Definition 1. Hence, we get $$\phi(s,r,o,\tau)=\phi(o,r,s,\tau).\qquad(14)$$ One can easily check that Eq. 14 meet the symmetric pattern conditions when ξ0(τ,r) is real (i.e. its imaginary part is zero). We have $$\phi(s,r,o,\tau)=<Re(\mathbf{e}_{s}),Re(\mathbf{\xi}^{\prime}_{(\tau,r)}),Re(\mathbf{e}_{o})>$$ $$+<Im(\mathbf{e}_{s}),Re(\mathbf{\xi}^{\prime}_{(\tau,r)}),Im(\mathbf{e}_{o})>$$ $$=<Re(\mathbf{e}_{o}),Re(\mathbf{\xi}^{\prime}_{(\tau,r)}),Re(\mathbf{e}_{s})>$$ $$+<Im(\mathbf{e}_{o}),Re(\mathbf{\xi}^{\prime}_{(\tau,r)}),Im(\mathbf{e}_{s})>$$ $$=\phi(o,r,s,\tau).\tag{15}$$ Therefore, a sufficient necessary condition for TeAST to be able to model symmetric pattern is Im(ξ0(τ,r) ) = 0. For asymmetric pattern, we have r(s, o, τ ) ∧ ¬r(*o, s, τ* ) according to Definition 2. Hence, we get $$\phi(s,r,o,\tau)\neq\phi(o,r,s,\tau).$$ $$(16)$$ One can easily check that Eq. 16 meet the asymmetric pattern conditions when ξ0(τ,r) is purely imaginary (i.e. its real part is zero). We have $$\phi(s,r,o,\tau)=<Re(\mathbf{e}_{s}),Im(\mathbf{\xi}^{\prime}_{(\tau,r)}),Im(\mathbf{e}_{o})>$$ $$-<Im(\mathbf{e}_{s}),Im(\mathbf{\xi}^{\prime}_{(\tau,r)}),Re(\mathbf{e}_{o})>,$$ $$\phi(o,r,s,\tau)=<Re(\mathbf{e}_{o}),Im(\mathbf{\xi}^{\prime}_{(\tau,r)}),Im(\mathbf{e}_{s})>$$ $$-<Im(\mathbf{e}_{o}),Im(\mathbf{\xi}^{\prime}_{(\tau,r)}),Re(\mathbf{e}_{s})>.$$ We can get φ(s, r, o, τ ) 6= φ(*o, r, s, τ* ). Therefore, a sufficient necessary condition for TeAST to be able to model asymmetric pattern is Re(ξ0(τ,r) ) = 0. For inverse pattern, we have r1(s, o, τ )∧r2(*o, s, τ* ) according to Definition 3. Hence, we get $$\phi(s,r_{1},o,\tau)=\phi(o,r_{2},s,\tau)\Leftrightarrow$$ $$\mathbf{e}_{r1}=\bar{\mathbf{e}}_{r2}\Leftrightarrow$$ $$Re(r_{1})+Re(r_{2})=0\wedge Im(r_{1})-Im(r_{2})=0,\tag{18}$$ where e¯r2 is the conjugate of er1. For temporal evolution pattern, we have r1(s, o, τ1) ∧ r2(*s, o, τ*2) according to Definition 4. Hence, we have $$\begin{array}{c}\phi(s,r_{1},o,\tau_{1})=\phi(s,r_{2},o,\tau_{2})\Leftrightarrow\\ \mathbf{\xi}^{\prime}_{(\tau_{1},r_{1})}=\mathbf{\xi}^{\prime}_{(\tau_{2},r_{2})}.\end{array}\tag{19}$$ It is worth noting that $\boldsymbol{\xi}^{\prime}_{(\tau_{1},r_{1})}=\boldsymbol{\xi}^{\prime}_{(\tau_{2},r_{2})}$ just means the values of their modulus part add phase part are equal. The relations at the same time are mapped on the corresponding Archimedean spiral timeline in the polar spatial representation. ## E Analysis And Case Study For Several Key Relation Patterns To illustrate the learned relation patterns that contain symmetric, asymmetric, inverse and temporal evolution patterns, we visualize some examples by visualizing the histograms of the learned embeddings. All cases are from ICEWS14 dataset (García-Durán et al., 2018). ![11_image_0.png](11_image_0.png) ## E.1 Symmetric Pattern As shown the proof of Propositions 1 (see Appendix A), TeAST can encode symmetric pattern when Im(ξ0(τ,r) ) = 0 is satisfied. As shown in Figure 6, tow facts (Kazakhstan, Consult, Afghanistan, 2014-04-11) and *(Afghanistan, Consult, Kazakhstan, 2014-04-11)* from ICEWS14, and *Consult* is a symmetric relation. We observe that the learned Im(ξ0(τ1,r1) ) in Figure 6(c) is close to 0. The result demonstrates that TeAST can model the symmetric pattern. ## E.2 Asymmetric Pattern Opposite to symmetric pattern, TeAST can encode asymmetric pattern when Re(ξ0(τ,r) ) = 0 is satisfied. Figure 7 shows an example of asymmetric pattern and *Make statement* is taken an asymmetric relation. Figure 7(c) shows that our TeAST can model the asymmetric pattern. ## E.3 Inverse Pattern As shown the proof of Propositions 3 (see Appendix C), if r4 is the inverse of the r3, and we have Re(r3) + Re(r4) = 0 ∧ Im(r3) − Im(r4) = 0. Two existing facts *(Iraq, Host a visit, Nuri alMaliki, 2014-06-13)* and *(Nuri al-Maliki, Make* ![12_image_0.png](12_image_0.png) (a) r2: *Make statement* (b) τ2: *2014-03-06* ![12_image_1.png](12_image_1.png) a visit, Iraq, 2014-06-13) from ICEWS14, which the relation *Host a visit* is the inverse of the relation Make a visit. Figure 8 shows that TeAST satisfies the above conditions. ![12_image_3.png](12_image_3.png) ## E.4 Temporal Evolution Pattern As shown in Proof of Propositions 4 (see Appendix D), if a relation r5 and a relation r6 are evolving over time from τ5 from τ6, we have ξ0(τ5,r5) = ξ0(τ6,r6) . To verify that TeAST can model the temporal evolution pattern, we randomly select five facts, including *(Nuri al-Maliki, Make a* visit, Iraq, 2014-06-13), (Nuri al-Maliki, Consult, Iraq, 2014-06-23), (Nuri al-Maliki, Make statement, Iraq, 2014-06-29), *(Nuri al-Maliki, Mobilize* or increase police power, Iraq, 2014-08-11) and (Nuri al-Maliki, Praise or endorse, Iraq, 2014-1110). The five quadruples above belong to the temporal evaluation pattern. As shown in Figure 9, we mutually calculate the cosine similarity between ξ0(τi,ri) of the five quadruples. We can observe that the ξ0(τi,ri) of the corresponding quadruples are all close. Results further demonstrate that TeAST can effectively model the temporal evolution pattern. ![12_image_2.png](12_image_2.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 1 ✓ B1. Did you cite the creators of artifacts you used? 5 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The code repository is governed by Apache-2.0 license. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
bao-etal-2023-human
Human Inspired Progressive Alignment and Comparative Learning for Grounded Word Acquisition
https://aclanthology.org/2023.acl-long.863
Human language acquisition is an efficient, supervised, and continual process. In this work, we took inspiration from how human babies acquire their first language, and developed a computational process for word acquisition through comparative learning. Motivated by cognitive findings, we generated a small dataset that enables the computation models to compare the similarities and differences of various attributes, learn to filter out and extract the common information for each shared linguistic label. We frame the acquisition of words as not only the information filtration process, but also as representation-symbol mapping. This procedure does not involve a fixed vocabulary size, nor a discriminative objective, and allows the models to continually learn more concepts efficiently. Our results in controlled experiments have shown the potential of this approach for efficient continual learning of grounded words.
# Human Inspired Progressive Alignment And Comparative Learning For Grounded Word Acquisition Yuwei Bao† Barrett Martin Lattimer§∗ **Joyce Chai**† †Computer Science and Engineering, University of Michigan §ASAPP {yuweibao, lattimer, chaijy}@umich.edu ## Abstract Human language acquisition is an efficient, supervised, and continual process. In this work, we took inspiration from how human babies acquire their first language, and developed a computational process for word acquisition through comparative learning. Motivated by cognitive findings, we generated a small dataset that enables the computation models to compare the similarities and differences of various attributes, learn to filter out and extract the common information for each shared linguistic label. We frame the acquisition of words as not only the information filtration process, but also as representation-symbol mapping. This procedure does not involve a fixed vocabulary size, nor a discriminative objective, and allows the models to continually learn more concepts efficiently. Our results in controlled experiments have shown the potential of this approach for efficient continual learning of grounded words. ## 1 Introduction Two of the important word acquisition problems are: 1) what must be learned to acquire a word, and 2) how to learn the word? To the first question, cognitive studies have shown that several critical steps in learning language naturally comes from joint attention establishment (Tomasello and Farrar, 1986), and symbol grounding (Harnad, 1990). Children's attention are usually redirected through a mother or teacher's guidance, and they learn to map these attended sensory inputs (e.g. color, sound, heat) with their corresponding words or sentences. Living in a rich and diverse world enabled by our multiple body sensors, we learned to filter out the noise and pay attention to specific aspects of an input which we assign linguistic labels to. This attention establishment and information filtration process is the first step of word acquisition. After filtering out the noise, we are left with a mental representation of ∗ Work done during master study at the University of Michigan. 15475 what a word entails. Just as the word "car" triggers certain impressions of a common transportation in head, we store these representations as they could come in handy later when we use them to reason, imagine, and express ourselves. To acquire a word, humans learn to filter out noise to focus on key information from sensory inputs that contributes to its meaning (Gentner, 1983; Tomasello and Farrar, 1986), and store that meaning representation for future use (Harnad, 1990; Kuehne et al., 2000). As for the second question, one of the common but under-explored methods is implicit or explicit comparisons. Caretakers may lay out stuffed animals around a baby and name them one by one to differentiate them. In school, teachers may compare different components of the learning material, e.g. "Today we learn 'colors'. This is red/blue/yellow...". Comparison is the process of finding commonalities and highlighting differences (Gentner and Markman, 1994). It allows children to attend to matching relational structures of inputs (Gentner, 1983), filter out background noise, and learn to generalize as well as abstract. With comparisons, especially clean well-structured comparisons, children can learn a lot of condensed knowledge efficiently and cultivate their capabilities to tackle noisier challenges outside of the classroom (Ball and Thompson, 2018; Anderson et al., 2018; Shao and Gentner, 2019). From these findings, we propose a new method of word acquisition for artificial intelligent (AI) agents. We mimic the classroom learning setting and constructed a small clean dataset named **SOLA** - Simulated Objects for Language Acquisition. This dataset allows the model to draw efficient similarity and difference comparisons, learn to filter out noise, pay attention only to key information that contributes to a word meaning, and store these word-representation mappings continually as new words are introduced. While a larger scale evaluation is needed in the future, through controlled ![1_image_0.png](1_image_0.png) experiments, our preliminary results have demonstrated the potential of this model in efficient continual learning of grounded words. The dataset and code are available at https://github.com/ sled-group/Comparative-Learning. The contributions of this work include: 1. Constructed a small, clean dataset SOLA for studying efficient comparisons. 2. Framed the acquisition of words as both an information filtration process, and as a representation-symbol mapping. 3. Proposed a new method of grounded word acquisition through comparative learning 4. Demonstrated the performance, usability, and generalizability of the acquired representations through multiple tasks. ## 2 Related Work 2.1 Human Language Acquisition Language acquisition is the process of putting linguistic labels onto abstracted features, and structuring them into sentences following publicly recognized grammatical rules to express intention. The simple process of abstracting features, takes input attention filtering and relational generalization to pinpoint the learning concept, and associate them with linguistic labels (Harnad, 1990; Kuehne et al., 2000). Studies show that the amount of mother-child joint attention facilitation time is positively correlated to a child's early vocabulary size growth (Tomasello and Farrar, 1986), and that human infants are capable of comparison and abstraction through same/different relation comprehension (Anderson et al., 2018). Comparison is a central component of human cognition which results in our own uniquely structured knowledge representations (Markman and Gentner, 1993; Gentner and Maravilla, 2017). The theory of Structure-Mapping predicts that similarity comparison allows subjects to attend to matching relational structures of inputs (Gentner, 1983), highlight the differences, and that human infants are able to learn such relations in very few examples (Hespos et al., 2020). The difficulty of establishing a structural mapping, however, is influenced by the ease of the alignment process (Gentner, 1983). Progressive alignment (Hespos et al., 2020; Kotovsky and Gentner, 1996) suggests that constructing an alignment among highly similar comparisons can invite young children to reason about relational structures and serve as base knowledge for future hierarchical abstractions and complex characteristic learning. Our work took inspiration from the above two theories by constructing grouped multimodal samples for similarity and difference comparisons. We also start progressive alignment with highly aligned pairings during early word acquisition. ## 2.2 Continual Learning There are two major limitations that current neural network models face. Models either take the largepretrained approach, throwing as much data as possible during training, and hope to learn everything and achieve AGI (Reed et al., 2022) all at once without the need for continual learning. Or models take the architectural (Lomonaco and Maltoni, 2017)/ rehearsal (Lopez-Paz and Ranzato, 2017)/ replay (Parisi et al., 2018)/ regularization (Kirkpatrick et al., 2017) approaches hoping to retain previously learned knowledge amid newly introduced data distribution shift (Ring, 1998; Nguyen et al., 2017; Schlimmer and Fisher, 1986). Humans, however, are lifelong learners. We are constantly adapting to new environments, learning new concepts & tasks, and evolving together as a society. Human execution of this process is simple, natural, and cost effective, without catastrophically forgetting previously learned knowledge (Van de Ven and Tolias, 2019), nor having to retrain from scratch every time new knowledge is introduced (Schlimmer and Fisher, 1986). Our method follows the human learning approach and gradually learns more concepts as they are introduced. We demonstrate the model's resistance against catastrophic forgetting and the data learning efficiency in our experiments. ## 2.3 Contrastive Learning Contrastive learning is a paradigm that enables models to learn feature representations through contrasting examples without explicit labeling (Chen et al., 2020a; Wu et al., 2018; Khosla et al., 2020). Contrastive learning uses a single contrastive loss function that pushes similar classes together and dissimilar classes apart (Dosovitskiy et al., 2014). In this paper, we introduce **Comparative Learning** which adapts the general definition of contrastive learning by explicitly separating the similarity training from the difference training. On top of encoding each input as in contrastive learning, we took additional steps to further extract information about similarities and differences separately given the same amount of inputs. Supervised by associated words, we use the similarity batches to learn the process of noise filtration and a shared feature representation. We use the difference batches to refine and differentiate these feature representations. ## 2.4 Multimodal Grounding A large number of previous works try to draw connections between language and different modalities, such as VILBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019), UNITER (Chen et al., 2020b), OSCAR (Li et al., 2020a), Vokenization (Tan and Bansal, 2020), and more (Radford et al., 2021; Wang et al., 2021; Zhang et al., 2021; Cho et al., 2021; Bao et al., 2022; Hill et al., 2020; Tsimpoukelli et al., 2021). These models demonstrated their state of the art multimodal representations on a range of downstream tasks, including image/video captioning, image/text retrieval, visual question answering, and text-to-image generation (Kamath et al., 2021; Wu et al., 2022; Mao et al., 2019; Zheng et al., 2022; Ahmetoglu et al., 2022). A large portion of these works focusd on visual recognition and language production tasks such as captioning, retrieval, and some visual question answering. These works embed visual and textual inputs into the same latent space for similarity comparison and retrieval. These models can learn a great language-vision matching filter, but often do not preserve a grounded concept representation given the linguistic labels. Another line of works focus on language comprehension and image/video generation. They take a pre-trained language embedding and use it to generate high resolution images, and have achieved extraordinary performance. Notably, (Liu et al., 2022; Du et al., 2021, 2020) achieved compositional visual generation with energy based models. (Brooks et al., 2022) worked on image editing given instructions with paired training images. Also others demonstrated language grounding through compositional text to image generations (Feng et al., 2022; Pearl et al., 2022). These models rely on great grounded language representations to generate meaningful images. Our work frames the language acquisition process as both input information filtration and representation learning. A few methods include both parts of this definition. CLIP (Radford et al., 2021) used contrastive learning on massive number of weakly linked image-text pairs to project each modality into the same embedding space, which allows the encoders to filter inputs, and store the representations through text embeddings. Several works including (Liu et al., 2022; Du et al., 2021, 2020) used a set of energy based models on recognition tasks for input filtration, and iteratively refined the representations through the Langevin dynamics procedure (Xie et al., 2016). Our work proposes a human inspired approach for word acquisition. We jointly train both the input filtration process and the representations, and map them to their corresponding words through comparative learning. 3 Dataset Inspired by the classroom teaching setting and the Progressive Alignment theory (Kotovsky and Gentner, 1996), we created a new dataset **SOLA** (Simulated Objects for Language Acquisition). SOLA has little noise and clearly defined attributes to isolate different concepts for efficient sample comparisons and grounded language-feature mapping. We generated SOLA using the open-source simulation software Kubric (Greff et al., 2022) designed for semi-realistic image/video synthesis. SOLA (Figure 1) contains images of individual simulated objects with three associated learning attributes: color, material, and shape. Each object is a composition of one of 8 colors, 11 shapes, and 4 materials. We also diversify the images by capturing each object at 3 different light settings and 6 different camera angles. A total of 6336 Red Green Blue Alpha (RGBA) images were generated. To evaluate the generalizability and robustness of the models on nosier inputs, we also composed a Variation Test set (D*test*_v) of 989 RGBA images by applying a stretch, shade change, or size transformation. An object in this test set is either stretched along one of the x, y, and z axis, colored with a darker or lighter shade, or shrunk to a medium or small size. Although not used in this work, we rendered the Depth, Surface Normal, Segmentation Map, and Object Coordinates images for each corresponding RGBA image for future research. | Split | Total | Dknown | Dunknown | |------------|---------|----------|------------| | Vocab Size | 23 | 20 | 3 | | Dtrain | 5094 | 3006 | 2088 | | Dtest_nc | 1242 | 744 | 468 | | Dtest_v | 989 | 580 | 409 | Table 1: Splits on RGBA Images To evaluate the novel composition capability of the methods, we reserved 9 learning attribute pairs exclusively in the Novel Composition Test set (D*test*_nc). The rest were assembled into the Train set (D*train*) for word acquisition training. To evaluate models' abilities to continual learning, we split the vocabulary into two sets: a Known vocabulary and an Unknown vocabulary set, which leads to two datasets D*known* and D*unknown*. The D*unknown* dataset includes images describable by at least one of the three attributes: [yellow, glass, torus_knot], and the rest of the images are in D*known*. Each training and testing dataset is broken down into Known and Unknown versions accordingly. More details about SOLA can be found in the Appendix. Several existing datasets offer dense compositional attribute annotations that can be helpful for language grounding, such as MIT-States (Isola et al., 2015), UT-Zappos (Yu and Grauman, 2014), CUB (Wah et al., 2011), ShapeNet (Chang et al., 2015), Visual Genome (Krishna et al., 2017), and PACO (Ramanathan et al., 2023). These datasets are great resources for scaling attribute concept learning, especially from noisy real world images, but are not designed to form clean structural alignment for comparative language acquisition. Our work took the baby step of progressive alignment (Hespos et al., 2020; Kotovsky and Gentner, 1996; Childers) by offering the model structured and denoised sets of inputs for easier structural comparison and efficient feature extraction. Following these works, we believe that equipping the model with a set of clean base knowledge can help it extend to messier inputs in the future. Other abstract datasets such as CLEVR (Johnson et al., 2017) focus on diagnosing and probing the reasoning or interpretability of models through visual question and answering, and are not designed for language acquisition. Additionally, our dataset includes 2 more materials and 8 more shapes than CLEVR, providing a lot more variance and opportunities for vocabulary learning. We also diversify lighting, camera angles, and further object transformations in the variation test set for generalization and composition analysis. We introduce SOLA as it offers clean, grouped images for structured comparative learning. More detailed dataset comparisons can be found in Table 3. 4 Method ## 4.1 Comparative Learning Comparative Learning is the process of finding the similarities and differences from a set of inputs. It is a general learning strategy that can be applied to different input modalities, sizes, and duration. The general formulation can be found below. For each label/word/symbol liin an unconstrained set L = {l1, l2, *· · · }*, we first assemble a batch of samples Bs = {a l1 1 , a l1 2 , · · · , al1 n }, that share the label li for similarity learning, and a batch of samples Bd = {b l1, · · · , blj, *· · · }*j̸=ithat cannot be described by li for difference learning. The process of SIMli (Eq.1) finds similarities across examples in Bs, and extracts out its representation Repli . The process of DIFFli (Eq.2) highlights the differences between li and other non-compatible labels, and refines the representation Repli . Noncompatible labels are the ones that cannot be assigned to the same entity at the same time, e.g.(up, down). Comparable to the positive and negative batches in contrastive learning, these labels naturally occur through difference comparisons, and are organized by the supervisor. Both the computations and the representation are stored to map the label: {li: [SIMli , DIFFli , Repli ]}. $$\begin{array}{l}{{\mathrm{Rep}_{l_{i}}=\mathrm{SIM}_{l_{i}}(\{a^{l_{i}}\in{\mathcal{B}}_{s}\})}}\\ {{\mathrm{Rep}_{l_{i}}=\mathrm{DIFF}_{l_{i}}(a^{l_{i}},\{b^{l}\in{\mathcal{B}}_{d}\})}}\end{array}$$ (1) (2) $\frac{1}{2}$ In this work, we contextualize the method of comparative learning in word acquisition through a set of still visual inputs (Figure 2). For each concept, e.g. "red", we assemble a batch of images that share the word "red" for similarity training. We also assemble a batch of images that are of any other color (non-compatible) but "red" for difference refinement. We keep the rest of the attributes the same for better structural alignment. As illustrated in Algorithm 1 and Figure 2, given a batch of training samples (sim and diff) for word li: B = {Bs, Bd}, we first take a shortcut by having each image au go through a frozen pre-trained ![4_image_0.png](4_image_0.png) CLIP (Radford et al., 2021) image embedding as the starting point. This shortcut bypasses a few structural alignment steps, and encodes the raw images into the same 512 dimensions eu available for direct comparisons. The information denoising and attention establishment process is composed of two parts for each word li: the filter Fli and the encoder Encli . The filter maintains a vector the same size as the embedding eu, and computes the elementwise product of the two. It is a learning vector that masks the input embedding by selecting only the relevant dimensions that contributes to the word li. This masked embedding goes through two fully connected layers of Encli to output a condensed representation ru. On top of learning the attention filtration process (Fli , Encli ), we then calculate the centroid of all the sample representations ru from the similarity batch Bs as the condensed representation Repli for li. For difference learning, we have all the Bd samples to go through the same filtration and encoding process for word li. Since none of them can be described by the word li, the output should be nothing like Repli . Therefore, the loss function pushes the distance between each sim batch sample and the centroid close, and pushes the diff batch sample representations apart from the centroid. This process filters out and abstracts the shared representations of li, and differentiates it from other non-compatible words. It jointly trains the filter Fli , the encoder Encli , and the representation Repli . We store the mapping {li: [Fli , Encli , Repli ]} for each word in memory for later use. 4.2 Generative Decoder Learning Due to input filtration, the dimensions of the condensed word representations come from selective, word-specific subsets of the original 512 dimensions of e. They are, therefore, not aligned in the same space across different words and cannot be Algorithm 1: Comparative Learning-Word li 1 for Sim and Diff data batches: {Bs, Bd} do 2 // Similarity Learning 3 for au ∈ Bs do 4 eu = CLIP_emb[au] 5 ru = Encli [Fli (eu)] 6 Repli = Centroid[{ru}u∈{1,··· ,n}] 7 // Difference Learning 8 for bv ∈ Bd do 9 ev = CLIP_emb[bv] 10 rv = Encli [Fli (ev)] 11 // Loss 12 losss =Pu Dist[ru, Repli ] 13 lossd =Pv Dist[rv, Repli ] 14 loss = (losss) 2 + (1 − lossd) 2 15 Backpropagate and Optimize Output: {li: [Fli , Encli , Repli ]} used for direct interactions. To allow compositional reasoning with all the words and their grounded representations, we trained a decoder (Figure 2) to revert the condensed representations back to the same space as the CLIP embedding e. To train the decoder Declpfor word p, we adopted two strategies in parallel: Editing and Reconstruction (Figure 2). About editing, given an image of a (blue, flower), for example, if we filter out blue add red, we should get an image of a (red, flower). Following this logic as in Eq. 3, we mask out feature q from input embedding eq by multiplying the opposite of filter q: (1 − Flq ). We then add back the decoded (Declp ) representation of Replp for word p. Both the filter Flq and the representation Replp were trained in the previous step and frozen. The output (out_q2p) aims to resemble the embedding of ep. Similarly, for reconstruction as in Eq. 4, if we filter out feature p from input embedding ep and add back the decoded representation of Replp , we should get the original embedding of ep. Both passes are trained jointly to learn the decoder of p (Eq. 5). Each decoder is stored together in the mapping {l: [Fl, Encl, Decl, Repl ]}. The decoded representations open the door for zero-shot compositional comprehension, generation, and reasoning. For illustration purpose, we also trained a small image generator that upsamples the CLIP embedding back to an RGB image. Details about the models and training can be found in the Appendix. $\texttt{out\_}q2p=e_q(1-\texttt{F}_{l_q})+\texttt{Dec}_{l_p}[\texttt{Rep}_{l_p}]$ (3) $\texttt{out\_}p2p=e_p(1-\texttt{F}_{l_p})+\texttt{Dec}_{l_p}[\texttt{Rep}_{l_p}]$ (4) $\texttt{loss}=\texttt{Dist}[e_p,\texttt{out\_}q2p]+\texttt{Dist}[e_p,\texttt{out\_}p2p]$ (5) ## 5 Experiments 1. ## With The Training Described Above, Each Word Will Have A Mapping {L: [Fl, Encl, Decl, Repl ]} Stored In The Memory. These Acquired Word Representations Can Be Used During Inference Time For Downstream Tasks. In This Section, We Evaluate These Representations On Several Tasks That Test Models' Robustness, Generalizability, Flexibility, And Ability To Continual Learning. 5.1 Multi-Attribute Recognition In this task, the models are challenged with zeroshot recognition of all the attributes (color, shape, material) of a given test image a under two evaluation settings: (1) *Novel composition setting* where the image with a combination of attributes which is not seen during training (i.e., a ∈ D*test*_nc); and (2) Noisy setting where the images were injected with noise in the variation test set (a ∈ D*test*_v). The models were trained on the training data (D*train*). For each test image (Figure 3), we go through the memory, apply the corresponding filter and encoder of each word to the input embedding, and picked the top 3 words with the shortest mean squared error (MSE) between the learned word representation and image embedding. We took the essence of several zero-shot compositional learning methods such as (Li et al., 2020b; Anwaar et al., 2021; Mancini et al., 2021), and implemented them as variations of the CLIP model for a better experiment control and fairer comparison. More specifically, we compare our method with the following baselines: CLIP Zero Shot computes the highest matching words for each test image. We experimented with ![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png) Figure 4: **Multi-Attribute Recognition Performance** Comparison: The percentage of each ground truth attribute (color, material, shape, or all 3) being among the top 3 model predictions. different prompts, and reported the highest performances using the prompt "a photo of a x". CLIP Contrastive adds two fully connected layers to the image encoder, and fine tune the model on the same training dataset with a contrastive loss. CLIP Linear also adds two fully connected layers to the image encoder, but with an output dimension of the vocabulary size. It predicts 1s for all corresponding word dimensions, and 0s otherwise. This method is subject to a fixed vocabulary size, and can be hard to expand to new concepts. CLIP Multi-Attr finetunes two fully connected layers out of the image encoder for each word, and predicts 1s and 0s based on its confidence measured by the word-image matchability (i.e., similarity). The performance of all the methods over two test datasets can be found in Figure 4. For each image, we evaluate whether its corresponding color, ![6_image_0.png](6_image_0.png) material, shape, or all three of them are among the top 3 selected words. It is observed that our method consistantly outperforms all baselines across two test datasets and four categories. CLIP Zero Shot showed decent baseline performance on the multi-attribute recognition task, as this model was pre-trained on massive weakly linked language-image pairs. However, our model and the finetuned models are able to surpass this baseline with a significant margin. CLIP Contrastive overfits to the color features, mainly guessing colors in its top three resulting in high color performance but lagging behind in all other attributes. CLIP Linear and CLIP Multi-Attr showed an improved performance compared to CLIP Zero Shot, but couldn't catch up with our method. Among the 3 attributes, the material attribute was the hardest to learn for all the methods. Humans generally learn textures through touching, tapping an object for sound, and other sensors so a visual input alone may not be sufficient to grasp the meaning of materials, especially under a dim light. However, our method was still able to lead in performance on materials, which consequently also increased the accuracy for all top 3. This is likely because our model is able to pay attention to specific aspects (e.g. light reflection, transparency) of the images better through explicit comparisons. ## 5.2 Continual Word Acquisition We investigated models' capability to continually acquire new words on the same multi-attribute recognition task in comparison with the models mentioned in Section 5.1. As mentioned in Section 3, we split all the training and testing datasets into two parts based on the vocabulary (Dknown, D*unknown*). The D*known* datasets include 20 words, and the D*unknown* datasets include an additional 3 new words for continual word ac- ![6_image_1.png](6_image_1.png) ![6_image_2.png](6_image_2.png) (b) Data Efficiency in Round 2: Trained on D*unknown* versus Dknown+D*unknown* Figure 6: **Continual Word Acquisition Evaluation**: Accuracy of model's top 3 predictions being all ground truth attributes. quisition and evaluation. Any image that shares at least one of the 3 new words is part of D*unknown*. Our model conducts continual learning in two ways (Figure 5) it can learn new concepts using the exact same way as described in Figure 2, and add the word-representation mapping to the memory; 2) It can also update and refine the learned concepts, whenever new samples are available. More specifically, we extract the relevant {l: [Fl, Encl, Decl, Repl ]} from the memory for word l. The new samples go through similarity and difference learning with the old Fl and Enclto get a batch of condensed representation {r}'s. Together with the old Repl , we can calculate a new centroid with these embeddings, and a loss. Through backpropogation and training, the new centroid will be the refined Repl , and both the encoder and filter are updated in the memory for word l. We first evaluate the severity of catastrophic forgetting of the methods (Figure 6a). In Round 1, the models were trained on D*known* datasets, and the D*unknown* sets in Round 2. We evaluate the accuracy of the models on two D*known* test sets by computing the percentage of models' top 3 predictions all being the ground truth attributes. For CLIP Contrastive, we increased the vocab size for Round 2 training. For CLIP Multi-Attr and our method, we introduced additional models for each new concept. The CLIP Linear model was the hardest to grow as the output dimension was fixed to the previous vocab size. We initialized the first linear weights with the learned weights in Round 1, and had to retrain the model in Round 2. In Figure 6a, except for the CLIP Contrastive model, most models suffered from catastrophic forgetting between Round 1 and Round 2. Our method had a mild performance decrease as more vocab was introduced. This is likely due to the top 3 label selection competitions among increasing vocab size. CLIP Linear and CLIP Multi-Attr suffered severe catastrophic forgetting on the Variation Test D*known* set, likely due to lack of generalizability. We also evaluated the continual training data efficiency for different models. During Round 2, we compare how much replay data would the models need in order to achieve a decent performance by training them on either the D*unknown* datasets only (new lessons) or both the Dknown+D*unknown* datasets (new and old lessons). Round 2 trained on only D*unknown* receives significantly less data, and does not require reviewing old lessons. In Figure 6b, when trained only with the D*unknown* set, our method had already outperformed all other methods even compared to their performances when trained with both Dknown+D*unknown* datasets. When more data was available, our method was able to improve performance even further on identifying all attributes. These results showed early signs of efficient continual learning capabilities and resistance against catastrophic forgetting. Unlike discriminative methods such as CLIP Linear, which has a fixed output dimension based on the vocab size, our method is a lot more flexible to grow for new concepts, and achieved higher performance without the need to review old concepts. Further investigations are needed for larger scale evaluation. ## 5.3 Compositional Imagination And Reasoning Another way of evaluating acquired words is through compositional imagination and reasoning given words. With the stored meaning representations, we will be able to flexibly compose different meanings together for reasoning, imagination, simulation, and language understanding. We evaluate this capability in two use cases: composition reasoning and generation. Most traditional multimodal methods, such as the ones in Section 5.1 only focus on learning a feature extractor given an image. They do not store a grounded representation for each word for reasoning or generation. We therefore, have to turn to the text embedding part of CLIP for comparison as they were trained to be in the same space as the image ![7_image_0.png](7_image_0.png) embeddings. Text embeddings have been shown to carry grounded semantic meanings through high resolution image generations, but also have been found to struggle at grounding certain attributes (Saharia et al., 2022; Ramesh et al., 2022). In this section, we compare our method to **CLIP** Zero Shot and **CLIP Finetune** on the following tasks. We use the text embedding of both methods to do image editing and compositional generation. For CLIP Finetune, we added two fully connected layers on the text embedding and fintuned with our D_*train* dataset. ## Composition Generation Without any given images, humans are able to build mental representations of objects given linguistic descriptions. These representations are built upon abstract word associated features, and can be flexibly composed and manipulated as more features are added. Unlike previous works that emphasize on high resolution image generation, we focus on building compositional mental representations that can be used for downstream reasoning. Quantitatively, we evaluate the fidelity of a pair of concept composition through a multiple choice task. Given any two random compatible words, e.g. (red, cone), and the CLIP embedding of two hard distractors (each sharing at least one attribute as the original pair, e.g. A. (Red, Cone), B. (Red, Cylinder), C. (Blue, Cone)), we challenge the models to generate a mental representation such that it is closest to the correct image embedding. Each choice is a randomly selected image with the two specified features. As shown in Figure 7, for example, we decode representations of both "red" and "cone" and then add the two resulting vectors to create our "mental image" embedding of a red cone. The multiple choice embedding with the smallest MSE is chosen to be the desired attribute composition imagination. ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) Table 2: Composition Generation Multiple Choice Task ![8_image_2.png](8_image_2.png) Figure 8: **Novel Composition Generation**: Our method is able to accurately reflect the word, and zero-shot compose mental images with novel feature combinations. The performance can be found in Table 2. Over 15 runs of 100 randomly selected questions each, our model is able to outperform the text embedding composition of both CLIP Zero Shot and CLIP Text Finetune. Among those, the Color+Shape combo is the easiest one to assemble, likely due to the challenges of learning the material features for the other two combinations. Our method is better at extracting different aspects of the inputs, and learn to pin down the exact word meanings through efficient similarity and difference comparisons. Qualitatively, we also generated the representations of the novel combinations in our testing data (Figure 8), and see how visually close they are to the GT pictures. The visualization shows that CLIP Zero Shot and CLIP Finetune both struggle at representing the precise definition of some shape concepts, such as "Cone", "Sphere", and "Teapot", but are good at embedding color concepts. The last row serves as an inspiration of possible ground truth images given a pair of concepts. ## Composition Reasoning Another simple compositional reasoning task on different object features is to do 'arithmetic' with them. For example, a (red, cone) - red + blue= (blue, cone). With the Decoder training in Figure 2, we can flexibly edit a given image to a desired feature. In this section, given an input image, and a random pair of attribute switching, we qualitatively evaluate if the edited image resembles the desired attribute while keeping all other features the same. Figure 9 shows three qualitative examples on feature switching over color, material, and shape, compared to CLIP Zero Shot and CLIP Finetune. It is observed that CLIP trained text embeddings excel at extracting color related concepts, but struggle at material and shape. This could be due to its unfamiliarity with the specific material and shape words that we use in our dataset, whereas color concepts are more universal and are easier to learn from pixels. Finetuning helps improve the performance, but still lagged behind our method. More qualitative examples can be found in the Appendix. 6 Conclusion In this work, we took a human inspired approach to acquire multi-attribute concepts. We define the acquisition of word as learning both an information filtration process, and a representation-symbol mapping. We mimic the classroom setting, constructed a small clean dataset SOLA for efficient comparative and continual learning. We evaluated the learned representations in multi-attribute recognition, compositional simulation and reasoning tasks. Our experiment results outperformed CLIP variations in controlled settings, and showed early signs of a promising new method for continual grounded word acquisition through comparative learning. ## Limitations As exciting as this work is, it does have several limitations and a lot of opportunities for future improvement. How to scale? We demonstrated our method in a highly constrained environment with very limited concepts, whereas humans are able to pick up new concepts in the noisy world with few shots. How could these representations learned in a clean environment be useful in real world? Would comparative learning still be useful outside of the classroom? We followed the baby step of progressive alignment and hoping that establishing a set of clean base knowledge, can ease the acquisition of future more complex concepts through comparisons with existing knowledge, analogy and hierarchical abstraction. This hypothesis remains to be investigated in the future. What about other words? Some concepts can be learned through just visual inputs, like color, whereas other concepts require grounding through different sensory types or modalities, like "hot", "loud" and "fast". Even more concepts are built upon existing words through abstraction and generalization, e.g. "philosophy", "momentum". Comparisons can still be used to ground these words, but input to these comparisons could vary from data modalities to computation methods, to abstract representations. We leave these for future work. How to put words into sentences? This work only focused on the grounding of individual words into visual representations, whereas sentence syntax, grammar, and article structure are yet to be learned. For future work, we could treat language as its own modality, and learn the structure through comparisons as well. Just like in an elementary linguistic class, a teacher would list out several examples "I shower"/"You shower"/"He shower". Humans can learn grammar through what's changing and what's constant. This could be an interesting next step to look into. Who can offer the supervision? As mentioned at the beginning, human language acquisition is a highly supervised learning process. Babies are rarely inventing new words but learning how adults label objects through generations of conventions. A classroom setting with highly structured curriculum and clean dataset takes a lot of curriculum design and heavy annotation. This is the cost that humans are willing to spend in order to educate human children from kindergarten to college. Maybe it is a fair price that we have to pay in order for artificial intelligence to learn what we want them to learn. About the current work itself, there are several constraints that we are limited to. First of all, due to limited computation resources and data size, we had to take a shortcut by using a pre-trained CLIP embedding as a starting point for our models. In theory, we could and would love to train our models from scratch, just like how a new born would learn their first language. A dataset like Toys-200 (Stojanov et al., 2019) could mimic the process of babies interacting with the objects, get a 360 view and help build 3D mental representations. Second of all, like many other continual learning methods, an unbounded memory space is an unrealistic assumption. As more concepts are learned, the memory space would grow fast, so as the search time. An interesting next step could be to reorganize the memory according to the association distances and hierarchical structures. Lastly, our work aims at proposing a novel language acquisition definition and the comparative continual learning method. We used somewhat simple model architecture and image generation models for proof-of-concept demonstration on the method. More sophisticated model architecture and training can be switched for different input modalities and applications. Listed above are several major limitations and future directions based on current work. We are more than happy to take constructive suggestions and criticism to help improve this and future works. ## Ethics Statement This work took the human inspired approach to learn word acquisition for artificial intelligent agents. We generated a small clean dataset using the open-source simulation software Kubric, which was designed for semi-realistic image/video synthesis. All of our training was done on a single machine with 8GB GPU and an Interl i9 processor, with very limited environmental cost. This work does not involve human subject, nor can be can be used to directly interact with humans. ## Acknowledgements This work was supported in part by NSF IIS-1949634 and DARPA PTG program HR00112220003. The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. ## References Alper Ahmetoglu, Erhan Oztop, and Emre Ugur. 2022. Learning multi-object symbols for manipulation with attentive deep effect predictors. Erin M Anderson, Yin-Juei Chang, Susan Hespos, and Dedre Gentner. 2018. Comparison within pairs promotes analogical abstraction in three-month-olds. Cognition, 176:74–86. Muhammad Umer Anwaar, Egor Labintcev, and Martin Kleinsteuber. 2021. Compositional learning of image-text query for image retrieval. In *Proceedings* of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 1140–1149. Linden J. Ball and Valerie A. Thompson, editors. 2018. The Routledge international handbook of thinking and reasoning. Routledge, New York. Yuwei Bao, Sayan Ghosh, and Joyce Chai. 2022. Learning to mediate disparities towards pragmatic communication. Tim Brooks, Aleksander Holynski, and Alexei A Efros. 2022. Instructpix2pix: Learning to follow image editing instructions. *arXiv preprint arXiv:2211.09800*. Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. 2015. Shapenet: An information-rich 3d model repository. *arXiv preprint arXiv:1512.03012*. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020b. Uniter: Universal image-text representation learning. In European conference on computer vision, pages 104–120. Springer. Jane B Childers. Language and concept acquisition from infancy through childhood. Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation. In *International Conference on Machine Learning*, pages 1931–1942. PMLR. Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. 2014. Discriminative unsupervised feature learning with convolutional neural networks. Advances in neural information processing systems, 27. Yilun Du, Shuang Li, and Igor Mordatch. 2020. Compositional visual generation with energy based models. Advances in Neural Information Processing Systems, 33:6637–6647. Yilun Du, Shuang Li, Yash Sharma, Josh Tenenbaum, and Igor Mordatch. 2021. Unsupervised learning of compositional energy concepts. Advances in Neural Information Processing Systems, 34:15608–15620. Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, and William Yang Wang. 2022. Training-free structured diffusion guidance for compositional text-to-image synthesis. arXiv preprint arXiv:2212.05032. Dedre Gentner. 1983. Structure-mapping: A theoretical framework for analogy. *Cognitive science*, 7(2):155– 170. Dedre Gentner and Francisco Maravilla. 2017. Analogical reasoning. In The Routledge International Handbook of Thinking and Reasoning, pages 186– 203. Routledge. Dedre Gentner and Arthur B. Markman. 1994. Structural alignment in comparison: No difference without similarity. *Psychological Science*, 5(3):152–158. Klaus Greff, Francois Belletti, Lucas Beyer, Carl Doersch, Yilun Du, Daniel Duckworth, David J Fleet, Dan Gnanapragasam, Florian Golemo, Charles Herrmann, Thomas Kipf, Abhijit Kundu, Dmitry Lagun, Issam Laradji, Hsueh-Ti (Derek) Liu, Henning Meyer, Yishu Miao, Derek Nowrouzezahrai, Cengiz Oztireli, Etienne Pot, Noha Radwan, Daniel Rebain, Sara Sabour, Mehdi S. M. Sajjadi, Matan Sela, Vincent Sitzmann, Austin Stone, Deqing Sun, Suhani Vora, Ziyu Wang, Tianhao Wu, Kwang Moo Yi, Fangcheng Zhong, and Andrea Tagliasacchi. 2022. Kubric: a scalable dataset generator. Stevan Harnad. 1990. The symbol grounding problem. Physica D, 42:335–346. Susan J Hespos, Erin Anderson, and Dedre Gentner. 2020. Structure-mapping processes enable infants' learning across domains including language. In *Language and concept acquisition from infancy through* childhood, pages 79–104. Springer. Felix Hill, Olivier Tieleman, Tamara Von Glehn, Nathaniel Wong, Hamza Merzic, and Stephen Clark. 2020. Grounded language learning fast and slow. arXiv preprint arXiv:2009.01719. Phillip Isola, Joseph J Lim, and Edward H Adelson. 2015. Discovering states and transformations in image collections. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pages 1383–1391. Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In *Proceedings of the IEEE conference* on computer vision and pattern recognition, pages 2901–2910. Aishwarya Kamath, Mannat Singh, Yann LeCun, Ishan Misra, Gabriel Synnaeve, and Nicolas Carion. 2021. MDETR - modulated detection for end-to-end multimodal understanding. *CoRR*, abs/2104.12763. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. *Advances in Neural* Information Processing Systems, 33:18661–18673. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2017. Overcoming catastrophic forgetting in neural networks. *Proceedings of the National Academy of* Sciences, 114(13):3521–3526. Laura Kotovsky and Dedre Gentner. 1996. Comparison and categorization in the development of relational similarity. *Child Development*, 67(6):2797–2822. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32– 73. Sven E Kuehne, Dedre Gentner, and Kenneth D Forbus. 2000. Modeling infant learning via symbolic structural alignment. In Proceedings of the twenty-second annual conference of the cognitive science society, pages 286–291. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020a. Oscar: Objectsemantics aligned pre-training for vision-language tasks. In *European Conference on Computer Vision*, pages 121–137. Springer. Y. Li, Y. Xu, X. Mao, and C. Lu. 2020b. Symmetry and group in attribute-object compositions. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11313–11322, Los Alamitos, CA, USA. IEEE Computer Society. Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B Tenenbaum. 2022. Compositional visual generation with composable diffusion models. arXiv preprint arXiv:2206.01714. Vincenzo Lomonaco and Davide Maltoni. 2017. Core50: a new dataset and benchmark for continuous object recognition. *CoRR*, abs/1705.03550. David Lopez-Paz and Marc'Aurelio Ranzato. 2017. Gradient episodic memory for continuum learning. CoRR, abs/1706.08840. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. *Advances in neural information processing systems*, 32. M. Mancini, M. Naeem, Y. Xian, and Z. Akata. 2021. Learning graph embeddings for open world compositional zero-shot learning. IEEE Transactions on Pattern Analysis amp; Machine Intelligence, (01):1– 1. Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. Tenenbaum, and Jiajun Wu. 2019. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. *CoRR*, abs/1904.12584. Arthur B Markman and Dedre Gentner. 1993. Structural alignment during similarity comparisons. Cognitive psychology, 25(4):431–467. Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. 2017. Variational continual learning. *arXiv preprint arXiv:1710.10628*. German Ignacio Parisi, Jun Tani, Cornelius Weber, and Stefan Wermter. 2018. Lifelong learning of spatiotemporal representations with dual-memory recurrent self-organization. *CoRR*, abs/1805.10966. Ofek Pearl, Itai Lang, Yuhua Hu, Raymond A Yeh, and Rana Hanocka. 2022. Geocode: Interpretable shape programs. *arXiv preprint arXiv:2212.11715*. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763. PMLR. Vignesh Ramanathan, Anmol Kalia, Vladan Petrovic, Yi Wen, Baixue Zheng, Baishan Guo, Rui Wang, Aaron Marquez, Rama Kovvuri, Abhishek Kadian, et al. 2023. Paco: Parts and attributes of common objects. *arXiv preprint arXiv:2301.01795*. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with clip latents. Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gómez Colmenarejo, Alexander Novikov, Gabriel Barth-maron, Mai Giménez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards, Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar, and Nando de Freitas. 2022. A generalist agent. Transactions on Machine Learning Research. Featured Certification. Mark B Ring. 1998. Child: A first step towards continual learning. In *Learning to learn*, pages 261–292. Springer. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. 2022. Photorealistic text-to-image diffusion models with deep language understanding. Jeffrey C Schlimmer and Douglas Fisher. 1986. A case study of incremental concept induction. In *AAAI*, volume 86, pages 496–501. Ruxue Shao and Dedre Gentner. 2019. Symmetry: Lowlevel visual feature or abstract relation? In *Proceedings of the 41st Annual Meeting of the Cognitive Science Society*, Proceedings of the 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019, pages 2790–2796. The Cognitive Science Society. Stefan Stojanov, Samarth Mishra, Ngoc Anh Thai, Nikhil Dhanda, Ahmad Humayun, Chen Yu, Linda B. Smith, and James M. Rehg. 2019. Incremental object learning from contiguous views. In *2019 IEEE/CVF* Conference on Computer Vision and Pattern Recognition (CVPR), pages 8769–8778. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. *arXiv preprint arXiv:1908.07490*. Hao Tan and Mohit Bansal. 2020. Vokenization: Improving language understanding with contextualized, visual-grounded supervision. *arXiv preprint* arXiv:2010.06775. Michael Tomasello and Michael Jeffrey Farrar. 1986. Joint attention and early language. *Child Development*, 57(6):1454–1463. Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal few-shot learning with frozen language models. *Advances in Neural Information Processing Systems*, 34:200–212. Gido M Van de Ven and Andreas S Tolias. 2019. Three scenarios for continual learning. *arXiv preprint* arXiv:1904.07734. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. 2011. The caltech-ucsd birds-200-2011 dataset. Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021. Simvlm: Simple visual language model pretraining with weak supervision. *arXiv preprint arXiv:2108.10904*. Tailin Wu, Megan Tjandrasuwita, Zhengxuan Wu, Xuelin Yang, Kevin Liu, Rok Sosic, and Jure ˇ Leskovec. 2022. Zeroc: A neuro-symbolic model for zero-shot concept recognition and acquisition at inference time. Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. 2018. Unsupervised feature learning via nonparametric instance discrimination. In *Proceedings* of the IEEE conference on computer vision and pattern recognition, pages 3733–3742. Jianwen Xie, Yang Lu, Song-Chun Zhu, and Yingnian Wu. 2016. A theory of generative convnet. In *Proceedings of The 33rd International Conference on* Machine Learning, volume 48 of *Proceedings of Machine Learning Research*, pages 2635–2644, New York, New York, USA. PMLR. Aron Yu and Kristen Grauman. 2014. Fine-grained visual comparisons with local learning. In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pages 192–199. Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Revisiting visual representations in vision-language models. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5579–5588. Qi Zheng, Chaoyue Wang, Dadong Wang, and Dacheng Tao. 2022. Visual superordinate abstraction for robust concept learning. ## A Dataset Sola Here is a detailed description of the Simulated Objects for Language Acquisition (SOLA) dataset: Learning Attributes (Figure 10): 1. Color: 8 2. Material: 4 3. Shape: 11 Changing Attributes (Figure 11): 1. Lighting: 3 2. Camera Angle: 6 Variation Attributes (Figure 12): 1. Shade: 3 2. Size: 3 3. Stretch: 4 Image Types: 1. RGBA 2. Depth 3. Surface Normal 4. Segmentation 5. Object Coordinates Coordinates This amounts to 7325 RGBA images in total, with 6336 originals and 989 with variations. A training and testing split can be found in Table 1. The original image set was first broke down into Novel Composition Training and Novel Composition Testing. 9 pairs of attributes are: 1. (yellow, cone) 2. (green, metal) 3. (plastic, cube) 4. (purple, teapot) 5. (red metal) 6. (glass, torus_knot) 7. (white, cylinder) 8. (aqua, rubber) 9. (glass, sphere) ![13_image_0.png](13_image_0.png) Figure 10: SOLA Learning Attributes ![13_image_1.png](13_image_1.png) ![14_image_0.png](14_image_0.png) For continual learning evaluation, we split the vocabulary into the following two sets. Any images associated at least one of the concepts in D*unknown* are assembled into D*unknown* train/test datasets, and the rest in D*known*. The number of samples in each split can be found in Table 1. Known = [brown, green, blue, aqua, purple, red, white, rubber, material, plastic, cube, cylinder, sphere, cone, torus, gear, sponge, spot, teapot, suzzane] Unknown = [yellow, glass, torus_knot] ## B Model Architecture And Training For the encoder training, we used the pretrained CLIP image encoder (frozen) to embed the input images, going through a filter of 512 dimensions, and two fully connected layers with a hidden dimension of 128 and latent dimension of 16. Each round is trained on a similarity batch and a difference batch of size 128 each. The training moves on to the next concept when the loss went down below 0.008 or hit 200 rounds. The whole vocabulary was trained for 5 epochs with a learning rate of 1e−3. For the decoder training, we froze the weights of the filter and the pre-trained representations from the previous step, and trained four fully connected layers with a dimension upsampling 16 → 64 → 64 → 96 → 512 with a dropout rate of 0.2. Each concept was trained for 100 round with a batch size of 128. The whole vocabulary was trained for 5 epochs with a learning rate of 1e−3. For comparisons, CLIP Contrastive embedded both image inputs, and text inputs. The image embeddings went through two fully connected layers with a hidden dimension of 128 and output dimension of the vocabulary size. CLIP Linear trained two fully connected layers on top of the image embeddings with a hidden dimension of 128 and output dimension of the vocabulary size. CLIP MultiAttr did the same thing for each word, and the output dimension was 1 over softmax predictions. CLIP Text Finetune trained two fully connected layers on top of the text embeddings, with an input & output dimensions of 512, and hidden dimension of 66. We tried to keep all the model architecture relative the same or having similar number of parameters for a fair comparison. The models were each trained for 50 epochs with a learning rate of 1e−3. The small image generator contains 5 upsampling convolution layers with dimensions going from (512,1) to (3,224,224). The number of channels are [128, 64, 32, 16, 3]. We trained 100 epochs on our original dataset with a learning rate or 1e−3. All experiments done on a single NVIDIA(R) GeForce(R) RTX 2070 SUPER(TM) 8GB GDDR6 and 10th Gen Intel(R) Core(TM) i9-10900K processor. ## C **Sola And Other Dataset Comparisons** | Dataset | Size | Language | Image Type(s) | Purpose | Structural Alignment | |---------------|--------|------------|----------------------------------------|---------------------------|------------------------| | CUB | 11.8k | Sentences | rgb, bbox | fine-grain classification | No | | UT-Zappos | 50k | Words | rgb | attribute comparison | No | | ShapeNet | 51k | Words | 3D | 3D shapes | No | | MIT-States | 53k | Words | rgb | state transformation | No | | PACO | 81.5k | Words | rgb, seg | part segmentation | No | | CLEVR | 100k | Sentences | rgb | reasoning diagnosis | No | | Visual Genome | 108k | Sentences | rgb, bbox | question answering | No | | SOLA | 36.6k | Words | rgba, depth, seg, surf form, obj cords | comparative acquisition | Yes | Table 3: Multimodal Dataset Comparisons ![16_image_0.png](16_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Yes. Section Limitation. A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Yes. Section Abstraction and Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? ✗ ## B ✓ **Did You Use Or Create Scientific Artifacts?** Yes. Section 3. ✓ B1. Did you cite the creators of artifacts you used? Yes. Section 3. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Yes. Section 3. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Yes. Section 3. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Yes. Section 3 and Appendix A. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Yes. Section 3 and Appendix A. ## C ✓ **Did You Run Computational Experiments?** Yes. Section 5. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Yes. Section 5 and Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Yes. Section 5 and Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Yes. Section 5. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Yes. Section 5 and Appendix B D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** No. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
przepiorkowski-wozniak-2023-conjunct
Conjunct Lengths in {E}nglish, Dependency Length Minimization, and Dependency Structure of Coordination
https://aclanthology.org/2023.acl-long.864
This paper confirms that, in English binary coordinations, left conjuncts tend to be shorter than right conjuncts, regardless of the position of the governor of the coordination. We demonstrate that this tendency becomes stronger when length differences are greater, but only when the governor is on the left or absent, not when it is on the right. We explain this effect via Dependency Length Minimization and we show that this explanation provides support for symmetrical dependency structures of coordination (where coordination is multi-headed by all conjuncts, as in Word Grammar or in enhanced Universal Dependencies, or where it single-headed by the conjunction, as in the Prague Dependency Treebank), as opposed to asymmetrical structures (where coordination is headed by the first conjunct, as in the Meaning{--}Text Theory or in basic Universal Dependencies).
# Conjunct Lengths In English, Dependency Length Minimization, And Dependency Structure Of Coordination Adam Przepiórkowski ICS Polish Academy of Sciences and University of Warsaw adamp@ipipan.waw.pl ## Abstract This paper confirms that, in English binary coordinations, left conjuncts tend to be shorter than right conjuncts, regardless of the position of the governor of the coordination. We demonstrate that this tendency becomes stronger when length differences are greater, but only when the governor is on the left or absent, not when it is on the right. We explain this effect via Dependency Length Minimization and we show that this explanation provides support for symmetrical dependency structures of coordination, as opposed to structures headed by the first conjunct. ## 1 Introduction It has been observed for various particular types of coordination in English that left-most conjuncts tend to be shorter than right-most conjuncts (e.g., Gibson et al. 1996, Temperley 2005, Lohmann 2014). This is illustrated in (1) from the Penn Treebank (PTB; Marcus et al. 1993), where the left conjunct, *ship*, is shorter than the right conjunct, hope I get paid, in terms of the number of words (1 vs. 4), the number of syllables (1 vs. 4), and the number of characters (4 vs. 15, including spaces). (1) I'm going to [[ship] and [hope I get paid]]. However, to the best of our knowledge, there are no demonstrations of this effect that would take all kinds of coordinations into account and that would use various length metrics. Filling this gap is the first contribution of this paper. There is even less work that asks whether it is really the left-to-right order of conjuncts that matters here, or whether it is perhaps the closeness to the external head - henceforth, *governor*. (1), where the governor to is on the left, lends support to both - "leftness" and "closeness" - hypotheses. But the two hypotheses make different predictions when the governor is on the right, as in (2)–(3). (2) [[Walter Sisulu] and [the African National Congress]] *came* home yesterday. Michał **Wo´zniak** University of Warsaw m.wozniak60@student.uw.edu.pl (3) [[My younger daughter] and [I]] are fine. (2), where the left conjunct is shorter (2 vs. 4 words, 5 vs. 9 syllables, 13 vs. 29 characters), only supports the "leftness" hypothesis, while (3), where the right conjunct is shorter (1 vs. 3 words, 1 vs. 5 syllables, 1 vs. 19 characters), only supports the "closeness" hypothesis. The second contribution of this paper is to establish two facts regarding the influence of the governor on conjunct lengths in English. The first fact is that left conjuncts tend to be shorter even when the governor is on the right, which immediately invalidates the "closeness" hypothesis. The second observation is more interesting and has important consequences for dependency theories of coordination: the position of the governor is crucial in how this tendency for left conjuncts to be shorter changes with differences in conjunct lengths. When the governor is on the left, this tendency becomes stronger with increasing length differences between conjuncts, but when the governor is on the right, this effect disappears. The third contribution of this paper is to provide an explanation of this effect in terms of Dependency Length Minimization (DLM) - the robustly demonstrated tendency for natural languages to strive for maximally local dependencies. Our explanation is compatible with the view that such distance minimization pressure is at work both at the level of use and at the level of grammar (Hawkins 1994, Futrell et al. 2020). The final novel contribution is to demonstrate that this explanation is possible only on two of the four main linguistic approaches to coordination, namely, on the approaches schematically represented in (4)–(5), but not on the other two approaches schematized in (6)–(7).1 (4) **Conjunction-headed/Prague**: ![1_image_0.png](1_image_0.png) (6) **Bouquet/Stanford**: ## (7) **Chain/Moscow**: ![1_image_4.png](1_image_4.png) We also show that DLM considerations favour the representation of coordination in the enhanced version of the current treebank annotation standard, Universal Dependencies (UD; https:// universaldependencies.org/; Nivre et al. 2016, de Marneffe et al. 2021, Zeman et al. 2022), over the basic UD representation of coordination. Hence, the empirical results of this paper lend support to the issue of the appropriate dependency structure of coordination both in theoretical linguistics and in corpora. ## 2 Data Work reported here is based on one of standard syntactic resources, the Penn Treebank. More precisely, we utilized the version of PTB, which we call PTB&, made available by Ficler and Goldberg (2016).2 It incorporates earlier corrections of the internal structure of nominal phrases (Vadas and Curran 2007) and - importantly - it improves on PTB by offering explicit and relatively consistent information about coordinations (see Ficler and Goldberg 2016 for details). Unlike treebanks within Universal Dependencies, PTB& makes extents of conjuncts unambiguous and, in particular, it explicitly marks shared dependents of conjuncts, but it does not explicitly mark heads of constructions, reflect where a given approach is conspicuously assumed: (4) - in the Prague Dependency Treebank (https://ufal. mff.cuni.cz/prague-dependency-treebank; Hajic et al. ˇ 2006), (6) - in the Stanford dependency parser (https://nlp. stanford.edu/software/lex-parser.shtml; de Marneffe et al. 2006), (7) - in the Meaning–Text Theory originally developed in Moscow (Mel'cuk ˇ 1974, 1988, 2009). In the same spirit, we call the approach in (5) "London", as it is often associated with Word Grammar (Hudson 2010, 1990, 1984: 225), developed at University College London. 2https://github.com/Jess1ca/ CoordinationExtPTB so also information about governors of coordinate structures is not explicitly available. However, it is relatively easy to construct rules finding governors. A simplified example of such a rule is: "if the mother of coordination c is a PP, try to locate a sister of c of category IN or TO (cf. (8)), or – failing that - of category VBG (cf. (9))".3 (8) Flesh goes to total alert for [[flight] or [fight]]. (9) The visitors then listed technologies up for sale, *including* [[launch services] and [propulsion hardware]]. The final evaluation of the program implementing these rules, performed on previously unseen random 100 coordinations, gave the 92% accuracy in locating a specific governor and, crucially, 97% in deciding whether the governor is absent, on the left, or on the right of the coordinate structure.4 Out of about 49,200 sentences (1.25M tokens) in PTB&, 19,095 contain at least one coordination, with the total of 24,446 coordinations. All coordinate structures (i.e., *–CCP nodes) with exactly two conjuncts - 21,825 altogether - where automatically extracted from PTB&, together with information about the location of the governor, if any.5 There were 17,825 coordinations with a governor on the left (13,106, i.e., 73.5%) or on the right (4,719, i.e., 26.5%). The length of each conjunct was measured as in §1: in characters (textual length), in syllables (as in, e.g., Benor and Levy 2006 and Lohmann 2014; an approximation of spoken length), and in words (as in, e.g., Gibson et al. 1996 and Temperley 2005; common in discussions of DLM). Implementing character and word metrics was straightforward. Syllable counting was done with the help of two Python libraries: Inflect, for converting numbers written with digits to words, and CMUdict, for ![1_image_1.png](1_image_1.png) ![1_image_2.png](1_image_2.png) ![1_image_3.png](1_image_3.png) | median | m e a n | | | | | | |-----------------------------------|-----------|-------|-------|-------|---------|----------| | left right | left | right | V | p | | | | All coordinations (N = 21,825) | | | | | | | | characters | 15 | 20 | 26.68 | 32.34 | 7.4e+07 | 3.7e-262 | | syllables | 5 | 7 | 8.35 | 9.85 | 6.5e+07 | 5.5e-171 | | words | 3 | 3 | 4.42 | 5.36 | 3.2e+07 | 6.6e-234 | | No governor (N = 4,000) | | | | | | | | characters | 49 | 56 | 54.78 | 64.66 | 3e+06 | 1.9e-33 | | syllables | 15 | 16 | 16.20 | 18.88 | 2.8e+06 | 1.8e-28 | | words | 8 | 10 | 9.23 | 11.00 | 2.4e+06 | 2.3e-40 | | Governor on the left (N = 13,106) | | | | | | | | characters | 14 | 18 | 22.40 | 28.01 | 2.5e+07 | 8.2e-214 | | syllables | 5 | 6 | 7.29 | 8.75 | 2.2e+07 | 6.5e-127 | | words | 3 | 3 | 3.70 | 4.63 | 1.1e+07 | 1.9e-207 | | Governor on the right (N = 4,719) | | | | | | | | characters | 9 | 10 | 14.76 | 16.94 | 3.3e+06 | 3.2e-49 | | syllables | 3 | 4 | 4.64 | 5.24 | 2.6e+06 | 3.5e-35 | | words | 1 | 1 | 2.35 | 2.60 | 5.6e+05 | 4.4e-23 | Table 1: Medians and means of lengths of left and right conjuncts in binary coordinations in PTB& looking up the number of syllables for particular words. Additional heuristics were implemented for tokens unknown to CMUdict (abbreviations, special symbols including $, etc.; see the Appendix). ## 3 Basic Statistics Table 1 shows that, in binary coordinations in PTB&, left conjuncts tend to be shorter than right conjuncts. This is true about the whole population of binary coordinations, as well as about each of its three subpopulations: with no governor, with the governor on the left, and - crucially - with the governor on the right. As noted above, this last result immediately invalidates the "closeness" hypothesis. In each population and for any length unit, the median of left conjunct lengths is smaller than or equal to the median of right conjunct lengths, and in each case the mean of left conjunct lengths is smaller than the mean of right conjunct lengths. All 12 differences between means in Table 1 are highly significant (p ⌧ 0.001), as established by the onesided Wilcoxon test (with the values of V statistics and p reported in the table).6 This confirms and | g | o | v | e | r | n | o | r | |----------------|--------------|-------|-------|------|------|--------|-----| | o n t he le ft | on the right | | | | | | | | prop | N | prop | N | 2(1) | p | | | | characters | 0.632 | 12140 | 0.603 | 4236 | 11.5 | 0.0007 | | | syllables | 0.599 | 11027 | 0.600 | 3671 | 0.0 | 0.87 | | | words | 0.674 | 8377 | 0.625 | 1754 | 15.1 | 0.0001 | | Table 2: Proportions of shorter conjuncts occurring on the left (vs. right) depending on the position of the governor, in coordinations with conjuncts of different lengths extends partial results of previous works, which focused on particular constructions and used particular length metrics: there is a general tendency in English for left conjuncts to be shorter. ## 4 Dependence On Governor Position The previous section showed that left conjuncts tend to be shorter, even when the governor is on the right. However, this section will demonstrate that the position of the governor matters and that the governor does attract shorter conjuncts to some extent. We first report on an unsuccessful attempt to make this demonstration. If the governor attracts shorter conjuncts, then we might expect more left conjuncts to be shorter when the governor is on the left than when the governor is on the right. Table 2 shows that this expectation is *partially met*: when length is measured in characters or words, the proportion of shorter left conjuncts is indeed greater when the governor is on the left; these two differences are highly significant (p < 0.001), as ascertained by the twosided proportions test.7 However, when length is measured in syllables, the proportion of shorter left conjuncts is slightly higher in the opposite scenario, i.e., when the governor is on the right, but this difference is not statistically significant. Moreover, by the same reasoning, when there is no governor, we might expect relevant proportions to be somewhere between those in Table 2. As shown in Table 3, this expectation is *not met*: when there is no governor, the proportions of shorter left conjuncts are smaller than when there is a governor on the left or on the right. The reason for this is that coordinations without a governor are very | no governor | vs. on the left | vs. on the right | | | | | |---------------|-------------------|--------------------|------|---------|-----|--------| | prop | N | 2(1) | p | 2(1) | p | | | characters | 0.575 | 3932 | 41.5 | 1.2e-10 | 6.7 | 0.0098 | | syllables | 0.569 | 3764 | 10.4 | 0.0012 | 7.6 | 0.0057 | | words | 0.587 | 3574 | 82.4 | 1.1e-19 | 7.2 | 0.0072 | Table 3: Proportions of shorter conjuncts occurring on the left (vs. right) in the absence of governor, in coordinations with conjuncts of different lengths specific: 97% of them are coordinations of matrix Ss (sentences; 57%) or VPs (verbal phrases; 40%), and many of such coordinations are constituted by a main sentence or VP, with another - shorter - added as a comment to the first one (see (10)) or heavily elided (see (11)).8 (10) [[Mr. Straszheim expects he will take some heat], and [he's right]]. (11) The bank [[employs 8,000 people in Spain] and [2,000 abroad]]. In summary, comparing total proportions of shorter left conjuncts does not provide us with an argument for the influence of the governor. However, such an influence is clear when we investigate how these proportions change with absolute length differences. Figure 1 contains the results of fitting monofactorial binary logistic regression models to the three subpopulations differing in the presence and position of the governor (see the three rows in this figure); this is done for all three length metrics (see the three columns).9 The figure shows that when there is no governor (the first row of plots) and when it is on the left (the second row), the proportions of shorter left conjuncts grow steadily; in all six cases the probability p that this positive tendency is accidental is well below 0.001. Interestingly, the situation changes drastically when we consider coordinations with a governor on the right (the third row). Here, the correlation is not significantly positive, and in the case of words it is even (insignificantly) negative. Additional multifactorial binary logistic regression analysis confirmed the very significant interaction of governor position and absolute length difference (p < 0.001 for characters and syllables, p < 0.01 for words). Moreover, the analysis of slope contrasts (performed with R's emmeans::emtrends) shows that the slope is statistically significantly flatter when the governor is on the right than when it is on the left or missing. Finally, in the case of lengths measured in characters and syllables, the slope is significantly steeper when the governor is on the left than when there is no governor.10 In summary, the tendency for shorter conjuncts to occur on the left grows with absolute length difference between conjuncts when there is no governor and - even more so - when it is on the left, but not when it is on the right. ## 5 Dependency Length Minimization Dependency Length Minimization (DLM) is the tendency for natural languages to prefer structures with shorter dependencies to those with longer dependencies (for overviews see, e.g., Liu et al. 2017, Temperley and Gildea 2018). This tendency has long been noted in linguistics (Behaghel 1909, 1932: 4), has been confirmed by numerous corpus studies (some of the earliest being Hawkins 1994 and Ferrer-i-Cancho 2004) combined with computer simulations (starting with Gildea and Temperley 2007, 2010 and Liu 2008), and has received various psycholinguistic and statistical explanations (e.g., Hawkins 1994, Gibson 1998, Futrell and Levy 2017, Futrell 2019). As argued in Hawkins 1994 and Futrell et al. 2020, this tendency operates both at the level of grammar and at the level of use. At the level of use, when both orders of two dependents are grammatical, the shorter dependent tends to occur closer to the governor - both in headinitial languages, where the short–long tendency is observed (e.g., Bever 1970, Hawkins 1994, Arnold et al. 2000), and in head-final languages, where the long–short tendency is seen (e.g., Hawkins 1994, Yamashita and Chang 2001, Yamashita 2002). For example, when two PP dependents of a verb are of similar lengths, both orders are perfectly fine (e.g., *sing [in the club] [for an hour]* and sing [for an hour] [in the club]), but as length differences between the two PPs increase, so does the tendency for the shorter to occur next to the verb (e.g., *sing* 10In the case of words, the slight opposite tendency is observed, but it is not (even marginally) statistically significant. ![4_image_0.png](4_image_0.png) [for an hour] [in the most famous jazz club in the whole of USA] is more likely to occur than *sing* [in the most famous jazz club in the whole of USA] [for an hour]). At the level of grammar, certain conventionalized word orders turn out to minimize dependency lengths on average. For example, when an NP (nominal phrase) and a PP are both dependents of a verb V, the [V NP PP] order incurs shorter dependency lengths than the [V PP NP] order on average, given that NPs are on average shorter than PPs. Hawkins (1994: 90) argues that this tendency is conventionalized: present in grammar, not in use. The reason for this claim is that there is a strong preference for this order not only when the NP is shorter than the PP, but also when they are of similar lengths (e.g., *I sold [my mother's ring] [for five* dollars] vs. *I sold [for five dollars] [my mother's* ring]). However, this convention may be overridden in use, when length differences become large (e.g., *I sold [for five dollars] [my mother's silver* engagement ring that she got from my father] is more natural), again in compliance with DLM. We hypothesize that the same processes are at play in coordination. That is, the dependency structure of coordination must be such that shorter left conjuncts minimize dependency lengths - the more so, the bigger the length difference - when the governor is on the left (see the middle row of plots in Figure 1) or absent (see the top row), but not when it is on the right (see the bottom row). In the next section we will investigate which of the dependency approaches to coordination are compatible with such a DLM-based explanation of the effects illustrated in Figure 1. ## 6 Dependency Structure Of Coordination The following reasoning is based on the observation that, in English, heads of both conjuncts are on average situated the same - usually short - distance from the left periphery. In the case of PPs, VPs, CPs (complementizer phrases, e.g., *that he came*; marked as SBAR in PTB) and NPs on their analysis as determiner phrases (Abney 1987, Hudson 1990), this will usually be the left-most word. In the case of NPs analysed as headed by the noun, this will be the second word on average (first, in the case of determinerless plurals and mass terms, third when both a determiner and an adjective is present, etc.), in the case of typical sentences the head will be offset by the subject, etc. Let us first consider the Bouquet/Stanford approach. As can be seen in (12a–b), which illustrates coordination with the governor on the left, the total dependency length is smaller when the shorter conjunct occurs on the left (as in (12a)) than when it occurs on the right (as in (12b)). The same holds for coordinations with no governor, in which case the head of the first conjunct is the root; see (12c–d). This agrees with the tendencies illustrated in Figure 1. However, also when the governor is on the right, as in (12e–f), shorter left conjuncts minimize dependency length. Moreover, the gain is the same as in the other two situations: it is the length difference of the conjuncts. So, if the Stanford approach accurately reflected dependencies in coordination, we would expect the third row in Figure 1 to look the same as the first two rows, contrary to facts. (12) **Bouquet/Stanford**: ![5_image_0.png](5_image_0.png) Exactly the same reasoning applies to the Chain/Moscow approach, illustrated in (13).11 (13) **Chain/Moscow**: ![5_image_1.png](5_image_1.png) Hence, the Moscow approach also does not provide a good linguistic model of our empirical findings. On the other hand, the Conjunctionheaded/Prague approach is compatible with our corpus observations. In this case, when the governor is on the left (see (14a–b)), the dependency minimization gain is twice greater than when there is no governor (see (14c–d)): it is twice the length difference between conjuncts. This larger minimization gain may explain the above observation that the slopes in Figure 1 tend to be steeper when there is a governor on the left than when there is no governor. (14) **Conjunction-headed/Prague**: ![6_image_1.png](6_image_1.png) Interestingly, unlike in the case of the previous two approaches to coordination, relative conjunct lengths do not matter for dependency length minimization when the governor is on the right (see (14e–f)). Hence, the Prague approach makes it possible to explain the effects observed in Figure 1 and, moreover, the explanation is purely at the level of use: it does not invoke any conventionalized effects of DLM. Let us finally consider the Multi-headed/London approach. At first, it seems incompatible with our empirical findings: when the governor is on the left (see (15a–b)), shorter left conjuncts minimize dependency length, as confirmed by empirical observations, but when there is no governor (see (15c–d)), lengths of conjuncts do not matter for dependency minimization, and the significantly positive slopes in plots in the top row of Figure 1 remain unexplained. Moreover, when the governor is on the right (see (15e–f)), shorter *right* conjuncts seem to minimize dependencies, so the lines in the third row of Figure 1 are expected to have significantly negative slopes, symmetrical to the positive slopes of lines in the middle row, contrary to facts. (15) **Multi-headed/London**: ![6_image_0.png](6_image_0.png) However, the London approach to coordination is compatible with Figure 1 on the assumption that the at-use pressure for shorter left conjuncts in the majority of governed coordinate structures is conventionalized as an at-grammar preference for shorter left conjuncts. That is, just as in the case of NP and PP dependents of verbs, DLM may be assumed to work in coordination both at the level of use and at the level of grammar. In the case of coordination with no governor, where there are no at-use preferences for relative conjunct lengths (see (15c–d)), this at-grammar preference still has the effect that shorter conjuncts are preferred on the left, as seen in the top row of Figure 1. Moreover, this at-grammar preference makes it possible to explain the steeper slopes when the governor is on the left (the middle row), namely, as an additive effect of the at-use pressure (see (15a–b)) and the at-grammar convention. Finally, in the case of governor on the right (see (15e–f)), the at-use pressure for shorter right conjuncts is at conflict with the at-grammar preference for shorter left conjuncts, and they seem to largely cancel each other out, as observed in the third row of Figure 1. In partial summary, out of the four standard dependency approaches to coordination, only two make it possible to explain the curious tendencies in Figure 1 as DLM effects, with DLM operational either only at use (the Prague approach) or both at use and at grammar (the London approach). Paradoxically, while these tendencies are not symmetrical - shorter left conjuncts are preferred when the governor is on the left (or absent), but there is no clear preference either way when it is on the right - the two approaches that are compatible with it are symmetrical in the sense that they treat both conjuncts on par. On the Prague approach, coordination is headed by the conjunction and each conjunct is its dependent, while on the London approach coordination is multi-headed - equally by the head of each conjunct. This should be contrasted with Stanford and Moscow approaches, in which coordination is - asymmetrically - headed by the first conjunct. Interestingly, more symmetrical variants of these two asymmetric approaches to coordination are compatible with the empirical observations in Figure 1. To see that, consider the simple variants of the "first conjunct is the head" approaches in which coordination is still headed by the first conjunct when there is no governor, but otherwise it is headed by the conjunct closer to the governor. Such variants may be justified by the occasional observations that, in various languages, the governor seems to have a special relation to the closest conjunct, e.g., in the phenomenon of closest conjunct agreement (see Nevins and Weisser 2019 for an overview). These variants differ from the original approaches only when the governor is on the right, i.e., only in configurations in which the original approaches were incompatible with corpus findings: (16) **Bouquet/Stanford (closest)**: ![7_image_0.png](7_image_0.png) f. (17) **Chain/Moscow (closest)**: **c.**: $\left(\begin{array}{c}\includegraphics[height=36.135pt]{0.0pt}{\includegraphics[height=36.135pt]{0.0pt}}\\ \includegraphics[height=36.135pt]{0.0pt}{\includegraphics[height=36.135pt]{0.0pt}}\end{array}\right)$ **f.**: $\left(\begin{array}{c}\includegraphics[height=36.135pt]{0.0pt}{\includegraphics[height=36.135pt]{0.0pt}}\\ \includegraphics[height=36.135pt]{0.0pt}{\includegraphics[height=36.135pt]{0.0pt}}\end{array}\right)$ So-modified Stanford and Moscow approaches behave like the Prague approach: shorter left conjuncts still minimize dependency lengths when the governor is on the left or absent, but which conjunct is shorter does not matter when the governor is on the right. Hence, on the assumption that there is no conventionalized preference for shorter left conjuncts, such approaches are compatible with the lack of clear tendency in the third row of Figure 1. Let us finally consider the enhanced version of the current treebank annotation standard, Universal Dependencies (UD; https:// universaldependencies.org/; Nivre et al. 2016, de Marneffe et al. 2021, Zeman et al. 2022), appropriately called Enhanced Universal Dependencies (EUD; Schuster and Manning 2016). It combines Stanford and London approaches to coordination. The basic EUD structure of coordination is schematically represented in (18): (18) **Enhanced UD**: , , The vanilla UD standard follows the Stanford approach to coordination. Enhanced UD retains all dependencies of the vanilla UD and adds dependencies from the governor to all non-initial conjuncts.12 In (18), there are two such EUD-specific dependencies from the governor to non-initial conjuncts (drawn in red above all other dependencies). Note that removing some of the basic UD dependencies, namely, those which are shown in (18) as dashed arcs, would make the structure purely multiheaded. But, as it stands, the full EUD structure contains all dependencies of both approaches: Stanford and London. It turns out that, because of that, such structures are compatible with the empirical findings of this paper. As can be seen in (19), shorter left conjuncts again minimize dependency lengths when the governor is on the left (see (19a–b)) or absent (see (19c–d)). However, when the governor is on the right, the total dependency length does not depend on the order of conjuncts (see (19e–f)). This is compatible with the empirical findings of this paper in a way fully analogous to how the Prague approach fits these empirical findings. (19) **Enhanced UD**: a. b. c. d. e. f. We conclude that, out of the theoretical linguistic approaches to the dependency structure of English coordinations, both Conjunction-headed/Prague 12EUD also adds dependencies from conjuncts to their shared dependents. and Multi-headed/London approaches may explain the patterns observed in PTB&. In particular, the results of this paper provide an argument against the theoretical linguistic validity of the basic version of the Universal Dependencies standard, commonly used in NLP applications, given that basic UD implements the Bouquet/Stanford approach. However, they also provide support for the enhanced version of this standard, which - on top of dependencies present in basic UD - adds dependencies from the governor to each conjunct, thus making the structure effectively multi-headed. ## 7 Previous Work This work carries out the research program briefly suggested in Temperley and Gildea 2018: 78: "[G]iven the strong evidence for DLM, a finding that one syntactic analysis of (say) coordinate phrases resulted in shorter dependencies than another analysis [. . . ] could be regarded as a strong point in its favor." The only earlier attempt to do that that we are aware of is Temperley 2005. It identifies 6 constructions with the governor on the left and 3 with the governor on the right and attempts to extract them from the original PTB, reporting that "in many cases [relevant dependencies] can be inferred quite reliably from the constituent structures". Temperley shows that - in these constructions - left conjuncts tend to be shorter in terms of words, even when the governor is on the right. Temperley considers two dependency approaches to coordination, Multi-headed/London and Chain/Moscow, and argues that both are compatible with this tendency: the Moscow approach at the level of use, and the London approach at the level of grammar. By contrast, we show that DLM must be at work in English coordination at both levels in order to be compatible with the Multiheaded/London approach and, crucially, that the observed facts cannot be explained via DLM by asymmetrical approaches such as Chain/Moscow.13 While Temperley (2005) does not consider the dependence of the proportion of shorter left conjuncts on the length difference between conjuncts, this effect is demonstrated by Gibson et al. (1996: 88–90). They show that, in NP coordinations in PTB and in the Brown corpus (Kucera and ˇ Francis 1967), proportions of shorter left conjuncts grow proportionally to length differences between conjuncts. While they do not consider the dependence of this effect on the presence and position of the governor, so they do not notice that this tendency disappears when the governor is on the right, their results are broadly compatible with ours.14 Apart from these two highly relevant previous publications, there is little corpus work on the order of conjuncts in coordination, the most important being Lohmann 2014. While it is limited to English nominal coordinations, it confirms the short–long tendency (measured in syllables there) observed elsewhere, but also shows that semantic factors have a stronger effect (where applicable), which explains why - even when differences in length are very large - the proportion of shorter left conjuncts is still much lower than 1. ## 8 Conclusion This paper makes the following empirical contributions: 1) it demonstrates - more robustly than previous work - the general short–long tendency in binary coordinations in English, and 2) that this tendency also holds for coordinations followed by their governor. The novel observation is 3) how this effect depends on differences in conjunct lengths and on the position of the governor: the strong statistically highly significant positive correlation between length differences and proportions of shorter left conjuncts disappears when the governor is on the right. On the theoretical side, 4) we argued that this effect is explained by DLM, possibly operating at both levels: use and grammar, and 5) that this explanation is only possible when the symmetrical dependency structures of coordination are assumed. To the extent to which no other explanations of the effects observed in Figure 1 are forthcoming, this provides an argument for the linguistic validity of symmetrical approaches to coordination such as Conjunction-headed/Prague and Multiheaded/London and against those approaches which assume that coordinations are headed by the first conjunct, such as Bouquet/Stanford and Chain/Moscow approaches. 14For lack of space, we do not provide plots for the whole population of coordinations in PTB&, but such plots would be similar to the first two rows in Figure 1, as coordinations with the governor on the right (N = 4,719), which display a weakish and statistically insignificant correlation between proportions and length differences, would be dominated by the other coordinations (N = 17,106), where the correlation is much stronger and statistically significant. ## Acknowledgements This paper benefited from comments by Joakim Nivre, Agnieszka Patejuk, David Temperley, the three anonymous ACL 2023 reviewers, as well as members of the Linguistic Engineering Group at ICS Polish Academy of Sciences (especially, Małgorzata Marciniak) and students of the Cognitive Science Program at the University of Warsaw (especially, Grzegorz Kasperski and Wojciech Stempniak). ## 9 Limitations 9.1 English And Ptb The main limitation of the research reported here is that it is based not only on just a single language, English, but on just a single corpus containing single-genre texts (from the Wall Street Journal), PTB&. It might seem that this kind of research should instead be performed on Universal Dependencies (UD; https://universaldependencies. org/; Nivre et al. 2016, de Marneffe et al. 2021, Zeman et al. 2022) - a collection of over 200 dependency treebanks for over 100 languages, currently the favourite resource for investigating DLM (see, e.g., Futrell et al. 2020). Unfortunately, UD is ill-suited for the task at hand. In order to investigate conjunct lengths in relation to the position of the governor, a resource is needed which, for each coordination, makes it possible to locate its governor, if any, and to identify the exact extent of each conjunct (in order to measure its length). UD excels on the first requirement but fails on the second: it is not clear whether dependents on the left of the head of the left conjunct are a part of this conjunct or whether they modify the whole coordinate structure (or are shared by all conjuncts). This problem is illustrated with the following example from the UD English GUM corpus (Zeldes 2017), where it is not clear whether *never* is part of the left conjunct only (in which case the left conjunct is longer than the right conjunct) or whether it is shared by the two conjuncts (in which case they are of equal length).15 15Note that, technically, UD follows the asymmetrical Bouquet/Stanford approach to coordination, even though, conceptually, coordination is assumed to be symmetric: "UD in principle assumes a symmetric relation between conjuncts, which have equal status as syntactic heads of the coordinate structure. However, because the dependency tree format does not allow this analysis to be encoded directly, the first conjunct in the linear order is by convention always treated as the parent of all other conjuncts" (de Marneffe et al. 2021: 276). (20) ![9_image_0.png](9_image_0.png) Our initial experiments suggest that it is much easier to find governors of coordinations in PTB&, than to decide which dependents should be shared by conjuncts in UD treebanks;16 hence the use of PTB& rather than UD in the current paper. The Enhanced UD (EUD) format makes it possible to explicitly encode which dependents are shared, but currently almost all EUD treebanks seem to be the result of error-prone automatic conversion from basic UD treebanks based on simple heuristics (Schuster and Manning 2016), so such information is currently not fully reliable. For example, the EUD version of GUM (in the current 2.11 release of UD) does not contain information that never in the above example is shared between conjuncts (the intended interpretation of this sentence), although it does contain information about, say, shared subjects. However, given the recent steps towards the creation of EUD treebanks with more reliable coordination information (Grünewald et al. 2021), future versions of EUD treebanks might become better-suited for the task at hand. Moreover, some EUD treebanks - for some languages other than English - are the result of direct translation from formats that do distinguish between modifiers of left conjuncts and modifiers of whole coordinations (or all conjuncts), so it is possible that they preserve these distinctions. Probably the largest such treebank in the current release 2.11 of UD is UD_Czech-PDT (87,913 sentences; Hajic et al. ˇ 2006), other such treebanks including: UD_Czech-CAC (24,709; Hladká et al. 2008), UD_Polish-PDB (22,152; Wróblewska 2018), UD_Polish-LFG (17,246; Przepiórkowski and Patejuk 2020), UD_Dutch-Alpino (13,603; Bouma and van Noord 2017), UD_Czech-FicTree (12,760; Jelínek 2017), UD_Slovak-SNK (10,604; Zeman 2017), UD_Arabic-PADT (7,664; Hajic et al. ˇ 2009), UD_Dutch-LassySmall (7,341; Bouma and van Noord 2017), and UD_LithuanianALKSNIS (3,642; Bielinskiene et al. ˙ 2016). Furthermore, there are some treebanks - including reasonable-sized French treebanks - natively encoded in the surface-syntactic alternative to UD, Surface Syntactic Universal Dependencies (SUD; Gerdes et al. 2018, 2021), which explicitly represents information about shared dependents. Moreover, there are some constituency treebanks (thus unambiguously representing extents of conjuncts), which also contain explicit information about heads of constructions (which makes identifying governors easy), that could be used for the purposes of the current research; these include the Polish Składnica treebank (Wolinski et al. ´ 2011, Wolinski and ´ Hajnicz 2021) and the Swedish Eukalyptus treebank (Adesam et al. 2015). Finally, all the relevant information is represented in Head-driven Phrase Structure Grammar (HPSG; Pollard and Sag 1994, Müller et al. 2021) and Lexical Functional Grammar (LFG; Kaplan and Bresnan 1982, Dalrymple et al. 2019) parsebanks - e.g., the English Redwood treebank (Oepen et al. 2004), the Bulgarian BulTreeBank (Simov et al. 2002), or the Polish LFG Treebank (Patejuk and Przepiórkowski 2015) - but the internal format of structures in such treebanks is usually much more complex than in the case of PTB or (E)UD, and less well-documented, which makes it more difficult to extract relevant information automatically. Nevertheless, future work should investigate to what extent these treebanks contain reliable and unambiguous information about coordinate structures and their governors and, if they do, it should attempt to replicate the results reported here on the basis of those treebanks. ## 9.2 Confounding Factors Another limitation of the research reported above is that it does not consider possible confounding factors. There are at least two such factors that should be carefully investigated, although in both cases we give reasons below why we believe these factors should not dramatically influence our conclusion regarding the ability of symmetrical dependency approaches to coordination to explain the patterns observed in Figure 1. The first possible confounding factor is the "old/new" discourse status of conjuncts. It is well known that phrases expressing discourseold ("given") information tend to be shorter than discourse-new phrases and they tend to occur earlier in the sentence (see, e.g., Arnold et al. 2000). Applied to conjuncts, this might explain both the fact that left conjuncts tend to be shorter regardless of the presence and position of the governor (see Table 1 in the main text) and that the proportion of shorter left conjuncts is greater than 0.5 regardless of the presence and position of the governor (see Tables 2–3). The explanation would be simple: when the two conjuncts have different discourse status, the statistically shorter discourse-old conjunct tends to occur earlier in the sentence, i.e., as the left conjunct, which influences the hypothetically otherwise balanced distribution of conjunct lengths in coordinations. However, Temperley 2005: 587–588 shows that even when the discourse status of nominal conjuncts is controlled for (e.g., by considering only indefinite NP conjuncts, which tend to be discourse-new), left conjuncts tend to be shorter, and the differences between means of length ratios are statistically significant. More importantly, it is not clear how this discourse factor could alone explain the tendencies seen in Figure 1. In order to explain the first two rows, it would have to be assumed that the larger the length difference between conjuncts, the greater the probability that the shorter conjunct is discourse-old and the longer is discourse-new and, hence, the larger the proportion of shorter left conjuncts. We are not aware of a claim to this effect being made in the literature. But even if it were true, this hypothesis alone would not suffice, as it is directly contradicted by the third row, where proportions of shorter left conjuncts do not grow with their length differences in a statistically significant way, and even seem to shrink in the case of length differences measured in words. On the other hand, it is possible to combine this discourse-based hypothesis with the at-use DLM considerations above, i.e., to adopt this hypothesis as an alternative to the grammar-level convention "prefer shorter left conjuncts proportionally to length difference". (Recall that this convention had to be assumed in order to make the empirical results of this paper compatible with the Multiheaded/London approach to coordination.) Then the at-use operation of DLM and this general discourse preference for shorter left conjuncts would conspire in the explanation based on the London approach, but only on the assumption - crucial for that explanation - that the strength of the discourse preference grows with the absolute difference between lengths of conjunct. The opposite assumption, namely, that the strength of this discourse preference does not depend on such length differences, is needed in the case of the Conjunction- | g | o | v | e | r | n | o | r | |---------|-------------|--------------|------|-------|------|-------|-----| | n o n e | on the left | on the right | | | | | | | # | % | # | % | # | % | | | | ADJP | 7 | 0.18 | 355 | 2.71 | 714 | 15.13 | | | ADVP | 4 | 0.10 | 123 | 0.94 | 38 | 0.81 | | | NP | 79 | 1.98 | 7886 | 60.17 | 2966 | 62.85 | | | NX | 3 | 0.07 | 326 | 2.49 | 61 | 1.29 | | | PP | 3 | 0.07 | 446 | 3.40 | 24 | 0.51 | | | QP | 5 | 0.12 | 91 | 0.69 | 59 | 1.25 | | | S | 2267 | 56.67 | 504 | 3.85 | 337 | 7.14 | | | SBAR | 3 | 0.07 | 373 | 2.85 | 20 | 0.42 | | | UCP | 4 | 0.10 | 278 | 2.12 | 340 | 7.20 | | | VP | 1609 | 40.23 | 2693 | 20.55 | 153 | 3.24 | | headed/Prague approach and the three variants of the "coordination headed by the first conjunct" approaches considered at the end of §6. This is because, on these approaches, at-use DLM does not say anything about the relative lengths of conjuncts when the governor is on the right, so - if the discourse preference for shorter left conjuncts is really at work here - its effect must be relatively constant, as witnessed in the third row of plots in Figure 1: the slopes there are not significantly positive. In summary, the potential confounding factor of discourse-newness may influence which of the symmetrical approaches to coordination explains the data better: London (if the strength of discourse effects depends on conjunct length differences) or Prague and variants (if it does not). On the other hand, this potential confounding factor should not influence the general conclusion that symmetrical approaches have more explanatory power than asymmetrical approaches such as Bouquet/Stanford or Chain/Moscow. The second possible confounding factor is the type of coordination: whether it is a coordination of NPs, VPs, sentences, etc. Table 4 gives numbers and percentages of different kinds of coordination, depending on the position of the governor, and Figure 2 presents a mosaic plot visualizing this data.17 It shows that the distributions of categories of coordination vary considerably depending on the presence and position of the governor. For example, as mentioned in §4, 97% of all coordinations without a governor are coordinations of sentences (57%) 17All categories occurring at least 20 times with the governor on the right and at least 20 times with the governor on the left where taken into account. All other categories occur less than ten times in *each* of the *three* data sets. ![11_image_0.png](11_image_0.png) and VPs (40%), while this is true of only 24% of coordinations with the governor on the left, and only 10% - on the right. As sentences and VPs are typically much longer than phrases bearing other categories, also conjuncts in coordinations of these types are much longer, and their mean length differences are also larger (see Table 1 in the main text). Moreover, as also observed in the main text, many of such coordinations have shorter right conjunct (see examples (10)–(11)), which results in such coordinations having a lower proportion of shorter left conjuncts than coordinations with a governor (contrast Tables 2 vs. 3). So it is clear that the different distributions of categories of coordination, which depend on the presence and position of the governor, have an impact on the statistics reported in Tables 1–3 and, thus, should be carefully investigated. It is less clear how this potentially confounding factor could influence the main effect observed in Figure 1, i.e., that there is a statistically significant positive correlation between proportions of shorter left conjuncts and absolute length differences when there is no governor or when it is on the left, but no such correlation when the governor is on the right. One way to verify that there is no such influence would be to examine plots analogous to those in Figure 1, but separately for each category of coordination. Unfortunately, there is no category that would be sufficiently well represented in all three populations; for example, while there are as many as 7,886 NP coordinations with the governor on the left and 2,966 when it is on the right, there are only 79 such coordinations without a governor in PTB&. Not only is this last number much too low for a statistically valid investigation, but also the 2,966 coordinations with the governor on the right do not contain enough data (enough in the sense explained in Harrell 2015: 72–73; cf. Gries 2021: 71) for the case where the right conjunct is shorter: there are only 24 such coordinations with length difference (in words) 3, only 14 when the difference is 4, only 7 when it is 5, etc. Nevertheless, once larger corpora with good quality annotation of coordinations become available, the effect of coordination category should be investigated in more detail. ## References Steven Abney. 1987. *The English Noun Phrase in its* Sentential Aspect. Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA. Yvonne Adesam, Gerlof Bouma, and Richard Johansson. 2015. Defining the Eukalyptus forest - the Koala treebank of Swedish. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 1–9, Vilnius, Lithuania. Linköping University Electronic Press, Sweden. Jennifer E. Arnold, Thomas Wasow, Anthony Losongco, and Ryan Ginstrom. 2000. Heaviness vs. newness: The effects of structural complexity and discourse status on constituent ordering. *Language*, 76(1):28– 55. Otto Behaghel. 1909. Beziehungen zwischen Umfang und Reihenfolge von Satzgliedern. Indogermanische Forschungen, 25:110–142. Otto Behaghel. 1932. *Deutsche Syntax: eine* geschichtliche Darstellung. Band IV: Wortstellung. Periodenbau. Carl Winters Universitätsbuchhandlung, Heidelberg. Sarah B. Benor and Roger Levy. 2006. The chicken or the egg? A probabilistic analysis of English binomials. *Language*, 82(2):233–278. Thomas G. Bever. 1970. The cognitive basis for linguistic structures. In John R. Hayes, editor, Cognition and the Development of Language, pages 279–362. Wiley, New York. Agne Bielinskien ˙ e, Loïc Boizou, Jolanta Kovalevskait ˙ e,˙ and Erika Rimkute. 2016. ˙ Lithuanian dependency treebank ALKSNIS. In Inguna Skadin, a and Roberts Rozis, editors, *Human Language Technologies - The* Baltic Perspective, pages 107–114. IOS Press. Gosse Bouma and Gertjan van Noord. 2017. Increasing return on annotation investment: The automatic construction of a Universal Dependency treebank for Dutch. In *Proceedings of the NoDaLiDa 2017* Workshop on Universal Dependencies (UDW 2017), pages 19–26, Gothenburg, Sweden. Association for Computational Linguistics. Mary Dalrymple, John J. Lowe, and Louise Mycock. 2019. *The Oxford Reference Guide to Lexical Functional Grammar*. Oxford University Press, Oxford. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the Fifth International Conference on Language Resources and Evaluation, LREC 2006, pages 449–454, Genoa. ELRA. Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. *Computational Linguistics*, 47(2):255–308. Ramon Ferrer-i-Cancho. 2004. Euclidean distance between syntactically linked words. *Physical Review* E, 70:056135. Jessica Ficler and Yoav Goldberg. 2016. Coordination annotation extension in the Penn Tree Bank. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics*, pages 834–842, Berlin, Germany. Jamie Y. Findlay and Dag T. T. Haug. 2021. How useful are Enhanced Universal Dependencies for semantic interpretation? In Proceedings of the Sixth International Conference on Dependency Linguistics (DepLing, Syntax Fest 2021), pages 22–34, Sofia, Bulgaria. Richard Futrell. 2019. Information-theoretic locality properties of natural language. In Proceedings of the First Workshop on Quantitative Syntax (Quasy, SyntaxFest 2019), pages 2–15, Paris, France. Association for Computational Linguistics. Richard Futrell and Roger Levy. 2017. Noisy-context surprisal as a human sentence processing cost model. In *Proceedings of the 15th Conference of the European Chapter of the Association for Computational* Linguistics (EACL 2009): Volume 1, Long Papers, pages 688–698, Valencia, Spain. Richard Futrell, Roger P. Levy, and Edward Gibson. 2020. Dependency locality as an explanatory principle for word order. *Language*, 96(2):371–412. Kim Gerdes, Bruno Guillaume, Sylvain Kahane, and Guy Perrier. 2018. SUD or Surface-Syntactic Universal Dependencies: An annotation scheme nearisomorphic to UD. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 66–74. Association for Computational Linguistics. Kim Gerdes, Bruno Guillaume, Sylvain Kahane, and Guy Perrier. 2021. Starting a new treebank? Go SUD! In Proceedings of the Sixth International Conference on Dependency Linguistics (DepLing, Syntax Fest 2021), pages 35–46, Sofia, Bulgaria. Edward Gibson. 1998. Linguistic complexity: Locality of syntactic dependencies. *Cognition*, 68(1):1–76. Edward Gibson, Carson T. Schütze, and Ariel Salomon. 1996. The relationship between the frequency and the processing complexity of linguistic structure. *Journal of Psycholinguistic Research*, 25(1):59–92. Daniel Gildea and David Temperley. 2007. Optimizing grammars for minimum dependency length. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 184–191, Prague. Daniel Gildea and David Temperley. 2010. Do grammars minimize dependency length? *Cogntiive Science*, 34(2):286–310. Stefan Th. Gries. 2021. *Statistics for Linguistics with R*, 3rd edition. De Gruyter Mouton, Berlin/Boston. Stefan Grünewald, Prisca Piccirilli, and Annemarie Friedrich. 2021. Coordinate constructions in English enhanced Universal Dependencies: Analysis and computational modeling. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 795–809. Association for Computational Linguistics. Jan Hajic, Otakar Smr ˇ ž, Petr Zemánek, Petr Pajas, Jan Šnaidauf, Emanuel Beška, Jakub Krácmar, and Kamila Hassanová. 2009. Prague Arabic dependency treebank 1.0. Jan Hajic, Jarmila Panevová, Eva Haji ˇ cová, Petr ˇ Sgall, Petr Pajas, Jan Štepánek, Ji ˇ ˇrí Havelka, Marie Mikulová, Zdenekˇ Žabokrtský, Magda Ševcíková ˇ Razímová, and Zdenka Ure ˇ šová. 2006. Prague Dependency Treebank 2.0 (PDT 2.0). Frank E. Harrell, Jr. 2015. *Regression Modeling Strategies*, 2nd edition. Springer. John A. Hawkins. 1994. *A Performance Theory of Order and Constituency*. Cambridge University Press, Cambridge. Barbora Hladká, Jan Hajic, Jirka Hana, Jaroslava ˇ Hlavácová, Ji ˇ ˇrí Mírovsky, and Jan Raab. 2008. The ` Czech academic corpus 2.0 guide. *The Prague Bulletin of Mathematical Linguistics*, 89(1):41–96. Richard Hudson. 1984. *Word Grammar*. Blackwell, Oxford. Richard Hudson. 1990. *English Word Grammar*. Blackwell, Oxford. Richard Hudson. 2010. *An Introduction to Word Grammar*. Cambridge University Press, Cambridge. Tomás Jelínek. 2017. FicTree: A manually annotated treebank of Czech fiction. In *ITAT*, pages 181–185. Ronald M. Kaplan and Joan Bresnan. 1982. LexicalFunctional Grammar: A formal system for grammatical representation. In Joan Bresnan, editor, *The Mental Representation of Grammatical Relations*, pages 173–281. MIT Press, Cambridge, MA. Henry Kucera and W. Nelson Francis. 1967. ˇ *Computational Analysis of Present-day American English*. Brown University Press, Providence, RI. Haitao Liu. 2008. Dependency distance as a metric of language. *Journal of Cognitive Science*, 9(2):159– 191. Haitao Liu, Chunshan Xu, and Junying Liang. 2017. Dependency distance: A new perspective on syntactic patterns in natural languages. *Physics of Life* Reviews, 21:171–193. Arne Lohmann. 2014. *English Coordinate Constructions: A Processing Perspective on Constituent Order*. Cambridge University Press, London. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330. Igor Mel'cuk. 1974. ˇ *Opyt teorii lingvistiˇceskix modelej* «Smysl , *Tekst»*. Nauka, Moscow. Igor Mel'cuk. 1988. ˇ Dependency Syntax: Theory and Practice. The SUNY Press, Albany, NY. Igor Mel'cuk. 2009. Dependency in natural language. ˇ In Alain Polguère and Igor Mel'cuk, editors, ˇ *Dependency in Linguistic Description*, pages 1–110. John Benjamins, Amsterdam. Stefan Müller, Anne Abeillé, Robert D. Borsley, and Jean-Pierre Koenig, editors. 2021. *Head-Driven* Phrase Structure Grammar: The Handbook. Language Science Press, Berlin. Andrew Nevins and Philipp Weisser. 2019. Closest conjunct agreement. *Annual Review of Linguistics*, 5:219–241. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Man- ˇ ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC 2016, pages 1659–1666, Portorož, Slovenia. European Language Resources Association (ELRA). Stephan Oepen, Dan Flickinger, Kristina Toutanova, and Christoper D. Manning. 2004. LinGO Redwoods: A rich and dynamic treebank for HPSG. Research on Language and Computation, 4(2):575–596. Agnieszka Patejuk and Adam Przepiórkowski. 2015. Parallel development of linguistic resources: Towards a structure bank of Polish. *Prace Filologiczne*, LXV:255–270. Carl Pollard and Ivan A. Sag. 1994. *Head-driven* Phrase Structure Grammar. Chicago University Press / CSLI Publications, Chicago, IL. Martin Popel, David Marecek, Jan ˇ Štepánek, Daniel ˇ Zeman, and Zdenˇ ekˇ Žabokrtský. 2013. Coordination structures in dependency treebanks. In *Proceedings* of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 517–527, Sofia, Bulgaria. Adam Przepiórkowski and Agnieszka Patejuk. 2020. From Lexical Functional Grammar to enhanced Universal Dependencies: The UD-LFG treebank of Polish. *Language Resources and Evaluation*, 54:185– 221. R Core Team. 2022. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna. Nornadiah Mohd Razali and Bee Wah Yap. 2011. Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. *Journal of* Statistical Modeling and Analytics, 2(1):21–33. Sebastian Schuster and Christopher D. Manning. 2016. Enhanced English Universal Dependencies: An improved representation for natural language understanding tasks. In *Proceedings of the Tenth International Conference on Language Resources and* Evaluation, LREC 2016, pages 2371–2378, Portorož, Slovenia. European Language Resources Association (ELRA). Kiril Simov, Gergana Popova, and Petya Osenova. 2002. HPSG-based syntactic treebank of Bulgarian (BulTreeBank). In Andrew Wilson, Paul Rayson, and Tony McEnery, editors, *A Rainbow of Corpora: Corpus Linguistics and the Languages of the World*, pages 135–142. Lincom-Europa, Munich. David Temperley. 2005. The dependency structure of coordinate phrases: A corpus approach. Journal of Psycholinguistic Research, 34(6):577–601. David Temperley and Daniel Gildea. 2018. Minimizing syntactic dependency lengths: Typological/cognitive universal? *Annual Review of Linguistics*, 4:67–80. David Vadas and James Curran. 2007. Adding noun phrase structure to the Penn Treebank. In *Proceedings of the 45th Annual Meeting of the Association for* Computational Linguistics, pages 240–247, Prague. Marcin Wolinski, Katarzyna G ´ łowinska, and Marek ´ Swidzi ´ nski. 2011. A preliminary version of Sk ´ ładnica—a treebank of Polish. In Proceedings of the 5th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pages 299–303, Poznan, ´ Poland. Marcin Wolinski and El ´ zbieta Hajnicz. 2021. ˙ Składnica: a constituency treebank of Polish harmonised with the Walenty valency dictionary. Language Resources and Evaluation, 55:209–239. Alina Wróblewska. 2018. Extended and enhanced Polish dependency bank in Universal Dependencies format. In *Proceedings of the Second Workshop on Universal Dependencies (UDW 2018)*, pages 173–182. Association for Computational Linguistics. Hiroko Yamashita. 2002. Scrambled sentences in Japanese: Linguistic properties and motivations for production. *Text*, 22:597–634. Hiroko Yamashita and Franklin Chang. 2001. "Long before short" preference in the production of a headfinal language. *Cognition*, 81:B45–B55. Bee Wah Yap and Chiaw Hock Sim. 2011. Comparisons of various types of normality tests. *Journal of Statistical Computation and Simulation*, 81(12):2141–2155. Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. *Language Resources and Evaluation*, 51(3):581–612. Daniel Zeman. 2017. Slovak dependency treebank in Universal Dependencies. *Journal of Linguistics/Jazykovedny casopis* ` , 68(2):385–395. Daniel Zeman, Joakim Nivre, Mitchell Abrams, Elia Ackermann, Noëmi Aepli, Hamid Aghaei, Željko Agic, Amir Ahmadi, Lars Ahrenberg, Chika Kennedy ´ Ajede, Gabriele Aleksandravi ˙ ciˇ ut¯ e, Ika Alfina, Avner ˙ Algom, Erik Andersen, Lene Antonsen, Katya Aplonova, Angelina Aquino, Carolina Aragon, Glyd Aranes, Maria Jesus Aranzabe, Bilge Nas Arıcan, Hórunn Arnardóttir, Gashaw Arutie, Jessica Naraiswari Arwidarasti, Masayuki Asahara, Deniz Baran Aslan, Cengiz Asmazoglu, Luma ˘ Ateyah, Furkan Atmaca, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Keerthana Balasubramani, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, Starkaður Barkarson, Rodolfo Basile, Victoria Basmov, Colin Batchelor, John Bauer, Seyyit Talha Bedir, Kepa Bengoetxea, Yifat Ben Moshe, Gözde Berk, Yevgeni Berzak, Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Agne˙ Bielinskiene, Kristín Bjarnadóttir, Rogier Blok- ˙ land, Victoria Bobicev, Loïc Boizou, Emanuel Borges Völker, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Anouck Braggaar, Kristina Brokaite, Aljoscha Burchardt, Marie ˙ Candito, Bernard Caron, Gauthier Caron, Lauren Cassidy, Tatiana Cavalcanti, Gül¸sen Cebiroglu Ery- ˘ igit, Flavio Massimiliano Cecchini, Giuseppe G. A. ˘ Celano, Slavomír Céplö, Neslihan Cesur, Savas ˇ Cetin, Özlem Çetinoglu, Fabricio Chalub, Shweta ˘ Chauhan, Ethan Chi, Taishi Chika, Yongseok Cho, Jinho Choi, Jayeol Chun, Juyeon Chung, Alessandra T. Cignarella, Silvie Cinková, Aurélie Collomb, Çagrı Çöltekin, Miriam Connor, Daniela Corbetta, ˘ Marine Courtin, Mihaela Cristescu, Philemon Daniel, Elizabeth Davidson, Mathieu Dehouck, Martina de Laurentiis, Marie-Catherine de Marneffe, Valeria de Paiva, Mehmet Oguz Derin, Elvis de Souza, Arantza Diaz de Ilarraza, Carly Dickerson, Arawinda Dinakaramani, Elisa Di Nuovo, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Sandra Eiche, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Tomaž Erjavec, Aline Etienne, Wograine Evelyn, Sidney Facundes, Richárd Farkas, Federica Favero, Jannatul Ferdaousi, Marília Fernanda, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Kazunori Fujita, Katarína Gajdošová, Daniel Galbraith, Federica Gamba, Marcos Garcia, Moa Gärdenfors, Sebastian Garza, Fabrício Ferraz Gerardi, Kim Gerdes, Filip Ginter, Gustavo Godoy, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta González Saavedra, Bernadeta Griciut¯ e,˙ Matias Grioni, Loïc Grobol, Normunds Gruz¯ ¯ıtis, Bruno Guillaume, Céline Guillot-Barbance, Tunga Güngör, Nizar Habash, Hinrik Hafsteinsson, Jan Hajic, Jan Haji ˇ c jr., Mika Hämäläinen, Linh Hà M ˇ y, Na- ˜ Rae Han, Muhammad Yudistira Hanifmuti, Takahiro Harada, Sam Hardwick, Kim Harris, Dag Haug, Johannes Heinecke, Oliver Hellwig, Felix Hennig, Barbora Hladká, Jaroslava Hlavácová, Florinel Hoci- ˇ ung, Petter Hohle, Jena Hwang, Takumi Ikeda, Anton Karl Ingason, Radu Ion, Elena Irimia, O. lájídé Ishola, Kaoru Ito, Siratun Jannat, Tomáš Jelínek, Apoorva Jha, Anders Johannsen, Hildur Jónsdóttir, Fredrik Jørgensen, Markus Juutinen, Sarveswaran K, Hüner Ka¸sıkara, Andre Kaasen, Nadezhda Kabaeva, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Neslihan Kara, Ritván Karahóga, Boris Katz, Tolga ˇ Kayadelen, Jessica Kenney, Václava Kettnerová, Jesse Kirchner, Elena Klementieva, Elena Klyachko, Arne Köhn, Abdullatif Köksal, Kamil Kopacewicz, Timo Korkiakangas, Mehmet Köse, Natalia Kotsyba, Jolanta Kovalevskaite, Simon Krek, Parameswari ˙ Krishnamurthy, Sandra Kübler, Oguzhan Kuyrukçu, ˘ Aslı Kuzgun, Sookyoung Kwak, Veronika Laippala, Lucia Lam, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phuong Lê Hông, Alessandro Lenci, Saran Lertpra- ` dit, Herman Leung, Maria Levina, Cheuk Ying Li, Josie Li, Keying Li, Yuan Li, KyungTae Lim, Bruna Lima Padovani, Krister Lindén, Nikola Ljubešic,´ Olga Loginova, Stefano Lusito, Andry Luthfi, Mikko Luukko, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Menel Mahamdi, Jean Maillard, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Bü¸sra Mar¸san, Cat˘ alina M ˘ ar˘ anduc, ˘ David Marecek, Katrin Marheinecke, Stella Markan- ˇ tonatou, Héctor Martínez Alonso, Lorena Martín Rodríguez, André Martins, Jan Mašek, Hiroshi Matsuda, Yuji Matsumoto, Alessandro Mazzei, Ryan McDonald, Sarah McGuinness, Gustavo Mendonça, Tatiana Merzhevich, Niko Miekka, Karina Mischenkova, Margarita Misirpashayeva, Anna Missilä, Cat˘ alin ˘ Mititelu, Maria Mitrofan, Yusuke Miyao, AmirHossein Mojiri Foroushani, Judit Molnár, Amirsaeid Moloodi, Simonetta Montemagni, Amir More, Laura Moreno Romero, Giovanni Moretti, Keiko Sophie Mori, Shinsuke Mori, Tomohiko Morioka, Shigeki Moro, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Robert Munro, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Mariam Nakhlé, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-Berzkalne, Manuela Nevaci, Luong ¯ Nguy˜ên Thi., Huy`ên Nguy˜ên Thi. Minh, Yoshihiro Nikaido, Vitaly Nikolaev, Rattima Nitisaroj, Alireza Nourian, Hanna Nurmi, Stina Ojala, Atul Kr. Ojha, Adédayo. Olúòkun, Mai Omura, Emeka Onwuegbuzia, Noam Ordan, Petya Osenova, Robert Östling, Lilja Øvrelid, ¸Saziye Betül Özate¸s, Merve Özçelik, Arzucan Özgür, Balkız Öztürk Ba¸saran, Teresa Paccosi, Alessio Palmero Aprosio, Hyunji Hayley Park, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Giulia Pedonese, Angelika Peljak-Łapinska, Siyao Peng, ´ Cenel-Augusto Perez, Natalia Perkova, Guy Perrier, Slav Petrov, Daria Petrova, Andrea Peverelli, Jason Phelan, Jussi Piitulainen, Tommi A Pirinen, Emily Pitler, Barbara Plank, Thierry Poibeau, Larisa Ponomareva, Martin Popel, Lauma Pretkalnin, a, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Peng Qi, Andriela Rääbis, Alexandre Rademaker, Mizanur Rahoman, Taraka Rama, Loganathan Ramasamy, Carlos Ramisch, Fam Rashel, Mohammad Sadegh Rasooli, Vinit Ravishankar, Livy Real, Petru Rebeja, Siva Reddy, Mathilde Regnault, Georg Rehm, Ivan Riabov, Michael Rießler, Erika Rimkute, Larissa Ri- ˙ naldi, Laura Rituma, Putri Rizqiyah, Luisa Rocha, Eiríkur Rögnvaldsson, Mykhailo Romanenko, Rudolf Rosa, Valentin Ros, ca, Davide Rovati, Ben Rozonoyer, Olga Rudina, Jack Rueter, Kristján Rúnarsson, Shoval Sadde, Pegah Safari, Benoît Sagot, Aleksi Sahala, Shadi Saleh, Alessio Salomoni, Tanja Samardžic, Stephanie Samson, Manuela Sanguinetti, ´ Ezgi Sanıyar, Dage Särg, Baiba Saul¯ıte, Yanin Sawanakunanon, Shefali Saxena, Kevin Scannell, Salvatore Scarlata, Nathan Schneider, Sebastian Schuster, Lane Schwartz, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Syeda Shahzadi, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Yana Shishkina, Muh Shohibussirri, Dmitry Sichinava, Janine Siewert, Einar Freyr Sigurðsson, Aline Silveira, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Maria Skachedubova, Aaron Smith, Isabela Soares-Bastos, Shafi Sourov, Carolyn Spadine, Rachele Sprugnoli, Vivian Stamou, Steinhór Steingrímsson, Antonio Stella, Milan Straka, Emmett Strickland, Jana Strnadová, Alane Suhr, Yogi Lesmana Sulestio, Umut Sulubacak, Shingo Suzuki, Daniel Swanson, Zsolt Szántó, Chihiro Taguchi, Dima Taji, Yuta Takahashi, Fabio Tamburini, Mary Ann C. Tan, Takaaki Tanaka, Dipta Tanaya, Mirko Tavoni, Samson Tella, Isabelle Tellier, Marinella Testori, Guillaume Thomas, Sara Tonelli, Liisi Torga, Marsida Toska, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Utku Türk, Francis Tyers, Sumire Uematsu, Roman Untilov, Zdenka Ure ˇ šová, Larraitz Uria, Hans Uszkoreit, Andrius Utka, Elena Vagnoni, Sowmya Vajjala, Rob van der Goot, Martine Vanhove, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Uliana Vedenina, Eric Villemonte de la Clergerie, Veronika Vincze, Natalia Vlasova, Aya Wakasa, Joel C. Wallenberg, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washington, Maximilan Wendt, Paul Widmer, Shira Wigderson, Sri Hartati Wijono, Seyi Williams, Mats Wirén, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wróblewska, Mary Yako, Kayo Yamashita, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Arife Betül Yenice, Olcay Taner Yıldız, Zhuoran Yu, Arlisa Yuliawati, Zdenekˇ Žabokrtský, Shorouq Zahra, Amir Zeldes, He Zhou, Hanzhi Zhu, Anna Zhuravleva, and Rayan Ziane. 2022. Universal Dependencies 2.11. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University. ## Appendix: Counting Syllables The syllabic length of a constituent was measured by adding syllabic lengths of all leaves of this constituent, whose labels do not start and end with a hyphen. Such technical labels include -LRB- and -RRB- (left and right round brackets.) Syllables of each string were counted using the CMUdict Python package,18 which relies on data from the Carnegie Mellon Pronouncing Dictionary.19 This dictionary contains phonetic information; in particular, stressed phonemes, which can be treated as syllabic, are marked with numbers (e.g., *Peter* is represented as P-IY1-T-ER0). Therefore, using CMUdict, the number of syllables can be determined by counting the phonemes that end with a number. Some PTB tokens were absent in CMUdict, in which case the following heuristics were applied. In the case of tokens containing nonalphanumeric characters, these characters 18https://pypi.org/project/cmudict/ 19http://www.speech.cs.cmu.edu/cgi-bin/cmudict were first removed (apart from commas and dots in numbers), resulting in multiple tokens (see the example at the end of this appendix). Tokens consisting of up to four upper-case English letters were treated as abbreviations, so their syllabic length was calculated as the sum of syllables in each letter, according to CMUdict. For example, the length of WWW is six syllables. In other cases of tokens consisting of letters, a simple heuristic was applied to try to estimate the number of syllables: each substring of vowels was counted as one syllable. If a word ended with -e (other than -le), then this ending was not counted as a syllable. According to this heuristic, the syllabic length of beautiful is 3, of *tackle* - 2, and of *fare* - 1. Numbers were recognized as 1) strings of digits with optional commas and at most one dot after all commas, e.g., *100,000.99*, 2) possibly ending in -st, -nd, -rd, -th (ordinal numbers), 3) as well as strings of digits ending with -s and optionally starting with an apostrophe (e.g., *'80s*; such affixes are removed, as not affecting the number of syllables).20 Every such number was converted to words using Python's Inflect package (e.g., *100,000.99* would be converted to one hundred thousand point nine nine).21 As an exception, numbers in the range 1960–1999 were treated as years (e.g., 1984 as nineteen eighty-four rather than one thousand nine hundred eighty-four). After the conversion, syllabic length was determined as in the case of other words, i.e., using CMUdict or the above heuristics. Finally, a small dictionary with syllabic lengths was created for 18 strings: $, %, &, ½, ¼, ¾, 's and 11 abbreviations of month names (*sans May*). For example, consider the hypothetical token O.K.-177,000\KTF+NATO-iron. It would first be split on non-alphanumeric characters into the following tokens: O, K, 177,000, KTF, *NATO*, and iron. *177,000* is recognized as a number and converted by Inflect to *one hundred and seventy seven* thousand, which corresponds to 11 syllables. O, K, NATO, and *iron* are all in CMUdict, with - respectively - 1, 1, 2, and 2 syllables, i.e., 6 in total. KTF is not in CMUdict, so it is split into letters, whose syllabic lengths are 1 according to CMUdict, so 3 in total. Hence, altogether, the whole initial token is estimated to consist of 20 syllables. 20An attempt to recognize unknown tokens as numbers in 2) (e.g., *80th*) and 3) (e.g., *'80s*) was made *before* the initial stage of splitting tokens on non-alphanumeric characters. 21https://pypi.org/project/inflect/ ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 9 A2. Did you discuss any potential risks of your work? Not applicable. We cannot think of any potential risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We used - not created - scientific artefacts, as described in Section 2. ✓ B1. Did you cite the creators of artifacts you used? 2 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We used the extension of Penn Treebank described in Ficler and Goldberg 2016 (cited in the paper). Unfortunately, the license status of this extension is not clear. The webpage of this extension (https://github.com/Jess1ca/CoordinationExtPTB) requires one to first obtain another PTB extension from a URL that does not seem to be functional (http://schwa.org/projects/resources/wiki/NounPhrases). In the end we obtained the extension described in Ficler and Goldberg 2016 directly from the authors and we made sure that our institution obtained the original PTB from LDC, so that no rights were violated. We felt there was no need (and no space) to include the above description of the unclear license in the paper, but we will be willing to do so if reviewers make this request. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 2 and 9.1 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We used a version of Penn Treebank, which 1) only consists of newspaper texts, 2) is widely available and has been used in countless tasks before, and - for these reasons - does not require any anonymization. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Not applicable, because we do not make any new artifacts available. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. See Tables 1–3, fn.8, and Figure 1. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✗ **Did You Run Computational Experiments?** Left Blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
chalkidis-etal-2023-lexfiles
{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development
https://aclanthology.org/2023.acl-long.865
In this work, we conduct a detailed analysis on the performance of legal-oriented pre-trained language models (PLMs). We examine the interplay between their original objective, acquired knowledge, and legal language understanding capacities which we define as the upstream, probing, and downstream performance, respectively. We consider not only the models{'} size but also the pre-training corpora used as important dimensions in our study. To this end, we release a multinational English legal corpus (LeXFiles) and a legal knowledge probing benchmark (LegalLAMA) to facilitate training and detailed analysis of legal-oriented PLMs. We release two new legal PLMs trained on LeXFiles and evaluate them alongside others on LegalLAMA and LexGLUE. We find that probing performance strongly correlates with upstream performance in related legal topics. On the other hand, downstream performance is mainly driven by the model{'}s size and prior legal knowledge which can be estimated by upstream and probing performance. Based on these findings, we can conclude that both dimensions are important for those seeking the development of domain-specific PLMs.
# Lexfiles And Legallama: Facilitating English Multinational Legal Language Model Development Ilias Chalkidis∗ Nicolas Garneau∗ **Anders Søgaard** Department of Computer Science, University of Copenhagen, Denmark Cat˘ **alina Goant** ˘ ,a˘ Utrecht University School of Law, Netherlands Daniel Martin Katz Illinois Tech - Chicago Kent College of Law, IL, United States ## Abstract In this work, we conduct a detailed analysis on the performance of legal-oriented pretrained language models (PLMs). We examine the interplay between their original objective, acquired knowledge, and legal language understanding capacities which we define as the upstream, probing, and downstream performance, respectively. We consider not only the models' size but also the pre-training corpora used as important dimensions in our study. To this end, we release a multinational English legal corpus (LeXFiles) and a legal knowledge probing benchmark (LegalLAMA) to facilitate training and detailed analysis of legal-oriented PLMs. We release two new legal PLMs trained on LeXFiles and evaluate them alongside others on LegalLAMA and LexGLUE. We find that probing performance strongly correlates with upstream performance in related legal topics. On the other hand, downstream performance is mainly driven by the model's size and prior legal knowledge which can be estimated by upstream and probing performance. Based on these findings, we can conclude that both dimensions are important for those seeking the development of domain-specific PLMs. ## 1 Introduction Following closely the advances in the development of NLP technologies, the legal NLP literature is flourishing with the release of many new resources, including large legal corpora (Henderson* et al., 2022), datasets (Chalkidis et al., 2021a; Koreeda and Manning, 2021; Zheng et al., 2021; Chalkidis et al., 2022a; Habernal et al., 2022), and pre-trained legal-oriented language models (PLMs) (Chalkidis et al., 2020; Zheng et al., 2021; Xiao et al., 2021). Benchmark suites (Chalkidis et al., 2022a; Hwang et al., 2022; Niklaus et al., 2023) to evaluate the performance of PLMs in a more systematic way ∗ Equal contribution. have been also developed, showcasing the superiority of legal-oriented PLMs over generic ones on downstream legal NLP tasks. Despite this impressive progress, there is still not a thorough study on (a) how PLMs trained under different settings (pre-training corpora, size of the model) perform across different legal subcorpora, and (b) what sort of knowledge such models have acquired from pre-training, and (c) how important is domain (legal) specificity vs general (cross-domain) legal knowledge. Furthermore, often times, legal NLP relies on datasets without drawing clear lines and comparisons between the various legal systems they may reflect. A legal system may be defined as a set of rules adopted and enforced at a given governance level, which may be national, regional or international (Friedman and Hayden, 2017), e.g., UK, EU, US, CoE, etc. We define the upstream evaluation as the task PLMs are explicitly designed to do: Masked Language Modelling (MLM) (Devlin et al., 2019). We then probe for specific legal concepts that are legalsystem specific, in a similar fashion as Petroni et al. (2019) did using the "LAnguage Models Analysis" (LAMA) framework. Finally, we assess the PLMs performance in LexGLUE (Chalkidis et al., 2022a) downstream tasks. More importantly, we explore how the aforementioned factors (upstream, and probing performance) interplay and relate to downstream performance. Our contributions are: (a) We release LeXFiles, a new diverse English legal corpus including 11 sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India). The corpus comprises approx. 6 million documents which sum up to approx. 19 billion tokens. (b) We release 2 new legal-oriented PLMs, dubbed LexLMs, warm-started from the RoBERTa (Liu et al., 2019) models, and further pre-trained on the LeXFiles for 1M additional steps. | Sub-Corpus (Source) | # Documents | # Tokens / Percentage (%) | Sampling Smoothing (%) | |-----------------------|---------------|-----------------------------|--------------------------| | EU Legislation | 93.7K | 233.7M (01.2%) | 05.0% | | EU Case Law | 29.8K | 178.5M (00.9%) | 04.3% | | UK Legislation | 52.5K | 143.6M (00.7%) | 03.9% | | UK Case Law | 47K | 368.4M (01.9%) | 06.2% | | Canadian Legislation | 6K | 33.5M (00.2%) | 01.9% | | Canadian Case Law | 11.3K | 33.1M (00.2%) | 01.8% | | U.S. Legislation | 518 | 1.4B (07.4%) | 12.3% | | U.S. Case Law | 4.6M | 11.4B (59.2%) | 34.7% | | U.S. Contracts | 622K | 5.3B (27.3%) | 23.6% | | ECtHR Case Law | 12.5K | 78.5M (00.4%) | 02.9% | | Indian Case Law | 34.8K | 111.6M (00.6%) | 03.4% | | Total | 5.8M | 18.8B (100%) | 100% | Table 1: Core statistics of the newly introduced LeXFiles corpus. In the last column, we present the sampling smoothing percentages used to train our LexLM models (Section 4.1). (c) We release LegalLAMA, a diverse probing benchmark suite comprising 8 sub-tasks that aims to assess the acquaintance of legal knowledge that PLMs acquired in pre-training. (d) We evaluate 7 PLMs on both LeXFiles and LegalLAMA, analyzing their performance out of the box per LeXFiles sub-corpus and LegalLAMA tasks. We also fine-tune and evaluate these models in selected LexGLUE tasks, and examine the interplay between MLM, probing, and downstream performance. ## 2 Lexfiles Corpus The LeXFiles is a new diverse English multinational legal corpus that we created including 11 distinct sub-corpora (Table 1) that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India). The corpus contains approx. 19 billion tokens. In comparison, the Pile of Law corpus released by Henderson* et al. (2022) comprises 32 billion in total, where the majority (26/30) of sub-corpora come from the United States of America (USA), hence the corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent. The LeXFiles's sub-corpora are: (a) *EU Legislation.* We release 93.7K EU laws (regulations, decisions, directives) published in EUR-Lex, the website of the EU Publication Office.1 (b) *EU Case Law.* We release 29.8K EU court decisions, mainly issued from the Court of 1https://eur-lex.europa.eu/ Justice (CJEU), published in EUR-Lex.1 (c) *UK Legislation.* We release 52.5 UK laws published in UK.LEGISLATION.GOV.UK, the official website of the UK National Archives.2 (d) *UK Case Law.* We release 47K UK court decisions published in the British and Irish Legal Information Institute (BAILII) database.3 (e) *US Legislation.* We re-distribute 518 US state statutes (legislation) originally published by Henderson* et al. (2022). (f) *US Case Law.* We release 4.6M US decisions (opinions) published by Court Listener,4a web database hosted by the Free Law Project.5 (g) *US Contracts.* We release 622K US contracts (agreements) obtained from US Securities and Exchange Commission (SEC) filings, which are publicly available from the SEC-EDGAR6 database. (h) *Canadian Legislation.* We release 6K Canadian laws (acts, regulations) published in the official legislation portal of Canada.7 (i) *Canadian Case Law.* We re-distribute 13.5K Canadian decisions (opinions) originally published by Henderson* et al. (2022). (j) *ECtHR Case Law.* We release 12.5K decisions ruled by the European Court of Human rights 2https://www.legislation.gov.uk/ 3https://www.bailii.org/ 4https://www.courtlistener.com/ 5We release decisions published from 1965 on-wards (cf. post Civil Rights Act), as a hard threshold for cases that possibly rely on out-dated and discriminatory law standards. The rest of the sub-corpora include more recent documents. 6https://www.sec.gov/edgar 7https://laws-lois.justice.gc.ca/eng/ (ECtHR) published in HUDOC,8the database of ECtHR. (k) *Indian Case Law.* We include 34.8K Indian Supreme Court cases originally published by Malik et al. (2021). The LeXFiles is pre-split into training and test subsets to provide a fair ground for comparing the performance of PLMs that have not been trained in the training set. We use the training subset of the LeXFiles corpus to train 2 new transformer-based languages models, dubbed LexLMs (Section 4.1), and evaluate their MLM performance across many other already available PLMs (Section 4.2). ## 3 Legal**Lama Benchmark** LAnguage Model Analysis (LAMA) (Petroni et al., 2019) is a probing task that is designed to assess specific capabilities of PLMs. The general framework of LAMA is to let PLMs predict a target token behind a [MASK] given its context, e.g., "Paris is the capital of [MASK]", where the answer is 'France'. LegalLAMA is a new probing benchmark suite inspired by this framework. It includes 8 sub-tasks that aim to assess the acquaintance of legal knowledge that PLMs acquired in the pretraining phase in a *zero-shot fashion*. Such tasks cannot be resolved by laypersons or even law professionals that are not experts in the specific fields of law in many cases.9 The acquaintance of legal knowledge can be interpreted as some form of primitive understanding of the law, for specific aspects in very controlled (limited) settings -limited legal concepts under a specific jurisdiction-. As Sahlgren and Carlsson (2021) mentioned: "Rather than asking whether a language model understands or not, we should ask to what extent, and in which way, a model understands." We further extend the LAMA framework by allowing PLMs to predict multi-token targets. Take for example the "*Drug Tra*ffi*cking*" offence under the "*Drug-Related*" crimes of the US legislation. Using the RoBERTa tokenizer, this term is split into two tokens, that is "*Drug*" and "Traffi*cking*". We replace thus the "drug trafficking" phrase with two [MASK] tokens, and then ask the model to predict these tokens simultaneously. 8https://hudoc.echr.coe.int/eng 9In Appendix A, we present a discussion on the LegalLAMA tasks' level of difficulty. …was arrested and charged under Alabama law with [MASK] [MASK], regarding paraphernalia… ![2_image_0.png](2_image_0.png) We evaluate the overall performance of PLMs using the macro-averaged Mean Reciprocal Rank (MRR) (Voorhees and Tice, 2000) over the set of labels (not the entire vocabulary).10 In the case of multi-token targets, we average the MRR over the predicted tokens.11 Note that LegalLAMA examples come from the test subset of the related LexFiles sub-corpora in order to have a fair comparison between models trained or not on the LexFiles training sets. We provide a concrete example in Figure 1, and describe the tasks in detail: ECHR Articles (CoE). In this task, we have paragraphs from the court assessment section of ECtHR decisions. We extract those paragraphs from the newly introduced ECHR corpus presented in Section 2. The paragraphs include references to ECHR articles, e.g., "Article [MASK] *of the Convention"*, where [MASK] is the article number. For example, *"The applicant complained under Article* [2] of the Convention that the prison authorities had failed to protect her son's right to life by taking the necessary measures." Given a paragraph, where the article number is masked, the model has to predict the associated article number given the context. The dataset is composed of 5,072 test instances containing on average 69 tokens and 13 unique article numbers to predict. 10We decided to report only MRR results in the main paper for the sake of clarity. Moreover, MRR avoids penalizing for near-identical outcomes. Detailed results including Precision at 1 (P@1) are available in Appendix C. 11A stricter evaluation would be to consider a multi-token prediction valid only if all the sub-tokens are properly predicted by the PLM. We decided to average the MRR to consider minor variations and errors. Contractual Section Titles (US). In this task, we have sections from US contracts reusing the dataset of Tuggener et al. (2020). Contractual sections are usually numbered and titled, e.g., *"10.* [Arbitration]. Any controversy, dispute or claim directly or indirectly arising out of or relating to this Agreement [...]". The section titles reflect the content (subject matter) of the section, and are commonly re-used. Given a section, where the section title is masked, the model has to predict the associated title given the context. The dataset is composed of 1,527 test instances containing on average 85 tokens and 20 unique section titles to predict. Contract Types (US). In this task, we have introductory paragraphs from US contracts. We extract those paragraphs from the newly introduced corpus of US contracts, presented in Section 2. Introductory paragraphs usually start with the contract title revealing the contract type, e.g., *"Service Agreement"*, and follow with the names of the involved parties, and their roles in this agreement. For example, "This **[Purchase]** *Agreement is entered into* this 23rd day of January 2020 by and between A (the "Purchaser") and B (the "Seller").". Given an introductory paragraph, where the contract type is masked, the model has to predict the associated type given the context. The task is composed of 1,089 test instances containing on average 150 tokens and 15 unique types of contracts to predict. Crime Charges (US). In this task, we have paragraphs from US court judgments (opinions). We extract those paragraphs from the US case law corpus, presented in Section 2. We select a list of criminal offenses (e.g., "Sexual Assault"), categorized into 11 major categories (e.g., Sex-related) from the FindLaw website.12 We filter out paragraphs that refer the specified criminal charges verbatim. For example, *"A person commits the crime* of **[burglary]** *in the first degree when he or she enters or remains unlawfully in a building with the* intent to commit a crime against a person or property therein" Given a paragraph, where a criminal charge is masked, the model has to predict the associated criminal charge given the context. The task is composed of 4,518 test instances containing on average 118 tokens and 59 charges to predict. Legal Terminology (US). In this task, we have paragraphs from US court judgments (opinions). 12https://www.findlaw.com/criminal/ criminal-charges.html We extract those paragraphs from the US case law corpus, presented in Section 2. We select a subset of legal terms per legal topic (e.g., finance law, property law, family law) using the legal vocabularies provided by the Legal Information Institute (LII) of the Cornell Law School.13 We filter out paragraphs that use the specified legal terms. For example, "The **[marital privilege]** *against selfincrimination is [...] grounded upon the theory that* just as one may not be convicted by his own compelled testimony, so may he not be convicted by the testimony of his spouse." Given a paragraph, where a legal term is masked, the model has to predict the associated legal term given the context. The task is composed of 5,829 test instances containing on average 308 tokens and 92 legal terms from 7 topics to predict. Legal Terminology (EU). In this task, we have paragraphs from CJEU judgments (opinions). We extract those paragraphs from the newly introduced EU case law corpus, presented in Section 2. We select a subset of legal terms based on the subject matters provided by the database of the courts (CURIA).14 We filter out paragraphs that use the specified legal terms. For example, "The guiding principle at the basis of EU **[data protection]** law is that of a self-determined decision of an individual who is capable of making choices about the use and processing of his or her data." Given a paragraph, where a legal term is masked, the model has to predict the associated legal term given the context. The task is composed of 2,127 test instances containing on average 164 tokens and 42 legal terms from 23 topics to predict. Legal Terminology (CoE). In this task, we have paragraphs from ECtHR decisions. We extract those paragraphs from the newly introduced ECHR corpus presented in Section 2. We select a subset of legal terms (legal issues) based on the keywords provided by the database of the courts (HUDOC).15 We filter out paragraphs that use the specified legal terms. For example, *"The applicants alleged* that their relatives' **[right to life]** was violated in that they were deliberately killed by village guards." Given a paragraph, where a legal term is masked, the model has to predict the associated legal term given the context. The task is composed of 6,803 13https://www.law.cornell.edu/ 14https://curia.europa.eu/ 15https://www.echr.coe.int/Documents/HUDOC_ Keywords_ENG.pdf | Model (Source) | # Params | # Vocab | # Acc. Tokens | Pre-training Corpora | | | |------------------|---------------------------|-----------|-----------------|------------------------|---------|------------------| | RoBERTa | (Liu et al., 2019) | 124/355M | 50K | 2T | (160GB) | Generic Corpora | | LegalBERT | (Chalkidis et al., 2020) | 110M | 32K | 43B | (12GB) | Legal Corpora | | CaseLawBERT | (Zheng et al., 2021) | 110M | 32K | 43B | (37GB) | US Case Law | | PoL-BERT | (Henderson* et al., 2022) | 340M | 32K | 130B | (256GB) | US Legal Corpora | | LexLM | (ours) | 124/355M | 50K | 2T + 256B | (175GB) | Legal Corpora | test instances containing on average 97 tokens and 250 legal terms from 15 articles to predict. Criminal Code Sections (Canada). In this task, we have paragraphs from the Criminal Court of Canada's decisions containing Section Numbers of the Criminal Code of Canada (CCC)16. For example, "Section **[680]** *of the Criminal Code provides* that a bail review is to be conducted by a panel of this court where directed by the Chief Justice." Given a paragraph, where a criminal code's section is masked, the model has to predict the associated section number, paragraph, and sub-paragraph (if any) given the context. The task is composed of 321 test instances containing on average 72 tokens and 144 different section numbers to predict. In Appendix D, we present the full list of vocabulary (masked terms) grouped in categories (clusters) -when applicable- per LegalLAMA sub-task. ## 4 Experiments 4.1 Pre-Trained Language Models We consider 7 large language models to assess their performance with respect to the upstream (MLM), probing, and downstream evaluation: RoBERTa (Base/**Large)** are the original RoBERTa models (Liu et al., 2019) trained for 64k steps with very large batches on generic corpora; thus do not have any clear legal prior (knowledge). LegalBERT (Base) is a legal-oriented BERT model (Devlin et al., 2019) released by Chalkidis et al. (2020) trained for 1M steps on legal corpora from EU, UK, CoE, and USA. CaseLawBERT (Base) is another legal-oriented BERT released by Zheng et al. (2021). CaseLawBERT (which we will refer to as *CL-BERT* henceforth) is trained from scratch for 2M steps on the Harvard Law case corpus, which comprises 3.4M legal decisions from US federal and state courts. 16https://laws-lois.justice.gc.ca/eng/acts/ c-46/index.html PoL-BERT (Large) is a legal-oriented RoBERTa model released by Henderson* et al. (2022) trained from scratch for 2M steps on the Pile of Law, a corpus consisting of approx. 256GB of English, mainly US, language legal and administrative text. LexLM (Base/**Large)** are our newly released RoBERTa models. We follow a series of bestpractices in language model development: (a) We warm-start (initialize) our models from the original RoBERTa checkpoints (base or large) of Liu et al. (2019). (b) We train a new tokenizer of 50k BPEs, but we reuse the original embeddings for all lexically overlapping tokens (Pfeiffer et al., 2021). (c) We continue pre-training our models on the diverse LeXFiles (Section 2) corpus for additional 1M steps with batches of 512 samples, and a 20/30% masking rate (Wettig et al., 2023), for base/large models, respectively. (d) We use a sentence sampler with exponential smoothing of the sub-corpora sampling rate following Conneau et al. (2019) since there is a disparate proportion of tokens across subcorpora (Table 1) and we aim to preserve percorpus capacity (avoid overfitting). (e) We consider mixed cased models, similar to all recently developed large PLMs. Additional details on LexLM models pre-training can be found in Appendix B. ## 4.2 Upstream Evaluation In Table 3, we present the upstream (MLM) performance for all PLMs across the LeXFiles subcorpora. The performance is measured in terms of accuracy, i.e. Precision@1 of the masked token to be predicted. The accuracy is thus averaged over all the masked tokens for each task. We also provide the average across all tasks, per model. We observe that results vary across models trained in very different settings (model's capacity, pre- | Sub-Corpus | RoBERTa-B | RoBERTa-L | LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | |-----------------|-------------|-------------|-------------|-----------|------------|-----------|-----------| | EU Legislation | 72.0 | 75.1 | 83.1 | 61.4 | 73.3 | 78.7 | 81.8 | | EU Case Law | 72.7 | 76.5 | 81.4 | 63.0 | 68.5 | 79.8 | 82.9 | | UK Legislation | 71.3 | 75.1 | 86.2 | 65.1 | 72.8 | 84.1 | 87.3 | | UK Case Law | 68.9 | 73.2 | 72.3 | 61.2 | 62.4 | 73.2 | 76.9 | | CAN Legislation | 75.5 | 78.9 | 80.6 | 66.4 | 73.3 | 82.9 | 85.2 | | CAN Case Law | 62.8 | 66.0 | 73.8 | 64.1 | 66.0 | 76.7 | 80.3 | | US Case Law | 68.2 | 72.5 | 71.6 | 64.4 | 63.8 | 71.7 | 74.8 | | US Legislation | 74.5 | 78.1 | 79.7 | 65.3 | 77.0 | 80.5 | 83.5 | | US Contracts | 67.5 | 70.9 | 89.1 | 69.5 | 76.9 | 85.1 | 87.8 | | ECtHR Case Law | 72.0 | 75.7 | 83.3 | 61.9 | 66.3 | 80.1 | 83.3 | | Indian Case Law | 65.6 | 70.0 | 65.2 | 56.3 | 58.3 | 73.3 | 76.2 | | Average | 70.1 | 73.8 | 78.7 | 63.5 | 68.9 | 78.7 | 81.8 | | Model Rank | 5 | 4 | 2 | 7 | 6 | 2 | 1 | Table 3: Upstream evaluation measured in terms of accuracy (Precision@1) on the Masked Language Modelling (MLM) task across all LeXFiles sub-corpora. training corpora), while the results also vary across legal sub-corpora. We want to remind the reader that the upstream evaluation offers a rough idea of a model's capabilities since it relies on random masked sub-words, in which case many of those can be generic and thus highly predictable (e.g. preposition "of"). This phenomenon further motivates the construction of the LegalLAMA benchmark, in which case only "legal knowledge sensitive" words have been masked. Type of Documents. In terms of differences across sub-corpora, we observe that the performance on legislation is better compared to case law in 3/4 legal systems, where we have both (EU, UK, US, Canada), with US contractual language being the most predictable for the models which have been trained on it (LexLMs, LegalBERT). Comparison of PLMs. Overall, the large LexLM model outperforms the rest, being 3% more accurate on average compared to the 2nd best models (base versions of LexLM, and LegalBERT). Such results are expected since LexLMs have been trained in a diverse corpus, similarly to LegalBERT, compared to CL-BERT, and PoL-BERT, which have been trained on US corpora. Overspecialization harms the two US-centric models in a great extend since they are outperformed even from the generic RoBERTa models. We also observe that LegalBERT outperforms the similarly-sized LexLM in specific sub-corpora (Both EU, UK legislation, ECtHR case law, and US Contracts) that were included in its training. We hypothesize that these results are related to the pretraining data diversity, since LexLMs have been trained in a more diverse corpus including many more documents from different legal systems with a sampling smoothing to preserve capacity per subcorpus. The larger LexLM model has the capacity to cover all sub-corpora to a greater detail. In general, larger models pre-trained on the same corpora (RoBERTas, LexLMs) perform better compared to smaller ones, but in-domain pre-training is a much more important factor for upstream performance, e.g., LegalBERT outperforms RoBERTa-L. ## 4.3 Probing Evaluation In Table 4, we present the results across all examined PLMs on LegalLAMA. We analyze the results from two core perspectives: the prior knowledge and the probing task. Prior Knowledge. The pre-training corpus has a significant impact on the probing performance. RoBERTa models, having little to no legal prior, were expected to achieve worst performance on all probing tasks. Surprisingly, CL-BERT and PoLBERT achieve on-par or sometimes worst performance than RoBERTa (Base & Large) in most tasks. Being trained on the "Harvard Law Case" corpus (CL-BERT) and the Pile of Law (PoL-BERT), we would have expected better performance than a model without legal prior. Their pre-training corpora might be lacking diversity, which might cause their poor performance even on Legal-US probing | Statistics | Models | | | | | | | | | | |------------------------|----------|------|---------------------------------------------------------------------|------|------|------|------|------|------|------| | Task | #T | #L | #T/L RoBERTa-B RoBERTa-L LegalBERT CL-BERT PoL-BERT LexLM-B LexLM-L | | | | | | | | | ECHR Articles | 69 | 13 | 1.0 | 39.8 | 41.3 | 91.1 | 37.5 | 35.2 | 91.4 | 94.3 | | Contract Sections | 85 | 20 | 1.3 | 23.6 | 44.5 | 80.2 | 29.2 | 64.8 | 88.2 | 87.3 | | Contract Types | 150 | 15 | 1.1 | 43.4 | 47.8 | 82.2 | 54.9 | 49.7 | 84.0 | 86.1 | | Crime Charges (US) 118 | 59 | 2.1 | 56.3 | 62.4 | 51.5 | 62.6 | 43.5 | 63.0 | 68.1 | | | Terminology (US) | 92 | 7 | 2.9 | 47.1 | 54.2 | 60.5 | 66.7 | 44.6 | 66.4 | 67.5 | | Terminology (EU) | 164 | 42 | 3.0 | 38.0 | 45.3 | 63.2 | 38.6 | 36.9 | 63.1 | 70.4 | | Terminology (CoE) | 97 | 250 | 1.2 | 45.4 | 53.1 | 77.3 | 49.7 | 32.8 | 81.3 | 86.8 | | CC Sections | 72 | 144 | 2.0 | 15.8 | 19.7 | 21.9 | 18.4 | 19.9 | 50.6 | 68.8 | | Average | 33.1 | 41.3 | 54.8 | 38.0 | 36.8 | 70.8 | 77.4 | | | | | Model Rank | 7 | 4 | 3 | 5 | 6 | 2 | 1 | | | | tasks. LegalBERT (Base), being trained on UK, EU and USA data illustrates important improvement over models without legal prior (RoBERTa) or having only US legal prior (CaseLaw and PoLBERT). LexLM models, being trained on the new LeXFiles dataset, show performance improvement over LegalBERT across all tasks, especially on the task of predicting Section Numbers of the Criminal Code of Canada. Regarding the size of the model, we are able to compare the cased versions of RoBERTa Base/Large and LexLM Base/Large. As expected, the larger versions offer better performance than the smaller ones on every task. ![6_image_0.png](6_image_0.png) Probing Tasks. We characterize the difficulty of the tasks by their semantic level, the output space (the number of labels to predict from), and the label complexity (how many tokens per label). We expose the tasks' different characteristics in Table 4. Given the best-performing model (LexLM-L), we can see that Crime Charges and Legal Terminology (US and EU) are the hardest tasks to solve. Looking at Table 4, we can see that these three tasks are characterized by a higher label complexity (>2). We further demonstrate the label complexity impact in Figure 2. The output space does not seem to have a correlation with the models' performance, since the selected Legal Terminology Topic Clusters (US) has only 7 possible labels, whereas the Criminal Code Section (Canada) has 144 possible labels. Finally, Crime Charges, being the hardest task to solve, has on average 118 tokens as input and 59 possible labels with moderate complexity, similar to the Terminology tasks (EU and CoE). This suggests that the difficulty of the task is not only driven by the labels' complexity but may rather lie in the lack of contextualization. Take for example the following sentence: "This case involves perhaps the first prosecution under New York's new **[computer crime]** statute, Penal Law article 156, which went into effect on November 1, 1986, just days before the incidents charged herein." The only contextual hint the PLMs have to predict the correct tokens (**[computer crime]**) is the utterance "Penal Law article 156, which went into effect on November 1, 1986". This is the opposite task of predicting article numbers given a context, which is much more difficult than predicting the actual context because the output space is larger.17 ## 4.4 Downstream Evaluation For downstream evaluation, we conduct experiments for 6 legal classification tasks, 5 part of LexGLUE (Chalkidis et al., 2022a), covering US contracts, US, EU, and ECHR law. ECtHR (Task B) (Chalkidis et al., 2021b) is a multi-label topic classification task, where given 17The actual tokens predicted by the best-performing examined PLM were "sexual" and "abuse". | RoBERTa-B | RoBERTa-L | LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | |-------------|-------------|-------------|-----------|------------|-----------|-----------|------|------|------|------|------|------|------|------| | Task | µF1 | mF1 | µF1 | mF1 | µF1 | mF1 | µF1 | mF1 | µF1 | mF1 | µF1 | mF1 | µF1 | mF1 | | ECtHR | 61.2 | 40.5 | 74.2 | 51.5 | 59.1 | 37.2 | 53.6 | 29.1 | 69.1 | 46.9 | 63.2 | 41.8 | 76.7 | 57.9 | | LEDGAR | 80.5 | 62.6 | 83.6 | 71.5 | 81.2 | 64.7 | 80.9 | 64.0 | 83.3 | 71.4 | 82.5 | 66.8 | 84.7 | 72.8 | | CNLI | 66.8 | 48.6 | 68.0 | 63.5 | 70.2 | 65.6 | 69.0 | 64.6 | 68.3 | 64.1 | 61.6 | 42.9 | 69.7 | 64.5 | | SCOTUS | 65.0 | 36.0 | 68.9 | 41.4 | 60.9 | 31.2 | 62.9 | 33.8 | 66.3 | 39.5 | 66.9 | 37.7 | 71.1 | 43.9 | | CaseHOLD | 72.7 | 72.7 | 75.6 | 75.6 | 76.1 | 76.1 | 77.6 | 77.6 | 73.7 | 73.7 | 74.8 | 74.8 | 78.5 | 78.5 | | EURLEX | 33.4 | 06.1 | 62.7 | 27.1 | 27.7 | 04.0 | 27.0 | 04.7 | 60.5 | 25.4 | 34.2 | 06.9 | 63.1 | 28.0 | | Average | 58.4 | 22.5 | 71.5 | 48.6 | 55.0 | 17.1 | 53.9 | 18.7 | 69.5 | 46.4 | 59.0 | 24.3 | 73.3 | 51.0 | | Upstream | 5 | 4 | 2 | 7 | 6 | 2 | 1 | | | | | | | | ECtHR 61.2 40.5 74.2 51.5 59.1 37.2 53.6 29.1 69.1 46.9 63.2 41.8 **76.7 57.9** LEDGAR 80.5 62.6 83.6 71.5 81.2 64.7 80.9 64.0 83.3 71.4 82.5 66.8 **84.7 72.8** CNLI 66.8 48.6 68.0 63.5 **70.2 65.6** 69.0 64.6 68.3 64.1 61.6 42.9 69.7 64.5 SCOTUS 65.0 36.0 68.9 41.4 60.9 31.2 62.9 33.8 66.3 39.5 66.9 37.7 **71.1 43.9** CaseHOLD 72.7 72.7 75.6 75.6 76.1 76.1 77.6 77.6 73.7 73.7 74.8 74.8 **78.5 78.5** EURLEX 33.4 06.1 62.7 27.1 27.7 04.0 27.0 04.7 60.5 25.4 34.2 06.9 **63.1 28.0** ![7_image_0.png](7_image_0.png) the facts of an ECtHR case, the model has to predict the alleged violated ECHR article among 10 such articles (e.g., "Art 3. - Prohibition of Torture", "Art. 6 - Right to Fair Trial"). LEDGAR (Tuggener et al., 2020) is a single-label multi-class topic classification task, where given a contractual paragraph, the model has to predict one of the correct topic among 100 topics (e.g., "Limitation of Liability", "Arbitration"). ContractNLI (Koreeda and Manning, 2021) is a contract-based Natural Language Inference (NLI) task, where given an Non-Disclosure Agreement (NDA) and one out 17 templated *hypotheses* (e.g., "The Party may share some Confidential Information with some third-parties."), the model has to predict if the hypothesis is (entailed, *contradicted*, or is *neutral*) to the terms of the NDA. SCOTUS (Chalkidis et al., 2022a) is a single-label multi-class topic classification task, where given a Supreme Court of US (SCOTUS) opinion, the model has to predict the relevant area among 14 issue areas (e.g., "Civil Rights", "Judicial Power"). CaseHOLD (Zheng et al., 2021) is a multiple choice QA classification task, where given a paragraph from a US legal opinion where a legal rule (holding) is masked, the model has to predict the applicable rule among 5 alternatives (the correct one and 2 irrelevant presented in other cases). EURLEX (Chalkidis et al., 2021a) is a multi-label topic classification task, where given an EU law, the model has to predict the correct EUROVOC concept among hundred concepts (e.g., "Environmental Policy", "International Trade"). ![7_image_1.png](7_image_1.png) for a single epoch with a learning rate of 1e−5 leading to a small number of updates. We are interested to examine how fast each model convergence based on its prior knowledge; in other words, what can a model learn in a single pass over training data? Finetuning models for many epochs over large datasets will eventually lead to a full re-parameterization of the models, in which case the importance of prior knowledge will diminish compromise the goal of our study (Figure 3).18 For all tasks, we use standard N-way classifiers with a classification head (Devlin et al., 2019). For ECtHR, and SCOTUS, involving long documents, we warm-start Longformer (Beltagy et al., 2020) models from each PLM's parameters to encode up to 2048 tokens. We evaluate classification performance with micro-F1 (µF1) and macro-F1 (mF1) across tasks following Chalkidis et al. (2022a). Results In Table 5, we present the test results across all tasks/datasets. We analyze the results from two perspectives: model's capacity (size), and prior legal knowledge abducted via pre-training. 18In most tasks, models fully converge after approx. 5 epochs with improved performance, and the relative differences between generic and legal-oriented models are diminished (Chalkidis et al., 2022a). We fine-tune all examined PLMs (Section 4.1) Model's capacity (size) strongly correlates with the overall downstream performance. Across all tasks, there are 2/6 exceptions (CNLI and CaseHOLD) where LegalBERT outperforms larger PLMs. Both tasks are using sentence pairs, a setup used in BERT's pre-training, but not in RoBERTa, which may bring LegalBERT, a BERT-based model, in a better initial condition co-considering the minimal updates steps, compared to all large models following the RoBERTa pre-training setup, which do no use pairs of sentences or optimized based on a sentence-level objective (NSP). Legal Knowledge also plays an important role following the model's capacity (size). We observe that LexLM-B trained in the diverse LeXFiles corpus outperforms the equally-sized RoBERTa-B model in 5/6 tasks, while LegalBERT and CL-BERT outperform it only in 3 out of 6 tasks. In this case, the results are mixed, i.e., acquaintance of legal knowledge as expressed by upstream (Section 4.2) and probing (Section 4.3) performance does not correlate with downstream performance. In the case of large-sized models, LexLM-L outperform RoBERTa-L across all tasks, while PoLBERT trained on the US-biased Pile of Law corpus is outperformed by RoBERTa-L in 5 out of 6 tasks. Given the results with respect to upstream and probing performance, RoBERTa-L has a better legal prior; so in these regards, acquaintance of legal knowledge fully correlates with downstream performance in the large models' regime. ## 5 Release Of Resources We release our code base to assure reproducibility and let others extend our study by experimenting with other PLMs, or develop new ones.19 The new LexLM models (Section 4.1), the LeXFiles corpus 20 (Section 2), and the LegalLAMA benchmark 21 (Section 4.3) are available on Hugging Face Hub (Lhoest et al., 2021).22 ## 6 Conclusions And Future Work In this work, we introduced a multinational English legal corpus (LeXFiles) and a legal knowledge probing benchmark (LegalLAMA) to facilitate training and detailed analysis of legal-oriented 19https://github.com/coastalcph/lexlms 20https://huggingface.co/datasets/lexlms/lex_ files 21https://huggingface.co/datasets/lexlms/ legal_lama 22https://huggingface.co/lexlms PLMs. We also released two new legal PLMs and evaluate them alongside others on LegalLAMA and LexGLUE. Based on our analysis (Section 4), we make the following general observations: (a) The use of diverse legal corpora leads to better overall upstream performance (Section 4.2). (b) We find that probing performance strongly correlates with upstream performance in related legal topics (Section 4.3). (c) For both upstream, and probing performance, the selection of pre-training corpora has a much larger effect compared to model's capacity (Sections 4.2-4.3). Nonetheless, larger models pre-trained on similar corpora have better overall performance. (d) Downstream performance is mainly driven by the model's capacity and prior legal knowledge which can be estimated by upstream and probing performance (Section 4.4). In future work, we plan to further analyze the learning dynamics of legal language models by comparing their representations with representations derived from legal knowledge bases. Given the availability of the new resources, the development of instruction-following (Wei et al., 2021) fine-tuned legal-oriented GPT-like (Ouyang et al., 2022) models is also an anticipated direction. ## Limitations Diversity of Corpora While the newly introduced LeXFiles corpus is significantly more diverse compared to the Pile of Law corpus of Henderson* et al. (2022), it is still an English-only corpus covering only 6 legal systems (EU, UK, CoE, US, India, Canada). Despite, the fact that we can train better models (LexLMs) and evaluate these models across these corpora, in future work, we should extend our analysis to cover even more languages and legal systems, and a higher granularity in the labeling of legal fields within these systems. Not only will this help support the inclusion of other legal traditions but also adding more linguistic and cultural diversity will help us better understand the robustness of existing methods. Similarly, the newly introduced LegalLAMA benchmark consists of 8 sub-tasks targeting EU, ECHR, US, and Canadian jurisdictions in a very controlled setting; where examples were automatically extracted. While on this benchmark, legaloriented PLMs has demonstrated a significant degree of "understanding" of legal language and legal topics, this benchmark should be further expanded with more sub-tasks to evaluate the acquaintance of legal knowledge across more legal systems and topics, and possibly cleansed from both very easy and unsolvable examples. Model Considerations In this work, we consider encoder-only (BERT-like) models up to approx. 350M parameters, while recent work on the development of Large Language Models (LLMs) (Kaplan et al., 2020; Brown et al., 2020; Hoffmann et al., 2022; Chowdhery et al., 2022) is mainly targeting billion-parameter-sized models (10-100Bs of parameters) that usually follow a decoder-only, e.g., GPT (Radford and Narasimhan, 2018), or encoder-decoder, e.g., T5 (Raffel et al., 2020), architecture. Moreover, new paradigms of training PLMs have been introduced, such as *instructionbased finetuning* (Wei et al., 2021), and *alignment* via Reinforcement Learning from Human Feedback (RLHF) (Stiennon et al., 2020; Ouyang et al., 2022). Latest GPT models (Ouyang et al., 2022) have recently shown significant zero-shot progress on law-related tasks such as bar examination question answering (Katz et al., 2023). Thus, future work should follow the most recent advances by pre-training much larger auto-regressive GTP-like models that seem to lead to emergent zero-shot and few-shot capabilities. Evaluation Considerations In Section 3, we present how we account for and evaluate multitoken expressions (terms) on the LegalLAMA benchmark; we are open to ideas on how we should possibly improve the current approach to provide a fairer and more robust evaluation framework across all models. Similarly, in Section 4.4, we fine-tune all examined PLMs for a single epoch to avoid extreme over-reparameterization and better estimate how model's knowledge affects convergence and performance. Nonetheless, there are possibly better approaches to control for these aspects, e.g., Adapter-based (Rücklé et al., 2021) finetuning, or other approaches, such as LoRA (Hu et al., 2022). Beyond Performance While we consider a multi-facet analysis, we do not cover other interesting dimensions that should also be explored, especially since law is a very sensitive application domain; for instance trustworthiness-related topics, such as model interpretability (Chalkidis et al., 2021b; Malik et al., 2021), and fairness (Chalkidis et al., 2022b). Future work can build from the results reported herein to explore these important topics. ## Ethics Statement The scope of this work is to examine the performance of legal-oriented PLMs from a multi-facet perspective and broaden the discussion to help practitioners build assisting technology for legal professionals and laypersons. We believe that this is an important application field, where research should be conducted (Tsarapatsanis and Aletras, 2021) to improve legal services and democratize law, while also highlighting (informing the audience on) the various multi-aspect shortcomings seeking a responsible and ethical (fair) deployment of legaloriented technologies. In this direction, we introduce new resources covering various legal systems to build new models that better represent law and better assess their capabilities. All newly developed and published resources are based on publicly available data, most of them scattered on several web portals. ## Acknowledgments This work was partly funded by the Innovation Fund Denmark (IFD, https: //innovationsfonden.dk/en) and the Fonds de recherche du Québec - Nature et technologies (FRQNT, https://frq.gouv.qc.ca/ nature-et-technologies/). ## References Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR, abs/2004.05150. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Ilias Chalkidis, Manos Fergadiotis, and Ion Androutsopoulos. 2021a. MultiEURLEX - a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6974–6996, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2898– 2904, Online. Ilias Chalkidis, Manos Fergadiotis, Dimitrios Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos, and Prodromos Malakasiotis. 2021b. Paragraph-level rationale extraction through regularization: A case study on European court of human rights cases. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 226–241, Online. Association for Computational Linguistics. Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Katz, and Nikolaos Aletras. 2022a. LexGLUE: A benchmark dataset for legal language understanding in English. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 4310–4330, Dublin, Ireland. Association for Computational Linguistics. Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Schwemer, and Anders Søgaard. 2022b. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4389–4406, Dublin, Ireland. Association for Computational Linguistics. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. *CoRR*, abs/1911.02116. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Lawrence M. Friedman and Grant M. Hayden. 2017. 1What Is a Legal System? In *American Law: An* Introduction. Oxford University Press. Ivan Habernal, Daniel Faber, Nicola Recchia, Sebastian Bretthauer, Iryna Gurevych, Indra Spiecker genannt Döhmann, and Christoph Burchard. 2022. Mining Legal Arguments in Court Decisions. *arXiv preprint*. Peter Henderson*, Mark S. Krass*, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, and Daniel E. Ho. 2022. Pile of law: Learning responsible data filtering from the law and a 256gb opensource legal dataset. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In *International Conference* on Learning Representations. Wonseok Hwang, Dongjun Lee, Kyoungyeon Cho, Hanuhl Lee, and Minjoon Seo. 2022. A multi-task benchmark for korean legal language understanding and judgement prediction. In *Thirty-sixth Conference on Neural Information Processing Systems* Datasets and Benchmarks Track. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. *CoRR*, abs/2001.08361. Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. 2023. Gpt-4 passes the bar exam. Yuta Koreeda and Christopher Manning. 2021. ContractNLI: A dataset for document-level natural language inference for contracts. In *Findings of the* Association for Computational Linguistics: EMNLP 2021, pages 1907–1919, Punta Cana, Dominican Republic. Association for Computational Linguistics. Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander M. Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Vijit Malik, Rishabh Sanjay, Shubham Kumar Nigam, Kripabandhu Ghosh, Shouvik Kumar Guha, Arnab Bhattacharya, and Ashutosh Modi. 2021. ILDC for CJPE: Indian legal documents corpus for court judgment prediction and explanation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4046–4062, Online. Association for Computational Linguistics. Joel Niklaus, Veton Matoshi, Pooja Rani, Andrea Galassi, Matthias Stürmer, and Ilias Chalkidis. 2023. Lextreme: A multi-lingual and multi-task benchmark for the legal domain. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language* Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebas- ´ tian Ruder. 2021. UNKs everywhere: Adapting multilingual language models to new scripts. In Pro- ceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10186– 10203, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Alec Radford and Karthik Narasimhan. 2018. Improving language understanding by generative pretraining. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-totext transformer. *Journal of Machine Learning Research*, 21(140):1–67. Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. AdapterDrop: On the efficiency of adapters in transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7930–7946, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Magnus Sahlgren and Fredrik Carlsson. 2021. The singleton fallacy: Why current critiques of language models miss the point. *Frontiers in Artificial Intelligence*, 4. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. In *Advances in Neural Information Processing Systems*, volume 33, pages 3008–3021. Curran Associates, Inc. Dimitrios Tsarapatsanis and Nikolaos Aletras. 2021. On the ethical limits of natural language processing on legal text. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3590–3599, Online. Association for Computational Linguistics. Don Tuggener, Pius von Däniken, Thomas Peetz, and Mark Cieliebak. 2020. LEDGAR: A large-scale multi-label corpus for text classification of legal provisions in contracts. In *Proceedings of the Twelfth* Language Resources and Evaluation Conference, pages 1235–1241, Marseille, France. European Language Resources Association. Ellen Voorhees and D Tice. 2000. The trec-8 question answering track evaluation. 3. The TREC-8 Question Answering Track Evaluation. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners. *CoRR*, abs/2109.01652. Alexander Wettig, Tianyu Gao, Zexuan Zhong, and Danqi Chen. 2023. Should you mask 15% in masked language modeling? In *Proceedings of* the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2985–3000, Dubrovnik, Croatia. Association for Computational Linguistics. Chaojun Xiao, Xueyu Hu, Zhiyuan Liu, Cunchao Tu, and Maosong Sun. 2021. Lawformer: A pre-trained language model for chinese legal long documents. CoRR, abs/2105.03887. Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, and Daniel E. Ho. 2021. When does pretraining help? assessing self-supervised learning for law and the casehold dataset. In Proceedings of the 18th International Conference on Artificial Intelligence and Law. Association for Computing Machinery. ## A Legallama Discussion The LegalLAMA tasks cannot be resolved by laypersons or even law professionals that are not experts in the specific fields of law in many cases. Another consideration that often goes unspecified is that expertise is legal system-specific (e.g. US law differs widely from EU law), as do the distinctions between the academic and the practical knowledge of law (including potential sub-distinctions between different types of legal practitioners, e.g. litigation experts, contract drafting experts, due dilligence experts, etc.). Lastly, it is also important to note that legal systems can be clustered according to similarities or differences. Specifically: - For task **'ECHR Articles'**, both laypersons and lawyers who are not experts in human rights law (particularly ECHR) would perform at random chance level, since they lack knowledge of the ECHR in an article level. Providing the titles of the articles (Table 6), we can expect improved performance in case of rich context. Generally, the same can be said for the related task **'Legal Terminology (CoE)'**. Legal terminology is very particular to individual legal systems, and predicting the place of legal concepts within the ECHR would require a very high level of specialization. - For task **'Contractual Section Titles (US)'**, structural knowledge of US contracts would be necessary for the performance of this task with a high degree of accuracy. This is due to the fact that contracts often have some structural similarities, but also particular characteristics depending on the type of contract (e.g. employment, sale, credit). Laypersons would perform this task at random chance level. Practicing lawyers with contract drafting expertise would potentially have the highest performance in this task. Non-US lawyers with no contract drafting expertise would perform slightly higher than random chance level. The same considerations apply to the task 'Contract Types (US)'. - For tasks **'Crime Charges (US)'** and **'Criminal Code Sections (Canada)'**, both laypersons and lawyers who are not experts in criminal law (particularly US law and Canadian law) would perform at random chance level, since the legal concepts are very specific (e.g. manslaughter). Improved performance could be seen in cases where the masked terms are specifically defined. - For tasks **'Legal Terminology (US)'** and **'Legal Terminology (EU)'**, the same discussion as above is applicable. Legal terminology is system-specific. There may be similar terms, but in the absence of knowledge relating to how such similarities may be interpreted, a non-expert lawyer would not perform such a task with a very high accuracy level. ## A.1 Ecthr Articles We hereby provide details on the 13 ECtHR articles; | ECHR Article | Description (Title) | |----------------|----------------------------------------------| | Article 2 | Right to life | | Article 3 | Prohibition of torture | | Article 5 | Right to liberty and security | | Article 6 | Right to a fair trial | | Article 7 | No punishment without law | | Article 8 | Right to respect for private and family life | | Article 9 | Freedom of thought, conscience and religion | | Article 10 | Freedom of expression | | Article 11 | Freedom of assembly and association | | Article 13 | Right to an effective remedy | | Article 14 | Prohibition of discrimination | | Article 34 | Individual applications | | Article 35 | Admissibility criteria | Table 6: ECHR Articles ## B Lexlm Pre-Training Details For the newly released, LexLM models (LexLMs), we followed a series of best-practices in language model development literature: (a) We warm-start (initialize) our models from the original RoBERTa checkpoints (base or large) of Liu et al. (2019). Model recycling | RoBERTa-B RoBERTa-L LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | | | |---------------------------------|-----------|-----------------------------------------------------|-----------|-----------|------|------|------|------|------|------|------|------|------|------| | Task | P@1 | MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR | | | | | | | | | | | | | | ECHR Articles | 0.26 | 0.40 | 0.27 | 0.41 | 0.86 | 0.91 | 0.23 | 0.38 | 0.20 | 0.35 | 0.86 | 0.91 | 0.91 | 0.94 | | Contract Sections | 0.20 | 0.40 | 0.53 | 0.66 | 0.77 | 0.85 | 0.24 | 0.40 | 0.51 | 0.65 | 0.78 | 0.86 | 0.78 | 0.86 | | Contract Types | 0.32 | 0.48 | 0.34 | 0.50 | 0.80 | 0.87 | 0.42 | 0.55 | 0.37 | 0.50 | 0.82 | 0.89 | 0.85 | 0.91 | | Crime Charges (US) 0.46 | 0.58 | 0.54 | 0.65 | 0.44 | 0.56 | 0.51 | 0.63 | 0.33 | 0.45 | 0.56 | 0.67 | 0.61 | 0.71 | | | Terminology (US) | 0.41 | 0.51 | 0.49 | 0.58 | 0.52 | 0.63 | 0.58 | 0.69 | 0.37 | 0.49 | 0.64 | 0.74 | 0.70 | 0.79 | | Terminology (EU) | 0.34 | 0.47 | 0.40 | 0.53 | 0.51 | 0.64 | 0.25 | 0.39 | 0.25 | 0.38 | 0.60 | 0.72 | 0.67 | 0.77 | | Terminology (CoE) | 0.43 | 0.54 | 0.51 | 0.60 | 0.69 | 0.78 | 0.36 | 0.49 | 0.30 | 0.41 | 0.78 | 0.86 | 0.86 | 0.91 | | CC Sections | 0.36 | 0.45 | 0.40 | 0.50 | 0.53 | 0.59 | 0.45 | 0.54 | 0.46 | 0.53 | 0.77 | 0.83 | 0.86 | 0.90 | | Average | 0.33 | 0.47 | 0.41 | 0.54 | 0.61 | 0.71 | 0.34 | 0.49 | 0.32 | 0.46 | 0.71 | 0.80 | 0.77 | 0.85 | ![13_image_0.png](13_image_0.png) is a standard process followed by many (Wei et al., 2021; Ouyang et al., 2022) to benefit from starting from an available "well-trained" PLM, instead from scratch (random). (b) We train a new tokenizer of 50k BPEs based on the training subsets of LeXFiles to better cover legal language across all covered legal systems. Although, we reuse the original RoBERTa embeddings for all lexically overlapping tokens (Pfeiffer et al., 2021), i.e., we warm-start word embeddings for tokens that already exist in the original RoBERTa vocabulary, and use random ones for the rest. (c) We continue pre-training our models on the diverse LeXFiles (Section 2) corpus for additional 1M steps with batches of 512 samples. We do initial warm-up steps for the first 5% of the total training steps with a linearly increasing learning rate up to 1e−4, and then follow a cosine decay scheduling, following recent trends. For half of the warm-up phase (2.5%), the Transformer encoder is frozen, and only the embeddings, shared between input and output (MLM), are updated. We also use an increased 20/30% masking rate, where also 100% of the predictions are based on masked tokens, compared to Devlin et al. (2019) 23 for base/large models respectively, based on the findings of Wettig et al. (2023). (d) For both training the tokenizer and the LexLM models, we use a sentence sampler with exponential smoothing of the sub-corpora sampling rate following Conneau et al. (2019) and Raffel et al. (2020), since there is a disparate proportion of tokens across sub-corpora (Table 1) and we aim to preserve per-corpus capacity, i.e., avoid overfitting to the majority (approx. 94% of the total number or tokens) US-origin texts. (e) We consider mixed cased models, similar to all recently developed large PLMs (Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020). We make LexLM models (base/large) publicly available alongside all intermediate checkpoints every 50k training steps on Hugging Face Hub.24 ## C Detailed Legal-Lama Results Per Tasks Table 7 contains the same results as in Table 4 with the addition of Precision@1 scores (P@1). The reason why we decided to only present MRR results in the main paper is that the difference between MRR and P@1 does not change the ranking of the models, and P@1 does not account for minor variations in predictions. For each task, we display detailed results per predicted terms for each model. Table 8 contains results on the 13 article numbers from the ECHR task. Table 9 contains results on the 20 clause types from the Contract Section task. Table 10 contains results on the 16 types of contracts from the Contract Section task. Table 11 contains results on the 11 topics from the Crime Charges (US) task. Each topic contains multiple labels. Table 12 contains results on the 7 topics from the Terminology (US) task. Each topic contains multiple labels. Table 13 24https://huggingface.co/lexlms contains results on the 23 topics from the Terminology (EU) task. Each topic contains multiple labels. Table 14 contains results on the 12 articles from the Terminology (CoE) task. Each article contains multiple labels. Table 15 contains results on the 43 sections from the Criminal Code Sections (Canada) task. ## D Legallama Tasks' Vocabulary In Tables 8, 9, 10, 13, and 15 we present the labels' list for the 'ECHR Articles', 'Contract Sections', 'Contract Types', 'Terminology (EU)' and 'Criminal Code Sections (Canada)' sub-tasks and the label-wise performance. In Tables 16, 17, and 18, we present the labels' list for the 'Terminology (CoE)', 'Crimes Charges (US)', and 'Terminology (US)' sub-tasks grouped in clusters. | RoBERTa-B RoBERTa-L LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | | | |---------------------------------|-----------|------------|-----------|-----------------------------------------|------|------|------|------|------|------|------|------|------|------| | ECHR Article P@1 | MRR | P@1 | MRR | P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR | | | | | | | | | | | | Art. 2 | 0.87 | 0.91 | 0.63 | 0.76 | 0.87 | 0.92 | 0.27 | 0.45 | 0.29 | 0.51 | 0.86 | 0.91 | 0.91 | 0.94 | | Art. 3 | 0.23 | 0.56 | 0.35 | 0.59 | 0.93 | 0.96 | 0.44 | 0.62 | 0.32 | 0.54 | 0.93 | 0.96 | 0.96 | 0.97 | | Art. 5 | 0.35 | 0.56 | 0.39 | 0.58 | 0.83 | 0.89 | 0.32 | 0.44 | 0.20 | 0.41 | 0.79 | 0.86 | 0.88 | 0.92 | | Art. 6 | 0.27 | 0.40 | 0.26 | 0.38 | 0.93 | 0.96 | 0.28 | 0.43 | 0.18 | 0.36 | 0.93 | 0.96 | 0.94 | 0.96 | | Art. 7 | 0.15 | 0.38 | 0.30 | 0.53 | 0.53 | 0.72 | 0.15 | 0.36 | 0.29 | 0.49 | 0.62 | 0.75 | 0.74 | 0.83 | | Art. 8 | 0.16 | 0.28 | 0.18 | 0.36 | 0.89 | 0.93 | 0.17 | 0.32 | 0.13 | 0.30 | 0.89 | 0.94 | 0.91 | 0.95 | | Art. 9 | 0.33 | 0.46 | 0.32 | 0.46 | 0.83 | 0.89 | 0.27 | 0.45 | 0.27 | 0.45 | 0.85 | 0.92 | 0.95 | 0.97 | | Art. 10 | 0.23 | 0.34 | 0.24 | 0.37 | 0.84 | 0.90 | 0.27 | 0.43 | 0.21 | 0.33 | 0.87 | 0.91 | 0.90 | 0.93 | | Art. 11 | 0.25 | 0.33 | 0.27 | 0.36 | 0.94 | 0.96 | 0.30 | 0.44 | 0.23 | 0.34 | 0.91 | 0.94 | 0.97 | 0.99 | | Art. 13 | 0.28 | 0.36 | 0.32 | 0.40 | 0.89 | 0.94 | 0.27 | 0.36 | 0.26 | 0.39 | 0.90 | 0.94 | 0.92 | 0.95 | | Art. 14 | 0.14 | 0.24 | 0.15 | 0.26 | 0.85 | 0.91 | 0.14 | 0.27 | 0.07 | 0.19 | 0.88 | 0.92 | 0.90 | 0.94 | | Art. 34 | 0.09 | 0.20 | 0.08 | 0.19 | 0.90 | 0.93 | 0.08 | 0.17 | 0.06 | 0.15 | 0.90 | 0.94 | 0.93 | 0.96 | | Art. 35 | 0.05 | 0.13 | 0.06 | 0.17 | 0.90 | 0.94 | 0.05 | 0.13 | 0.05 | 0.13 | 0.88 | 0.93 | 0.92 | 0.95 | | Average | 0.26 | 0.40 | 0.27 | 0.41 | 0.86 | 0.91 | 0.23 | 0.38 | 0.20 | 0.35 | 0.86 | 0.91 | 0.91 | 0.94 | Table 8: P@1 and MRR results of the 7 examined PLMs on the 13 article numbers from the ECHR task. | RoBERTa-B RoBERTa-L LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | | | |------------------------------------------------------------------------------------------------------------|-----------|------------|-----------|-----------|-----------------------------------------|------|------|------|------|------|------|------|------|------| | Clause Type | P@1 | MRR | P@1 | MRR | P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR | | | | | | | | | | | Arbitration | 0.44 | 0.65 | 0.97 | 0.98 | 1.00 | 1.00 | 0.83 | 0.91 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | | Assignments | 0.05 | 0.15 | 0.34 | 0.49 | 0.85 | 0.89 | 0.01 | 0.12 | 0.40 | 0.58 | 0.90 | 0.94 | 0.94 | 0.96 | | Confidentiality | 0.14 | 0.34 | 0.73 | 0.84 | 0.99 | 0.99 | 0.14 | 0.34 | 0.67 | 0.77 | 0.99 | 0.99 | 0.99 | 0.99 | | Costs | 0.00 | 0.22 | 0.56 | 0.66 | 0.78 | 0.89 | 0.22 | 0.38 | 0.33 | 0.54 | 0.56 | 0.78 | 0.67 | 0.80 | | Definitions | 1.00 | 1.00 | 0.99 | 0.99 | 0.78 | 0.84 | 0.27 | 0.53 | 0.75 | 0.85 | 0.78 | 0.85 | 0.81 | 0.87 | | Disclosures | 0.56 | 0.70 | 0.37 | 0.50 | 0.80 | 0.89 | 0.02 | 0.16 | 0.01 | 0.23 | 0.65 | 0.80 | 0.59 | 0.77 | | Employment | 0.42 | 0.69 | 1.00 | 1.00 | 0.92 | 0.96 | 0.50 | 0.67 | 0.65 | 0.80 | 0.85 | 0.92 | 1.00 | 1.00 | | Enforceability | 0.00 | 0.17 | 0.26 | 0.37 | 0.42 | 0.64 | 0.00 | 0.06 | 0.25 | 0.42 | 0.33 | 0.54 | 0.16 | 0.39 | | Fees | 0.12 | 0.50 | 0.52 | 0.70 | 0.43 | 0.62 | 0.39 | 0.54 | 0.38 | 0.60 | 0.48 | 0.67 | 0.51 | 0.69 | | Indemnification | 0.41 | 0.59 | 0.70 | 0.80 | 0.92 | 0.96 | 0.10 | 0.34 | 0.98 | 0.98 | 0.96 | 0.98 | 0.97 | 0.98 | | Law | 0.00 | 0.40 | 0.21 | 0.57 | 0.37 | 0.58 | 0.87 | 0.92 | 0.00 | 0.16 | 0.79 | 0.87 | 0.78 | 0.86 | | Participations | 0.04 | 0.20 | 0.45 | 0.66 | 0.82 | 0.90 | 0.52 | 0.67 | 0.38 | 0.59 | 0.80 | 0.87 | 0.82 | 0.89 | | Remedies | 0.05 | 0.25 | 0.16 | 0.34 | 0.92 | 0.96 | 0.11 | 0.37 | 0.52 | 0.71 | 0.98 | 0.99 | 0.99 | 0.99 | | Representations 0.01 | 0.30 | 0.43 | 0.62 | 0.77 | 0.85 | 0.17 | 0.46 | 0.46 | 0.64 | 0.86 | 0.91 | 0.80 | 0.87 | | | Severability | 0.02 | 0.17 | 0.34 | 0.58 | 0.99 | 0.99 | 0.00 | 0.16 | 0.97 | 0.98 | 0.98 | 0.99 | 0.98 | 0.99 | | Solvency | 0.09 | 0.22 | 0.38 | 0.52 | 0.94 | 0.97 | 0.00 | 0.06 | 0.11 | 0.26 | 0.97 | 0.99 | 0.97 | 0.99 | | Taxes | 0.29 | 0.59 | 0.86 | 0.90 | 0.99 | 0.99 | 0.24 | 0.48 | 0.56 | 0.68 | 0.99 | 0.99 | 0.99 | 0.99 | | Termination | 0.31 | 0.56 | 0.60 | 0.77 | 0.75 | 0.85 | 0.22 | 0.45 | 0.84 | 0.91 | 0.80 | 0.89 | 0.76 | 0.86 | | Waivers | 0.12 | 0.22 | 0.59 | 0.67 | 0.79 | 0.87 | 0.00 | 0.07 | 0.57 | 0.74 | 0.94 | 0.95 | 0.84 | 0.89 | | Warranties | 0.00 | 0.14 | 0.05 | 0.26 | 0.08 | 0.39 | 0.14 | 0.33 | 0.27 | 0.53 | 0.05 | 0.36 | 0.10 | 0.41 | | Average | 0.20 | 40.2 | 0.53 | 0.66 | 0.77 | 0.85 | 0.24 | 0.40 | 0.51 | 0.65 | 0.78 | 0.86 | 0.78 | 0.86 | | Table 9: P@1 and MRR results of the 7 examined PLMs on the 20 clause types from the Contract Section task. | | | | | | | | | | | | | | | | RoBERTa-B RoBERTa-L LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | | | |---------------------------------|-----------|------------|-----------|-----------------------------------------|------|------|------|------|------|------|------|------|------|------| | Contract Type P@1 | MRR | P@1 | MRR | P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR | | | | | | | | | | | | Award | 0.62 | 0.67 | 0.62 | 0.70 | 1.00 | 1.00 | 0.54 | 0.60 | 0.62 | 0.70 | 1.00 | 1.00 | 1.00 | 1.00 | | Consulting | 0.03 | 0.17 | 0.10 | 0.23 | 0.94 | 0.97 | 0.08 | 0.29 | 0.07 | 0.17 | 0.81 | 0.87 | 0.90 | 0.93 | | Credit | 0.57 | 0.72 | 0.37 | 0.53 | 0.97 | 0.98 | 0.80 | 0.88 | 0.55 | 0.77 | 0.90 | 0.95 | 0.95 | 0.98 | | Employment | 0.40 | 0.54 | 0.30 | 0.44 | 0.88 | 0.94 | 0.63 | 0.73 | 0.56 | 0.72 | 0.99 | 0.99 | 0.96 | 0.98 | | Indemnity | 0.08 | 0.34 | 0.00 | 0.16 | 0.62 | 0.71 | 0.00 | 0.15 | 0.00 | 0.11 | 1.00 | 1.00 | 1.00 | 1.00 | | Letter | 0.22 | 0.33 | 0.24 | 0.34 | 0.96 | 0.98 | 0.76 | 0.87 | 0.18 | 0.27 | 0.77 | 0.88 | 0.93 | 0.97 | | License | 0.40 | 0.62 | 0.20 | 0.42 | 0.63 | 0.76 | 0.49 | 0.70 | 0.31 | 0.44 | 0.69 | 0.79 | 0.86 | 0.91 | | Loan | 0.51 | 0.67 | 0.72 | 0.84 | 0.90 | 0.93 | 0.72 | 0.83 | 0.95 | 0.97 | 0.90 | 0.94 | 0.87 | 0.93 | | Purchase | 0.70 | 0.83 | 0.59 | 0.68 | 0.70 | 0.83 | 0.52 | 0.68 | 0.93 | 0.96 | 0.89 | 0.92 | 0.93 | 0.94 | | Security | 0.35 | 0.56 | 0.70 | 0.80 | 0.95 | 0.97 | 0.59 | 0.75 | 0.35 | 0.59 | 0.97 | 0.99 | 0.97 | 0.99 | | Separation | 0.12 | 0.26 | 0.16 | 0.28 | 0.66 | 0.77 | 0.15 | 0.38 | 0.07 | 0.21 | 0.73 | 0.86 | 0.71 | 0.82 | | Services | 0.24 | 0.45 | 0.29 | 0.48 | 0.52 | 0.67 | 0.05 | 0.19 | 0.38 | 0.54 | 0.52 | 0.69 | 0.52 | 0.69 | | Settlement | 0.49 | 0.63 | 0.49 | 0.71 | 0.70 | 0.80 | 0.88 | 0.93 | 0.58 | 0.72 | 0.53 | 0.74 | 0.65 | 0.80 | | Supply | 0.09 | 0.24 | 0.35 | 0.51 | 0.61 | 0.73 | 0.09 | 0.19 | 0.04 | 0.14 | 0.70 | 0.77 | 0.65 | 0.74 | | Voting | 0.00 | 0.13 | 0.03 | 0.33 | 1.00 | 1.00 | 0.00 | 0.10 | 0.00 | 0.13 | 0.83 | 0.91 | 0.90 | 0.95 | | Average | 0.32 | 0.48 | 0.34 | 0.50 | 0.80 | 0.87 | 0.42 | 0.55 | 0.37 | 0.50 | 0.82 | 0.89 | 0.85 | 0.91 | Table 10: P@1 and MRR results of the 7 examined PLMs on the 16 types of contracts from the Contract Types task. | RoBERTa-B RoBERTa-L LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | | | |---------------------------------|-----------|-----------------------------------------------------|-----------|-----------|------|------|------|------|------|------|------|------|------|------| | Crime Charges | P@1 | MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR | | | | | | | | | | | | | | Children | 0.69 | 0.78 | 0.73 | 0.82 | 0.47 | 0.61 | 0.67 | 0.78 | 0.45 | 0.60 | 0.73 | 0.82 | 0.77 | 0.85 | | Computer | 0.36 | 0.51 | 0.46 | 0.62 | 0.32 | 0.41 | 0.42 | 0.53 | 0.29 | 0.40 | 0.44 | 0.56 | 0.51 | 0.64 | | Court-related | 0.55 | 0.66 | 0.57 | 0.69 | 0.53 | 0.65 | 0.61 | 0.73 | 0.44 | 0.58 | 0.63 | 0.74 | 0.67 | 0.78 | | Drug-related | 0.40 | 0.53 | 0.48 | 0.60 | 0.31 | 0.44 | 0.35 | 0.50 | 0.26 | 0.38 | 0.42 | 0.55 | 0.46 | 0.60 | | Wrongful Life Taking 0.50 | 0.64 | 0.59 | 0.72 | 0.59 | 0.71 | 0.58 | 0.72 | 0.31 | 0.47 | 0.61 | 0.74 | 0.63 | 0.76 | | | Mens Rea | 0.56 | 0.64 | 0.62 | 0.69 | 0.55 | 0.65 | 0.68 | 0.76 | 0.47 | 0.59 | 0.69 | 0.77 | 0.75 | 0.82 | | Monetary | 0.40 | 0.51 | 0.48 | 0.59 | 0.52 | 0.63 | 0.50 | 0.63 | 0.30 | 0.44 | 0.53 | 0.65 | 0.61 | 0.72 | | Pattern of Behavior | 0.37 | 0.50 | 0.48 | 0.59 | 0.41 | 0.50 | 0.44 | 0.57 | 0.26 | 0.37 | 0.52 | 0.62 | 0.57 | 0.68 | | Property | 0.25 | 0.34 | 0.36 | 0.43 | 0.26 | 0.36 | 0.32 | 0.41 | 0.14 | 0.22 | 0.40 | 0.46 | 0.42 | 0.48 | | Sex-related | 0.55 | 0.65 | 0.59 | 0.70 | 0.47 | 0.59 | 0.54 | 0.66 | 0.36 | 0.48 | 0.60 | 0.70 | 0.66 | 0.75 | | Violent | 0.46 | 0.61 | 0.57 | 0.70 | 0.45 | 0.59 | 0.54 | 0.69 | 0.29 | 0.45 | 0.58 | 0.72 | 0.65 | 0.77 | | Average | 0.46 | 0.58 | 0.54 | 0.65 | 0.44 | 0.56 | 0.51 | 0.63 | 0.33 | 0.45 | 0.56 | 0.67 | 0.61 | 0.71 | Table 11: Results on the 'Crime Charges (US)' LegalLAMA tasks. Results are clustered in Crime Topics. | RoBERTa-B RoBERTa-L LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | | | |---------------------------------|-----------|------------|---------------------------------------------|-----------|------|------|------|------|------|------|------|------|------|------| | Topic | P@1 | MRR P@1 | MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR | | | | | | | | | | | | | Business law | 0.29 | 0.38 | 0.37 | 0.45 | 0.48 | 0.59 | 0.59 | 0.70 | 0.35 | 0.46 | 0.59 | 0.71 | 0.69 | 0.79 | | Criminal law | 0.39 | 0.49 | 0.46 | 0.54 | 0.48 | 0.58 | 0.54 | 0.65 | 0.32 | 0.45 | 0.64 | 0.73 | 0.67 | 0.76 | | Employment law | 0.47 | 0.60 | 0.58 | 0.68 | 0.47 | 0.60 | 0.54 | 0.67 | 0.41 | 0.54 | 0.55 | 0.67 | 0.65 | 0.76 | | Family law | 0.52 | 0.61 | 0.59 | 0.67 | 0.49 | 0.62 | 0.66 | 0.77 | 0.40 | 0.52 | 0.75 | 0.84 | 0.82 | 0.88 | | Immigration | 0.48 | 0.57 | 0.54 | 0.62 | 0.58 | 0.67 | 0.55 | 0.65 | 0.38 | 0.48 | 0.65 | 0.74 | 0.72 | 0.80 | | Landlord-tenant law 0.37 | 0.46 | 0.44 | 0.52 | 0.64 | 0.73 | 0.69 | 0.77 | 0.42 | 0.52 | 0.75 | 0.82 | 0.80 | 0.86 | | | Bankruptcy | 0.37 | 0.49 | 0.43 | 0.55 | 0.48 | 0.59 | 0.49 | 0.62 | 0.34 | 0.47 | 0.53 | 0.66 | 0.59 | 0.71 | | Average | 0.41 | 0.51 | 0.49 | 0.58 | 0.52 | 0.63 | 0.58 | 0.69 | 0.37 | 0.49 | 0.64 | 0.74 | 0.70 | 0.79 | Table 12: Results on the 'Terminology (US)' LegalLAMA task. Results are clustered in Law Topics. | RoBERTa-B RoBERTa-L LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | | | |--------------------------------------------|---------------------------------------------------------|------------|-----------|-----------|------|------|------|------|------|------|------|------|------|------| | Topic | P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR | | | | | | | | | | | | | | | Accession | 0.32 | 0.45 | 0.57 | 0.68 | 0.93 | 0.95 | 0.46 | 0.55 | 0.87 | 0.90 | 0.80 | 0.88 | 0.80 | 0.89 | | Administrative cooperation | 0.15 | 0.33 | 0.23 | 0.40 | 0.53 | 0.69 | 0.12 | 0.27 | 0.19 | 0.32 | 0.65 | 0.79 | 0.82 | 0.89 | | Approximation of laws | 0.46 | 0.54 | 0.54 | 0.58 | 0.36 | 0.47 | 0.18 | 0.32 | 0.08 | 0.23 | 0.67 | 0.73 | 0.72 | 0.79 | | Area of freedom, security and justice 0.14 | 0.27 | 0.13 | 0.28 | 0.11 | 0.24 | 0.14 | 0.28 | 0.11 | 0.25 | 0.13 | 0.27 | 0.19 | 0.34 | | | Citizenship of the union | 0.40 | 0.60 | 0.47 | 0.64 | 0.26 | 0.45 | 0.12 | 0.30 | 0.31 | 0.47 | 0.50 | 0.70 | 0.53 | 0.72 | | Competition | 0.50 | 0.68 | 0.75 | 0.80 | 0.84 | 0.90 | 0.52 | 0.62 | 0.52 | 0.62 | 0.88 | 0.89 | 0.88 | 0.89 | | Consumer protection | 0.40 | 0.57 | 0.50 | 0.62 | 0.45 | 0.58 | 0.28 | 0.42 | 0.20 | 0.37 | 0.25 | 0.42 | 0.40 | 0.54 | | Data protection | 0.47 | 0.63 | 0.61 | 0.73 | 0.64 | 0.75 | 0.17 | 0.28 | 0.20 | 0.35 | 0.66 | 0.76 | 0.73 | 0.82 | | External relations | 0.30 | 0.45 | 0.40 | 0.61 | 0.38 | 0.55 | 0.19 | 0.29 | 0.09 | 0.22 | 0.40 | 0.61 | 0.55 | 0.68 | | Free movement of capital | 0.42 | 0.45 | 0.42 | 0.45 | 0.18 | 0.38 | 0.11 | 0.26 | 0.08 | 0.22 | 0.33 | 0.53 | 0.33 | 0.59 | | Free movement of goods | 0.25 | 0.37 | 0.25 | 0.35 | 0.32 | 0.48 | 0.21 | 0.34 | 0.18 | 0.31 | 0.62 | 0.74 | 0.38 | 0.58 | | Freedom of establishment | 0.22 | 0.34 | 0.42 | 0.50 | 0.64 | 0.75 | 0.33 | 0.43 | 0.29 | 0.40 | 0.81 | 0.88 | 0.94 | 0.95 | | Freedom of movement for workers | 0.22 | 0.34 | 0.35 | 0.41 | 0.19 | 0.35 | 0.12 | 0.23 | 0.11 | 0.22 | 0.43 | 0.56 | 0.38 | 0.55 | | Freedom to provide services | 0.07 | 0.20 | 0.04 | 0.23 | 0.23 | 0.40 | 0.10 | 0.24 | 0.15 | 0.29 | 0.39 | 0.58 | 0.54 | 0.67 | | Fundamental rights | 0.60 | 0.73 | 0.69 | 0.81 | 0.89 | 0.93 | 0.26 | 0.37 | 0.22 | 0.36 | 0.84 | 0.90 | 0.83 | 0.89 | | Internal market | 0.00 | 0.24 | 0.20 | 0.40 | 0.94 | 0.96 | 0.26 | 0.36 | 0.40 | 0.55 | 0.40 | 0.62 | 0.70 | 0.77 | | Non-contractual liability | 0.09 | 0.19 | 0.09 | 0.20 | 0.19 | 0.35 | 0.19 | 0.40 | 0.10 | 0.23 | 0.30 | 0.49 | 0.55 | 0.70 | | Non-discrimination | 0.00 | 0.24 | 0.00 | 0.25 | 0.50 | 0.68 | 0.29 | 0.48 | 0.10 | 0.26 | 0.67 | 0.83 | 0.33 | 0.67 | | Privileges and immunities | 0.17 | 0.27 | 0.12 | 0.24 | 0.63 | 0.77 | 0.25 | 0.36 | 0.20 | 0.35 | 0.81 | 0.88 | 0.81 | 0.87 | | Procedural provisions | 0.53 | 0.66 | 0.63 | 0.75 | 0.68 | 0.80 | 0.61 | 0.73 | 0.42 | 0.56 | 0.71 | 0.82 | 0.75 | 0.84 | | Public health | 0.62 | 0.80 | 0.50 | 0.72 | 0.68 | 0.79 | 0.38 | 0.58 | 0.28 | 0.48 | 0.54 | 0.75 | 0.92 | 0.96 | | Safeguard measures | 0.50 | 0.52 | 0.50 | 0.58 | 0.64 | 0.76 | 0.31 | 0.39 | 0.42 | 0.52 | 0.75 | 0.88 | 1.00 | 1.00 | | Social policy | 0.75 | 0.78 | 0.75 | 0.81 | 0.42 | 0.54 | 0.22 | 0.37 | 0.15 | 0.32 | 0.75 | 0.83 | 1.00 | 1.00 | | Average | 0.34 | 0.47 | 0.40 | 0.53 | 0.51 | 0.64 | 0.25 | 0.39 | 0.25 | 0.38 | 0.60 | 0.72 | 0.67 | 0.77 | | RoBERTa-B RoBERTa-L LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | | | |---------------------------------|-----------|------------|-----------|-----------|-----------------------------------------|------|------|------|------|------|------|------|------|------| | Article | P@1 | MRR | P@1 | MRR | P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR | | | | | | | | | | | Art. 2 | 0.46 | 0.57 | 0.52 | 0.63 | 0.72 | 0.82 | 0.37 | 0.51 | 0.36 | 0.47 | 0.80 | 0.87 | 0.90 | 0.94 | | Art. 3 | 0.51 | 0.61 | 0.58 | 0.69 | 0.80 | 0.87 | 0.40 | 0.54 | 0.34 | 0.45 | 0.83 | 0.90 | 0.89 | 0.93 | | Art. 5 | 0.39 | 0.51 | 0.46 | 0.57 | 0.56 | 0.69 | 0.36 | 0.48 | 0.25 | 0.38 | 0.63 | 0.75 | 0.74 | 0.83 | | Art. 6 | 0.42 | 0.55 | 0.49 | 0.62 | 0.68 | 0.77 | 0.43 | 0.55 | 0.36 | 0.49 | 0.77 | 0.85 | 0.82 | 0.89 | | Art. 7 | 0.71 | 0.78 | 0.82 | 0.86 | 0.89 | 0.93 | 0.36 | 0.59 | 0.44 | 0.52 | 0.88 | 0.93 | 0.91 | 0.94 | | Art. 8 | 0.35 | 0.47 | 0.45 | 0.56 | 0.62 | 0.71 | 0.29 | 0.41 | 0.26 | 0.36 | 0.73 | 0.82 | 0.84 | 0.90 | | Art. 9 | 0.49 | 0.57 | 0.56 | 0.64 | 0.67 | 0.76 | 0.43 | 0.53 | 0.33 | 0.44 | 0.79 | 0.86 | 0.85 | 0.91 | | Art. 10 | 0.30 | 0.43 | 0.41 | 0.52 | 0.57 | 0.69 | 0.25 | 0.37 | 0.20 | 0.31 | 0.73 | 0.82 | 0.84 | 0.90 | | Art. 11 | 0.32 | 0.44 | 0.42 | 0.52 | 0.66 | 0.75 | 0.29 | 0.40 | 0.23 | 0.34 | 0.74 | 0.84 | 0.87 | 0.92 | | Art. 13 | 0.44 | 0.61 | 0.55 | 0.69 | 0.78 | 0.86 | 0.38 | 0.56 | 0.27 | 0.45 | 0.86 | 0.90 | 0.91 | 0.94 | | Art. 14 | 0.72 | 0.80 | 0.79 | 0.85 | 0.80 | 0.86 | 0.69 | 0.78 | 0.52 | 0.63 | 0.84 | 0.89 | 0.91 | 0.94 | | Art. 35 | 0.14 | 0.21 | 0.18 | 0.24 | 0.61 | 0.71 | 0.14 | 0.26 | 0.09 | 0.18 | 0.78 | 0.85 | 0.89 | 0.93 | | Average | 0.43 | 0.54 | 0.51 | 0.61 | 0.69 | 0.79 | 0.36 | 0.49 | 0.30 | 0.41 | 0.78 | 0.86 | 0.86 | 0.91 | | RoBERTa-B RoBERTa-L LegalBERT | CL-BERT | PoL-BERT | LexLM-B | LexLM-L | | | | | | | | | | | |---------------------------------|-----------|------------|-----------|-----------|-----------------------------------------|------|------|------|------|------|------|------|------|------| | Section | P@1 | MRR | P@1 | MRR | P@1 MRR P@1 MRR P@1 MRR P@1 MRR P@1 MRR | | | | | | | | | | | 16 | 0.00 | 0.08 | 0.00 | 0.04 | 0.00 | 0.04 | 0.00 | 0.10 | 0.00 | 0.08 | 0.50 | 0.62 | 1.00 | 1.00 | | 21 | 0.23 | 0.41 | 0.37 | 0.47 | 0.46 | 0.56 | 0.43 | 0.56 | 0.44 | 0.55 | 0.94 | 0.96 | 0.97 | 0.99 | | 85 | 0.46 | 0.51 | 0.31 | 0.42 | 0.30 | 0.41 | 0.29 | 0.37 | 0.30 | 0.39 | 0.40 | 0.52 | 0.57 | 0.69 | | 86 | 0.38 | 0.53 | 0.38 | 0.50 | 0.50 | 0.62 | 0.50 | 0.54 | 0.50 | 0.53 | 0.50 | 0.71 | 0.50 | 0.66 | | 87 | 0.75 | 0.78 | 0.50 | 0.62 | 0.75 | 0.79 | 0.50 | 0.65 | 0.75 | 0.82 | 0.75 | 0.83 | 0.75 | 0.80 | | 88.23 | 0.25 | 0.34 | 0.33 | 0.38 | 0.33 | 0.38 | 0.33 | 0.39 | 0.33 | 0.42 | 0.33 | 0.40 | 0.33 | 0.38 | | 95 | 0.48 | 0.54 | 0.52 | 0.56 | 0.52 | 0.55 | 0.46 | 0.52 | 0.45 | 0.49 | 0.79 | 0.84 | 0.80 | 0.85 | | 122 | 0.17 | 0.19 | 0.11 | 0.15 | 0.17 | 0.18 | 0.12 | 0.15 | 0.12 | 0.16 | 0.50 | 0.67 | 0.83 | 0.86 | | 145 | 0.25 | 0.38 | 0.25 | 0.40 | 0.44 | 0.51 | 0.38 | 0.50 | 0.50 | 0.55 | 0.62 | 0.71 | 0.88 | 0.90 | | 151 | 0.59 | 0.61 | 0.89 | 0.91 | 0.62 | 0.64 | 0.04 | 0.34 | 0.02 | 0.32 | 0.91 | 0.92 | 0.91 | 0.92 | | 152 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.50 | 0.75 | 1.00 | 1.00 | 1.00 | 1.00 | | 163 | 0.50 | 0.52 | 0.50 | 0.51 | 0.50 | 0.51 | 0.50 | 0.51 | 0.50 | 0.52 | 0.50 | 0.75 | 1.00 | 1.00 | | 163.1 | 0.33 | 0.40 | 0.44 | 0.57 | 0.67 | 0.68 | 0.33 | 0.51 | 0.33 | 0.46 | 1.00 | 1.00 | 1.00 | 1.00 | | 231 | 0.25 | 0.29 | 0.38 | 0.54 | 0.62 | 0.65 | 0.44 | 0.51 | 0.56 | 0.59 | 0.94 | 0.94 | 1.00 | 1.00 | | 249 | 0.40 | 0.45 | 0.33 | 0.41 | 0.60 | 0.68 | 0.66 | 0.74 | 0.53 | 0.66 | 0.87 | 0.91 | 0.88 | 0.90 | | 254 | 0.50 | 0.61 | 0.65 | 0.73 | 0.50 | 0.58 | 0.40 | 0.52 | 0.50 | 0.59 | 0.75 | 0.85 | 0.85 | 0.92 | | 264 | 0.67 | 0.67 | 0.50 | 0.56 | 0.50 | 0.59 | 0.42 | 0.51 | 0.25 | 0.38 | 0.92 | 0.96 | 1.00 | 1.00 | | 267.12 | 0.33 | 0.53 | 0.33 | 0.44 | 0.67 | 0.77 | 0.75 | 0.84 | 0.75 | 0.80 | 1.00 | 1.00 | 1.00 | 1.00 | | 267.5 | 0.67 | 0.78 | 0.75 | 0.85 | 0.83 | 0.90 | 0.67 | 0.76 | 0.50 | 0.62 | 1.00 | 1.00 | 1.00 | 1.00 | | 267.8 | 0.47 | 0.54 | 0.56 | 0.62 | 0.66 | 0.69 | 0.56 | 0.63 | 0.60 | 0.66 | 0.83 | 0.87 | 0.83 | 0.88 | | 268 | 0.45 | 0.54 | 0.25 | 0.41 | 0.35 | 0.44 | 0.35 | 0.44 | 0.40 | 0.49 | 0.50 | 0.65 | 0.75 | 0.86 | | 279 | 0.83 | 0.86 | 0.92 | 0.92 | 0.75 | 0.81 | 0.83 | 0.88 | 0.83 | 0.86 | 1.00 | 1.00 | 0.92 | 0.96 | | 380 | 0.24 | 0.35 | 0.24 | 0.36 | 0.39 | 0.47 | 0.47 | 0.53 | 0.35 | 0.48 | 0.78 | 0.80 | 0.71 | 0.73 | | 462.37 | 0.40 | 0.49 | 0.40 | 0.52 | 0.65 | 0.69 | 0.67 | 0.70 | 0.65 | 0.69 | 0.78 | 0.80 | 0.81 | 0.87 | | 465 | 0.50 | 0.63 | 0.75 | 0.76 | 0.50 | 0.63 | 0.38 | 0.54 | 0.75 | 0.75 | 1.00 | 1.00 | 1.00 | 1.00 | | 467.1 | 0.29 | 0.41 | 0.57 | 0.75 | 0.67 | 0.76 | 0.33 | 0.64 | 0.58 | 0.70 | 1.00 | 1.00 | 1.00 | 1.00 | | 495 | 0.32 | 0.40 | 0.32 | 0.46 | 0.60 | 0.66 | 0.56 | 0.61 | 0.60 | 0.65 | 0.77 | 0.87 | 0.87 | 0.92 | | 530 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.02 | 0.00 | 0.03 | 1.00 | 1.00 | 1.00 | 1.00 | | 591 | 0.13 | 0.31 | 0.25 | 0.36 | 0.61 | 0.71 | 0.52 | 0.58 | 0.51 | 0.60 | 0.87 | 0.93 | 0.87 | 0.92 | | 601 | 0.58 | 0.62 | 0.58 | 0.64 | 0.86 | 0.89 | 0.79 | 0.81 | 0.29 | 0.49 | 0.86 | 0.93 | 0.86 | 0.93 | | 650 | 0.64 | 0.70 | 0.72 | 0.77 | 0.78 | 0.80 | 0.69 | 0.74 | 0.75 | 0.76 | 0.97 | 0.99 | 0.97 | 0.99 | | 672.73 | 0.25 | 0.30 | 0.25 | 0.32 | 0.33 | 0.34 | 0.33 | 0.34 | 0.33 | 0.38 | 0.67 | 0.71 | 1.00 | 1.00 | | 672.78 | 0.27 | 0.34 | 0.34 | 0.43 | 0.42 | 0.46 | 0.50 | 0.55 | 0.42 | 0.48 | 0.83 | 0.92 | 1.00 | 1.00 | | 676 | 0.14 | 0.29 | 0.14 | 0.27 | 0.50 | 0.62 | 0.57 | 0.66 | 0.36 | 0.55 | 0.93 | 0.94 | 1.00 | 1.00 | | 683 | 0.11 | 0.26 | 0.18 | 0.30 | 0.48 | 0.52 | 0.52 | 0.57 | 0.48 | 0.54 | 0.81 | 0.88 | 0.90 | 0.94 | | 684 | 0.35 | 0.43 | 0.60 | 0.72 | 0.25 | 0.51 | 0.25 | 0.34 | 0.25 | 0.27 | 1.00 | 1.00 | 1.00 | 1.00 | | 686 | 0.21 | 0.28 | 0.28 | 0.36 | 0.57 | 0.65 | 0.65 | 0.68 | 0.43 | 0.55 | 0.68 | 0.79 | 0.94 | 0.96 | | 687 | 0.20 | 0.30 | 0.30 | 0.49 | 0.62 | 0.64 | 0.38 | 0.51 | 0.50 | 0.53 | 0.88 | 0.94 | 0.75 | 0.83 | | 715.1 | 0.12 | 0.25 | 0.12 | 0.22 | 0.33 | 0.50 | 0.33 | 0.45 | 0.50 | 0.56 | 1.00 | 1.00 | 1.00 | 1.00 | | 718.1 | 0.17 | 0.26 | 0.08 | 0.24 | 0.67 | 0.67 | 0.67 | 0.67 | 0.33 | 0.48 | 1.00 | 1.00 | 1.00 | 1.00 | | 718.2 | 0.20 | 0.30 | 0.17 | 0.31 | 0.52 | 0.59 | 0.52 | 0.57 | 0.59 | 0.64 | 0.76 | 0.85 | 0.87 | 0.92 | | 784 | 0.20 | 0.29 | 0.30 | 0.49 | 0.50 | 0.52 | 0.38 | 0.46 | 0.50 | 0.52 | 1.00 | 1.00 | 1.00 | 1.00 | | 839 | 0.17 | 0.25 | 0.07 | 0.20 | 0.33 | 0.37 | 0.33 | 0.36 | 0.33 | 0.35 | 1.00 | 1.00 | 1.00 | 1.00 | | Average | 0.36 | 0.45 | 0.40 | 0.50 | 0.53 | 0.59 | 0.45 | 0.54 | 0.46 | 0.53 | 0.77 | 0.83 | 0.86 | 0.90 | | ECHR Article | Masked Terms | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Art. 2 | 'accessibility' , 'effective investigation' , 'expulsion' , 'extradition' , 'foreseeability' , 'positive obligations' , 'prescribed by law' , 'right to life' , 'safeguards against abuse' , 'use of force' | | Art. 3 | 'effective investigation' , 'expulsion' , 'extradition' , 'inhuman punishment' , 'inhuman treatment' , 'positive obligations' , 'prohibition of torture' , 'torture' | | Art. 5 | 'competent court' , 'deprivation of liberty' , 'drug addicts' , 'educational supervision' , 'expulsion' , 'extradition' , 'guarantees to appear for trial' , 'lawful arrest or detention' , 'lawful order of a court' , 'length of pre-trial detention' , 'minors' , 'order release' , 'persons of unsound mind' , 'procedure prescribed by law' , 'reasonable suspicion' , 'release pending trial' , 'review by a court' , 'right to liberty and security' , 'security of person' , 'speediness of review' , 'take proceedings' , 'trial within a reasonable time' | | Art. 6 | 'charged with a criminal offence' , 'disciplinary proceedings' , 'enforcement proceedings' , 'equality of arms' , 'examination of witnesses' , 'exclusion of public' , 'expulsion' , 'extradition' , 'fair hearing' , 'free legal assistance' , 'impartial tribunal' , 'independent tribunal' , 'insufficient means' , 'legal aid' , 'national security' , 'necessary in a democratic society' , 'oral hearing' , 'presumption of innocence' , 'protection of public order' , 'proved guilty according to law' , 'public hearing' , 'public judgment' , 'reasonable time' , 'right to a fair trial' , 'rights of defence' , 'same conditions' , 'tribunal established by law' | | Art. 7 | 'criminal offence' , 'heavier penalty' , 'retroactivity' | | Art. 8 | 'accessibility' , 'economic well-being of the country' , 'expulsion' , 'extradition' , 'foreseeability' , 'interference' , 'national security' , 'necessary in a democratic society' , 'positive obligations' , 'prevention of crime' , 'prevention of disorder' , 'protection of health' , 'protection of morals' , 'protection of the rights and freedoms of others' , 'public authority' , 'public safety' , 'respect for correspondence' , 'respect for family life' , 'respect for home' , 'respect for private life' , 'right to respect for private and family life' , 'safeguards against abuse' | | Art. 9 | 'foreseeability' , 'freedom of conscience' , 'freedom of religion' , 'freedom of thought' , 'interference' , 'necessary in a democratic society' , 'observance' , 'positive obligations' , 'practice' , 'prescribed by law' , 'protection of health' , 'protection of public order' , 'protection of the rights and freedoms of others' , 'public safety' , 'safeguards against abuse' , 'teaching' , 'worship' | | Art. 10 | 'duties and responsibilities' , 'foreseeability' , 'freedom of expression' , 'freedom to hold opinions' , 'freedom to impart information' , 'freedom to receive information' , 'interference' , 'national security' , 'necessary in a democratic society' , 'positive obligations' , 'prescribed by law' , 'prevention of crime' , 'prevention of disorder' , 'protection of health' , 'protection of morals' , 'protection of the reputation of others' , 'protection of the rights of others' , 'public safety' , 'safeguards against abuse' , 'territorial integrity' | | Art. 11 | 'accessibility' , 'foreseeability' , 'form and join trade unions' , 'freedom of assembly and association' , 'freedom of association' , 'freedom of peaceful assembly' , 'interference' , 'national security' , 'necessary in a democratic society' , 'positive obligations' , 'prescribed by law' , 'prevention of crime' , 'prevention of disorder' , 'protection of health' , 'public safety' | | Art. 13 | 'effective remedy' , 'national authority' , 'right to an effective remedy' | | Art. 14 | 'discrimination' , 'language' , 'national minority' , 'national origin' , 'objective and reasonable justification' , 'prohibition of discrimination' , 'property' , 'race' , 'religion' , 'sex' , 'social origin' | | Art. 35 | 'continuing situation' , 'effective domestic remedy' , 'exhaustion of domestic remedies' , 'final domestic decision' , 'manifestly ill-founded' , 'no significant disadvantage' , 'relevant new information' | | Art. P1-1 | 'accessibility' , 'deprivation of property' , 'foreseeability' , 'general interest' , 'general principles of international law' , 'interference' , 'peaceful enjoyment of possessions' , 'positive obligations' , 'possessions' , 'prescribed by law' , 'protection of property' , 'secure the payment of taxes' Table 16: Masked Terms used in the 'Terminology (CoE)' LegalLAMA task. | | Crime Area | Masked Terms | |---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Children | 'child abandonment' , 'child abuse' | | Computer | 'computer crime' , 'cyberbullying' , 'identity theft' | | Court-related | 'criminal contempt of court' , 'perjury' , 'probation violation' | | Drug-related | 'drug distribution' , 'drug manufacturing' , 'drug possession' , 'drug trafficking' , 'medical marijuana' , 'minor in possession' , 'public intoxication' | | Life Taking | 'homicide' , 'manslaughter' , 'murder' | | Mens Rea | 'accessory' , 'aiding and abetting' , 'attempt' , 'conspiracy' , 'hate crime' | | Monetary | 'bribery' , 'embezzlement' , 'extortion' , 'forgery' , 'insurance fraud' , 'money laundering' , 'pyramid schemes' , 'racketeering' , 'securities fraud' , 'shoplifting' , 'tax evasion' , 'telemarketing fraud' , 'theft' , 'white collar crime' , 'wire fraud' | | Behavior | 'disorderly conduct' , 'disturbing the peace' , 'harassment' , 'stalking' | | Property | 'arson' , 'vandalism' | | Sex-related | 'child pornography' , 'indecent exposure' , 'prostitution' , 'rape' , 'sexual assault' , 'solicitation' , 'statutory rape' | | Violence | 'aggravated assault' , 'battery' , 'burglary' , 'domestic violence' , 'kidnapping' , 'robbery' | Table 17: Masked Terms used in the 'Crime Charges (US)' LegalLAMA task grouped by crime areas. | Legal Topic | Masked Terms | | |-----------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------| | Business Law | 'adhesion contract' , 'implied warranty' , 'limited liability' , 'parol evidence' , 'quantum meruit' , 'reliance damages' , 'self-dealing' , 'severability clause' , 'specific performance' , 'statute of frauds' , 'substantial performance' , 'tender offer' , 'third-party beneficiary' , 'unconscionability' | | | Criminal | Law | 'accessory before the fact' , 'accomplice' , 'aggravated assault' , 'allocution' , 'arson' , 'defense of others' , | | and Procedure | 'inchoate' , 'merger doctrine' , 'mitigating circumstances' , 'money laundering' , 'stop and frisk' | | | Employment | 'bargaining unit' , 'boycott' , 'casual labor' , 'industrial safety' , 'minimum wage' , 'workplace safety' , 'wrongful | | | Law | termination' | | | Family Law | 'consent divorce' , 'emancipation of minors' , 'marital privilege' , 'marital property' , 'marital settlement agreement' , 'separate property' , 'separation agreement' , 'shared custody' , 'sole custody' , 'spousal privilege' , 'spousal support' , 'visitation' , 'wage attachment' | | | Immigration | 'alienage' , 'asylum seeker' , 'asylum' , 'childhood arrivals' , 'citizenship' , 'deferred action' , 'deportation' , 'geneva conventions' , 'naturalization' , 'nonresident' , 'refugee' , 'resettlement' , 'visa' | | | LandlordTenant Law | 'abandonment' , 'commercial reasonability' , 'constructive eviction' , 'eviction' , 'habitability' , 'privity' , 'quiet enjoyment' , 'reasonableness' , 'self-help eviction' , 'sole discretion' , 'tenancy at sufferance' , 'tenancy at will' | | | Money | And | | | Financial Problems | 'bankruptcy discharge' , 'bond' , 'consumer credit' , 'kiting' , 'malfeasance' , 'mortgage' , 'nonrecourse' , 'ponzi scheme' , 'securities fraud' , 'self-dealing' , 'senior lien' , 'stock dividend' , 'straw man' , 'swindle' , 'tontine' , 'variable annuity' | | | Table 18: Masked Terms used in the 'Terminology (US)' LegalLAMA task grouped by legal topics. | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Left blank. ✓ A2. Did you discuss any potential risks of your work? Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 2-3 ✓ B1. Did you cite the creators of artifacts you used? Section 2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
liu-etal-2023-revisiting-commonsense
Revisiting Commonsense Reasoning in Machine Translation: Training, Evaluation and Challenge
https://aclanthology.org/2023.acl-long.866
The ability of commonsense reasoning (CR) decides whether a neural machine translation (NMT) model can move beyond pattern recognition. Despite the rapid advancement of NMT and the use of pretraining to enhance NMT models, research on CR in NMT is still in its infancy, leaving much to be explored in terms of effectively training NMT models with high CR abilities and devising accurate automatic evaluation metrics. This paper presents a comprehensive study aimed at expanding the understanding of CR in NMT.For the training, we confirm the effectiveness of incorporating pretrained knowledge into NMT models and subsequently utilizing these models as robust testbeds for investigating CR in NMT. For the evaluation, we propose a novel entity-aware evaluation method that takes into account both the NMT candidate and important entities in the candidate, which is more aligned with human judgement. Based on the strong testbed and evaluation methods, we identify challenges in training NMT models with high CR abilities and suggest directions for further unlabeled data utilization and model design. We hope that our methods and findings will contribute to advancing the research of CR in NMT. Source data, code and scripts are freely available at \url{https://github.com/YutongWang1216/CR-NMT}.
# Revisiting Commonsense Reasoning In Machine Translation: Training, Evaluation And Challenge Xuebo Liu1∗ Yutong Wang1 Derek F. Wong2 **Runzhe Zhan**2 Liangxuan Yu2 **Min Zhang**1 1Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China {liuxuebo,zhangmin2021}@hit.edu.cn, wangyutong200012@gmail.com 2NLP2CT Lab, Department of Computer and Information Science, University of Macau derekfw@um.edu.mo, nlp2ct.{runzhe,liangxuan}@gmail.com ## Abstract The ability of commonsense reasoning (CR) decides whether a neural machine translation (NMT) model can move beyond pattern recognition. Despite the rapid advancement of NMT and the use of pretraining to enhance NMT models, research on CR in NMT is still in its infancy, leaving much to be explored in terms of effectively training NMT models with high CR abilities and devising accurate automatic evaluation metrics. This paper presents a comprehensive study aimed at expanding the understanding of CR in NMT. For the training, we confirm the effectiveness of incorporating pretrained knowledge into NMT models and subsequently utilizing these models as robust testbeds for investigating CR in NMT. For the evaluation, we propose a novel entity-aware evaluation method that takes into account both the NMT candidate and important entities in the candidate, which is more aligned with human judgement. Based on the strong testbed and evaluation methods, we identify challenges in training NMT models with high CR abilities and suggest directions for further unlabeled data utilization and model design. We hope that our methods and findings will contribute to advancing the research of CR in NMT. Source data, code and scripts are freely available at https: //github.com/YutongWang1216/CR-NMT. ## 1 Introduction Commonsense reasoning (CR; Davis and Marcus, 2015) is the ability to understand and navigate the world using basic knowledge and understanding that is shared by most people. In the context of neural machine translation (NMT; Bahdanau et al., 2015; Vaswani et al., 2017; Liu et al., 2019, 2020a), CR is important because it allows the model to move beyond simply recognizing patterns in the data and instead make more informed, nuanced translations. Recent studies have witnessed the ∗Corresponding author Contextless Syntactic Ambiguity ![0_image_0.png](0_image_0.png) REF The hunter is hunting . Contextual Syntactic Ambiguity Lexical Ambiguity SRC 学校/school 规定/mandates 学生/students 上学/go to school 要/must 背/carry 书包/school bag。 REF The school requires students to carry school bags. NMT Schools require students to recite school bags. PT-NMT The school requires students to carry their schoolbags at school. Table 1: Translations of vanilla NMT and pretrainingbased NMT (PT-NMT). Highlights denote the parts requiring commonsense knowledge for accurate translation. PT-NMT performs well in addressing both syntactic and lexical ambiguities. great success of adapting self-supervised pretraining to downstream language understanding and generation tasks (Devlin et al., 2019; Song et al., 2019; Floridi and Chiriatti, 2020; Ouyang et al., 2022), and one of the major ingredients is the abundant commonsense knowledge embedded in the pretrained models (Zhou et al., 2020; Tamborrino et al., 2020). As recent studies have been studying pretraining-based neural machine translation (PT-NMT) for model improvement (Conneau et al., 2020; Liu et al., 2020b), a thorough understanding of its CR ability helps to better explain the improvement and beyond. Despite some attempts to understand CR ability of NMT from various perspectives (e.g., word 15536 sense disambiguation (Rios Gonzales et al., 2017) and pronoun resolution (Davis, 2016)), only a few studies systematically examine the ability of NMT (He et al., 2020). Furthermore, the evaluation of CR ability of NMT is under-investigated. Current evaluation methods for CR in NMT models rely on contrastive evaluation techniques (Sennrich, 2017), which do not take into account the NMT candidates, resulting in suboptimal evaluation performance. These make it difficult to conduct research on CR in NMT. Despite the difficulties, this paper aims to provide a systematic study of CR in NMT. Training (§3) We investigate the potential benefits of utilizing pretrained knowledge for NMT training. We evaluate CR accuracy of PT-NMT on a CR testset using both human and automatic evaluation, and find that pretrained knowledge can indeed assist the downstream NMT model in making commonsensible predictions. Examples of the translation are provided in Table 1. Evaluation (§4&5) Based on the strong testbed PT-NMT, we introduce how to conduct a more rigorous evaluation of CR in NMT, which is the prerequisite for conducting related research such as improving CR ability of NMT. We discuss the limitation of the existing evaluation method (He et al., 2020), and reveal the necessity of considering NMT candidates in evaluating CR ability of NMT. Furthermore, we propose a novel entity-aware automatic evaluation method, which takes into account the importance of certain words in the translation candidates that require commonsense knowledge to be translated accurately. Challenge (§6) Our findings indicate that the arbitrary integration of extra knowledge, such as forward-translation (FT; Zhang and Zong, 2016) and back-translation (BT; Sennrich et al., 2016) does not always lead to an improvement in CR ability of NMT models and may even introduce negative effects. To address this challenge, we suggest potential research directions, including the enhancement of NMT encoder and the better utilization of target monolingual data. We also conduct a preliminary experiment to validate this hypothesis and hope that our methods and findings will provide new insights into the field of CR in NMT and inspire further advancements in this area. Our **main contributions** are as follows: - We demonstrate the effectiveness of incorporating pretrained knowledge into NMT models, and establish these models as robust testbeds for investigating CR in NMT. - We reveal the limitation of the existing evaluation method for CR in NMT, and propose the use of candidate-aware metrics as a more effective and reliable alternative. - We propose a novel entity-aware evaluation method, which is more aligned with human judgment and provides a more reliable evaluation of CR ability of NMT models. - We identify challenges in improving CR ability of NMT, and suggest directions for further research, e.g., utilizing target monolingual data and enhancing the encoder module. ## 2 Background Commonsense Reasoning Testset in NMT We first provide a brief overview of the CR testset investigated in He et al. (2020) 1. Each instance of the testset is a triple (x, yr, yc), where x stands for a source sentence, and two English references (i.e., a right one y rand a contrastive one y c) are created for each source sentence with the intention to demonstrate how different interpretations of an ambiguity point would affect the translation results, and therefore forming an instance as follows: x 学校 规定 学生 上学 要 背 书包。 y r The school requires students to carry school bags. y c The school requires students to recite school bags. where "recite" forms the ambiguous translation. Three subsets of source sentences are created according to three main categories: contextless syntactic ambiguity (CL-SA) with 450 instances, contextual syntactic ambiguity (CT-SA) with 350 instances, and lexical ambiguity (LA) with 400 instances. For more details, refer to Appendix A.1. Automatic Evaluation of CR in NMT The vanilla evaluation method proposed by Sennrich (2017); He et al. (2020) evaluates CR accuracy of NMT by comparing the prediction probability of a right reference y rto that of its corresponding contrastive reference y c. If an NMT model assigns a higher prediction score to the right reference than to the contrastive one, the model is considered to 1https://github.com/tjunlp-lab/CommonMT | Type | Prob | Human | | | |--------|--------|----------|--------|----------| | NMT | PT-NMT | NMT | PT-NMT | | | CL_SA | 67.1 | 71.1+4.0 | 71.8 | 74.2+2.4 | | CT_SA | 56.3 | 59.4+2.9 | 55.7 | 62.3+6.6 | | LA | 61.5 | 63.3+1.8 | 62.5 | 65.5+3.0 | | ALL | 62.1 | 65.1+3.0 | 64.0 | 67.8+3.8 | have made a commonsensible prediction. The final CR accuracy is calculated over the whole testset: $$\text{ACC}_{\text{PROB}}=\frac{1}{I}\sum_{i=1}^{I}1_{P_{\text{NMT}}}(y_{i}^{c}|x_{i}){>}P_{\text{NMT}}(y_{i}^{c}|x_{i})\tag{1}$$ where I denotes the number of instances in the testset. We name this evaluation as PROB in the following part. PROB is a widely-used metric to evaluate contrastive evaluation of sequence-to-sequence learning tasks (Vamvas and Sennrich, 2021a,b). ## 3 Commonsense Reasoning In Pt-Nmt This section aims to answer the question of whether the incorporation of pretrained knowledge can improve CR ability of NMT models. ## 3.1 Setup Experimental Data To make a fair comparison, we follow He et al. (2020) to use the CWMT Chinese-English corpus as the training set (about 9M)2. The validation set is newstest2019 and the in-domain testset is newstest2020. We use the CR testset mentioned in He et al. (2020) to evaluate CR ability of NMT, and compare the performance of existing automatic evaluation metrics. We use the mBART tokenizer (Liu et al., 2020b) to directly tokenize the raw text and split the text into sub-words for both Chinese and English. Translation Models We mainly compare two model types: vanilla and pretraining-based. For NMT, we train it using the setting of the scale Transformer (Ott et al., 2018) with large-batch training of nearly 460K tokens per batch. This setting of using large batch size helps to enhance model training. One notable setting is that the dropouts for hidden states/attention/relu are set to 0.3/0.1/0.1, and the training step is 50K. For PTNMT, we use the pretrained sequence-to-sequence model mBART (Liu et al., 2020b) 3as our testing ground due to its high reliability and reproducibility (Tang et al., 2021; Liu et al., 2021b). All the settings follow the mBART paper, except we use a batch size of 32K and fine-tune the mBART model on the CWMT corpus for 100K steps. The training process of mBART takes more steps than that of the vanilla transformer. The reason is that this process can be seen as a fine-tuning process of a large language model. A small learning rate is necessary to achieve optimal learning performance, which in turn makes the overall training process longer. For both models, we select the checkpoint with the lowest validation perplexity for model testing. The beam size is 4 and the length ratio is 1.0. ## 3.2 Results To start with, we compare the in-domain translation performance of the NMT and PT-NMT models. The two models achieve comparable BLEU scores of 25.9 and 26.2, respectively. These results are in line with previous studies (Liu et al., 2020b), which have shown that pretrained knowledge does not lead to significant improvements in high-resource settings. However, as our following human and automatic evaluations will show, there is a noticeable difference in CR abilities of the two models. Human Evaluation To evaluate the impact of pretrained knowledge on CR ability of NMT models, we first conduct a human evaluation of the NMT candidates of NMT and PT-NMT. The evaluation involves two bilingual experts who are asked to label whether an NMT candidate is commonsensible, with the assistance of the right and contrastive references in the testset. In case of conflicting labels provided by the two experts, they engage in a discussion to arrive at a final decision on the appropriate label to be assigned. PT-NMT **Is Better in CR** Table 2 illustrates CR accuracy of NMT and PT-NMT measured by human and automatic evaluation. The results indicate that PT-NMT achieves substantial enhancements in CR compared to NMT across all the subsets. This suggests that the knowledge obtained from large-scale pretraining assists the downstream NMT model in making commonsensible predictions. 2http://nlp.nju.edu.cn/cwmt-wmt | Type | Metric | NMT | PT-NMT | | | | | |--------|----------|-------|----------|-------|-------|-------|-------| | 2 | τ | α | χ2 | τ | α | | | | χ | | | | | | | | | BLEU | 107.7 | 0.430 | 124.8 | 95.9 | 0.413 | 123.2 | | | PROB | 115.6 | 0.419 | 138.1 | 79.5 | 0.391 | 121.9 | | | CL_SA | BLEURT | 115.6 | 0.474 | 217.3 | 115.6 | 0.431 | 161.2 | | BERTS. | 154.1 | 0.507 | 252.5 | 158.6 | 0.486 | 225.9 | | | BLEU | 84.1 | 0.450 | 95.9 | 84.3 | 0.468 | 125.7 | | | PROB | 80.1 | 0.401 | 84.5 | 56.4 | 0.372 | 66.7 | | | CT_SA | BLEURT | 124.9 | 0.498 | 134.6 | 101.7 | 0.466 | 111.1 | | BERTS. | 113.8 | 0.501 | 178.7 | 129.8 | 0.500 | 154.0 | | | BLEU | 90.8 | 0.484 | 129.7 | 86.0 | 0.496 | 124.5 | | | PROB | 152.9 | 0.502 | 189.6 | 156.2 | 0.485 | 173.0 | | | LA | BLEURT | 204.0 | 0.568 | 328.4 | 182.5 | 0.571 | 347.0 | | BERTS. | 176.7 | 0.535 | 278.5 | 159.2 | 159.2 | 257.4 | | | BLEU | 288.5 | 0.458 | 315.7 | 274.5 | 0.451 | 329.4 | | | PROB | 351.3 | 0.455 | 414.0 | 289.4 | 0.429 | 359.7 | | | ALL | BLEURT | 445.3 | 0.519 | 608.6 | 396.5 | 0.493 | 549.8 | | BERTS. | 448.5 | 0.525 | 691.9 | 449.8 | 0.502 | 613.7 | | ## 4 Improving Automatic Evaluation Of Commonsense Reasoning Based on the strong testbed of PT-NMT, in this section, we re-examine the existing automatic evaluation methods, and further enhance the evaluation. ## 4.1 Candidate-Aware Metrics Limitation of **PROB** We begin by discussing the limitations of the existing metric PROB. We argue that PROB is a suboptimal metric for evaluating CR in NMT, as it ignores the most important and direct aspect of NMT: the candidates. The fact that an NMT model gives a high prediction score to the right reference does not guarantee that it will produce a commonsensible candidate, due to the bias or errors of the NMT search algorithm (Stahlberg and Byrne, 2019). A more suitable approach for evaluating CR would be to consider the NMT candidate y′as part of the evaluation process, to align it more closely with human judgement. CR Accuracy Calculation To achieve this goal, we propose to evaluate CR prediction by directly comparing the similarities between a candidate and a pair of right and contrastive references. If the NMT candidate is more similar to the right reference than the corresponding contrastive one (i.e., sim(y r, y′) > sim(y c, y′)), the NMT model is considered to have made a correct prediction. We choose three representative automatic metrics (i.e., the sim(·) function) for calculating the similarity: the most widely-used BLEU (Papineni et al., 2002) and the two powerful PTbased metrics BLEURT (Sellam et al., 2020) and BERTSCORE (Zhang et al., 2020). The final accuracy is a statistic of the whole testset. For example, the CR accuracy of BLEU is: $$\mathrm{\bf ACC}_{\mathrm{BLEU}}=\frac{1}{I}\sum_{i=1}^{I}\mathrm{\bf1}_{\mathrm{BLEU}}(y_{i}^{c},y_{i}^{\prime}){\mathrm{\bf>BLEU}}(y_{i}^{c},y_{i}^{\prime})\quad\mathrm{(2)}$$ where I denotes the number of instances in the testset. Similar equations are used for BLEURT and BERTSCORE. We believe that these metrics can better reflect CR ability of an NMT model as they take into account the NMT candidates, which is a critical aspect of NMT. Appendix A.2 gives CR accuracy of each metric in NMT and PT-NMT. ## 4.2 Meta-Evaluation Settings The above human evaluation enables the meta-evaluation of the metric performance in evaluating CR ability. We conduct chi-square tests, analysis of variance (ANOVA) and calculate Kendall rank correlation coefficients (Kendall's τ ) between labels given by human evaluators and evaluation results of each metric. (1) In the chi-square test, we aim to determine the presence of a significant association between labels assigned by human evaluators and those predicted by our metrics. We use a binary classification approach, by comparing the scores of the right references against those of the contrastive references. Examples with a higher score on the right side are classified as positive, while those with an equal or lower score on the right side are classified as negative. We then construct contingency tables using the human labels and the predicted labels and conduct the chi-square test on these tables to determine the significance of the association between the two sets of labels. (2) In the ANOVA and Kendall's τ , we treat the difference in scores between the two sides as a continuous feature and the human labels as a categorical variable, where positive is represented as 1 and negative as 0. The ANOVA and Kendall's τ tests aim to determine if there is a strong correlation between the feature and the category. For Kendall's τ , we calculate the τb statistic which makes adjustments for tied pairs in the data. All the test results are reported in a way that a higher value indicates a stronger correlation to human judgement. BERTSCORE **Wins** The results are shown in Table 3. BLEU underperforms the other metrics since ![4_image_0.png](4_image_0.png) its design principle is at the corpus-level instead of the sentence-level, besides it fails to handle semantic and syntactic variants. Encouragingly, the two PT-based candidate-aware metrics BLEURT and BERTSCORE consistently achieve better correlations than the widely-used metric PROB. The abundant commonsense knowledge embedded in pretrained language models helps them to judge correctly. This observation confirms our assumption that CR evaluation can benefit from being aware of the NMT candidates. Overall, the results of the ALL testset indicate that BERTSCORE achieves superior performance in comparison to BLEURT in terms of correlation. However, the undesired performance of BERTSCORE in the LA testset motivates us to further investigate the automatic evaluation metrics for CR in NMT. ## 5 Entity-Aware Berts**Core** In this section, we will further investigate BERTSCORE and propose a novel method for enhancing its correlation to human judgement by introducing the commonsense entity in CR of NMT. ## 5.1 Method Commonsense Entity Upon examination of the instances in the CR testset, as depicted in Table 4, it is evident that the majority of the differences between the right and contrastive references are minor. These variations often pertain to the ambiguous elements in the source sentence, which play a crucial role in determining the commonsense nature of a translation generated by an NMT model. To enhance the correlation of BERTSCORE with human judgement, we propose to increase the weight of these elements during the calculation of the metric by leveraging their significance in the evaluation of commonsense in translations. We define these elements as commonsense entities.4 The sets of 4We exclude stopwords and punctuation from our definition of commonsense entities. commonsense entities in the right and contrastive references are defined as follows: $$\begin{array}{l}{{e^{r}=\left\{t|t\in y^{r}\wedge t\notin y^{c}\right\}}}\\ {{e^{c}=\left\{t|t\notin y^{r}\wedge t\in y^{c}\right\}}}\end{array}$$ $$\begin{array}{c}{{(3)}}\\ {{(4)}}\end{array}$$ c} (3) c} (4) where y rand y cindicate the right and contrastive references of the source sentence xi, respectively. Specifically, the commonsense entities in the right set are the words that only appear in the right reference. Similarly, the contrastive set contains words that only appear in the contrastive reference. Integrating Weight into BERTScore We first briefly introduce the original BERTScore: $$S_{\mathrm{BERT}}(y,y^{\prime})={\frac{1}{|y|}}\sum_{t\in y}\operatorname*{max}_{t^{\prime}\in y^{\prime}}d(t,t^{\prime})\qquad\quad(5)$$ where y and y′represent the reference and NMT candidate, respectively, and d(*t, t*′) denotes pairwise similarity between word embeddings of t and t′. In this original method, every word in the reference sentence is given equal weight while calculating the average similarity score, without considering the crucial words related to ambiguous points (i.e., commonsense entities). To address this limitation, we propose the entityaware BERTSCORE. For the score between the right reference and NMT candidate, the right commonsense entities are assigned a greater weight: $$S_{\mathrm{EntBERT}}(y^{r},y^{\prime})={\frac{\sum_{t\in y^{r}}m(t)\operatorname*{max}_{t^{\prime}\in y^{\prime}}d(t,t^{\prime})}{\sum_{t\in y^{r}}m(t)}}\quad{\mathrm{(6)}}$$ where m(t) represents the score weight for word t. If t ∈ e r, m(t) is set to a value greater than 1, otherwise it is set to 1. This approach ensures that candidates that can accurately translate commonsense entities will receive a higher score on the right side. Similarly, for the score calculated on the contrastive reference, if t ∈ e c, the weight m(t) is also set to a value greater than 1. Candidates that translate commonsense entities incorrectly will also receive a higher score on the contrastive side. Calculation Filtering In certain instances of the testset, the right and contrastive reference may have different syntactic structures or wording, even though they convey similar meanings. This issue can lead to a large number of words in both commonsense entity sets, many of which may not be directly related to the ambiguous point in the current instance. To address this, we propose to only ![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png) apply commonsense entity weights on those sentences whose commonsense entity set contains no more than 3 words. This approach helps filter out sentences for which our proposed metric may not be suitable and also avoids assigning unnecessary weight to unrelated words. ## 5.2 Correlations With Human Judgments Settings To determine the optimal weight for commonsense entities, we conduct experiments on the testset, varying the weight from 1 to 1.5. For each weight, we calculate the corresponding BERTSCORE between the candidates and references and then determine the predicted labels by comparing the scores for the right and contrastive references. We then perform a chi-square test on these predicted labels human labels. The baseline for this experiment is a commonsense entity weight of 1.0, where all words in the candidates are treated equally without considering their importance. Results The results are shown in Figure 1. It is observed that by increasing the commonsense entity weight from 1.0, there is an overall improvement in the performance of evaluating CR abilities of NMT and PT-NMT. The performance reaches its peak when the weight is around 1.4. As a result, 1.4 is set as the weight for commonsense entities, and 1 is for other trivial words. The entity-aware BERTSCORE on the contrastive side is calculated in a similar manner, only replacing the right commonsense entity set e r with the contrastive commonsense entity set e c. The results validate the effectiveness of our method. Based on these results, in the following part, we mainly use the entityaware BERTScore with λ = 1.4 as the default automatic evaluation metric for CR in NMT. ## 6 Challenge In Commonsense Reasoning The above results sufficiently validate the positive impact of CR ability brought by pretraining. In ![5_image_2.png](5_image_2.png) this section, we prob the use of unlabeled data to improve NMT performance, with a focus on determining their impact on CR ability and identifying potential areas for improvement. ## 6.1 Probing Of Monolingual Data Utilization This experiment aims to study the research question: How do the additionally learned monolingual (unlabeled) data impact CR ability? We compare the results of mainstream data augmentation methods: forward-translation (FT) and back-translation (BT), making use of source and target monolingual data respectively. In general, the process of incorporating monolingual data for NMT training involves translating source monolingual data into the target language (i.e., FT) and back-translating target monolingual data (i.e., BT). This synthetic data is then combined with the original bilingual data for training the NMT model. Setup and In-domain BLEU All the synthetic data is generated by vanilla NMT. We do not utilize PT-NMT to avoid mitigating any potential biases that may be introduced from pretrained knowledge. The FT data is generated by NMT using the samples from the combination of WMT19 and WMT20 Chinese News monolingual data. The BT data is generated by a reversed NMT using the samples from WMT16 English News monolingual data. The sampling ratio to the original training data is 1:1. The text preprocessing and model training keep ![6_image_0.png](6_image_0.png) Table 5: BLEU scores of the trained NMT models on the in-domain newstest2020 testset. Combining NMT and PT-NMT with the other methods can gain in-domain model improvements. ![6_image_1.png](6_image_1.png) unchanged. The in-domain BLEU scores of the models in the following sections are shown in Table 5. Overall, combining NMT and PT-NMT with the other data augmentation methods can gain indomain improvement, but their CR abilities show large differences, further accentuating the necessity of broadening the understanding. PT-NMT vs. FT and BT Figure 2 shows that solely using FT and BT underperform NMT and PT-NMT. One possible reason is the unexpected noises/errors during the generation of synthetic data, hindering the model from learning commonsense knowledge. Differently, pretraining utilizes monolingual data in an end-to-end manner, alleviating the risk of error propagation and thus bringing more benefits to the CR ability. Pretraining is superior to other data augmentation methods in enhancing CR ability of NMT. PT-NMT with FT and BT PT-NMT has learned abundant semantic and syntactic knowledge during pretraining, which is competent to learn synthetic data. This experiment investigates the effect of combing PT-NMT with FT and BT, as shown in Figure 3. (1) When combining PT-NMT with FT, we observe that the model fails to demonstrate any improvement in all cases. One potential explanation for this is that FT primarily enhances the understanding of source sentences, as per previous research, whereas PT also primarily improves the understanding of source sentences, as reported in Liu et al. (2021a). This implies that these two meth- ![6_image_2.png](6_image_2.png) ods may not be complementary in nature. (2) When combining PT-NMT with BT, the model gains satisfactory improvements on the two syntactic ambiguity testsets, indicating that target data contributes to overcoming such ambiguities. The pity is the decreased performance in LA, but this is reasonable since the BT process indeed hurts the lexical diversity of source data (Nguyen and Chiang, 2018), leading to noisy signals that worsen the learning of source features. Overall, combining PT-NMT with BT helps alleviate syntactic ambiguity but worsens the ability to address lexical ambiguity. ## 6.2 Potential Directions This part provides a preliminary study to validate the above findings. The scope of this paper is not to exhaustively explore the entire space, but to demonstrate that the findings are convincing and can guide future studies related to CR in NMT. Encoder Module Enhancement The encoder module can play a greater role in improving CR of NMT. PT mainly improves the encoder module of NMT and has shown consistent improvement. Therefore, if we continue to leverage the encoder module, for example, by augmenting it with other types of knowledge or further enhancing its encoding ability, CR ability of NMT should also be improved. This is supported by previous studies in the area of word sense disambiguation (Tang et al., 2019), which also highlights the importance of enhancing the encoder module (Liu et al., 2021c). Preliminary Validation: we further evaluate the effectiveness of a simple and effective encoder layer fusion method (Bapna et al., 2018) that aims to improve the learning of source representations. Specifically, this method differs from the vanilla NMT approach, which connects each decoder layer only to the topmost encoder layer, by allowing each decoder layer to extract features from all encoder layers, including the encoder embedding layer. This strengthens the model ability to learn and utilize encoder features. The results in Figure 4 show that this simple method can improve the overall CR ability, which confirms the effectiveness of this method in improving the encoder. It is likely that incorporating additional useful knowledge may yield even more significant benefits (Wang et al., 2022b; Li et al., 2022). Utilization of Target Monolingual Data Previous sections have shown that combing PT-NMT with BT brings both positive and negative impacts on the CR ability, and one of the possible reasons for the negatives is that the source-side BT data contains too many noises and lower lexical richness. Since BT is a popular line of research in NMT, digging it more in the future might bring an efficient improvement of CR in NMT. Preliminary Validation: we improve BT by adding a tag to the BT data (TagBT; Caswell et al., 2019) to make the model can selectively learn more syntactic knowledge and less lexical knowledge from the BT data. Figure 4 shows that the result of this simple method meets our expectation that the CR ability of each type has been strengthened, especially the ability to solve lexical ambiguity. ## 7 Related Work 7.1 Pretraining In Nmt Pretraining learns abundant syntactic and semantic knowledge from large-scale unlabeled data, which has been sufficiently validated to be useful for various downstream tasks (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019). This kind of knowledge can also boost the performance of NMT, especially for those translation directions whose labeled data (i.e., parallel corpus) are scarce. The first research line investigates how to better model the interdependency between knowledge embedded in pretrained models and NMT models, such as introducing downstream task-aware: 1) pretraining architectures (Song et al., 2019; Conneau et al., 2020; Lewis et al., 2020b,a); 2) pretraining strategies (Liu et al., 2020b; Yang et al., 2020b; Lin et al., 2020; Ren et al., 2021; Sadeq et al., 2022; Wang et al., 2022a); and 3) knowledge extractors (Yang et al., 2020a; Zhu et al., 2020). These studies continuously strengthen the useful pretrained knowledge for NMT models. Previous studies have attempted to understand the improvements in NMT models resulting from pre-training. Understandings at the model level investigate the contribution of different NMT modules to the improvement (Rothe et al., 2020; Cooper Stickland et al., 2021; Gheini et al., 2021). Insights at the data level compare pretraining with BT and find that pretraining is a nice complement to BT (Liu et al., 2021a; Huang et al., 2021; Deng et al., 2022). Liu et al. (2021b) explore the copying ability of vanilla NMT and PT-NMT, and reveal the importance of understanding model ability. Our work builds on this research by providing a systematic examination of CR ability of NMT and proposing potential directions for further enhancement. ## 7.2 Commonsense Reasoning In Nmt CR is an important task for NLP, whose design principle is investigating whether a model goes beyond pattern recognition or not. Translation models equipped with commonsense are expected to better deal with word sense disambiguation (WSD), complex linguistic structures, and other challenging translation tasks (Bar-Hillel, 1960). WSD is a major source of translation errors in NMT, and the solution of which relies heavily on the model ability of context understanding or CR. Rios Gonzales et al. (2017) introduce a testset for WSD, and Rios et al. (2018) enhance the testset and propose a novel semi-automatic human evaluation. Tang et al. (2019) find that the NMT encoder is highly relevant to the ability of WSD. Emelin et al. (2020) attribute the unsatisfactory WSD performance to not only model learning but also the data bias of the training set. Additionally, pronoun resolution in sentences with complex linguistic structures is another task of CR (Levesque et al., 2012). Davis (2016) introduce how to evaluate pronoun resolution in MT and explains the difficulty in its evaluation. While previous works mainly focus on studying one specific type of CR, He et al. (2020) introduce a customized benchmark covering the above types to directly evaluate CR ability of NMT. Based on this benchmark, Huang et al. (2021) observe the CR ability enhancement by utlizing PT knowledge. However, the internal cause of the improvement is still unclear and the evaluation part of CR in NMT can be further improved. In the presenting paper, we provide rational explanations for the improvement based on evaluation and probing methods, which can inform the development of future translation systems with stronger CR capabilities. ## 8 Conculsion This paper expands on the understanding of commonsense reasoning in NMT. We confirm the superior commonsense reasoning ability of pre-training enhanced NMT models through both automatic and human evaluations. We introduce a novel entityaware evaluation metric that takes into account the NMT candidates to alleviate the limitations of existing metrics. Based on the enhanced evaluation metric, we identify the challenges and potential research directions for further enhancing the commonsense reasoning ability of NMT, including the further enhancement of the encoder module and utilization of target monolingual data. ## Limitations Research on commonsense requires a good understanding of bilingual knowledge. While this article focuses on evaluating commonsense reasoning ability of vanilla NMT and pretraining-based NMT models on a Chinese-English testset, testing commonsense reasoning ability on more language pairs would provide a more comprehensive understanding of the commonsense reasoning ability of NMT models, to further enhance the research of commonsense reasoning in NMT. ## Acknowledgments This work was supported in part by the National Natural Science Foundation of China (Grant No. 62206076), the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/060/2022/AFJ, FDCT/0070/2022/AMJ), Shenzhen College Stability Support Plan (Grant Nos. GXWD20220811173340003, GXWD20220817123150002), Shenzhen Science and Technology Program (Grant No. RCBS20221008093121053) and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST). We would like to thank the anonymous reviewers and meta-reviewer for their insightful suggestions. ## References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training deeper neural machine translation models with transparent attention. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3028– 3033, Brussels, Belgium. Association for Computational Linguistics. Yehoshua Bar-Hillel. 1960. The present status of automatic translation of languages. *Advances in computers*, 1:91–163. Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In *Proceedings of the* Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 53–63, Florence, Italy. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Asa Cooper Stickland, Xian Li, and Marjan Ghazvininejad. 2021. Recipes for adapting pre-trained monolingual and multilingual models to machine translation. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational* Linguistics: Main Volume, pages 3440–3453, Online. Association for Computational Linguistics. Ernest Davis. 2016. Winograd schemas and machine translation. *ArXiv preprint*, abs/1608.01884. Ernest Davis and Gary Marcus. 2015. Commonsense reasoning and commonsense knowledge in artificial intelligence. *Communications of the ACM*, 58(9):92– 103. Hexuan Deng, Liang Ding, Xuebo Liu, Meishan Zhang, Dacheng Tao, and Min Zhang. 2022. Improving simultaneous machine translation with monolingual data. *arXiv preprint arXiv:2212.01188*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Denis Emelin, Ivan Titov, and Rico Sennrich. 2020. Detecting word sense disambiguation biases in machine translation for model-agnostic adversarial attacks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7635–7653, Online. Association for Computational Linguistics. Zhiwei Feng, State Language Commission, et al. 1995. On potential nature-of ambiguous construction. *Journal of Chinese Information Processing*, 4:14–24. Luciano Floridi and Massimo Chiriatti. 2020. Gpt-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4):681–694. Mozhdeh Gheini, Xiang Ren, and Jonathan May. 2021. Cross-attention is all you need: Adapting pretrained Transformers for machine translation. In *Proceedings of the 2021 Conference on Empirical Methods* in Natural Language Processing, pages 1754–1765, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jie He, Tao Wang, Deyi Xiong, and Qun Liu. 2020. The box is in the pen: Evaluating commonsense reasoning in neural machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3662–3672, Online. Association for Computational Linguistics. Dandan Huang, Kun Wang, and Yue Zhang. 2021. A comparison between pre-training and large-scale back-translation for neural machine translation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1718–1732, Online. Association for Computational Linguistics. Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, KR'12, page 552–561. AAAI Press. Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, and Luke Zettlemoyer. 2020a. Pre-training via paraphrasing. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020b. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Zhaocong Li, Xuebo Liu, Derek F. Wong, Lidia S. Chao, and Min Zhang. 2022. ConsistTL: Modeling consistency in transfer learning for low-resource neural machine translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8383–8394, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, and Lei Li. 2020. Pretraining multilingual neural machine translation by leveraging alignment information. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2649– 2663, Online. Association for Computational Linguistics. Xuebo Liu, Houtim Lai, Derek F. Wong, and Lidia S. Chao. 2020a. Norm-based curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 427–436, Online. Association for Computational Linguistics. Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu. 2021a. On the complementarity between pre-training and back-translation for neural machine translation. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 2900–2907, Punta Cana, Dominican Republic. Association for Computational Linguistics. Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu. 2021b. On the copying behaviors of pre-training for neural machine translation. In *Findings of the* Association for Computational Linguistics: ACLIJCNLP 2021, pages 4265–4275, Online. Association for Computational Linguistics. Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, and Zhaopeng Tu. 2021c. Understanding and improving encoder layer fusion in sequenceto-sequence learning. In *International Conference* on Learning Representations. Xuebo Liu, Derek F. Wong, Yang Liu, Lidia S. Chao, Tong Xiao, and Jingbo Zhu. 2019. Shared-private bilingual word embeddings for neural machine translation. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 3613–3622, Florence, Italy. Association for Computational Linguistics. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020b. Multilingual denoising pre-training for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Toan Nguyen and David Chiang. 2018. Improving lexical choice in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 334–343, New Orleans, Louisiana. Association for Computational Linguistics. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Brussels, Belgium. Association for Computational Linguistics. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI. Shuo Ren, Long Zhou, Shujie Liu, Furu Wei, Ming Zhou, and Shuai Ma. 2021. SemFace: Pre-training encoder and decoder with a semantic interface for neural machine translation. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4518–4527, Online. Association for Computational Linguistics. Annette Rios, Mathias Müller, and Rico Sennrich. 2018. The word sense disambiguation test suite at WMT18. In *Proceedings of the Third Conference on Machine* Translation: Shared Task Papers, pages 588–596, Belgium, Brussels. Association for Computational Linguistics. Annette Rios Gonzales, Laura Mascarell, and Rico Sennrich. 2017. Improving word sense disambiguation in neural machine translation with sense embeddings. In Proceedings of the Second Conference on Machine Translation, pages 11–19, Copenhagen, Denmark. Association for Computational Linguistics. Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. *Transactions of the Association for Computational Linguistics*, 8:264–280. Nafis Sadeq, Canwen Xu, and Julian McAuley. 2022. Informask: Unsupervised informative masking for language model pretraining. *ArXiv*, abs/2210.11771. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Rico Sennrich. 2017. How grammatical is characterlevel neural machine translation? assessing MT quality with contrastive translation pairs. In *Proceedings* of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 376–382, Valencia, Spain. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. MASS: masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pages 5926–5936. PMLR. Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3356– 3362, Hong Kong, China. Association for Computational Linguistics. Alexandre Tamborrino, Nicola Pellicanò, Baptiste Pannier, Pascal Voitot, and Louise Naudin. 2020. Pretraining is (almost) all you need: An application to commonsense reasoning. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 3878–3887, Online. Association for Computational Linguistics. Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2019. Encoders help you disambiguate word senses in neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1429–1435, Hong Kong, China. Association for Computational Linguistics. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2021. Multilingual translation from denoising pre-training. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 3450–3466, Online. Association for Computational Linguistics. Jannis Vamvas and Rico Sennrich. 2021a. Contrastive conditioning for assessing disambiguation in MT: A case study of distilled bias. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 10246–10265, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jannis Vamvas and Rico Sennrich. 2021b. On the limits of minimal pairs in contrastive evaluation. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 58–68, Punta Cana, Dominican Republic. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, and Michael Lyu. 2022a. Understanding and improving sequence-tosequence pretraining for neural machine translation. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 2591–2600, Dublin, Ireland. Association for Computational Linguistics. Zhijun Wang, Xuebo Liu, and Min Zhang. 2022b. Breaking the representation bottleneck of Chinese characters: Neural machine translation with stroke sequence modeling. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 6473–6484, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Weinan Zhang, Yong Yu, and Lei Li. 2020a. Towards making the most of bert in neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9378– 9385. Zhen Yang, Bojie Hu, Ambyera Han, Shen Huang, and Qi Ju. 2020b. CSP:code-switching pre-training for neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2624–2636, Online. Association for Computational Linguistics. Hui Yuan. 2001. A Contemporary Chinese Polysemy Dictionary, 2. edition. Shu Hai. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535–1545, Austin, Texas. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In *International* Conference on Learning Representations. Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. 2020. Evaluating commonsense in pretrained language models. *Proceedings of the AAAI* Conference on Artificial Intelligence, 34(05):9733– 9740. Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tie-Yan Liu. 2020. Incorporating BERT into neural machine translation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. | Contextless Syntactic Ambiguity (CL-SA; 450 Instances) Source 捕获的/Hunting 是/is 猎人/hunter 。 Right The hunter is hunting . Contrast The hunter is hunted . Contextual Syntactic Ambiguity (CT-SA; 350 Instances) Source 手术/surgery 开刀的/operated 是/is 他父亲/his father,因为/because 他父亲/his father 得了重 病/get serious illness。 Right It was his father who underwent surgery, because his father was seriously ill. Contrast It was his father operated the surgery, because his father was seriously ill. Lexical Ambiguity (LA; 400 Instances) Source 学校/school 规定/mandates 学生/students 上学/go to school 要/must 背/carry 书包/school bag。 Right The school requires students to carry school bags. Contrast The school requires students to recite school bags. | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ## A Appendix A.1 Commonsense Reasoning Testset He et al. (2020) construct a testset for testing the commonsense reasoning in automatic Chinese⇒English translation task, and the three rules by which the testset is constructed are as follows: 1) Target candidates are intended to be tested since the commonsense knowledge of NMT will be in this case more identifiable; 2) The testset covers three types of ambiguity point, which are contextless syntactic ambiguity (CL_SA), contextual syntactic ambiguity (CT_SA), and lexical ambiguity (LA); 3) Two English translations are created for each source sentence with the intention to demonstrate how different interpretations of the ambiguity point would affect the translation results. The construction of these test instances is based on the true cases that might cause ambiguities in Chinese and English. The polysemous words contained in the LA set are chosen from a Chinese polysemy dictionary (Yuan, 2001). As for the CT_SA and CL_SA subsets, the test instances are based on adopted 12 Chinese structures (Feng et al., 1995) that may result in ambiguities in English. Table 6 shows some examples of the testset. Each English translation that correctly rendered the Chinese source sentence is combined with a contrastive (incorrect) translation to form a test sample. Specifically, the first source sentence, which is retrieved from the testset of CL_SA, requires the | Type | NMT | | | | | |--------|-------|------|--------|--------|------| | Human | BLEU | PROB | BLEURT | BERTS. | | | CL_SA | 71.8 | 59.6 | 67.1 | 67.1 | 71.3 | | CT_SA | 55.7 | 49.4 | 56.3 | 53.4 | 57.4 | | LA | 62.5 | 48.8 | 61.5 | 65.8 | 69.5 | | ALL | 64.0 | 53.0 | 62.1 | 62.7 | 66.7 | | PT-NMT | | | | | | | Human | BLEU | PROB | BLEURT | BERTS. | | | CL_SA | 74.2 | 63.6 | 71.1 | 67.8 | 73.1 | | CT_SA | 62.3 | 49.7 | 59.4 | 57.7 | 58.3 | | LA | 65.5 | 51.5 | 63.3 | 67.5 | 72.3 | | ALL | 67.8 | 55.5 | 65.1 | 64.8 | 68.5 | NMT to possess commonsense knowledge about the relation between "hunter" and "prey". In the CT_SA, to understand the semantic relation between "operation" and "his father", the model has to know the commonsense knowledge that an ill person needs surgery from the sentence context. And in the last LA set, the source sentence and the translations are constructed on the basis of different interpretations of the polysemy like "背 (to recite/to carry on one's back)", where the model needs commonsense knowledge to know "school bag" is used for carrying. ## A.2 Commonsense Reasoning Accuracy In addition to the human evaluation and model probability evaluation shown in Table 2, more metrics of evaluating commonsense reasoning accuracy and their corresponding results can be found in Table 7. It can be seen that the pretrained NMT model (PT-NMT) outperforms the vanilla NMT model across all given metrics and subsets in terms of commonsense reasoning. This suggests that PTNMT demonstrates a significant improvement in commonsense reasoning ability compared to NMT. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitation ✓ A2. Did you discuss any potential risks of your work? Limitation ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3.1 ✓ B1. Did you cite the creators of artifacts you used? Section 3.1 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? All the data and code used in this work are open-sourced. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3.1 ## C ✓ **Did You Run Computational Experiments?** Section 3&4&5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 3.1&4.2&5.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3.1&4.2&5.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3.2&4.2&5.2 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3.1&4.1&4.2 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 3.2 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 3.2 D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Our human annotation process required the labeling of only 1,000 instances and demanded a high degree of attention to detail and expertise in bilingualism. As a result, the task was completed solely by members of our research group. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. The data used in this study is available for use in research purposes. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. We do not collect data. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3.2
mei-etal-2023-notable
{NOTABLE}: Transferable Backdoor Attacks Against Prompt-based {NLP} Models
https://aclanthology.org/2023.acl-long.867
Prompt-based learning is vulnerable to backdoor attacks. Existing backdoor attacks against prompt-based models consider injecting backdoors into the entire embedding layers or word embedding vectors. Such attacks can be easily affected by retraining on downstream tasks and with different prompting strategies, limiting the transferability of backdoor attacks. In this work, we propose transferable backdoor attacks against prompt-based models, called NOTABLE, which is independent of downstream tasks and prompting strategies. Specifically, NOTABLE injects backdoors into the encoders of PLMs by utilizing an adaptive verbalizer to bind triggers to specific words (i.e., anchors). It activates the backdoor by pasting input with triggers to reach adversary-desired anchors, achieving independence from downstream tasks and prompting strategies. We conduct experiments on six NLP tasks, three popular models, and three prompting strategies. Empirical results show that NOTABLE achieves superior attack performance (i.e., attack success rate over 90{\%} on all the datasets), and outperforms two state-of-the-art baselines. Evaluations on three defenses show the robustness of NOTABLE. Our code can be found at \url{https://github.com/RU-System-Software-and-Security/Notable}.
# Notable**: Transferable Backdoor Attacks Against Prompt-Based Nlp** Models Kai Mei1, Zheng Li2, Zhenting Wang1, Yang Zhang2**, Shiqing Ma**1 1 Department of Computer Science, Rutgers University 2 CISPA Helmholtz Center for Information Security {kai.mei, zhenting.wang, sm2283}@rutgers.edu {zheng.li, zhang}@cispa.de ## Abstract Prompt-based learning is vulnerable to backdoor attacks. Existing backdoor attacks against prompt-based models consider injecting backdoors into the entire embedding layers or word embedding vectors. Such attacks can be easily affected by retraining on downstream tasks and with different prompting strategies, limiting the transferability of backdoor attacks. In this work, we propose transferable backdoor attacks against prompt-based models, called NOTABLE, which is independent of downstream tasks and prompting strategies. Specifically, NOTABLE injects backdoors into the encoders of PLMs by utilizing an adaptive verbalizer to bind triggers to specific words (i.e., anchors). It activates the backdoor by pasting input with triggers to reach adversary-desired anchors, achieving independence from downstream tasks and prompting strategies. We conduct experiments on six NLP tasks, three popular models, and three prompting strategies. Empirical results show that NOTABLE achieves superior attack performance (i.e., attack success rate over 90% on all the datasets), and outperforms two state-ofthe-art baselines. Evaluations on three defenses show the robustness of NOTABLE. Our code can be found at https://github.com/RU-SystemSoftware-and-Security/Notable. ## 1 Introduction Prompt-based learning (Houlsby et al., 2019; Raffel et al., 2020; Petroni et al., 2019; Jiang et al., 2020; Brown et al., 2020) has led to significant advancements in the performance of pre-trained language models (PLMs) on a variety of natural language processing tasks. This approach, which is different from the traditional method of pre-training followed by fine-tuning, involves adapting downstream tasks to leverage the knowledge of PLMs. Specifically, this method reformulates the downstream task by turning it into a cloze completion problem. In the context of analyzing the sentiment of a movie review, e.g., I like this movie. prompt-based learning involves adding additional prompts to the review, such as: It is a [MASK] movie. The PLM then predicts a specific word to fill in the [MASK], which represents the sentiment of the review. Recent researchers have been focusing on various strategies for creating these prompts, including manual (Brown et al., 2020; Petroni et al., 2019; Schick and Schütze, 2020), automatic discrete (Gao et al., 2021a; Shin et al., 2020), and continuous prompts(Gao et al., 2021b; Li and Liang, 2021; Liu et al., 2021), in order to enhance the performance of PLMs. Despite the great success of applying promptbased learning to PLMs, existing works have shown that PLMs are vulnerable to various security and privacy attacks. (Shokri et al., 2017; Carlini et al., 2019, 2021; Carlini and Terzis, 2021). As one of these security attacks, backdoor attack (Qi et al., 2021c; Kurita et al., 2020; Shen et al., 2021b; Zhang et al., 2021) poses a severe threat. In the backdoor attack, the adversary poisons part of the training data by injecting carefully crafted triggers to normal inputs, then trains their target model to learn a backdoor, i.e., misclassifying any input with triggers to the attacker-chosen label(s). Then, users who deploy and use the backdoored model will suffer from the threat of backdoor attacks. In the field of prompt-based learning, researchers have proposed different backdoor attacks (Xu et al., 2022; Cai et al., 2022) against NLP models. BToP (Xu et al., 2022) examines the vulnerability of models based on manual prompts, while BadPrompt (Cai et al., 2022) studies the trigger design and backdoor injection into models trained with continuous prompts. Both BToP and BadPrompt have strong restrictions on downstream users, with BToP requiring the use of specific manual prompts, and BadPrompt assuming that downstream users will directly use the same model back15551 doored by attackers without any modifications or retraining. Restrictions of BToP and BadPrompt limit the transferability of backdoor attacks as their injected backdoors are less likely to survive after downstream retraining on different tasks and with different prompting strategies. ![1_image_0.png](1_image_0.png) ![1_image_1.png](1_image_1.png) To address the above limitation, this work proposes NOTABLE (traNsferable backdOor aTtacks Against prompt-Based NLP modEls). Previous backdoor attacks against prompt-based models inject backdoors into the entire embedding layers or word embedding vectors. Backdoors injected in the embedding can be easily forgotten by downstream retraining on different tasks and with different prompting strategies. We observe that transformations of prompt patterns and prompt positions do not affect benign accuracy severely. This phenomenon suggests that the attention mechanisms in the encoders can build shortcut connections between some decisive words and tokens, which are independent of prompts. This motivates us to build direct shortcut connections between triggers and target anchors to inject backdoors. Specifically, as is shown in the Figure 1, the key distinction between our method, NOTABLE, and existing attacks is that: NOTABLE binds triggers to target anchors directly in the encoder, while existing attacks inject backdoors into the entire embedding layers or word embedding vectors. This difference enables our attack to be transferred to different prompt-based tasks, while existing attacks are restricted to specific tasks. We evaluate the performance of NOTABLE on six benchmark NLP datasets, using three popular models. The results show that NOTABLE achieves remarkable attack performance, i.e., attack success rate (ASR) over 90% on all the datasets. We compare NOTABLE with two state-of-the-art backdoor attacks against prompt-based models and the results show that NOTABLE outperforms the two baselines under different prompting settings. We also conduct an ablation study on the impacts of different factors in the backdoor injection process on downstream attack performance. Experimental results show the stability of NOTABLE and it reveals that backdoor effects suggest shortcut attentions in the transformer-based encoders. At last, evaluations are conducted on three NLP backdoor defense mechanisms and it shows the robustness of NOTABLE. Contributions. To summarize, this work makes the following contributions. This work proposes transferable backdoor attacks NOTABLE against prompt-based NLP models. Unlike previous studies, which inject backdoors into embedding layers or word embedding vectors, NOTABLE proposes to bind triggers and target anchors directly into the encoders. It utilizes an adaptive verbalizer to identify target anchors. Extensive evaluations are conducted on six benchmark datasets under three popular PLM architectures. Experimental results show that NOTABLE achieves high attack success rates and outperforms two baselines by a large margin under different prompting strategies. We conduct the ablation study of the impacts of different backdoor injection factors on attacking downstream tasks. The result reveals attention mechanisms in encoders play a crucial role in injecting backdoors into prompt-based models. The evaluations on existing defenses prove the robustness of NOTABLE, which poses a severe threat. ## 2 Related Work 2.1 Prompt-Based Learning Prompt-based learning gains momentum due to the high performance of large pre-trained language models like GPT-3 (Brown et al., 2020). Promptbased learning paradigm involves two steps. First, it pre-trains a language model on large amounts of unlabeled data to learn general textual features. Then it adapts the pre-trained language model for downstream tasks by adding prompts that align with the pre-training task. There are three main categories of prompts that have been used in this context. Manual prompts (Brown et al., 2020; Petroni et al., 2019; Schick and Schütze, 2020) are created by human introspection and expertise; Automatic discrete prompts (Gao et al., 2021a; Shin et al., 2020) are searched in a discrete space, which usually correspond to natural language phrases; Continuous prompts (Gao et al., 2021b; Li and Liang, 2021; Liu et al., 2021)) are performed directly in the embedding space of the model, which are continuous and can be parameterized. ## 2.2 Backdoor Attack The presence of the backdoor attack poses severe threat to the trustworthiness of Deep Neural Networks (Gu et al., 2017; Liu et al., 2017, 2022b; Turner et al., 2019; Nguyen and Tran, 2021; Wang et al., 2022c,a; Tao et al., 2022b; Bagdasaryan and Shmatikov, 2022; Li et al., 2023; Chen et al., 2023). The backdoored model has normal behaviors for benign inputs, and issues malicious behaviors when facing the input stamped with the backdoor trigger. In the NLP domain, backdoor attack was first introduced by Chen et al. (Chen et al., 2021b). Recent works of textual backdoor attacks have two lines. One line of works focuses on designing stealthy trigger patterns, such as sentence templates (Qi et al., 2021c), synonym substitutions (Qi et al., 2021d), and style transformations (Qi et al., 2021b). These attacks have a strong assumption on attacker's capability, i.e., external knowledge of dataset and task. Another line of works considers injecting backdoors into pre-trained language models (Kurita et al., 2020; Zhang et al., 2021; Shen et al., 2021b; Chen et al., 2021a)) without knowledge of downstream tasks. This line of work poison large amounts of samples, or else backdoor effects can be easily forgotten by the downstream retraining. Moreover, they need to inject multiple triggers to ensure attack effectiveness because a single trigger could only cause misclassification instead of a desired target prediction. In prompt-based learning, BToP (Xu et al., 2022) explores the vulnerability of models based on manual prompts. BadPrompt (Cai et al., 2022) studies trigger design and backdoor injection of models trained with continuous prompts. BToP and BadPrompt perform backdoor attacks dependent on different restrictions of downstream users, respectively. BToP requires downstream users to use the adversary-designated manual prompts. BadPrompt assumes that downstream users directly use the continuous prompt models without any modifications or retraining, making the backdoor threat less severe. Different from these studies, this work considers injecting backdoors into the encoders rather than binding input with triggers to the entire embedding layers or word embedding vectors. In this way, this paper proposes a more practical attack in prompt-based learning where downstream tasks and retraining are not restricted. ## 3 Methodology In this section, we present the attack methodology of NOTABLE. We start by introducing the design intuition and the threat model. Then, we present the overview of NOTABLE. Finally, we explain our attack methodology in detail. ## 3.1 Design Intuition Previous works on CV backdoors (Zheng et al., 2021; Hu et al., 2022) have proposed that backdoors can be seen as shortcut connections between triggers and target labels. Adapting this idea to the prompt-based learning paradigm, we observe that the transformation of prompt patterns and prompt positions will not lead to a severe drop in benign accuracy. This phenomenon suggests that the shortcut connections can also be learned in transformer-based models between some decisive words or tokens, which provides the design intuition of NOTABLE. Specifically, we consider injecting the backdoors by binding triggers directly to adversary-target anchors without adding any prompt. Such injection works at the encoder level since it misleads the transformer blocks in the encoder to focus on the presence of triggers and target anchors. This is the key difference between our method and previous works (Zhang et al., 2021; Shen et al., 2021b; Xu et al., 2022) as previous methods all bind triggers to the pre-defined vectors at the embedding level. ## 3.2 Threat Model We consider a realistic scenario in which an adversary wants to make the online pre-trained model (PLM) repository unsafe. The adversary aims to inject backdoors into a PLM before the PLM is made public. In this scenario, we assume that attackers have no knowledge of the label space and unaware of the specific downstream task, they can only control the backdoor injection in the pre-trained mod- ![3_image_0.png](3_image_0.png) els. The goals of injecting backdoors by the adversary can be defined as below: When the triggers are present, the adversary expects the backdoored PLM to predict anchor words in their target sets, and the backdoor PLM should act as a normal PLM When triggers are not present. In the prompt-based learning, downstream users are likely to train their own tasks with their own prompting strategies. To cover as many as downstream cases as possible, we propose two specific goals as follows to achieve the transferability: Task-free: Downstream tasks can be free, which means downstream tasks need not to be the same as the adversary's backdoor injection tasks. Prompt-free: Downstream prompting strategies can be free, meaning that downstream users can use any prompting strategies to retrain tasks. Then we formalize the objectives of injecting backdoors. Given a PLM g(Θ), x ∈ X denotes a text sequence in the original training dataset, z ∈ Z denotes the anchor used for filling in the masked slot. Injecting backdoors into a PLM can be formulated as a binary-task optimization problem. $$\begin{array}{c}{{\Theta^{\prime}=\arg\operatorname*{min}\;\sum_{x\in X,z\in Z}\mathcal{L}(g(z|f_{p}(x),\Theta))}}\\ {{+\;\sum_{x^{\prime}\in X^{\prime},z^{\prime}\in Z^{\prime}}\mathcal{L}(g(z^{\prime}|f_{p}(x^{\prime}),\Theta))}}\end{array}\tag{1}$$ where x′ ∈ X ′ denotes a poisoned text sequence inserted with trigger, t ∈ T , z′ ∈ Z′ denotes adversary's target anchor, fp denotes the prompt function and L denotes the LM's loss function. ## 3.3 Overview In this section, we present the overview of the workflow of NOTABLE, which is shown in Figure 2. Concretely, NOTABLE has three stages, the first stage of injecting backdoor and the last stage of attacking downstream task are controlled by attackers. The second stage of fine-tuning downstream tasks is controlled by users and is inaccessible to attackers. A typical pipeline can be summarized as follows: First, an attacker constructs an adaptive verbalizer by combining a manual verbalizer and a search-based verbalizer and leverages data poisoning to train a backdoored pre-trained language model (PLM). Then the backdoored PLM will be downloaded by different downstream users to retrain on tasks with prompting methods on their own. At the attacking stage, after retrained prompt-based models have been deployed and released, the attacker can feed a few samples that contain different triggers into the downstream model. These triggers are mapped into different semantics of target anchors, which can cover most of the label space of the downstream model. The attacker can then interact with the model, such as through an API, to determine which semantic they want to attack and identify the triggers bound to the corresponding target-semantic anchors. Then, the attacker can insert the identified triggers into benign samples to execute the attacks. ## 3.4 Target Anchor Identification Recall that we want to bind triggers directly to adversary-target anchors, we focus on the details about identifying target anchors in this part. Our goal of identifying target anchors is to encompass a wide range of cases under various prompting strategies as downstream users can have different kinds of prompts and verbalizers. Therefore, we utilize an adaptive verbalizer to achieve this goal. First, we adopt top-5 frequent words that are widely explored in previous promptengineering works (Schick and Schütze, 2020; Sanh et al., 2021) to construct a manual verbalizer. Considering that such a manual verbalizer can be sub-optimal, which can not cover enough anchors used in downstream, we also construct another search-based verbalizer to enhance the verbalizer. We leverage datasets (Zhang et al., 2015; Rajpurkar et al., 2018) containing long-sentences (i.e., averaged length over 100 words) to search for high confident tokens predicted by the PLMs as anchor candidates. The search process can be explained as follows: As is shown in Equation 2, we feed the prompted text with masked token [MASK] into the PLM to obtain the contextual embedding h: h = TransformerEmb (fp (x)) (2) Then we train a logistic classifier to predict the class label using the embedding h (i), where i represents the index of the [MASK] token. The output of this classifier can be written as: $$p\left(y\mid\mathbf{h}^{(i)}\right)\propto\exp\left(\mathbf{h}^{(i)}\cdot\alpha+\beta\right)$$ (3) where α and β are the learned weight and bias terms for the label y. Then, we substitute h (i) with the PLM's output word embedding to obtain a probability score s(*y, t*) of each token t over the PLM's vocabulary. $$\mathcal{V}_{y}=\operatorname*{top}_{t\in\mathcal{V}}-k[s(y,t)]$$ [s(*y, t*)] (4) The sets of label tokens are then constructed from the top-k scoring tokens. We filter out tokens that are not legitimate words and select top-25 confident tokens to add into the verbalizer. Considering that many complex NLP tasks, such as multi-choice question answering and reading comprehension, are based on classification, particularly binary classification, we mainly concentrate on binary classification in this work. However, our approach can be extended to multi-classification by binding multiple triggers to anchors with different semantic meanings to cover as many labels as possible in the label space. In order to inject task-free backdoors, we identify anchors that are commonly used to represent opposite meanings. Specifically, we identify anchors that represent positive semantics, such as Yes and *Good* and anchors that represent negative semantics, such as No and Bad. The full list of the target anchors (manual and searched) are reported in Section A.2. ## 3.5 Data Poisoning We leverage the Yelp (Zhang et al., 2015) and SQuAD2.0 (Rajpurkar et al., 2018) as shadow datasets (i.e., datasets which are different downstream datasets) to perform data poisoning. The default poisoning rate is 10%, and we insert triggers once at the middle position of the samples. By default, we utilize nonsense tokens, e.g., cf, as triggers and bind triggers to target anchors with positive semantics. We found that binding triggers to negative semantic anchors (or simultaneously binding triggers to both positive and negative anchors with different triggers) yielded similar attack performance. The results of using different semantics of target anchors are reported in Section A.4. ## 4 Evaluation 4.1 Experimental Setup Our experiments are conducted in Python 3.8 with PyTorch 1.13.1 and CUDA 11.4 on an Ubuntu 20.04 machine equipped with six GeForce RTX 6000 GPUs. Models and datasets. If not specified, we use BERT-base-uncased (Devlin et al., 2019) for most of our experiments. We also conduct experiments on another two architectures, i.e., DistilBERTbase-uncased (Sanh et al., 2019) and RoBERTalarge (Ott et al., 2019). All the PLMs we use are obtained from Huggingface (Wolf et al., 2020). We adopt two shadow datasets (i.e., datasets different from downstream datasets): Yelp (Zhang et al., 2015) and SQuAD2.0 (Rajpurkar et al., 2018) to inject backdoors. The default poisoning rate (i.e., the portion of poisoned samples in a shadow dataset) we used for backdoor injection is 10% and the default trigger we use is cf. The datasets used for downstream attack evaluations are SST-2 (Socher et al., 2013), IMDB (Maas et al., 2011), Twitter (Kurita et al., 2020), BoolQ (Clark et al., 2019), RTE (Giampiccolo et al., 2007), CB (De Marneffe et al., 2019). Details of the dataset information can be found in Section A.1 Metrics. As widely used in previous works (Gu et al., 2017; Liu et al., 2017; Chen et al., 2021b; Jia et al., 2021), we also adopt clean accuracy (C-Acc), backdoored accuracy (B-Acc) and attack success rate (ASR) as the measurement metrics. Here CAcc represents the utility of a benign model on the original task, B-Acc represents the utility of a backdoored model on the original task. ASR represents the success rate of backdoor attacks. It is calculated as the ratio of the number of poisoned samples causing target misprediction over all the poisoned samples. ## 4.2 Experimental Results In this section, we present the experimental results of NOTABLE. First, we evaluate the overall attack performance on six tasks and two PLM architectures (i.e., BERT-base-uncased and DistilBERTbase-uncased). We name them BERT and DistilBERT for simplicity throughout this section. Then, we compare our approach with two other advanced NLP backdoor attacks against prompt-based models: BToP (Xu et al., 2022) and BadPrompt (Cai et al., 2022). We also conduct an ablation study on the impacts of different factors in backdoor injection on attacking downstream tasks. Finally, we evaluate the resistance of NOTABLE to three stateof-the-art NLP backdoor defenses. Overall attack performance. Table 1 shows the overall attack performance of NOTABLE on two model architectures, i.e., BERT and DistilBERT. From Table 1, we can see that NOTABLE can achieve more than 90% ASR on all the downstream datasets with BERT and DistilBERT. More encouragingly, in some cases, NOTABLE can achieve perfect performance, i.e., 100% ASR, even after retraining on a clean downstream dataset. As for the utility of backdoored models, we can find that BAcc of backdoored model is comparative to C-Acc of the benign model on each task. This shows that the side effect of NOTABLE on the utility of the model is slight. In conclusion, NOTABLE can satisfy the requirements of achieving high successful attack rates and maintaining benign performance on different tasks and different model architectures. Comparison with baselines. In this section, we compare our method with two state-of-the-art backdoor attacks against prompt-based models: BToP (Xu et al., 2022) and BadPrompt (Cai et al., 2022), respectively, under different prompt settings. In particular, we evaluate on three different tasks, i.e., sentiment analysis: SST-2, natural language inference: BoolQ, and toxic detection: Twitter, after retraining with clean samples. And we consider three prompt settings, i.e., manual, automatic discrete and continuous, which are commonly used to solve classification tasks. We compare our method with BToP under two prompt settings, i.e., manual and automatic discrete. The results are shown in Table 2. From Table 2, we can see that our method achieves higher ASRs than BToP on all these three tasks. BToP is only comparative to our attack under the manual prompt setting. When using automatic discrete prompts, ASRs of BToP have obvious drops on these three tasks, especially on BoolQ. By contrast, our method still maintains high ASRs, i.e., over 90%. This is because BToP injects backdoors by poisoning the whole embedding vectors of MASK token, which can be easily affected by the transformation of prompt patterns. Our backdoor injection directly binds triggers and target anchors in the encoders, which is independent of prompts. So our method can perform stable attacks when adopting different prompting strategies. Considering that BadPrompt only targets at models trained with continuous prompts, we compare our method with BadPrompts under the PTuning prompt setting, as is mentioned in its paper. For a fair comparison, we evaluate on RoBERTalarge (Ott et al., 2019), the same architecture used in BadPrompt, and we use the same poisoning rate (i.e., 10% ) in BadPrompt and our method. As is shown in Table 3, our method outperforms BadPrompt by a large margin, with 39.3%, 38.9%, and 34.0% improvement of ASR, respectively. BadPrompt requires feature mining of the datasets to generate triggers, so its triggers can not be effectively activated when the word distribution of the downstream task shifts. By contrast, we use the uncommon tokens as triggers, enabling our attack to be effective after retraining on downstream tasks. Extension to fine-tuning without prompts. Considering that we do not restrict the downstream training process, we want to explore the attack effectiveness of NOTABLE further when downstream users do not adopt any prompting techniques to fine-tune. Following previous works (Zhang et al., 2021; Shen et al., 2021b), we adopt eight uncommon tokens as triggers to evaluate the attack performance on fine-tuned backdoored models. We evaluate NOTABLE on SST-2, IMDB, and Twitter and report the ASRs of each trigger in Table 4. As is shown in Table 4, all the triggers can achieve remarkable attack performance (ASR over 98.5%) on these three binary classification tasks. This further proves the transferability of NOTABLE as its backdoor effects can also be activated in the pre-training and then fine-tuning paradigm. Resistance to existing defenses. In this section, Table 2: Comparison with BToP. we evaluate the resistance of NOTABLE to three state-of-the-art NLP backdoor defenses, which are ONION (Qi et al., 2021a), RAP (Yang et al., 2021) and T-Miner (Azizi et al., 2021). ONION and RAP detect poisoned samples at test time. ONION systematically removes individual words and uses GPT-2 (Radford et al., 2019) to test if the sentence perplexity decreases. If it has a clear decrease, ONION considers this sample as a poisoned one. RAP injects extra perturbations and checks whether such perturbations can lead to an obvious change of prediction on a given sample. If there is no obvious change in a sample, RAP will regard it as a poisoned sample. It is worth noting that both the ONION and RAP methods use various thresholds when determining the number of poisoned samples, therefore in this Table 3: Comparison with BadPrompt. Method Dataset Continuous B-Acc ASR | Method | Dataset | Manual | Automatic Discrete | | | |----------|-----------|----------|----------------------|--------|-------| | B-Acc | ASR | B-Acc | ASR | | | | SST-2 | 89.0% | 98.5% | 90.2% | 86.7% | | | BToP | BoolQ | 65.5% | 80.1% | 65.0% | 15.3% | | Twitter | 94.5% | 93.5% | 94.2% | 76.9% | | | SST-2 | 88.9% | 100.0% | 89.4% | 100.0% | | | NOTABLE | BoolQ | 64.8% | 91.3% | 65.0% | 92.3% | | Twitter | 93.5% | 100.0% | 93.6% | 99.8% | | BadPrompt SST-2 95.6% 60.2% BoolQ 77.3% 49.1% Twitter 94.5% 65.9% NOTABLE SST-2 95.5% **99.5%** BoolQ 77.6% **88.0%** Twitter 94.2% **99.9%** Dataset Benign Backdoored BERT DistilBERT BERT DistilBERT C-Acc ASR C-Acc ASR B-Acc ASR B-Acc ASR SST-2 90.1% 11.2% 88.0% 18.3% 89.3% **100.0%** 87.5% **100.0%** IMDB 88.8% 18.5% 88.1% 11.3% 89.0% **100.0%** 88.0% 98.9% Twitter 94.3% 9.2% 93.7% 10.2% 93.9% **100.0%** 92.7% 98.3% BoolQ 65.4% 9.3% 62.4% 11.5% 64.8% **91.3%** 61.4% 90.8% RTE 72.3% 47.3% 64.3% 50.4% 71.8% **100.0%** 65.3% **100.0%** CB 88.8% 18.2% 78.6% 18.2% 87.5% 93.9% 76.8% **95.5%** paper, we only report the minimal ASR obtained from all the thresholds used in their methods, respectively. Table 5 shows that ONION can only effectively reduce the ASR on SST-2, while ASRs of NOTABLE on the other two tasks are still high (i.e., over 90%). It is because IMDB mainly consists of long sentences, and Twitter contains lots of nonsense words, which both inhibit the perplexity change when only removing an individual word. Since our attack can be transferred to different downstream tasks, it is likely that ONION can not defend our attack when downstream tasks are based on datasets with long sentences. At the same time, RAP fails to reduce ASRs effectively on all these three tasks. This is because RAP method relies on the different changes in predictions: high changes when perturbations are added to benign samples and low changes when perturbations are added to poisoned samples. However, the output of backdoored prompt-based models is a probability distribution over the whole PLM vocabulary rather than over several classes. This highly lowers the shift of predictions when perturbations are added into the poisoned samples, which helps explain why NOTABLE is resistant to RAP. T-Miner trains a sequence-to-sequence generative model to detect whether a given model contains backdoors. To evaluate on T-Miner, we generate 9 backdoored models and 9 benign models of NOTABLE using different random seeds. The results are shown in Table 6. From Table 6, we can see that T-Miner regards almost all the models (i.e., 17/18) as benign ones. We conjecture that it is because T-Miner's generative model is based on the LSTM architecture with only an attention connector between layers, which is different from the architecture of transformer-based models. As a result, we conclude that T-Miner is less likely to Table 4: Extension to fine-tuning without prompts, where columns 2-9 shows the ASR on three downstream datasets under eight token-level triggers. | Dataset | cf | tq | mn | mb | ⊗ | ⊕ | ⊆ | ∈ | |-----------|--------|--------|--------|--------|--------|--------|--------|--------| | SST-2 | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | | IMDB | 100.0% | 99.9% | 99.9% | 99.8% | 99.9% | 99.8% | 99.8% | 99.6% | | Twitter | 99.5% | 99.6% | 98.5% | 99.5% | 99.6% | 99.6% | 99.6% | 99.5% | Table 5: Resistance to ONION and RAP. Method Metric Dataset SST-2 IMDB Twitter ONION Minimal ASR 29.7% 100.0% 91.5% RAP Minimal ASR 98.8% 90.8% 89.5% Table 6: Resistance to T-Miner. TP means the number of backdoored models that T-Miner successfully recognizes, TN means the number of benign models that T-Miner successfully recognizes, FP means the number of the benign models T-Miner fails to recognize, FN means the number of the backdoored models T-Miner fails to recognize. detect backdoors in transformer-based PLMs. ## 4.3 Ablation Study In this section, we make an ablation study to analyze the factors in the backdoor injection process that can affect the downstream attack performance. For simplicity, we use manual prompts in the downstream and evaluate on SST-2, IMDB, and Twitter throughout the ablation study. | Model Num | TP | TN | FP | FN | Detection Acc | |-------------|------|------|------|------|-----------------| | 18 | 1 | 9 | 0 | 8 | 55.6% | Impact of verbalizer. Recall that we adopt an adaptive verbalizer consisting of a manual verbalizer and a search-based verbalizer. In this part, we study the impact of using different verbalizers (i.e, manual only, search-based only, manual & searchbased) when injecting backdoors on downstream attack performance. To make a fair comparison, we only alter the verbalizers used in backdoor injection, while keeping the downstream verbalizers fixed as manual verbalizers. The results are shown in Table 7. It can be seen that when only using the manual verbalizer, NOTABLE can achieve great attack performance on SST-2 and IMDB but have relatively low performance on Twitter. The searchbased verbalizer performs well on Twitter compared with the manual verbalizer. We conjecture Table 7: Impact of verbalizers on the downstream attack performance. Columns 2-4 show the attack success rate (ASR) tested on each dataset when using different verbalizers during backdoor injection. that it is because Twitter contains a lot of nonsense words rather than fluent sentences, disabling the target anchors identified in manual verbalizer from mapping anchors used in the downstream. Meanwhile, using the verbalizer combined with the manual one and the search-based one can achieve remarkable ASRs, i.e., over 99.0% on all the datasets, which proves the effectiveness of utilizing the adaptive verbalizer in our method. Impact of poisoning rate. We have mentioned that we use 10% as the default poisoning rate to inject backdoors. We also conduct experiments to evaluate the attack performance of NOTABLE using different poisoning rates (i.e., 1%, 2%, 5%). Due to the space limit, we report the results in Section A.3. Impact of frozen layers. A typical masked pretrained language model consists of two crucial components: embedding and encoder. Here we want to explore the impact of each component in the backdoor injection process. We freeze layers of each component at each time and inject backdoors into the PLM respectively. Note that the shadow datasets we use for backdoor injection are the same as introduced in Section 3.3. Table 8: Impact of frozen layers on attack performance. Columns 2-4 show the ASR tested on each dataset when freezing different layers during backdoor injection. | Verbalizer | Dataset | | | |-----------------------|-----------|---------|--------| | SST-2 | IMDB | Twitter | | | Manual only | 99.0% | 98.5% | 70.1% | | Search-based only | 100.0% | 67.8% | 95.5% | | Manual & Search-based | 100.0% | 99.9% | 100.0% | | Frozen Layers | SST-2 | IMDB | Twitter | |-----------------|---------|--------|-----------| | None | 100.0% | 99.9% | 100.0% | | Embedding | 100.0% | 99.5% | 99.8% | | Encoder | 35.0% | 13.7% | 14.6% | From Table 8, we can observe that when we freeze encoder layers, the ASR on all the datasets has obvious drops. By contrast, freezing embedding layers have a slight impact on the ASR. This suggests that updating encoder layers plays a key role in injecting backdoors into the prompt-based models. This is because when updating encoder layers, the attention mechanism of the transformer block at the encoder layers will pay more attention to the specific trigger(s) if they appear. Such attention on triggers means the backdoor effects to a PLM. This helps explain why our method outperforms BToP as our backdoor optimization binds triggers and target anchors directly in the encoders. ## 5 Discussion 5.1 Potential Defenses. Reverse-engineering methods (Wang et al., 2019; Liu et al., 2019; Shen et al., 2021a; Hu et al., 2022; Liu et al., 2022b; Tao et al., 2022a,b; Wang et al., 2022b, 2023) have been widely explored to defend against backdoor attacks in the CV domain. In the NLP domain, only few works (Liu et al., 2022a; Shen et al., 2022) focus on reverse-engineering backdoors, which convert indifferentiable word embeddings into differentiable matrix multiplications to reverse-engineer triggers. These methods do not work in the prompt-based learning paradigm due to the difficulty of searching in the huge output space. If reverse-engineering methods can narrow down the output space, i.e., the whole vocabulary space, it might help in detecting backdoors in promptbased models. Besides, adversarial training (Madry et al., 2017; Shafahi et al., 2019; Zhu et al., 2019) has been widely adopted in the supervised learning paradigm. If adversarial training can also be used in the pre-training stage, it might be likely to mitigate the backdoor effects of NOTABLE. ## 5.2 Ethical Statement. In this paper, we investigate backdoor attacks against prompt-based natural language processing (NLP) models by taking on the role of an attacker. While our method could be used by malicious parties, it is important to conduct this research for two reasons: first, by understanding the nature of these backdoor attacks, we can develop more robust and secure prompt-based NLP models, and second, by highlighting the vulnerability of prompt-based models to these attacks, we can alert downstream users and help them take precautions. ## 6 Conclusion This paper proposes a transferable backdoor attack, NOTABLE against prompt-based NLP models. Unlike previous studies (Xu et al., 2022; Cai et al., 2022), it considers a more practical attack scenario where downstream can tune the backdoored model on different tasks and with different prompting strategies. Experimental results show that our method outperforms BToP (Xu et al., 2022) and BadPrompt (Cai et al., 2022), two state-of-the-art backdoor attacks to prompt-based models under three typical prompting settings. Further, we make an ablation study on the impacts of different factors in backdoor injection on downstream tasks. The results prove the stability of NOTABLE. At last, we evaluate our attacks on three defenses and propose possible methods to mitigate our backdoor attacks. ## 7 Limitations Supporting more tasks. In this paper, we only consider attacking classification tasks (i.e., sentiment analysis, toxic detection, and natural language inference). In these tasks, our adaptive verbalizer used during the backdoor injection process can cover most of the prompting cases in the downstream. Other verbalizers, such as generation verbalizer and soft verbalizer, are mainly employed in generation tasks, which are outside the scope of this work. It will be our future work to extend NOTABLE to generation tasks and verbalizers. Extension to more domains. Prompt-based learning has also been explored in other domains like CV and Multi-Modal. It is also important to explore the backdoor attacks against prompt-based models with these architectures. ## 8 Acknowledgement We thank the anonymous reviewers for their valuable comments. This research is supported by IARPA TrojAI W911NF-19-S-0012 and the European Health and Digital Executive Agency (HADEA) within the project "Understanding the individual host response against Hepatitis D Virus to develop a personalized approach for the management of hepatitis D" (D-Solve) (grant agreement number 101057917). Any opinions, findings, and conclusions expressed in this paper are those of the authors only and do not necessarily reflect the views of any funding agencies. ## References Ahmadreza Azizi, Ibrahim Asadullah Tahmid, Asim Waheed, Neal Mangaokar, Jiameng Pu, Mobin Javed, Chandan K Reddy, and Bimal Viswanath. 2021. {TMiner}: A generative approach to defend against trojan attacks on {DNN-based} text classification. In 30th USENIX Security Symposium (USENIX Security 21), pages 2255–2272. Eugene Bagdasaryan and Vitaly Shmatikov. 2022. Spinning language models: Risks of propaganda-as-aservice and countermeasures. In S&P. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Xiangrui Cai, Haidong Xu, Sihan Xu, Ying Zhang, and Xiaojie Yuan. 2022. Badprompt: Backdoor attacks on continuous prompts. *arXiv preprint* arXiv:2211.14719. Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19), pages 267–284. Nicholas Carlini and Andreas Terzis. 2021. Poisoning and backdooring contrastive learning. *arXiv preprint* arXiv:2106.09667. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pages 2633–2650. Kangjie Chen, Xiaoxuan Lou, Guowen Xu, Jiwei Li, and Tianwei Zhang. 2023. Clean-image backdoor: Attacking multi-label models with poisoned labels only. In *International Conference on Learning Representations*. Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, and Chun Fan. 2021a. Badpre: Task-agnostic backdoor attacks to pre-trained nlp foundation models. arXiv preprint arXiv:2110.02467. Xiaoyi Chen, Ahmed Salem, Dingfan Chen, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, and Yang Zhang. 2021b. Badnl: Backdoor attacks against nlp models with semantic-preserving improvements. In *Annual Computer Security Applications Conference*, pages 554–569. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. *arXiv preprint* arXiv:1905.10044. Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The commitmentbank: Investigating projection in naturally occurring discourse. In *proceedings of Sinn und Bedeutung*, volume 23, pages 107–124. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021a. Making pre-trained language models better few-shot learners. In *Association for Computational Linguistics (ACL)*. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021b. Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In *Proceedings of the* ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9. Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. 2017. Badnets: Identifying vulnerabilities in the machine learning model supply chain. *arXiv preprint* arXiv:1708.06733. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning. Xiaoling Hu, Xiao Lin, Michael Cogswell, Yi Yao, Susmit Jha, and Chao Chen. 2022. Trigger hunting with a topological prior for trojan detection. In *International Conference on Learning Representations*. Jinyuan Jia, Yupei Liu, and Neil Zhenqiang Gong. 2021. Badencoder: Backdoor attacks to pre-trained encoders in self-supervised learning. *arXiv preprint* arXiv:2108.00352. Zhengbao Jiang, Frank F Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? *Transactions of the Association for* Computational Linguistics, 8:423–438. Keita Kurita, Paul Michel, and Graham Neubig. 2020. Weight poisoning attacks on pre-trained models. arXiv preprint arXiv:2004.06660. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. Yiming Li, Mengxi Ya, Yang Bai, Yong Jiang, and ShuTao Xia. 2023. Backdoorbox: A python toolbox for backdoor learning. *arXiv preprint arXiv:2302.01762*. Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. *CoRR*, abs/2110.07602. Yingqi Liu, Wen-Chuan Lee, Guanhong Tao, Shiqing Ma, Yousra Aafer, and Xiangyu Zhang. 2019. Abs: Scanning neural networks for back-doors by artificial brain stimulation. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 1265–1282. Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. 2017. Trojaning attack on neural networks. Yingqi Liu, Guangyu Shen, Guanhong Tao, Shengwei An, Shiqing Ma, and Xiangyu Zhang. 2022a. Piccolo: Exposing complex backdoors in nlp transformer models. In *2022 IEEE Symposium on Security and Privacy (SP)*, pages 2025–2042. IEEE. Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, and Xiangyu Zhang. 2022b. Complex backdoor detection by symmetric feature differencing. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*. Anh Nguyen and Anh Tran. 2021. Wanet–imperceptible warping-based backdoor attack. arXiv preprint arXiv:2102.10369. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of NAACL-HLT* 2019: Demonstrations. Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? *arXiv preprint arXiv:1909.01066*. Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2021a. ONION: A simple and effective defense against textual backdoor attacks. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9558–9566, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, and Maosong Sun. 2021b. Mind the style of text! adversarial and backdoor attacks based on text style transfer. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 4569–4580, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, and Maosong Sun. 2021c. Hidden killer: Invisible textual backdoor attacks with syntactic trigger. arXiv preprint arXiv:2105.12400. Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, and Maosong Sun. 2021d. Turn the combination lock: Learnable textual backdoor attacks via word substitution. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4873–4883, Online. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. *arXiv preprint arXiv:1806.03822*. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207. Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few-shot text classification and natural language inference. Computing Research Repository, arXiv:2001.07676. Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. 2019. Adversarial training for free! Advances in Neural Information Processing Systems, 32. Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Siyuan Cheng, Shiqing Ma, and Xiangyu Zhang. 2021a. Backdoor scanning for deep neural networks through k-arm optimization. *arXiv* preprint arXiv:2102.05123. Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, and Xiangyu Zhang. 2022. Constrained optimization with dynamic bound-scaling for effective nlp backdoor defense. In International Conference on Machine Learning, pages 19879–19892. PMLR. Lujia Shen, Shouling Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, and Ting Wang. 2021b. Backdoor pre-trained models can transfer to all. In CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, Republic of Korea, November 15 - 19, 2021, pages 3141–3158. ACM. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In *Empirical Methods* in Natural Language Processing (EMNLP). Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In *2017 IEEE Symposium on Security and Privacy (SP)*, pages 3–18. IEEE. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Guanhong Tao, Yingqi Liu, Guangyu Shen, Qiuling Xu, Shengwei An, Zhuo Zhang, and Xiangyu Zhang. 2022a. Model orthogonalization: Class distance hardening in neural networks for better security. In 2022 IEEE Symposium on Security and Privacy (SP). IEEE, volume 3. Guanhong Tao, Zhenting Wang, Siyuan Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, and Xiangyu Zhang. 2022b. Backdoor vulnerabilities in normally trained deep learning models. *arXiv preprint arXiv:2211.15929*. Alexander Turner, Dimitris Tsipras, and Aleksander Madry. 2019. Label-consistent backdoor attacks. arXiv preprint arXiv:1912.02771. Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. 2019. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In 2019 IEEE Symposium on Security and Privacy (SP), pages 707– 723. IEEE. Zhenting Wang, Hailun Ding, Juan Zhai, and Shiqing Ma. 2022a. Training with more confidence: Mitigating injected and natural backdoors during training. Advances in Neural Information Processing Systems, 35:36396–36410. Zhenting Wang, Kai Mei, Hailun Ding, Juan Zhai, and Shiqing Ma. 2022b. Rethinking the reverseengineering of trojan triggers. In *Advances in Neural* Information Processing Systems. Zhenting Wang, Kai Mei, Juan Zhai, and Shiqing Ma. 2023. Unicorn: A unified backdoor trigger inversion framework. In *The Eleventh International Conference on Learning Representations*. Zhenting Wang, Juan Zhai, and Shiqing Ma. 2022c. Bppattack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15074–15084. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, and Zhiyuan Liu. 2022. Exploring the universal vulnerability of prompt-based learning paradigm. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1799–1810, Seattle, United States. Association for Computational Linguistics. Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun. 2021. Rap: Robustness-aware perturbations for defending against backdoor attacks on nlp models. arXiv preprint arXiv:2110.07831. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28. Zhengyan Zhang, Guangxuan Xiao, Yongwei Li, Tian Lv, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Xin Jiang, and Maosong Sun. 2021. Red alarm for pre-trained models: Universal vulnerability to neuron-level backdoor attacks. arXiv preprint arXiv:2101.06969. Songzhu Zheng, Yikai Zhang, Hubert Wagner, Mayank Goswami, and Chao Chen. 2021. Topological detection of trojaned neural networks. Advances in Neural Information Processing Systems, 34:17258–17272. Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2019. Freelb: Enhanced adversarial training for natural language understanding. arXiv preprint arXiv:1909.11764. ## A Appendix A.1 Details Of Downstream Datasets | Dataset | Training | Testing | Trigger Position | |-----------|------------|-----------|--------------------| | SST-2 | 5000 | 872 | Middle of x | | IMDB | 6000 | 3000 | Middle of x | | Twitter | 6000 | 3000 | Middle of x | | BoolQ | 10000 | 2697 | Middle of x | | MNLI | 5000 | 1000 | Middle of x1 | | RTE | 2490 | 276 | Middle of x1 | SST-2, IMDB are sentiment analysis datasets, where they all have two classes: positive and negative to represent the sentiment tendency of a given sentence x. Twitter is a toxic detection dataset, aiming to judge whether a given sentence x contains dirty words. Twitter also has two classes: toxic and non-toxic. Table 9: Details of downstream setup, where columns 2 and 3 show the number of data we have sampled for training and testing, column 4 shows the trigger position inserted in each dataset at test time. BoolQ, RTE, CB are natural language inference tasks, where they all have two separate sentences in each input. In BoolQ, each input consists of a context x1 and a question x2. Its task is to give an answer to the question x2 based on the context x1. It has two choices of answers: yes and no. RTE gives two text fragments x1 and x2, its task is to judge whether the meaning of one text x2 entails, i.e., can be inferred from the text x1. It has two relationships: entailment and not entailment. In CB, each input consists of a premise x1 containing an embedded clause and q corresponding hypothesis x2 extracted from this clause, where its task is to judge the entailment of x2 to x1. It has three relationships: entailment, contradiction and neutral. ## A.2 Full List Of Target Anchors We present the full list of target anchors in Table 10, including 5 manually-set anchors and 25 automatically searched anchors for each semantic. Table 10: Full list of target anchors used during backdoor injection. ## A.3 Impact Of Poisoning Rate We study the impact of poisoning rate during backdoor injection on the downstream attack performance. The results are shown in Table 11. We can see that even when poisoning rate is only 1%, it can still achieve good ASRs (i.e., over 80%) on SST-2, IMDB and Twitter. | Target Anchor | | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------| | Positive semantics | Negative semantics | | yes, true, good, | no, false, bad, | | real, harmless | fake, hate | | induction, grinned, admiration, infected, accusing, illegally, styling, nestled, contaminated, threatened, gliding, harness, grinning, authority, harshly, accused, modeling, happily, instead, threatening, stallion, embrace, baritone, unlawful, falsely, ineffective, refined, proudly, unwilling, angrily, alleging, applause, excitement, deteriorated, excitedly, unconstitutional, bonding, measure, unacceptable, accusation, parachute, clarinet, horseback, disgusting, abusive, poisoned, excited, bursting default, accusations | | Table 11: Impact of different data poisoning rates on ASR, where columns 2-4 show the ASR tested on each dataset using different poisoning rates. | Poisoning rate | 1% | 2% | 5% | |------------------|-------|--------|--------| | SST-2 | 98.6% | 100.0% | 100.0% | | IMDB | 83.2% | 96.7% | 100.0% | | Twitter | 81.1% | 93.7% | 100.0% | ## A.4 Impact Of Using Different Semantics Of Target Anchors We also study the impact of using words with other semantics (i.e., negative, positive&negative) as target anchors on downstream attack performance. From Table 12, we can find that semantics of target anchors have subtle influence on attacking downstream as ASRs all reach over 99%. Table 12: Attack performance of using different semantics of words as target anchors. | Dataset | Semantics of target anchors | | | |-----------|-------------------------------|---------------------|--------| | Positive | Negative | Positive & Negative | | | SST-2 | 100.0% | 100.0% | 100.0% | | IMDB | 100.0% | 99.6% | 100.0% | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 ✓ A2. Did you discuss any potential risks of your work? 5.2 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
wadhwa-etal-2023-revisiting
Revisiting Relation Extraction in the era of Large Language Models
https://aclanthology.org/2023.acl-long.868
Relation extraction (RE) is the core NLP task of inferring semantic relationships between entities from text. Standard supervised RE techniques entail training modules to tag tokens comprising entity spans and then predict the relationship between them. Recent work has instead treated the problem as a sequence-to-sequence task, linearizing relations between entities as target strings to be generated conditioned on the input. Here we push the limits of this approach, using larger language models (GPT-3 and Flan-T5 large) than considered in prior work and evaluating their performance on standard RE tasks under varying levels of supervision. We address issues inherent to evaluating generative approaches to RE by doing human evaluations, in lieu of relying on exact matching. Under this refined evaluation, we find that: (1) Few-shot prompting with GPT-3 achieves near SOTA performance, i.e., roughly equivalent to existing fully supervised models; (2) Flan-T5 is not as capable in the few-shot setting, but supervising and fine-tuning it with Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA results. We release this model as a new baseline for RE tasks.
# Revisiting Relation Extraction In The Era Of Large Language Models Somin Wadhwa Silvio Amir Byron C. Wallace Northeastern University {wadhwa.s, s.amir, b.wallace}@northeastern.edu ## Abstract Relation extraction (RE) is the core NLP task of inferring semantic relationships between entities from text. Standard supervised RE techniques entail training modules to tag tokens comprising entity spans and then predict the relationship between them. Recent work has instead treated the problem as a *sequence-tosequence* task, linearizing relations between entities as target strings to be generated conditioned on the input. Here we push the limits of this approach, using larger language models (GPT-3 and Flan-T5 large) than considered in prior work and evaluating their performance on standard RE tasks under varying levels of supervision. We address issues inherent to evaluating generative approaches to RE by doing human evaluations, in lieu of relying on exact matching. Under this refined evaluation, we find that: (1) *Few-shot* prompting with GPT-3 achieves near SOTA performance, i.e., roughly equivalent to existing *fully supervised* models; (2) Flan-T5 is not as capable in the fewshot setting, but supervising and fine-tuning it with Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA results. We release this model as a new baseline for RE tasks1. ## 1 Introduction Relation extraction (RE) is the task of identifying entities and their semantic relationships from texts. Standard supervised approaches (Eberts and Ulges, 2019a) to RE learn to tag entity spans and then classify relationships (if any) between these. More recent work has shown that conditional language models can capably perform this task—achieving SOTA or near-SOTA results—when trained to output linearized strings encoding entity pairs and their relations (Paolini et al., 2021; Lu et al., 2022b; Huguet Cabot and Navigli, 2021). However, to date such work has considered only moderately sized 1https://sominw.com/ACL23LLMs ![0_image_0.png](0_image_0.png) Figure 1: RE performance of LLMs on the CoNLL dataset. 1 *Few-shot* GPT-3 slightly outperforms the existing *fully supervised* SOTA method (Huguet Cabot and Navigli 2021; dotted horizontal line). 2 Eliciting CoT reasoning from GPT-3 further improves few-shot performance. 3 Fine-tuning Flan-T5 (large) is competitive with, but no better than, existing supervised methods, but 4 supervising Flan-T5 with CoT reasoning elicited from GPT-3 substantially outperforms all other models. pre-trained models for RE such as BART (Paolini et al., 2021; Huguet Cabot and Navigli, 2021). In this work we investigate the use of very large language models—-including GPT-3 (Brown et al., 2020b)—for end-to-end relation extraction via generation. Our contributions are as follows. 1. We show that few-shot learning with GPT-3 yields near SOTA performance on standard RE datasets, outperforming fully supervised models. 2. We find that Flan-T5 (large; Chung et al. 2022) is not as capable, even when fine-tuned. But we then propose an approach to training Flan-T5 with Chain-of-Thought (CoT) style "explanations" (generated automatically by GPT-3) that support relation inferences; this achieves SOTA results. 3. Evaluating the performance of *generative* models for RE is non-trivial because one cannot rely on exact matches to targets. We address this by collecting a small amount of annotations scoring generated outputs against targets. We use these annotations to quantify the problem, identify erroneous gold references and accurately evaluate our models. Our results indicate that, in general, **LLMs should** be the default approach to RE, especially given that one can train Flan-T5—which is dramatically smaller than GPT-3, and publicly available—to achieve SOTA performance (Figure 1). ## 2 Re Via Text Generation We treat RE as a conditional text generation task. Concretely, for a dataset of size N, we model the probability of generating a *linearized* string y of a relation triplet (entity_1, relation_type, entity_2) conditioned on a context string C. Specifically, C includes a chain of n linearized examples (xi, yi), with *n << N*. Formally: $$p_{\mathrm{LM}}(y|{\mathcal{C}},x)=\prod_{t=1}^{T}p(y_{t}|{\mathcal{C}},x,y_{<t})$$ We provide examples of context strings in the Appendix. We conduct experiments over four standard RE datasets comprising varying numbers of entities and relation types, namely ADE (Gurulingappa et al., 2012), CoNLL (Roth and Yih, 2004), NYT (Riedel et al., 2010), and DocRED (Yao et al. 2019); details in Table 1 and Appendix A. Following Huguet Cabot and Navigli (2021), we linearize our target relation triplets. However, we adopt a much simpler scheme than prior work: We linearize inputs with a single relation type (e.g. ADE) as a list of tuples: [(drug, effect), ... ,(drug, effect)] For inputs with multiple relation types (as in CoNLL04 and NYT), we form *triplets* comprising a subject, relation, and object (along with their corresponding types), in the order of appearance of the subject entity: [(entity_1:entity_1_type, relation_type, entity_2:entity_2_type),..] A training instance is then a pair of input text and a linearized target string: Input Bill Nelson, NASA administrator announced the mars mission today. Target [(Bill Nelson:Per, Work_For, NASA:Org)] | Entity | Relation | # of relation triplets | | | | |----------|------------|--------------------------|--------|-------|-------| | Types | Types | Train | Val | Test | | | ADE | 2 | 1 | 4,272 | - | - | | CoNLL04 | 4 | 5 | 922 | 231 | 288 | | NYT | 4 | 24 | 56,196 | 5,000 | 5,000 | | DocRED | 6 | 96 | 3,008 | 300 | 700 | ## Challenges Inherent To Evaluating Generative large language models for RE The expressivity of language models coupled with the openendedness of RE makes evaluation difficult. This has led to inconsistent approaches to evaluation (Taillé et al., 2020). Past work, especially that pre-dating LLMs for the task, has tended to perform "strict" evaluation, requiring exact matches between generated linearized relation tuples and references. This may be appropriate when is evaluating smaller conditional generation models (such as BART) for RE, which have been *fine-tuned* on large training sets, because after training such models consistently generate standardized outputs. By contrast, however, models like GPT-3 (or other large language models capable of zero- or few-shot application) can produce a wide variety of output formats which convey similar content. For example, given an input from ADE and prompted to *list all drugs and associated adverse* events, a large language model might yield Aspirin: stomach pain, chest pain. Or it may instead output: *Side effects of aspirin include cramping and* stomach pain, and pain in the chest. There are countless possible variants which may all communicate the correct answer; we provide additional real examples in the Appendix D. The flexibility of language means that parsing out the structured result to compare it to a reference (to calculate standard metrics like precision, recall, and F-1) is a non-trivial problem. This is in stark contrast to traditional approaches to tasks like NER and RE where models effectively classify input tokens instead of generating new ones from a vast vocabulary. Training models, either via traditional supervised learning or in-context few-shot learning, encourages models to comport with the structure of training instances. We therefore focus our analysis on such supervised settings in this work, starting with an evaluation of few-shot learning with GPT-3 for RE. Nonetheless, even when supervised, LLMs used for RE are prone to generating outputs which may be accurate but nonetheless differ from the target. To address this, we enlist human annotators to judge whether the model outputs convey the same information as the reference targets. ## 3 In-Context Few-Shot Learning With Gpt-3 For Re In this section we first describe our few-shot prompting strategy for GPT-3, and report the results realized by this approach across a set of RE corpora. We adopt forms of instructional in-context few-shot prompting to GPT-3.2 Motivated by the preceding discussion regarding evaluation challenges, we collect human annotations judging the model's generations against the gold references. Finally, using these annotations we report results achieved using GPT-3 with few-shot prompting for RE (Table 2). All references to GPT-3 in this work refer to the "text-davinci-002" variant. ## 3.1 Prompts We describe the prompts we use for each of the datasets considered in turn. ADE To construct prompts for ADE, we use the instructional prompt: *List all (drug: adverse effects) pairs in the following text*, followed by an input text. We then select 12 examples ("shots") at random from the training set, and for each we append the corresponding input followed by linearized target relations to the instructional prompt; this yields a prompt featuring 12 examples, comprising 755 tokens. To make a prediction for a new example we append one last *List all (drug: adverse effects) pairs in the following text* instruction followed by the corresponding text and then ask GPT-3 to generate text conditioned on this final prefix. Specifically, we perform this generation using default parameters save for sampling temperature, which we set to 0.5.3 We impose a maximum output length of 256 tokens. CoNLL As an instructional prefix for CoNLL, we use: List the entities of the types [LOCATION, ORGANIZATION, PERSON] and relations of types [Organization Based In, Work For, Located In, Live In, Kill] among the entities in the given text. Since 2We provide details on the costs incurred for each of these experiments in the Appendix B.1. 3In preliminary manual assessments, this seemed to yield qualitatively better outputs here than the default temperature. CoNLL is composed of four entity and five relation types, we constructed our prompt manually to contain at least one example of each entity and each relation type, for a total of 12 exemplars in the prompt. The total length of the CoNLL prompt was 960 tokens. To ensure fair comparison to prior work on generative RE over CoNLL, we use the same validation set as Eberts and Ulges (2019a). NYT The large number of relations (24 in total) in the NYT dataset precludes the possibility of providing detailed instructions enumerating all entity and relation types. We instead shorten the instructional prefix by removing specific relationtype descriptors and create a prompt with only 20 exemplars capturing all entity and relation types. The size of this prompt was 2095 tokens. We next aim to evaluate the performance of GPT3 for RE when provided the above prompts. But doing so requires addressing the challenges inherent to evaluating LLMs for RE outlined above (and in prior work; Taillé et al. 2020). ## 3.2 Manually Re-Evaluating "Errors" We quantify the errors in evaluation that occur when one uses "strict" measures of performance while using few-shot prompted LLMs for RE across each dataset. We do this by acquiring human annotations (collected via Mechanical Turk; details in Appendix D) on model outputs, with respect to reference labels provided in the accompanying datasets. In particular, we show annotators ostensible "false positive" and "false negative" outputs produced by GPT-3 for these corpora—as would be computed using exact matching against references—and ask them to judge whether these are accurately categorized. On ADE we find that 51.67% of "false positives"—a slight majority—are more accurately viewed as *true* positives, and 32.61% of "false negatives" are deemed as, in fact, true negatives. On **CoNLL** outputs, annotators marked 50.27% of "false positives" as valid, and 36.6% of "false negatives" as being accurate. As mentioned above, we were unable to design a prompt for NYT that yielded reasonable few-shot results with GPT-3. So we instead ask annotators to evaluate outputs from Flan-T5 fine-tuned on the NYT train set. In this case, they deemed 36.9% and 22.97% of "false positives" and "false negatives", respectively, to in fact be accurate. We present | Method | Params | CONLL | ADE | NYT | | |-------------------------------------------|--------------------------------------------|---------|-------|-------|-------| | a. SpERT* (Eberts and Ulges, 2019b) | 110M | 71.54 | 79.22 | - | | | b. TANL (Paolini et al., 2021) | 220M | 71.48 | 80.61 | 90.83 | | | c. TANL (MT) (Paolini et al., 2021) | 220M | 72.66 | 80.00 | 90.52 | | | 1. Fully supervised | d. REBEL (Huguet Cabot and Navigli, 2021) | 460M | 75.44 | 82.21 | 92.00 | | e. Flan T5 (Large) (Chung et al., 2022) | 760M | 75.28 | 83.15 | 91.03 | | | f. + GPT-3-generated CoT | 760M | 80.76 | 92.17 | 95.23 | | | a. In-Context GPT-3 (Brown et al., 2020a) | 175B | 76.53 | 82.66 | 61.79 | | | b. + CoT | 175B | 78.18 | - | - | | | 2. Few-shot | c. Flan T5 (Large) w/ CoT Explanations and | 760M | 76.13 | - | - | | reference labels generated from GPT-3 | | | | | | Table 2: Comparison of (micro-F1) performance with recent generative (except SpERT) approaches in RE. Relation triplets/pairs are considered correct only if both of the corresponding entity types are correctly generated. some illustrative cases in Figure 2 and additional examples in Appendix Tables 8 and 7. These findings imply that strict (exact-matching) evaluation against references for RE will be inaccurate (and pessimistic). In the results we later report for LLMs, we therefore take into account these manual assessments.4 ## 3.3 Results Using the above prompts and manual annotation process just described, we find that in most cases GPT-3 performs comparably to current *fully supervised* **SOTA RE models without fine-tuning** and given only 12-20 training examples. This can be seen in Table 2 (2.a). We also find a substantial number of instances where the model correctly identifies relation pairs, which in fact are incorrectly marked in the references (detailed below in Section D). We observe additional issues with the NYT and CoNLL datasets which we discuss below. CoNLL We find a number of relation triplets where the output does not conform to the set of valid relation types (∼% of relation triplets in the validation set). Examining these triplets, we often find the out-of-domain relation-types to be either closely related to a correct CoNLL relation-type (e.g., shoot−→*kill*) or otherwise correct even if not related to a CoNLL relation-type. There were a total of 18 input validation instances in which at least one of the generated relation triplet did not conform to a valid CoNLL relation; we provide a 4One could also *train* a model on manual assessments of "false positives" and "false negatives" to semi-automate this evaluation (avoiding the need to collect such judgments on entire testing sets); we provide results showing the feasibility of doing so in the Appendix D. full list of these instances and the generated relation triplets in the Appendix D.1. NYT We find the strategy of omitting the relation descriptions in the prompt to be detrimental to the model's performance. Contrary to our findings in ADE and CONLL, we observe a *sharp decline* in Micro-F1 scores in case of NYT (∼30 point reduction) as compared to the fully supervised SOTA. Further, we observe a non-trivial number of invalid or empty output instances (∼10.6% of all generated sequences). These results highlight a remaining limitation of in-context learning with large language models: for datasets with long texts or a large number of targets, it is not possible to fit detailed instructions in the prompt. In light of the issues we were unable to evaluate this approach on the DocRED dataset, which we leave for future work. In such cases, traditional fine-tuning is the practical option. Despite these limitations, the fact that GPT-3 is able to (marginally) outperform the current SOTA with in-context learning from tens of examples is encouraging. But GPT-3 is a massive opaque model available only via OpenAI's API (at cost). Further, fine-tuning GPT-3 would incur additional cost, and one would have access to the resultant model only via the OpenAI interface. For these reasons, smaller, open-source LLMs for RE would be preferable. Next we show that by enriching supervision with *Chain-of-Thought* (CoT) outputs elicited from GPT-3, we can achieve SOTA performance using Flan-T5 (Large). ![4_image_1.png](4_image_1.png) ![4_image_0.png](4_image_0.png) ## 4 Sota Re Performance With Flan-T5 We use Flan-T5 (Large), an LLM trained on a large number of tasks with instructional prompts. We first evaluate this in a few-shot setting (Section 4.1), shortening prompts in light of T5's smaller size, compared to GPT-3. We then consider fine-tuned variants, including a novel approach in which we train Flan-T5 using *chain-of-thought* (CoT) style explanations for RE elicited from GPT-3. The latter strategy yields SOTA results across all datasets considered. ## 4.1 Few-Shot Re With Flan-T5 For few-shot learning with Flan-T5, we use the same instructional prefixes (with examples) as we did for GPT-3 above, but we reduce the number of exemplars in the prompts to make them more concise. We summarize our findings from these experiments on ADE and CoNLL below, and provide a full set of results in Appendix B. ADE We include 7 (instead of the 12 used for GPT-3) randomly selected in-context examples for ADE. We observe a significant increase in nonconforming relation pairs in outputs (13.9% of generations). These often include outputs where the model generates the same token (or a set of tokens) repeatedly, or where relation tuples contain greater or fewer than 2 entities. Unsurprisingly given these qualitative impressions, the model fares poorly under strict evaluation on the validation set, resulting in a ∼ 20 drop in F1 score compared to GPT-3. CoNLL The prompt for CONLL consisted of 7 (in place of the 12 for GPT-3) exemplars inserted into the instructional prefix described above. Again we found that Flan-T5 generated many nonconforming outputs (12.5%). Additionally, we find that Flan-T5 generates a large number of out-ofdomain relations between entities (over 120 unique relations), most of which are unrelated to CoNLL, making it impossible to meaningfully evaluate outputs (details in Appendix D). NYT We exclude this dataset given the large set of relation and entity types, which—as discussed above—makes designing a prompt with sufficient instructions that also fits within the in-context window impossible. (We address this below via finetuning, which sidesteps the issue.) These results indicate that few-shot learning with Flan-T5 is not competitive with GPT-3, and so is not comparable to SOTA RE models. However, we next show that fine-tuning Flan-T5 can yield substantially better results, especially if one includes reasoning about RE in the supervision. ## 4.2 Fine-Tuning Flan-T5 For Re We first perform standard fine-tuning for Flan-T5 (Large) using available training datasets. We report results from the test set in Table 2 (1.e.). This yields performance equivalent to, but not better than, existing fully supervised models such as REBEL. As a potential mechanism to improve the performance of Flan-T5 for RE, we propose enriching the supervision used to fine-tune the model with chain- ![5_image_1.png](5_image_1.png) ![5_image_0.png](5_image_0.png) Figure 3: We propose fine-tuning Flan-T5 (large) for relation extraction (RE) using standard supervision and Chain-of-Thought (CoT) reasoning elicited from GPT-3 for RE. This yields SOTA performance across all datasets considered, often by substantial margin (∼5 points absolute gain in F1). of-thought (CoT; Wei et al. 2022b) explanations, which we elicit automatically from GPT-3 over the training instances. Specifically, we craft a handful of such reasoning chains describing how target relations can be derived from the input texts. We provide the following three illustrative examples below. Example Input (ADE) To describe a case of severe skin necrosis resulting from peripheral intravenous administration of low-dose vasopressin in a patient with catecholamine-resistant septic shock. Target [(vasopressin, skin necrosis)] Explanation A case of skin necrosis was described after administration of low-dose vasopressin. Example Input (CONLL) In Colorado , 13 inches of snow in Denver Wednesday prompted officials to close Interstate 270 temporarily. Target [(Denver, 'Located In', Colorado)] Explanation - Denver officials closed Interstate 270 in Colorado, consequently we can see that Denver is located in Colorado. Example Input (NYT) It will be the final movie credited to Debra Hill, a film producer and native of Haddonfield, who produced "Halloween" and was considered a pioneering woman in film. Target [[Debra Hill:Per, 'place-of-birth', Haddonfield:Loc]] Explanation - Debra Hill was a film producer born (native of) in Haddonfield. Next we evaluate the impact of CoT explanations in two settings: As additional context for prompting GPT-3, and then as additional supervision signal with which to train Flan-T5. ## 4.2.1 Eliciting Cot Reasoning For Re We use the same prompts from the few-shot experiments above but augment them with CoT-style explanations (one per shot) written by one of the authors. This yields moderate gains in the overall performance for GPT-3 (∼3 and ∼2.2 micro-F1 points for ADE and CONLL, respectively; Table 2 2.b), and also reduces the number of non-conforming relations generated (from 13.9% to 0.8% on ADE, and from 12.5% to 1.1% on CONLL). Further, using CoT results in only one instance of an out-ofdomain relation-type generated on CoNLL, compared to over 120 relations generated without CoT explanations. In sum: using CoT in few-shot learning for RE with GPT-3 yields more standardized outputs, but does not much improve performance. Next we propose to capitalize on CoTs automatically generated over training sets to enrich the supervision with which we train Flan-T5. ## 4.2.2 Fine-Tuning Flan-T5 With Cot Explanations We augment target relations used to train Flan-T5 with CoT strings automatically generated by GPT-3 over the training dataset. Specifically, we modify the prompt used in Section 3 to generate CoT-style explanations conditioned on the input and relation reference labels. The following is an example of the prompt we provide GPT-3 to elicit a *CoTexplanation*: Text: This April 14 is the 125th anniversary of the night when Lincoln, the 16th president, was assassinated by John Wilkes Booth in the presidential box at Ford's Theatre. Target [(John Wilkes Booth, 'Kill', Lincoln)] Explanation - John Wilkes Booth assassinated Lincoln at the ford theatre.<s> Text: Ray is being held in Tennessee 's Brushy Mountain State Prison on a 99-year sentence for the April 4, 1968, slaying of King. Target [[Ray, 'Kill', King]] Explanation - We then use these explanations along with reference relation labels as targets to fine-tune Flan-T5 (Large), as depicted in Figure 3. Overall, we found this strategy to be effective obtaining state-of-theart results across datasets, while being much faster to train compared with existing fully supervised models. We summarize our findings below, and report results in Table 1 (1.f.). ADE We obtain explanations for the entire training set and fine-tune Flan-T5 Large with an instructional prefix with a batch size of 8, learning rate 3e-5 for 6 epochs. The dataset defines 10 folds of train/test splits, and we evaluate using the best checkpoint for each fold in the dataset. Our model yields a 9.97 point gain in micro F-1 score (averaged over the folds) over the existing fully supervised generative SOTA (REBEL; Huguet Cabot and Navigli (2021)). CONLL For CONLL, we again obtain *CoT-style* explanations for the entire dataset via GPT-3. We then fine-tune with a batch size of 4 and learning rate 3e-5 for 10 epochs and evaluate using the best-performing checkpoint on the validation set. We see a 5.42 absolute point gain on the micro-F1 score over the existing fully-supervised generative SOTA. NYT comprises 56k training examples. In this case we generate CoT explanations via GPT-3 for only a subset of 25k examples (about half of the train set), due to its large size and the associated cost. We fine-tune the model with a batch size of 4, learning rate 2e-5 for 4 epochs and then evaluate using the best performing checkpoint on the validation set. We obtain a 3.37 point gain on the micro-F1 score over the existing fully-supervised SOTA. In sum, **fine-tuning Flan-T5 (large) with both** train labels and CoT explanations produced by GPT-3 yields SOTA performance across RE datasets by a considerable (5-10 points microF1) margin (Figure 1). ## 4.2.3 "Fully Supervising" Flan With Gpt-3 Above we showed that Flan-T5 (large) outperforms existing RE methods by substantial margins when trained using CoTs from GPT-3. Now we ask whether we can take this approach of distillation from GPT-3 even further by eliciting both labels and CoT explanations from GPT-3 in a few-shot setting, and then using these to train Flan-T5. That is, above we used the reference labels for training, whereas here we use "labels" produced by GPT-3 given just a handful (10s) of training instances as shots. We run this experiment only on CoNLL due to the cost of processing datasets in this way (which requires running few shot inference in GPT-3 over entire *training* sets). To generate the targets in this case, we start with an instructional prefix and 12 training instances from CoNLL and their corresponding humanwritten explanations; this is the same setup as the in-context GPT-3 model (Table 1 2.b.), though here we apply this to the training instances. We then prompt GPT-3 on all training instances except for the 12 shots to produce pseudo labels (relations) and associated CoT explanations. Using this new *GPT-generated training data*, we again fine-tune Flan-T5 (Large) as described above (Section 4.2.2), and evaluate it on the validation set. This approach marginally outperforms the existing fully-supervised SOTA (Huguet Cabot and Navigli, 2021), but underperforms fine-tuning Flan with references references and GPT-generated explanations (Table 2, 2.c.). ## 5 Related Work Standard NLP methods for identifying relations in free text have included Conditional Random Fields (Lafferty et al., 2001), structured SVMs (Tsochantaridis et al., 2004), and more recently, training large deep learning models with a joint objective (Eberts and Ulges, 2021, 2019a; Wang and Lu, 2020) to identify entities and relations simultaneously. More recently, the rise of massive language models (Radford and Narasimhan, 2018; Radford et al., 2019; Brown et al., 2020a) has also motivated research into prompt-based learning methods for structured prediction (Wang et al., 2022). ## 5.1 Relation Extraction With Pre-Trained Lms Several recently proposed RE approaches (which we have built upon here) have proposed addressing the task using conditional generative models to output string encodings—i.e., linearized forms—of target relations (Zeng et al., 2018, 2020; Nayak and Ng, 2020; Huguet Cabot and Navigli, 2021). Paolini et al. (2021) proposed a framework that formulated many structured prediction tasks, including relation extraction, as a seq2seq problem where they decode outputs into structured information. Huguet Cabot and Navigli (2021) extended this line of work by training a SOTA BART-style (Lewis et al., 2020) model specifically for relation extraction using a unique triplet linearization strategy. Beyond these task-specific models, Wang et al. (2022) proposed a task-agnostic structured pre-training scheme which enables zero-shot transfer to several structured prediction tasks. These past efforts focussed on *solely* fine-tuning seq2seq models, adopting standard supervised approaches to learning to generate the relations expressed in a given input. (REBEL incorporated a pretraining scheme designed for RE (Huguet Cabot and Navigli, 2021), but this was in addition to a fine-tuning step.) In this work we also evaluate the ability of large language models to perform fewshot relation extraction via in-context learning; to our knowledge this is the first such evaluation for RE specifically, although few-shot learning more generally is an active sub-area of research. ## 5.2 Few Shot In-Context Learning Few shot in-context learning entails incorporating a few training examples into model prompts, effectively "learning" via the activations induced by passing these examples through the network at inference time. This has the advantage of completely forgoing model weight updates, which can be costly for LLMs (Wang et al., 2021). An active area of research concerns such cross-task generalization capabilities (Ye et al., 2021; Wei et al., 2022a; Min et al., 2022; Xu et al., 2022) of LLMs where a model learns a new, previously-unseen task efficiently with just a few examples. Chen et al. (2022) also proposed a self-supervised objective as an intermediate stage between pre-training and downstream few-shot learning. Recent work on few shot in-context learning has largely focused on the selection (Liu et al., 2022) and ordering (Lu et al., 2022a) of exemplars included in the prompt provided to the model. ## 6 Conclusions And Future Directions We have evaluated the capabilities of modern large language models (LLMs)—specifically GPT-3 and Flan T5 (Large)—on the task of Relation Extraction (RE). We found that, when evaluated carefully, GPT-3 performs comparably to fully supervised state-of-the-art (SOTA) models, given only 10s of examples. We then proposed a distillation technique in which we augmented target RE labels with Chain of Thought (CoT) style explanations elicited from GPT-3 and used this to fine-tune Flan-T5; this yielded SOTA performance across all datasets considered, often by wide margins (5-10 points in F1). Our results suggest that where feasible, LLMs should be a standard baseline for RE. Future directions We have left several avenues open for further exploration. For example, evaluating LLMs like GPT-3 for RE required collecting manual annotations to identify ostensible "false positive" and "false negative" model outputs which were in fact accurate. Designing models to automate this evaluation might provide similar reliability without the accompanying costs; we provide preliminary work in this direction through the use of simple BERT-style classifiers in Appendix D. ## Limitations We have demonstrated that across three standard RE datasets, LLMs achieve SOTA results. In particular, GPT-3 yields such performance even given only 10s of training sample for in-context learning. We then showed that we can similarly achieve SOTA performance with the much smaller (and open-source) Flan T5 (Large) model, when trained using CoT generations produced by GPT-3. We also highlighted key challenges for evaluation in this setting. But there are important limitations to these contributions. First, here we considered three standard RE datasets with binary relations butas we discussed—we excluded more complex RE datasets. For example, we did not consider corpora containing n-ary relations between entities (Taboureau et al., 2010). We were also unable to run experiments on datasets with lengthy texts and a large number of relations, such as DocRED (Yao et al., 2021), due to the necessary prompt lengths for such inputs. Second, while we found that CoT-style explanations generated by GPT-3 can be fruitfully used as additional supervision to fine-tune smaller language models, we made no attempt to evaluate the quality of these generated explanations which may have an impact on the model performance. Third, we did not fine-tune GPT-3 on the RE datasets, mainly due to the cost of doing so. It is likely that a fine-tuned GPT-3 would yield performance superior to the results we achieved with Flan T5 (which constitute current SOTA). But, in addition to the costs necessary for fine-tuning this model, the resultant weights would not be accessible to run locally in any case; one would have access to it only via the OpenAI interface, which motivated our decision to fine-tune the smaller and open-source Flan T5 instead. Finally, we *only* experiment with datasets curated in the English language and therefore, we do not know that the issues we have highlighted could replicate in the same way in other languages. ## Ethics Statement Our work required an extensive manual annotation and evaluation process which involved using Amazon Mechanical Turk. Turk requires we pay workers *per annotation*, so we have to estimate the time required for each task. To do so, we (the authors) carried out a small number of these annotations ourselves to determine fair approximate hourly compensation. We then set the price per annotation such that it averages out to $15/hour (we pay this rate irrespective of geographic location of the workers). We also provided our recruited AMT workers 20% additional time per annotation. ## Acknowledgements This work was supported in part by the National Institutes of Health (NIH) under the National Library of Medicine (NLM) grant R01LM012086 and by the National Science Foundation (NSF) grant III1750978. ## References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020a. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020b. Language models are few-shot learners. *ArXiv*, abs/2005.14165. Mingda Chen, Jingfei Du, Ramakanth Pasunuru, Todor Mihaylov, Srini Iyer, Veselin Stoyanov, and Zornitsa Kozareva. 2022. Improving in-context few-shot learning via self-supervised training. In *Proceedings* of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3558–3573, Seattle, United States. Association for Computational Linguistics. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Markus Eberts and Adrian Ulges. 2019a. Span-based joint entity and relation extraction with transformer pre-training. *ArXiv*, abs/1909.07755. Markus Eberts and Adrian Ulges. 2019b. Span-based joint entity and relation extraction with transformer pre-training. *CoRR*, abs/1909.07755. Markus Eberts and Adrian Ulges. 2021. An end-to-end model for entity-level relation extraction using multiinstance learning. In *Proceedings of the 16th Conference of the European Chapter of the Association* for Computational Linguistics: Main Volume, pages 3650–3660, Online. Association for Computational Linguistics. Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012. Development of a benchmark corpus to support the automatic extraction of drugrelated adverse effects from medical case reports. Journal of biomedical informatics, 45 5:885–92. Pere-Lluís Huguet Cabot and Roberto Navigli. 2021. REBEL: Relation extraction by end-to-end language generation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 2370– 2381, Punta Cana, Dominican Republic. Association for Computational Linguistics. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In *Proceedings of the Eighteenth International Conference on Machine Learning*, ICML '01, page 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022a. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland. Association for Computational Linguistics. Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022b. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5755–5772, Dublin, Ireland. Association for Computational Linguistics. Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2022. MetaICL: Learning to learn in context. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2791–2809, Seattle, United States. Association for Computational Linguistics. Tapas Nayak and Hwee Tou Ng. 2020. Effective modeling of encoder-decoder architecture for joint entity and relation extraction. In *AAAI Conference on Artificial Intelligence*. Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, RISHITA ANUBHAI, Cicero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation between augmented natural languages. In International Conference on Learning Representations. Alec Radford and Karthik Narasimhan. 2018. Improving language understanding by generative pretraining. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In *ECML/PKDD*. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In *Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004*, pages 1–8, Boston, Massachusetts, USA. Association for Computational Linguistics. Olivier Taboureau, Sonny Kim Nielsen, Karine Audouze, Nils Weinhold, Daniel Edsgärd, Francisco S. Roque, Irene Kouskoumvekaki, Alina Bora, Ramona Curpan, Thomas Skøt Jensen, Søren Brunak, and Tudor I. Oprea. 2010. ChemProt: a disease chemical biology database. *Nucleic Acids Research*, 39:D367– D372. Bruno Taillé, Vincent Guigue, Geoffrey Scoutheeten, and Patrick Gallinari. 2020. Let's Stop Incorrect Comparisons in End-to-end Relation Extraction! In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3689–3701, Online. Association for Computational Linguistics. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In *Proceedings of the Twenty-First International Conference on Machine Learning*, ICML '04, page 104, New York, NY, USA. Association for Computing Machinery. Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, and Dawn Song. 2022. DeepStruct: Pretraining of language models for structure prediction. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 803–823, Dublin, Ireland. Association for Computational Linguistics. Jue Wang and Wei Lu. 2020. Two are better than one: Joint entity and relation extraction with tablesequence encoders. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1706–1721, Online. Association for Computational Linguistics. Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021. Want to reduce labeling cost? GPT-3 can help. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4195–4205, Punta Cana, Dominican Republic. Association for Computational Linguistics. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022a. Finetuned language models are zero-shot learners. *ArXiv*, abs/2109.01652. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, and Zhilin Yang. 2022. Zeroprompt: Scaling prompt-based pretraining to 1, 000 tasks improves zero-shot generalization. *CoRR*, abs/2201.06910. Yuan Yao, Jiaju Du, Yankai Lin, Peng Li, Zhiyuan Liu, Jie Zhou, and Maosong Sun. 2021. CodRED: A cross-document relation extraction dataset for acquiring knowledge in the wild. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4452–4472, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 764–777, Florence, Italy. Association for Computational Linguistics. Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. 2021. CrossFit: A few-shot learning challenge for crosstask generalization in NLP. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7163–7189, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Daojian Zeng, Haoran Zhang, and Qianying Liu. 2020. Copymtl: Copy mechanism for joint extraction of entities and relations with multi-task learning. *ArXiv*, abs/1911.10438. Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism. In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 506–514, Melbourne, Australia. Association for Computational Linguistics. Model Data P R F-1 Few-Shot In-Context Prompting GPT-3 ADE 80.85 84.54 82.66 CoNLL 78.31 74.82 76.53 NYT 66.63 70.58 68.55 Vanilla Fine-Tune Flan-T5-Large ADE 89.11 77.93 83.15 CoNLL 78.81 72.05 75.28 NYT 91.82 90.25 91.03 Fine-Tune Flan on GPT-3-generated CoT ADE 91.74 92.60 92.17 CoNLL 81.22 80.31 80.76 NYT 95.49 94.97 95.23 Fine-Tune Flan w/ CoT Explanations and Reference labels generated from GPTCoNLL 76.41 75.85 76.13 ## A Datasets We considered and conducted the evaluation of our methods on the following datasets. Basic data statistics are also reported in Table 1. ADE Adverse Drug Events (Gurulingappa et al., 2012) contains binary relations of (drug, adverse event) pairs. Drugs and adverse events are the only two entity types. This dataset provides a 10-fold split. CONLL04 The CoNLL04 consists of sentences from news articles that were annotated for the mentioned entities and relations between entities (Roth and Yih, 2004). It includes four entity types (*PER, ORG, LOC, OTH*) and five possible relations (*KILL, WORK_FOR, LIVE_IN, LOCATED_IN,* ORG_BASED_IN). NYT The NYT comprises sentences sampled from New York Times news articles published between 1987 and 2007 (Riedel et al., 2010). The data was distantly annotated with relations triplets from FreeBase. We use a processed version of NYT (Zeng et al., 2018) containing three overlapping entity types (*LOC, PER, ORG*) and 24 relation types. DocRED Originally designed as a relation classification task, DocRED (Yao et al., 2019) differs considerably from the other datasets considered in this work in two important ways: (1) It comprises long texts which feature relations between entities at a *document-level*; (2) It contains annotations for 6 entity types and 96 relation types, with an average of 19.9 entities and 19.5 relation instances per document. ## B Models And Reproducibility We provide average micro metrics over 5 seeds across each dataset in Table 3. On Flan-T5-Large, where we do fine-tuning, some hyperparameters were manually tuned but most left at their default values. The final values for the ones that were manually tuned are provided in Table 4. We perform all experiments with a single NVIDIA Quadro RTX 8000 with 64GB of RAM on an Intel Xeon E502680v4 (2.4GHz). ## B.1 Costs ($$$) We provide details on the costs we incurred while running experiments on GPT-3 in Table 5. ## C Prompts We use the following prompt elements as few-shot exemplars corresponding to each dataset in our evaluation. Inputs and target references are directly extracted from the original training sets while the explanations are human-written and were added when necessary for the experiments described in section 3 and 4. ## Ade Example Instructional Prefix: List all [drug, adverse effects] pairs in the TEXT provided below. TEXT: We report on three observations of parkinsonian patients with levo-dopa-induced diphasic dyskinesias, who received subcutaneous apomorphine to reduce the duration of abnormal movements. Relations: [['levo-dopa', 'diphasic dyskinesias']] Explanation: levo-dopa induced diphasic dyskinesias in parkinsonian patients.<s> TEXT: A girl with cystic fibrosis and cyclic neutropenia developed an erythematous papular eruption without fever or neutrophilia 7 months after commencing therapy with G-CSF. Relations: [['G-CSF', 'erythematous papular eruption']] Explanation: G-CSF therapy caused erythematous papular eruption in a girl with cystic fibrosis.<s> TEXT: Hypersensitivity to carboplatin is a rare but real complication of therapy and should be considered in patients presenting with hyperacute changes on ECG whilst receiving carboplatin therapy. | Model | Data | Batch Size | Warm-up | Learning Rate | Time/Epoch (minutes) | Max Epochs | |----------------------------------------------------------------------------|--------|--------------|-----------|-----------------|------------------------|--------------| | ADE | 8 | 10% | 3e-5 | 36 | 6 | | | Vanilla Fine-Tune | CoNLL | 4 | 12% | 3e-5 | 22 | 10 | | Flan-T5-Large | NYT | 4 | 12% | 2e-5 | 99 | 4 | | ADE | 8 | 10% | 3e-5 | 38 | 6 | | | Fine-Tune Flan on | CoNLL | 4 | 12% | 3e-5 | 28 | 10 | | GPT-3-generated CoT | NYT | 4 | 12% | 2e-5 | 107 | 4 | | Fine-Tune Flan w/ CoT Explanations and Reference labels generated from GPT | ADE | 8 | 10% | 3e-5 | 37 | 6 | | CoNLL | 4 | 12% | 3e-5 | 28 | 10 | | | NYT | 4 | 12% | 2e-5 | 109 | 4 | | Table 4: Hyperparameters and compute time for the fully fine-tuned Flan models (corresponding to main results table 2). | Experiment | Data | Cost (US$) | |------------------------------------------------|--------|--------------| | Evaluation of Few-Shot In-Context Prompting | ADE | 64.91 | | CoNLL | 19.24 | | | NYT | 238.70 | | | ADE | 93.96 | | | Generation of CoT | CoNLL | 44.20 | | Explanations (Training Set) | NYT | 983.86 | | Generation of Target Labels + CoT Explanations | CoNLL | 86.41 | Table 5: Summary of costs incurred by prompting and using GPT-3 as a labeler for RE. Relations: [['carboplatin', 'hyperacute changes on ECG'], ['carboplatin', 'Hypersensitivity']] Explanation: Patients who undergo carboplatin therapy are prone to hypersensitivity and hyperacute changes on their ECG.<s> TEXT: The diagnosis of hypothermia was delayed until it was apparent for several days but resolved with the discontinuation of risperidone and continuation of clozapine. Relations: [['risperidone', 'hypothermia']] Explanation: risperidone caused hypothermia since it was resolved with its discontinuation.<s> TEXT: Eighty-two patients with various malignancies who received imipenem/cilastatin 143 times for neutropenic fever between March 1994 and October 1999 in Department of Pediatric Oncology, Gazi University, were identified. Relations: [['cilastatin', 'neutropenic fever'], ['imipenem', 'neutropenic fever']] Explanation: Patients who received either cilastatin or imipenem were identified with neutropenic fever.<s> TEXT: This increase when clozapine was switched to risperidone and vice versa is consistent with our previous report of elevated serum triglyceride levels in clozapine-treated patients. Relations: [['clozapine', 'elevated serum triglyceride levels']] Explanation: There was a report of elevated serum triglyceride levels in clozapine-treated patients.<s> TEXT: Autopsy findings were consistent with bleomycin and oxygen-induced pulmonary damage. Relations: [['bleomycin', 'pulmonary damage'], ['oxygen', 'pulmonary damage']] Explanation: Both bleomycin and oxygen caused pulmonary damage in the autopsy findings.<s> TEXT: CD4 T-lymphocyte depletion, myelosuppression, and subsequent severe infections are the major side effects of fludarabine phosphate therapy. Relations: [['fludarabine phosphate', 'CD4 T-lymphocyte depletion'], ['fludarabine phosphate', 'myelosuppression'], ['fludarabine phosphate', 'severe infections']] Explanation: Following major side-effects are known of fludarabine phosphate therapy, CD4 T-lymphocyte depletion, myelosuppression, and severe infections.<s> TEXT: OBJECTIVE: To describe a case of severe skin necrosis resulting from peripheral intravenous administration of low-dose vasopressin in a patient with catecholamine-resistant septic shock. Relations: [['vasopressin', 'skin necrosis']] Explanation: A case of skin necrosis was described after administration of low-dose vasopressin.<s> TEXT: In vitro inhibition of hematopoiesis in a patient with systemic sclerosis treated with D-penicillamine. Relations: [['D-penicillamine', 'inhibition of hematopoiesis']] Explanation: Patient treated with D-penicillamine had in vitro inhibition of hematopoiesis.<s> TEXT: PURPOSE: We report an unusual paradoxical effect of brimonidine. Relations: [['brimonidine', 'paradoxical effect']] Explanation: paradoxical effect of brimonidine was reported.<s> TEXT: Hepatocellular damage following therapeutic intravenous iron sucrose infusion in a child. Relations: [['iron sucrose', 'Hepatocellular damage']] Explanation: Hepatocellular damage occurred in a child after infusion of iron sucrose.<s> ## Conll Examplee Instructional Prefix: List the relations of the types [OrgBased In, Work For, Located In, Live In, Kill] among the entities [PERSON, LOCATION, ORGANIZATION, OTHER] in the given text and provide a reasonable explanation. TEXT: "If it does not snow, and a lot, within this month we will have no water to submerge 150,000 hectares (370,500 acres) of rice", said Bruno Pusterla, a top official of the Italian Agricultural Confederation. Relations: [['Bruno Pusterla:Per', 'Work For', 'Italian Agricultural Confederation:Org']] Explanation: Bruno Pusterla is a top official of the Italian Agricultral Confederation.<s> TEXT: Meanwhile, Shi Liming at the Institute of Zoology of Kunming found that pandas lack variety in their protein heredity, which may serve as one of the major reasons for pandas' near extinction. Relations: [['Shi Liming:Per', 'Work For', 'Institute of Zoology:Org'], ['Institute of Zoology:Org', 'OrgBased In', 'Kunming:Loc']] Explanation: Shi Liming works for the Institute of Zoology, which is an organization based in Kunming.<s> TEXT: The viewers of "JFK" and "The Men Who Killed Kennedy" never learn about these facts, nor do they ever learn about all of the other massive body of evidence that conclusively proves beyond a reasonable doubt that Oswald was the lone gunman who killed President Kennedy and Officer Tippit and that there was no coverup by Earl Warren or by the Warren Commission. Relations: [['Oswald:Per', 'Kill', 'President Kennedy:Per'], ['Oswald:Per', 'Kill', 'Officer Tippit:Per']] Explanation: Oswald was the lone gunman who killed President Kennedy and Officer Tippit.<s> TEXT: PURCHASE, N.Y . Relations: [['PURCHASE:Loc', 'Located In', 'N.Y.:Loc']] Explanation: PURCHASE is a place located in N.Y..<s> TEXT: BELGRADE, Yugoslavia (AP) Relations: [['BELGRADE:Loc', 'Located In', 'Yugoslavia:Loc'], ['AP:Org', 'OrgBased In', 'BELGRADE:Loc'], ['AP:Org', 'OrgBased In', 'Yugoslavia:Loc']] Explanation: City of BELGRADE is located in Yugoslavia and AP is an organization based in BELGRADE, Yugoslavia.<s> TEXT: Rome is in Lazio province and Naples in Campania. Relations: [['Rome:Loc', 'Located In', 'Lazio:Loc'], ['Naples:Loc', 'Located 15579 In', 'Campania:Loc']] Explanation: Rome is a place located in Lazio and Naples is a place located in Campania.<s> TEXT: (By ITAR-TASS correspondent Mikhail Shevtsov) Relations: [['Mikhail Shevtsov:Per', 'Work For', 'ITAR-TASS:Org']] Explanation: Mikhail Shevtsov is a correspondent for the ITAR-TASS.<s> TEXT: In the communique, the Group of Rio states that the Haitian crisis can be resolved only if unrestricted respect is shown for the Governor's Island Agreement which calls for the prompt return of Haitian President Jean Bertrand Aristide to the exercise of his constitutional powers in Haiti. Relations: [['Jean Bertrand Aristide:Per', 'Live In', 'Haiti:Loc']] Explanation: Jean Bertrand Aristide was the president of Haiti and therefore lived in Haiti.<s> TEXT: Moscow ITAR-TASS Relations: [['ITAR-TASS:Org', 'OrgBased In', 'Moscow:Loc']] Explanation: ITAR-TASS is an organization based in Moscow.<s> TEXT: King rose to prominence after Mrs. Parks ' action in December 1955 in Montgomery , Ala. , set the stage for a boycott and subsequent demonstrations that caught the nation by surprise. Relations: [['Mrs. Parks:Per', 'Live In', 'Montgomery:Loc'], ['Mrs. Parks:Per', 'Live In', 'Ala.:Loc'], ['Montgomery:Loc', 'Located In', 'Ala.:Loc']] Explanation: Mrs. Parks actions were in Montgomery, Ala., where she lived. It can be derived that Montgomery is located in Ala..<s> TEXT: Sirhan says he was the lone assassin but can't remember shooting Kennedy. Relations: [['Sirhan:Per', 'Kill', 'Kennedy:Per']] Explanation: Sirhan was the lone assassin in the Kennedy assassination.<s> TEXT: In Colorado, 13 inches of snow in Denver Wednesday prompted officials to close Interstate 270 temporarily. Relations: [['Denver:Loc', 'Located In', 'Colorado:Loc']] Explanation: Denver officials closed Interstate 270 in Colorado, consequently we can see that Denver is located in Colorado.<s> TEXT: Edward Marks, an official with the Montgomery County Democratic Party, argued that if Ms. Toth is not interested in the job, "she should get out". Relations: [['Edward Marks:Per', 'Work For', 'Montgomery County Democratic Party:Org']] Explanation: Edward Marks is an official that works for the Montgomery County Democratic Party.<s> ## Nyt TEXT: Massachusetts ASTON MAGNA Great Barrington; also at Bard College, Annandale-on-Hudson, N.Y., July 1-Aug. Relations: [['Annandale-on-Hudson', '/location/location/contains', 'Bard College']] Explanation: Annandale-on-Hudson is a location in N.Y. that contains Bard College.<s> TEXT: It will be the final movie credited to Debra Hill, a film producer and native of Haddonfield, who produced "Halloween" and was considered a pioneering woman in film. Relations: [['Debra Hill:Per', '/people/person/place-of-birth', 'Haddonfield:Loc']] Explanation: Debra Hill was a film producer born (native of) in Haddonfield.<s> TEXT: Under pressure from Mr. Kerkorian and other disgruntled shareholders, Mr. Wagoner started talks on Friday in Detroit with Carlos Ghosn, the chief executive of Renault and Nissan. Relations: [['Carlos Ghosn:Per', '/business/person/company', 'Renault:Org']] Explanation: Carlos Ghosn is a business person (chief executive) associated with Renault and Nissan.<s> TEXT: Mr. Ferrer still holds commanding leads over the other two Democrats in the race - United States Representative Anthony D. Weiner of Brooklyn and Queens, and City Council Speaker Gifford Miller - and is also ahead of Mayor Michael R. Bloomberg in most polls. Relations: [['Anthony D. Weiner:Per', '/people/person/place-lived', 'Brooklyn:Loc'], ['Anthony D. Weiner:Per', '/people/person/place-lived', 'Queens:Loc']] Explanation: Anthony D. Weiner is a person representing Brooklyn and Queens, therefore we can infer he lives in those places.<s> TEXT: Quebec, Canada's second most populous province, after Ontario, has not decided to go that far. Relations: [['Ontario:Loc', '/location/administrative-division/country', 'Canada:Loc'], ['Canada:Loc', '/location/location/contains', 'Ontario:Loc'], ['Canada:Loc', '/location/country/administrative-divisions', 'Ontario:Loc']] Explanation: Ontario is a place located in the administrative divisions of the country Canada. Quebec is Canada's second most populous province and hence, Canada is a place that contains Quebec.<s> TEXT: And Abu Izzadeen , who converted to Islam at 17 and heads another successor group to Al Muhajiroun, called Al Ghurabaa, called suicide bombing "martyrdom operations". Relations: [['Abu Izzadeen:Per', '/people/person/religion', 'Islam:Org']] Explanation: Since Abu Izzadeen converted to Islam at the age of 17, we can infer that this is a person who belongs to the religion of Islam.<s> TEXT: And yet, despite the success of its exhibitions, the institute remains something of a strange hybrid: located southeast of Notre-Dame, in a striking building designed by Jean Nouvel, it has operated since 1987 as a partnership between France and 22 Arab countries. Relations: [['Jean Nouvel:Per', '/people/person/nationality', 'France:Loc']] Explanation: Jean Nouvel was a french designer and we can derive his nationality/citizenship as French or France.<s> TEXT: They could have done it Sunday, when we were closed," said Joseph Bastianich, who owns Del Posto with his mother, Lidia Bastianich, and the chef, Mario Batali. Relations: [['Lidia Bastianich:Per', '/people/person/children', 'Joseph Bastianich:Per']] Explanation: Joseph Bastianich owns Del Posto with his mother Lidia Bastianich.<s> TEXT: A French court sentenced six Algerian-French men to prison terms of up to 10 years on Tuesday for their role in a 2001 plot to attack the United States Embassy in Paris , closing the books on one of France 's most serious terrorist cases. Relations: [['Paris:Loc', '/location/ administrative-division/country', 'France:Loc'], ['France:Loc', '/location/location/contains', 'Paris:Loc'], ['France:Loc', '/location/country/ administrative-divisions', 'Paris:Loc'], ['France:Loc', '/location/country/capital', 'Paris:Loc']] Explanation: Paris is located in the administrative divisons of the country France. Consequently, France is a place that contains Paris. US embassies are located in the capital of countries, therefore it can be inferred that Paris is the capital of France.<s> TEXT: Anheuser-Busch, which has been the exclusive beer sponsor for the Super Bowl since 1989, will do so again for the Super Bowls in 2007 and 2010 on CBS and in 2008 and 2011 on Fox Broadcasting , said Anthony T. Ponturo, vice president for global media and sports marketing at Anheuser-Busch in St. Louis. Relations: [['Anheuser-Busch:Org', '/business/company/place-founded', 'St. Louis:Loc'], ['St. Louis:Loc', '/location/location/contains', 'Anheuser-Busch:Org']] Explanation: Anheuser-Busch is a business that was founded in St. Louis. Consequently, St. Louis is a place that contains Anheuser-Busch.<s> TEXT: Somewhat chastened by his retreat in the polls, Mr. Blair acknowledged that Britons had turned against him in part over accusations that he led them into a war in Iraq on dubious legal grounds and on the false premise that Saddam Hussein presented a direct threat because of a supposed arsenal of unconventional weapons that was never found." Relations: [['Saddam Hussein:Per', '/people/deceased-person/place-of-death', 'Iraq:Loc'], ['Saddam Hussein:Per', '/people/person/place-of-birth', 'Iraq:Loc'], ['Saddam Hussein:Per', '/people/person/nationality', 'Iraq:Loc']] Explanation: Saddam Hussein was killed in Iraq. His place of birth was also Iraq. We can infer that his nationality was Iraq.<s> TEXT: Rupert Murdoch and John C. Malone , who have wrangled for two years over Mr. Malone 's challenge to Mr. Murdoch 's control of the News Corporation , have made peace . Relations: [['Rupert Murdoch', '/business/person/company', 'News Corporation'], ['News Corporation', '/business/company/founders', 'Rupert Murdoch']] Explanation: Rupert Murdoch is a business person associated with News Corporation, which was a company founded by Rupert Murdoch.<s> TEXT: Manhattan, especially the East Village , has long been well stocked with cheap and raucous yakitori places that specialize in skewers and beer. Relations: [['Manhattan:Loc', '/location/location/contains', 'East Village:Loc'], ['East Village:Loc', '/location/neighborhood/neighborhood-of', 'Manhattan:Loc']] Explanation: East Village is a neighborhood in Manhattan.<s> TEXT: HEADING OUT - Sanford I. Weill stepped down as chairman of Citigroup , the worldwide financial supermarket he had meticulously and single-mindedly stitched together through dozens of mergers and acquisitions. Relations: [['Citigroup:Org', '/business/company/advisors', 'Sanford I. Weill:Per']] Explanation: Citigroup is a business company who was associated with (advised by) Sanford I. Weill.<s> TEXT: He had decided to use the premiere to publicize the issue; his plan was to invite the neighborhood's Russian speakers to sign a petition against piracy, a common practice at the area's Russian-language video outlets, which sell films and music from Russia and by Russian immigrants in the United States. Relations: [['Russian:Per', '/people/ethnicity/ geographic-distribution', 'Russia:Loc']] Explanation: Russian is an ethnicity in United States associated with immigrants who came from the geographic distribution of Russia.<s> TEXT: In 1995, Cleveland successfully lobbied to have the name Cleveland Browns stay in that city after that venerable franchise's owner, Art Modell, opted to move it to Baltimore. Relations: [['Cleveland:Loc', '/sports/sports-team-location/teams', 'Cleveland Browns:Org'], ['Cleveland Browns:Org', '/sports/sports-team/location', 'Cleveland:Loc']] Explanation: Cleveland Browns is the sports franchise located in Cleveland, consequently Cleveland's sports team is Cleveland Browns.<s> TEXT: Mr. Fields, speaking from vacation in France, added, "That a mogul like Sumner Redstone could make a statement so vicious, so pompous, so petulant as that he didn't want to make a deal with Tom Cruise because of his personal conduct - it tells you more about Sumner Redstone and Viacom, than about Tom Cruise". Relations: [['Sumner Redstone:Per', '/business/ company-shareholder/ major-shareholder-of', 'Viacom:Org']] Explanation: Sumner Redstone is a major shareholder of the company Viacom.<s> TEXT: It is a room of paintings by Leonard Peltier , a citizen of the Anishinabe and Dakota and Lakota nations who is serving two consecutive life terms in Pennsylvania for the murder of two F.B.I. agents on the Pine Ridge Reservation in South Dakota. Relations: [['Leonard Peltier:Per', '/people/person/ethnicity', 'Lakota:Per'], ['Lakota:Per', '/people/ethnicity/people', 'Leonard Peltier:Per']] Explanation: Leonard Peltier is a member of the Lakota native-american tribe and consequently belongs to that ethnic group.<s> TEXT: INSIDE THE N.B.A. Correction : February 9 , 2006 , Thursday A sports article on the Spotlight page on Sunday about Dick Bavetta , a longtime referee in the National Basketball Association, misstated the number he was approaching to set the record for regular-season games worked. Relations: [['Dick Bavetta:Per', '/people/person/profession', 'National Basketball Association:Org']] Explanation: Dick Bavetta is a person who's profession is that of a referee in National Basketball Association.<s> TEXT: Now the United States Postal Service may be displaying a similar rebellious streak : tomorrow at the huge Sturgis motorcycle rally in the Black Hills of South Dakota, the Postal Service will issue a set of four stamps that depict classic American bikes. Relations: [['United States Postal Service:Org', | Data | Inaccure FPs | Inaccurate FNs | |-------------|----------------|------------------| | / Total FPs | / Total FNs | | | ADE | 108 / 209 | 136 / 417 | | CoNLL | 92 / 183 | 56 / 152 | '/business/company/industry', 'Postal ![17_image_0.png](17_image_0.png) Service:Org']] Explanation: United States Postal Service is a business company in the industry of providing postal services.<s> ## D Learning To Identify False **False** Positives And Negatives As discussed in the main paper, one common problem across datasets in generative RE is evaluation, given that LMs are flexible in how they might express entities and relations. Prior work in RE has tended rely on standard metrics to quantify performance (precision, recall, micro-F1). These rely on matching *classified* (or in our case, *generated*) labels to reference labels to calculate the number of true positives (TPs), false positives (FPs), true negatives (TNs), and false negatives (FNs). Prior to the introduction of LLMs for generative RE, Taillé et al. (2020) attempted to unify evaluation and provide useful guidelines around issues associated with prior methods and how different evaluation strategies rendered an accurate comparison infeasible. They broadly recommended the use of a *strict* evaluation scheme where for a relation triplet to be considered correct, the head and tail entity surface forms must be an exact match, as well as their corresponding types (when available). While this provides a standardized framework for traditional models where entities and and relations are hard *classification* labels, in a generative setting we often find that LLMs, under varying levels of supervision, produce relation triplets (or pairs) that do not correspond exactly to their reference counterparts, but are nonetheless correct upon manual review. Consider the following example from CoNLL in Figure 2 Text: On Friday, U.S. Ambassador Vernon A. Walters... fuselage. Gold Reference: [(Vernon A. Walters, 'Live In', U.S.)] Generated Relations: [[Vernon A. Walters, 'Works For', U.S.]] In this example, one can reasonably infer that Vernon A. Walter is a U.S. Ambassador. Therefore, by definition a U.S. diplomat to another country cannot live inside the U.S., but such a person must work for the U.S. (commonsense dictates that a diplomat would work for a specific country). To achieve a more accurate characterization of how LLMs perform on generative RE tasks, we hired human annotators on Amazon Mechanical Turk5to manually re-assess all ostensible FPs and FNs from each of our datasets. To control for quality and recruit annotators we ran pilot experiments on 50 instances of pre-annotated data.6 We required AMT workers to have an overall approval rating of > 95% irrespective of geographic region. Based on these initial set of results we hired a total of 9 workers who reliably followed our instructions. Recruited workers were paid periodic bonuses (equivalent to one hour of pay) based on 5We set the payrate to average at $15/hour using time estimates informed by pilot experiments. 6These instances used in pilot experiments were annotated by a graduate student familiar with this research. the quality of their annotations. To identify potentially faulty "false positives", we provided annotators with the input text along with the relation identified as a FP, and ask the following question: "Can the given given relation be reasonably derived from the text?". Similarly, to identify erroneous "false negatives", we provide annotators with the input text, the full set of generated labels, the *ostensible* FN from the reference set, and ask: "Can the reference relation triplet (or pair) be inferred from the generated set of relations?". Each instance was annotated by three different AMT workers, and we considered a potential FP/FN to be inaccurate only when all annotators agree on a label.7 We provide specific examples of FPs and FNs in Tables 8 and 7. We summarize the dataset-specific findings in Table 6. In light of these findings, we make a first effort in using simple, learned models to classify falsepositives/negatives in generative RE. We experiment with fine-tuned BERT (Devlin et al., 2019) classifier to classify "false positives" and "false negatives" as being accurate designations (or not). For FPs, we concatenate the input with a generated relation pair/triplet (*potential* FP) and classify using the [CLS] token - [CLS] Input Text [SEP] Potential FP Similarly, for FNs we concatenate the input text with a *potential FN* and the full set of generated labels, and classify using the [CLS] token - [CLS] Input Text [SEP] Potential FN ## [Sep] Generated Labels $=\;\frac{1}{2}$ . $\mathbf{a}=\\ \mathbf{c}\in\mathbb{R}^d$ We analyze the effectiveness of this approach in Figure 4 using the AUC-ROC. We find that this approach is most effectiveness in identifying potential potential false positives for CoNLL (AUC 0.88), while being least effective at identifying false negatives for CoNLL (AUC 0.73). This suggests that learning to identify erroneous "false positives" and "false negatives" may be a promising avenue to facilitate accurate automated evaluation of generative LLMs for RE. ## D.1 List Of Out-Of-Domain Relation-Types Generated By Flan During Few-Shot Prompting With Conll Assassinates, Purpose, Ispartof, Mother, Spouse, President, Date, Killed, Summer, Works_At, 7We Observe A High Degree Of Agreement Among The Annotators With A Fliess Kappa Of 0.83 ![19_image_0.png](19_image_0.png) | [Dalian, location/administrative_division/country, China] [[China, location/location/contains, Dalian]] | | | | |-----------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----|--------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [[James Thompson, Live_In, Illinois]] | [ivig, acute renal failure] [[immunoglobulin, acute renal failure]] | | | | [Vernon A. Walters, Work_For, U.S.] [[Vernon A. Walters, Live_In, U.S.] | [Turkey, location/administrative_division/country, Europe] [[Europe, LOC_CONTAINS, Turkey]] - Many people in Turkey have lost hopes in joining Europe and they are looking for other horizons , " said Onur Oymen , an opposition politician whose party is staunchly secular. | NYT | | | [clozapine, agranulocytosis] [[clozapine, granulocytopenia]] | | | | | Full Set of True Relations | [James Thompson, Work_For, Illinois], [Chicago, Located_In, Illinois] | | | | Detected FPs (Generated) | - Background: how to best treat psychotic patients who have had past clozapine-induced agranulocytosis or granulocytopenia remains a problem. | ADE | - To make his case , Dr. von Hagens invited two journalists to Dalian for a tour of his facility , which he said was the first center in China to preserve bodies. | | - Illinois Gov. James Thompson signed legislation last month scrapping Chicago 's central school board next July , to be replaced by parentrun councils empowered to set budgets and hire and fire principals . | CoNLL04 | | | | - On Friday, U.S. Ambassador Vernon A. Walters displayed photographs of one Libyan jet showing shapes resembling missile pods on its wings and fuselage. | - Acute renal failure is a rare complication following the administration of intravenous immunoglobulin (ivig). | | | | Dataset Input | | | | Sentenced_To_Death, Source, Statue, Secretary, Born, Year, Born_in, Day, Place, Number_Of_Passengers, Callers, Governor, Hometown, has_a_leader, is_a_member_of, Nickname, is_part_of, Office, Rank, Works_For, WorkedFor, Worked_For, Killed_By, Piano, Term, Sentence, Person, Movie, Said, Brother, Date_of_Death, Type, Death_Penalty, assassination_date, Worked_for, capital, Killed, Killing, Occupation, Crime, Years_in_use, Org, Education, Order_to_ignore, Assassination, Location, Officer, language, former_name, Total_acres, Age, Cause, Chairman, worked_for, Son, Staff_name, departure, Capsule_name, Operator, Spin-off, Owner, located_in, theory, Birth_Place, on_duty_with, City, Top_Leader, Director, structure, Known_as, former_chief_executive, Works_for, Native_name, Percentage, department, Component, reminds_someone_of, Sex, Bank, Appointed_By, Activity, Title, has_a_river_name, Size, Office_Space, Part, Kingdom, Attached_to, Death_Place, Years_on_the_Supreme_Court, Assassin, location, Newspaper, City" island, Employee, Friend, Native_Son, Speaker, Visitor, Date, Aircraft, channel, Sale_to, Creditor, Client, Nationality, Flight_Status, assassinater, on_behalf_of, Shot_By. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations (page 9) A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Introduction (1) ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 3,4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 3.2 ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not required. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhao-etal-2023-pre
Pre-trained Language Models Can be Fully Zero-Shot Learners
https://aclanthology.org/2023.acl-long.869
How can we extend a pre-trained model to many language understanding tasks, without labeled or additional unlabeled data? Pre-trained language models (PLMs) have been effective for a wide range of NLP tasks. However, existing approaches either require fine-tuning on downstream labeled datasets or manually constructing proper prompts. In this paper, we propose nonparametric prompting PLM (NPPrompt) for fully zero-shot language understanding. Unlike previous methods, NPPrompt uses only pre-trained language models and does not require any labeled data or additional raw corpus for further fine-tuning, nor does it rely on humans to construct a comprehensive set of prompt label words. We evaluate NPPrompt against previous major few-shot and zero-shot learning methods on diverse NLP tasks: including text classification, text entailment, similar text retrieval, paraphrasing, and multiple-choice question answering. Experimental results demonstrate that our NPPrompt outperforms the previous best fully zero-shot method by big margins, with absolute gains of 12.8{\%} in accuracy on text classification and 15.6{\%} on the GLUE benchmark. Our source code is available at \url{https://anonymous.4open.science/r/NPPrompt}.
# Pre-Trained Language Models Can Be Fully Zero-Shot Learners Xuandong Zhao†, Siqi Ouyang†, Zhiguo Yu‡, Ming Wu‡**, Lei Li**† †UC Santa Barbara ‡Microsoft {xuandongzhao,siqiouyang,leili}@cs.ucsb.edu {zhiguo.yu,mingwu}@microsoft.com ## Abstract How can we extend a pre-trained model to many language understanding tasks, without labeled or additional unlabeled data? Pre-trained language models (PLMs) have been effective for a wide range of NLP tasks. However, existing approaches either require fine-tuning on downstream labeled datasets or manually constructing proper prompts. In this paper, we propose nonparametric **prompt**ing PLM (NPPrompt) for fully zero-shot language understanding. Unlike previous methods, NPPrompt uses only pre-trained language models and does not require any labeled data or additional raw corpus for further fine-tuning, nor does it rely on humans to construct a comprehensive set of prompt label words. We evaluate NPPrompt against previous major fewshot and zero-shot learning methods on diverse NLP tasks: text classification, text entailment, similar text retrieval, paraphrasing, and multiple-choice question answering. Experimental results demonstrate that our NPPrompt outperforms the previous best fully zero-shot method by big margins, with absolute gains of 12.8% in accuracy on text classification and 15.6% on the GLUE benchmark. Our source code is available at https://github. com/XuandongZhao/NPPrompt. ## 1 Introduction Natural language understanding (NLU) has been important in many applications such as intelligent dialog assistants, online search, and social media analysis. Recent advancement of NLU has been driven by emergent pre-trained language models (PLMs) including BERT (Devlin et al., 2019; Liu et al., 2019b), GPT (Radford et al., 2018, 2019; Brown et al., 2020), BART (Lewis et al., 2020), and T5 (Raffel et al., 2020). Prior studies show that PLMs obtain substantial knowledge during pretraining on raw text corpus (Petroni et al., 2019; Feldman et al., 2019). By fine-tuning on taskspecific labeled data, PLMs exploit such knowledge and gain impressive accuracy on a wide range of NLP tasks, such as text classification (Kowsari et al., 2019), question answering (Rajpurkar et al., 2016), machine reading comprehension (Campos et al., 2016), etc. However, fine-tuning approaches are expensive. It requires labeled datasets, which are rarely available for many tasks. Significant computational efforts are needed to update PLMs' parameters for multiple tasks. In addition, fine-tuning results in one distinct model for each task to maintain. How can we generalize a pre-trained model to many NLP tasks, without labeled or additional unlabeled data? Existing few-shot and zero-shot approaches propose to construct prompts to elicit desired predictions from PLMs (Brown et al., 2020). The main idea of prompting PLMs is to convert an input utterance to one with masked templates. For example, in text classification an input can be "The Warriors won the NBA championship 2022" and it is instead converted to "The Warriors won the NBA championship 2022. This topic is about [MASK]". A PLM (e.g. BERT) takes the converted text and produces predictions for the masked token, along with the probability. Ideally, a PLM will generate a higher probability for the word "sports" than "politics" on the [MASK] token. Although these prompting-based methods are effective, they require unlabeled data for training or huge human efforts to construct prompts and to choose designated tokens to represent class labels (Schick and Schütze, 2021a,b; Gao et al., 2021). In addition, these manually constructed *verbalizers*, i.e. mapping from words (e.g. "basketball") to class labels (e.g. SPORTS), do not extend to new emerging categories after PLMs are deployed. In this paper, we investigate the fully zero-shot learning problem for NLU where only the target label names are available but not the extra raw text. We propose nonparametric **prompt**ing PLM (NPPrompt), a novel method to generate predic15590 tions for semantic labels without any fine-tuning. NPPrompt uses PLM's own embeddings to automatically find relevant words to labels (e.g. "basketball" and "NBA" for SPORTS), therefore it does not need humans to construct verbalizers. Our key idea is to search for the top k nearest neighbors to a label name in the embedding manifold and then generate and aggregate PLM's predicted logits from masked prompts. In the above case, both predicted values for "basketball" and "NBA" contribute to the final prediction for the SPORTS category. In this way, NPPrompt can be easily generalized to any new categories as long as the category names are semantically meaningful. The contributions of this paper are as follows. a) We develop NPPrompt, a novel method for fully zero-shot learning with PLMs. b) We conduct extensive experiments on diverse language understanding tasks including text classification, text entailment, similar text retrieval, paraphrasing, and multiple-choice question answering. Experimental results show that NPPrompt outperforms the previous zero-shot methods by absolute 12.8% in accuracy on text classification and 15.6% on the GLUE benchmark. Surprisingly, NPPrompt is on a par with the best prior method that trained with manual verbalizers, an additional knowledge base, and extra unlabeled data. ## 2 Related Work Prompting The success of GPT-3 (Brown et al., 2020) has attracted much attention to prompting engineering, a new way to leverage pre-trained language models. (Brown et al., 2020) concatenate a few input and output pairs and feed them to the large-scale GPT-3 language model, which is an intuitive in-context learning paradigm, allowing the model to generate answers for additional cases autoregressively. Recent works (Schick and Schütze, 2021a,b) show that small-scale pretrained language models such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b) and ALBERT (Lan et al., 2019) can also achieve decent performance using prompt-tuning. Prompting has been applied to a large variety of tasks such as Text Classification (Schick and Schütze, 2021a), Natural Language Understanding (Xu et al., 2022), Knowledge Probing (Petroni et al., 2019), and Relation Extraction (Han et al., 2021). Typically, a piece of prompt contains a template and a verbalizer. The language model predicts a probability distribution over vocabulary given the template and the verbalizer transforms it into a prediction over class labels. In this work, we focus on designing the verbalizers automatically. Verbalizer Design The verbalizer plays a crucial role in prompting as it connects model outputs and labels, significantly influencing performance. (Schick and Schütze, 2021a) design human written verbalizers for prompting, however, they are highly biased towards personal vocabulary with inadequate coverage. Apart from manually designed verbalizers, some recent studies explore automatic verbalizer construction. Auto-L (Gao et al., 2021) uses re-ranking to find the label words set by finetuning the model on the candidates searched by RoBERTa; AutoPrompt (Shin et al., 2020) applies gradient-based search to create both prompts and label words automatically with a few trigger examples. But these approaches need to update parameters with gradient descent, which turns out to be infeasible without access to the model weights (e.g., GPT-3). KPT (Han et al., 2021) incorporates external knowledge into the verbalizer in which the unlabeled dataset is needed to refine the label words and thus is not applicable to scenarios where only label names are known. In contrast, our approach NPPrompt directly finds, without any gradient update, relevant words to label names with PLM's initial word embedding only. Zero-shot Text Classification General zero-shot text classification typically focuses on classifying texts into categories that were not seen during the training process. Transferring knowledge from seen classes to unseen ones requires accurate and discriminative descriptions of all classes (Liu et al., 2019a; Xia et al., 2018) or joint embeddings of categories and documents (Nam et al., 2016). However, these methods rely on supervised data for the known label set, making them unsuitable for scenarios where no labeled pairs for any category are available. SimPTC (Fei et al., 2022) improves zero-shot classification by clustering input texts and employing class-related prompts. LOTClass (Meng et al., 2020) proposes a model that utilizes label names with self-training for zero-shot classification. Nonetheless, both SimPTC and LOTClass still require an unlabeled corpus or knowledge base to extract topic-related words and perform self-training. In contrast, NPPrompt achieves comparable or even superior performance without the need for any unlabeled dataset or knowledge base. ## 3 Background: Prompt-Based Tuning For Plms We first provide standard paradigms, prompt-based tuning, that perform well in few-shot scenarios, before introducing our approach for the zero-shot case. Take N way text classification as an example. We aim to predict the label y ∈ Y for each sentence, where Y is the label set with N distinct classes. Prompt-based tuning tunes PLM using customized prompts (Brown et al., 2020). The regular prompt-based tuning converts a specific task to a cloze-style mask language modeling problem. For each input example x (single sentence or sentence pair), we first apply a task template T on it, converting original input x to xprompt. For instance, we concatenate the template "T (·) = This topic is about [MASK]" with the original input "The Warriors won the NBA championship 2022" and wrap it into: xprompt = T (x) = x. This topic is about [MASK] The *verbalizer* f in vanilla prompt engineering maps a set of selected words V from the vocabulary to the original label space Y, i.e., f : *V → Y*. Inversely, we use M(yj ) to denote the *label words* in V that are mapped into a specific label yj , ∪yj∈YM(yj ) = V. Then we calculate the probability of label yj : $$v_{i}\mid x_{\mathrm{prom}}$$ P(yj | x) = g (P([MASK] = vi | xprompt) | vi ∈ M(yj )), where g(·) is for aggregating the probability of label words into the probability of the label. Then PLMs can be fine-tuned by minimizing the crossentropy loss with supervised examples. ## 4 Proposed Method: Npprompt We inherit PLM with verbalizers framework but keep PLM's parameters frozen (Gao et al., 2021). The key idea of NPPrompt is using PLM's word embeddings to automatically construct verbalizers - mapping from words to labels - in a fully zeroshot way. It does not need any additional raw text corpus for fine-tuning. NPPrompt consists of two steps to compute predictions for any labels in a nonparametric form (Figure 1). 1) We search for all label words closely related to each class yj in PLM's token embedding manifold. 2) Then we use the PLM to predict values for [MASK], filter them using each class's set of label words, and aggregate the properly weighed outputs to produce the final prediction. In the following, we describe NPPrompt for text classification but it generalizes to other language understanding tasks. k**-Nearest-Neighbor Verbalizer Construction** For each class label (e.g. "SPORTS"), we search over the whole vocabulary V for the top-k words nearest to the label name in the PLM's embedding space. Here, the distance between words and label names is measured using the cosine similarity score. Other distance metrics work as well and are examined in Section 5. We denote k as the *neighborhood* number. Assuming the embeddings of word vi and label name yj are emb(vi) and emb(yj ) respectively, the label words of the verbalizer for yj are selected by top-k ranking: $${\mathcal{M}}(y_{j})=\mathrm{Top-}k\left\{S(\mathbf{emb}(v_{i}),\mathbf{emb}(y_{j}))\right\},\tag{1}$$ where S(·) is the cosine similarity function: S (emb(vi), emb(yj )) = emb(vi) ∥emb(vi)∥· emb(yj ) ∥emb(yj )∥ . Since the PLM is already pre-trained on raw text corpus, it acquires sensible semantic knowledge and relatedness of words in the vocabulary. We use PLM's embedding to search for label words semantically relevant to given label names. For illustration, we show the found label words of two categories in the AG News dataset (Zhang et al., 2015) and the corresponding similarity scores in Table 1. We also extend our verbalizer to support label names with longer expressions in Appendix A.2. Word Sim Word Sim " sports" 1.00 " business" 1.00 " Sports" 0.77 " Business" 0.78 " sport" 0.75 " businesses" 0.74 " sporting" 0.68 "business" 0.72 " athletics" 0.65 "Business" 0.67 "sports" 0.65 " businessman" 0.59 "Sports" 0.65 " corporate" 0.58 " Sport" 0.62 " company" 0.56 " athletic" 0.61 " enterprise" 0.55 " athletes" 0.61 " businessmen" 0.55 Nonparametric Aggregation of Prompted Predictions For each input text x, we construct a prompt-augmented sequence xprompt = T (x) with a [MASK] token. We use the PLM to predict tokens ![3_image_0.png](3_image_0.png) for [MASK]. In contrast to previous prompting methods which directly calculate the probability over the surface labels, we use the nearest label words from above to compute the probability for each output label. Only the words in a label's top-k neighborhood will contribute to the class prediction. The contribution from each label word is non-equal. To be specific, with T (x), a PLM produces the logit vector Θ[MASK] for all possible words at the [MASK] token. Notice that if the whole vocabulary is V, Θ[MASK] ∈ R|V|. Then we compute the class probability for a label yj by aggregating the logits filtered by the verbalizer's label words. We use kernel smoothing to aggregate as follows: $$Q(y_{j}|x)=\sum_{v_{i}\in{\cal M}(y_{j})}w(v_{i},y_{j})\cdot\Theta(\mbox{\small[MASK]}=v_{i}|x_{\rm prompt}={\cal T}(x))\tag{2}$$ Where the weight between label word vi and class name yj is defined as: $$w(v_{i},y_{j})=\frac{\exp\left(S(\mathbf{emb}(v_{i}),\mathbf{emb}(y_{j}))\right)}{\sum_{v_{t}\in\mathcal{M}(y_{j})}\exp\left(S(\mathbf{emb}(v_{t}),\mathbf{emb}(y_{j}))\right)}\tag{3}$$ Finally, the best class prediction is selected from the maximum of all labels: $$\widetilde{y}=\operatorname{argmax}_{y_{j}}Q\left(y_{j}\mid x\right).$$ Notice since we use kernel smoothing on logits instead of probability, Q is also unnormalized probability. For example, AG News has two classes y1 = {SCIENCE}, y2 = {SPORTS}. From Table 1, the verbalizer for SPORTS M(y1) includes label words "sports", "athletics", etc, and the verbalizer for BUSINESS M(y2) includes label words "business", "corporate", etc. Given an input text x "The Warriors won the NBA championship 2022", the prompt-augmented sequence xprompt will be "The Warriors won the NBA championship 2022. This topic is about [MASK]". The PLM computes logits for every word Θ([MASK] = v|xprompt). NPPrompt computes the unnormalized probabilities for SPORTS and BUSINESS: $Q(\text{Sports}|x)=w(\text{`sports"},\text{Sports})\cdot\Theta(\text{`MASK})=\text{``sports"}|x_{\text{prompt}})$ $+w(\text{``attheltics"},\text{Sports})\cdot\Theta(\text{`MASK})=\text{``attheltics"}|x_{\text{prompt}})+\cdots$ $Q(\text{Business}|x)=w(\text{``business"},\text{Business})\cdot\Theta(\text{`MASK})=\text{``business"}|x_{\text{prompt}})$ $+w(\text{``corporate"},\text{Business})\cdot\Theta(\text{`MASK})=\text{``corporate"}|x_{\text{prompt}})+\cdots$ If the aggregated prediction Q for SPORTS is larger than BUSINESS, NPPrompt outputs SPORTS. There are certain conditions where one class has label names containing little semantic meaning or where several keywords are needed to define a label. For instance, in the DBPedia dataset (Lehmann et al., 2015), one class is related to NATURALPLACE, then we can use the keywords {"river", "lake", "mountain"} to represent this class. In this setting, we pick out the keyword with the maximum score calculated by Equation 2 to represent each label first. Then we choose the label with the largest score. We use Φ(yj ) to denote all keywords in class yj , and the final prediction is : $$\widetilde{y}=\arg\operatorname*{max}_{y_{j}}\Bigl(\operatorname*{arg\,max}_{y^{\prime}\in\Phi(y_{j})}Q\left(y^{\prime}\mid x\right)\Bigr).\qquad\mathrm{(4)}$$ ## 5 Experiment We conduct extensive zero-shot learning experiments to demonstrate the effectiveness of our method. We provide detailed information on our implementation and address several research questions related to NPPrompt. ## 5.1 Datasets, Prompt Templates, And Experimental Setup | Dataset | Classification Type | # Classes | # Test | |-----------|--------------------------|-------------|----------| | AG News | News Topic | 4 | 7,600 | | DBPedia | Wikipedia Topic | 14 | 70,000 | | IMDB | Movie Review Sentiment | 2 | 25,000 | | Amazon | Product Review Sentiment | 2 | 400,000 | Table 2: Dataset statistics. We adopt sentiment classification tasks on two datasets, IMDB (Maas et al., 2011) and Amazon (McAuley and Leskovec, 2013), and topic classification tasks on another two datasets, AG News (Zhang et al., 2015) and DBPedia (Lehmann et al., 2015). All datasets are in the English language. For each task, we directly use the test set to assess model performances, without incorporating validation or training sets for post-tuning or cherrypicking hand-crafted prompts. The statistics of each dataset are shown in Table 2. To concentrate on the verbalizer and reduce the influence of templates, we adopt multiple fixed manual templates following (Hu et al., 2022). We report the best template used for the RoBERTalarge model in Table 3. | Dataset | Template | |-----------|------------------------------------------| | AG News | A [MASK] news : x . | | DBPedia | x1 x2 In this sentence, x1 is a [MASK] . | | IMDB | x All in all, it was [MASK] . | | Amazon | x All in all, it was [MASK] . | Table 3: Prompt templates for NPPrompt. We implement our experiments based on an open-source toolkit OpenPrompt (Ding et al., 2021), which aims to conduct prompt learning easily. We choose RoBERTa-large (Liu et al., 2019b) as our pre-trained language model. We report the best accuracy of classification results for all experiments using different neighborhood numbers. Since we directly use the pre-trained models for testing, there is no randomness (random seed) in this process. All experiments are conducted on Nvidia A6000 GPUs and more details can be found in Appendix A.1. ## 5.2 Baselines We evaluate the following baseline methods. Semantic Retrieval We utilize sentence embedding models (Reimers and Gurevych, 2019) to obtain the embedding for each sentence and descriptions for each class. Then we calculate the cosine similarity between sentences and label descriptions. We assign the most similar class labels to the sentence. Particularly, we use all-mpnet-base-v2 from Hugging Face as the sentence embedding model, and the descriptions for each class can be found in Appendix A.1. NSP-BERT (Sun et al., 2021) propose text entailment tasks to replace text classification tasks and then use the Next Sentence Prediction (NSP) head to predict the results. We show the template we use in Appendix A.1. ManualVerb Manual verbalizers are defined by human experts with domain knowledge and we simply use the label words provided by OpenPrompt (Ding et al., 2021). LOTClass (Meng et al., 2020) employ pretrained neural language models with unlabeled data for category understanding, i.e., finding words similar to label names. They then introduce a selftraining approach to the entire unlabeled corpus to generalize the model. GPT-3 with descriptions Following (Brown et al., 2020), we manually write the descriptions for each class and query GPT-3 where the predicted token serves as the prediction. We show the descriptions in Appendix A.1. ChatGPT with descriptions In the case of ChatGPT (OpenAI, 2022), we employ the same descriptions as those used for GPT-3. We query the ChatGPT model using these descriptions, and the predicted token is considered as the corresponding prediction. Our experimentation is based on the March 2023 version of ChatGPT. Method Human/KB Unlabeled AG News DBPedia IMDB Amazon Avg. ManualVerb " % 79.60.6 71.71.1 92.00.7 87.30.4 82.7 Semantic Retrieval " % 73.11.2 78.60.8 64.81.3 59.40.7 69.0 NSP-BERT " % 77.40.6 64.75.3 72.81.1 72.73.9 71.9 GPT-3 w. descriptions " % 83.4 82.5 88.8 89.4 86.0 ChatGPT w. descriptions " % 83.8 92.0 92.7 **95.8** 91.1 SimPTC " % 86.90.3 93.21.0 91.00.0 93.90.0 **91.3** LOTClass w/o. self train % " 82.2 86.0 80.2 85.3 83.4 LOTClass % " 86.4 91.1 86.5 91.6 88.9 KPT " " 86.7 87.4 **94.0** 94.6 90.7 Null Prompt % % 67.92.0 56.83.9 82.51.5 89.41.0 74.2 Multi-Null Prompt % % 68.21.8 67.61.8 86.60.6 86.22.7 77.2 NPPrompt % % 85.20.5 86.80.1 94.20.2 93.90.0 **90.0** SimPTC Fei et al. (2022) show that zero-shot text classification can be improved by leveraging text clustering in the embedding spaces of pre-trained language models. SimPTC utilizes a Bayesian Gaussian Mixture Model to fit unlabeled texts. The initialization of cluster positions and shapes is performed using class names. KPT (Hu et al., 2022) propose knowledgeable prompt-tuning, which expands the label words space using external knowledge bases (KB). KPT also refines the expanded label words based on the unlabeled data. We show the best results of KPT in the zero-shot setting. | MNLI MNLI-mm | SST-2 | QNLI | RTE | MRPC | QQP | CoLA | Avg. | | | |-------------------------------------------------------------------------|---------|---------------------------------|------------------------------------------|---------------|--------------|--------|---------|-----|------| | (acc) | (acc) | (acc) | (acc) | (acc) | (F1) | (F1) | (Matt.) | | | | With human designed prompts / few-shot data Manual Label 50.8 51.7 83.6 | 50.8 | 51.3 | 61.9 | 49.7 | 2.0 | 50.2 | | | | | In-context learning | 52.00.7 | 53.40.6 | 84.81.3 53.80.4 60.41.4 45.76.0 | 36.15.2 | −1.52.4 48.1 | | | | | | Auto-L | 41.65.4 | 42.36.2 | 84.33.3 57.93.9 61.97.5 | 67.77.9 | 55.55.0 | 1.24.8 | 51.6 | | | | AMuLaP | 50.82.1 | 52.31.8 | 86.91.6 53.12.8 58.97.9 56.35.0 | 60.22.7 | 2.31.4 | 52.6 | | | | | Few-shot fine-tuning 45.86.4 | 47.86.8 | 81.43.8 60.26.5 54.43.9 76.62.5 | 60.74.3 | 33.914.3 57.6 | | | | | | | Fully zero-shot Majority | 32.7 | 33.0 | 50.9 | 49.5 | 52.7 | 81.2 | 0.0 | 0.0 | 37.5 | | Null Prompt | 33.10.4 | 33.80.5 | 79.14.0 50.70.1 47.20.6 12.97.0 | 1.31.0 | −1.12.0 32.1 | | | | | | Multi-Null Prompt | 38.03.5 | 38.54.1 | 70.27.7 52.21.7 53.02.2 19.98.7 25.513.4 | 6.22.0 | 37.9 | | | | | | NPPrompt | 45.70.6 | 45.90.5 | 86.31.2 57.60.7 55.03.4 79.81.6 | 52.40.4 | 4.94.1 | 53.5 | | | | Null Prompt (IV et al., 2022) insert a token at the end of the text (i.e. using the prompt template " [x][MASK]" ) and then use the prediction of the [MASK] token to perform zero-shot classification. Multi-Null prompting (Wang et al., 2021) find that simply introducing a few prompt [MASK]s can improve the performance and robustness of the Null Prompt in the zero-shot settings. ## 5.3 Main Results We demonstrate our experimental results in Table 4. Overall NPPrompt outperforms Null Prompt and Multi-Null Prompt remarkably by over 10 percent in a fully zero-shot setting. NPPrompt achieves an accuracy of over 85% on AG News and DBPedia and over 90% on IMDB and Amazon. We conjecture that topic classifications in AG News and DBPedia are more complicated than binary sentiment classifications in IMDB and Amazon, hence the higher accuracy on the latter. NPPrompt is only slightly worse than KPT and SimPTC but outperforms most baseline methods in which human efforts/external knowledge or unlabeled data are strictly required. It's worth noting that NPPrompt performs much better than ManualVerb, suggesting that the label words generated by our method are more comprehensive and unbiased than human-designed ones. Besides, NPPrompt can beat GPT-3 by 4% in terms of average accuracy, a strong sign of the great potential for RoBERTa-large with 355M parameters compared to 175B parameters giant GPT-3. To explore how our method NPPrompt performs on different kinds of tasks, we also conduct experiments on the GLUE benchmark (Wang et al., 2018). Specifically, we test on Multi-Genre Natural Language Inference Matched (MNLI), Multi-Genre Natural Language Inference Mismatched (MNLImm)(Williams et al., 2018) , Question Natural Language Inference (QNLI) (Rajpurkar et al., 2016) and Recognizing Textual Entailment (RTE) (Bentivogli et al., 2009) for Natural Language Inference (NLI); Microsoft Research Paraphrase Matching (MRPC) (Dolan and Brockett, 2005) and Quora Question Pairs (QQP) (Chen et al., 2018) for Paraphrase Similarity Matching; Stanford Sentiment Treebank (SST-2) (Socher et al., 2013) for Sentiment Classification; The Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019) for Linguistic Acceptability. As shown in Table 5, NPPrompt outperforms all other methods in fully zero-shot setting. AutoL (Gao et al., 2021) and AMuLaP (Wang et al., 2022) are both automatic label words searching methods utilizing few-shot examples. Our method NPPrompt can even outperform them without any unlabeled data or few-shot training examples. ## 5.4 Effects Of Similarity Functions In Nonparametric Aggregation Both weight and similarity functions play a critical role in the design of NPPrompt and we test how NPPrompt performs on AG News with different configurations. The "Default" setting is as stated in Equation 1 and 3. We fix the similarity function S (emb(vi), emb(yj )) = emb(vi) ∥emb(vi)∥· emb(yj ) ∥emb(yj )∥ , set w(vi, yj ) = 1 for the "Same weight" setting and w(vi, yj ) = PS(emb(vi),emb(yj )) vk∈M(yj ) S(emb(vk),emb(yj )) for the ![6_image_0.png](6_image_0.png) "Average weight" setting. Besides cosine similarity, the Euclidean distance and the dot product are also common similarity measures for embeddings. Consequently, we fix the weight w(vi, yj ) = 1, choose S (emb(vi), emb(yj )) = −∥emb(vi) − emb(yj )∥ for the "Euclidean distance" setting and S (emb(vi), emb(yj )) = emb(vi)·emb(yj ) for the "Dot product" setting. It can be informed from Figure 2 that with a fixed similarity function, different weight calculations yield comparable results, but with a fixed weight, cosine similarity is the optimal similarity measure. ![6_image_1.png](6_image_1.png) ## 5.5 Can We Sum Over Probabilities? NPPrompt sums up all logits for a label word set as shown in Equation 2. Another possible approach is to sum up the probabilities from PLM's prediction for the label words and choose the argmax for all P different labels as the prediction: P(yj |xprompt) = vi∈M(yj ) w(vi, yj ) · P([MASK] = vi|xprompt), ye = arg max yj P (yj | xprompt). We conduct experiments on AG News to compare the above two approaches, one that sums up logits ("sum logit") and one that sums up probabilities ("sum prob"). Figure 3 presents the results and we find that "sum logit" performs better at small k but "sum prob" ![7_image_0.png](7_image_0.png) delivers better results when k exceeds 30. "sum logit" achieves the best result at k = 12 among all experiments. ## 5.6 **How Many Label Words Should We Choose?** The number of label words impacts the performance of our method NPPrompt as well. In Figure 4, we display the performances of different models with varied neighborhood numbers. In general, NPPrompt attains similar test accuracy across different neighborhood numbers. Regardless of the choice for neighborhood number, NPPromptRoBERTa-large achieves over 80% accuracy in topic classification tasks on AG News and DBPedia, and it gains over 90% accuracy in sentiment classification tasks on IMDB and Amazon. In realworld applications, we can simply choose a fixed neighborhood number (e.g. 8-10) to achieve decent performance. ## 5.7 How Does Npprompt Perform With Different Plms? | Method | AG | DB | IM | AZ | Avg. | |------------------------|-----------|------|------|------|--------| | NPPrompt-T5-base | 76.8 78.3 | 68.5 | 65.3 | 72.2 | | | NPPrompt-GPT2-base | 81.1 78.1 | 83.7 | 85.6 | 82.1 | | | NPPrompt-BERT-base | 79.4 77.8 | 57.7 | 53.5 | 67.1 | | | NPPrompt-BERT-large | 82.7 80.9 | 81.6 | 80.8 | 81.5 | | | NPPrompt-RoBERTa-base | 75.3 82.8 | 88.7 | 83.9 | 82.7 | | | NPPrompt-RoBERTa-large | 85.0 86.8 | 94.1 | 93.9 | 90.0 | | ![7_image_1.png](7_image_1.png) The performance of NPPrompt heavily relies on the choice of the pre-trained language model. This is due to the variations in label words for different categories, which stem from the distinct initial word embeddings and vocabularies employed by each PLM. Additionally, NPPrompt can be adapted for text generation models such as T5 (Raffel et al., 2020) and GPT-2 (Radford et al., 2019)) with minor modifications. In our approach, we utilize T5base/GPT2-base to generate the missing spans at the end of the prompt text. The first predicted token serves as the input to the verbalizer, and we follow the nonparametric aggregation steps outlined in Appendix A.1 to determine the category. To investigate the impact of employing different PLMs, we conduct additional experiments using BERT-base-cased, BERT-large-cased, RoBERTabase, T5-base, and GPT2-base models. The results are presented in Table 6. Notably, NPPrompt with RoBERTa-large achieves the highest performance, which can be attributed to the model's extensive parameter count and the fact that it is pretrained on a large corpus. As anticipated, larger models such as RoBERTa-large and BERT-large outperform their base counterparts (RoBERTa-base and BERT-base) on average, with RoBERTa consistently exhibiting superior accuracy compared to BERT models. While NPPrompt-T5-base and NPPrompt-GPT2-base demonstrate commendable performance, they do not surpass the performance of NPPrompt-RoBERTa-large. ## 5.8 Is Npprompt Limited To Text Classification Tasks Our research extends beyond text classification and encompasses experiments on multiple-choice question answering (QA) tasks as well. Specifically, | Method | CQA Dev Set Accuracy | |-------------------------|------------------------| | Few-shot Direct GPT-J | 20.9 | | Few-shot CoT GPT-J | 36.6 | | Few-shot CoT LaMDA 137B | 55.6 | | NPPrompt-RoBERTa-large | 34.2 | Table 7: Test results on CommonsenseQA dataset. Direct: directly output the final answer; CoT: prompted with chain-of-thought (CoT) rationales; LaMDA: method in (Wei et al., 2022). we assess the performance of NPPrompt using the widely-utilized CommonsenseQA (CQA) dataset (Talmor et al., 2019). In this new setting, we use the prompt template "x The answer is [MASK].", e.g. "What do animals do when an enemy is approaching? The answer is [MASK].". Subsequently, we search for the k-nearest neighbors for each target answer, setting k as 15. The prediction is obtained by applying the same process employed for text classification tasks. The results of our experiments are presented in Table 7 (few-shot results obtained from (Zelikman et al., 2022)). Notably, NPPrompt not only achieves satisfactory performance on the CommonsenseQA dataset but even outperforms few-shot GPT-J (Wang, 2021) as well. This demonstrates the versatility and flexibility of NPPrompt across various NLP scenarios. ## 6 Discussion Our proposed method, NPPrompt, demonstrates exceptional performance in zero-shot text classification tasks. We attribute this success to two key factors. Firstly, by utilizing the initial word embedding from pre-trained language models (PLMs), we are able to identify cognates of the label words. For instance, in Table 1, we observe variations of the word "business" such as "Business" and "businesses" for the BUSINESS category. Secondly, we effectively leverage the capabilities of pre-trained language models by reformulating the zero-shot classification problem as a masked token prediction task, which aligns with the pre-training process. Furthermore, NPPrompt offers a promising solution for dynamic and open zero-shot classification problems, where new classes may arise or old classes may be removed. With the use of efficient PLMs and category names, as well as the key word design in Equation 4, NPPrompt can also be applied in scenarios where label names do not possess semantic meaning (e.g. categories with label names "A", "B", "C"). This technique has the potential for wide deployment in real-world applications. ## 7 Conclusion In this paper, we propose NPPrompt, a novel and effective method for fully zero-shot learning with pre-trained language models. We use initial word embedding of PLM to automatically find related words for category names, which enables us to construct the verbalizers without manual design or unlabeled corpus. Experimental results show that NPPrompt outperforms the previous zero-shot methods by large margins. ## Limitations For those label names without semantic meanings, several keywords are still required for NPPrompt to work well. Furthermore, this study focuses exclusively on the zero-shot setting. However, there are potential avenues for exploration in the few-shot scenario, which is prevalent in practical applications. The applicability of NPPrompt to other tasks, such as ranking and relation extraction, remains uncertain and warrants further investigation. Designing a refinement method to jointly search for label words and templates can be a promising direction for future research. ## References Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The sixth pascal recognizing textual entailment challenge. In TAC. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *NeurIPS*. Daniel Fernando Campos, Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, Li Deng, and Bhaskar Mitra. 2016. Ms marco: A human generated machine reading comprehension dataset. *ArXiv*, abs/1611.09268. Z. Chen, H. Zhang, X. Zhang, and L. Zhao. 2018. Quora question pairs. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*. Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, and Maosong Sun. 2021. Openprompt: An open-source framework for prompt-learning. *arXiv preprint arXiv:2111.01998*. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In *IJCNLP*. Yu Fei, Ping Nie, Zhao Meng, Roger Wattenhofer, and Mrinmaya Sachan. 2022. Beyond prompting: Making pre-trained language models better zero-shot learners by clustering representations. In *EMNLP*. Joshua Feldman, Joe Davison, and Alexander M. Rush. 2019. Commonsense knowledge mining from pretrained models. In *EMNLP*. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In ACL. Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. Ptr: Prompt tuning with rules for text classification. *ArXiv*, abs/2105.11259. Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juan-Zi Li, and Maosong Sun. 2022. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. In ACL. Robert L Logan IV, Ivana Balavzevi'c, Eric Wallace, Fabio Petroni, Sameer Singh, and Sebastian Riedel. 2022. Cutting down on prompts and parameters: Simple few-shot learning with language models. In FINDINGS. Kamran Kowsari, K. Meimandi, Mojtaba Heidarysafa, Sanjana Mendu, Laura E. Barnes, and Donald E. Brown. 2019. Text classification algorithms: A survey. *Information*, 10:150. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. In *ICLR*. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S. Auer, and Christian Bizer. 2015. Dbpedia - a large-scale, multilingual knowledge base extracted from wikipedia. *Semantic Web*, 6:167–195. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL. Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, and Albert Y. S. Lam. 2019a. Reconstructing capsule networks for zero-shot intent classification. In *EMNLP*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, A. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL. Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In *RecSys*. Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, and Jiawei Han. 2020. Text classification using label names only: A language model self-training approach. In *EMNLP*. Jinseok Nam, Eneldo Loza Mencía, and Johannes Fürnkranz. 2016. All-in text: Learning document, label, and word representations jointly. In *AAAI*. OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. *OpenAI blog*. Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? In *EMNLP*. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In *EMNLP*. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In *EMNLP*. Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In *EACL*. Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In *NAACL*. Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In *EMNLP*. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, A. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *EMNLP*. Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening sentence representations for better semantics and faster retrieval. Yi Sun, Yu Zheng, Chao Hao, and Hangping Qiu. 2021. Nsp-bert: A prompt-based zero-shot learner through an original pre-training task-next sentence prediction. ArXiv, abs/2109.03564. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In *NAACL*. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In *BlackboxNLP@EMNLP*. Ben Wang. 2021. Mesh-Transformer-JAX: ModelParallel Implementation of Transformer Language Model with JAX. https://github.com/ kingoflolz/mesh-transformer-jax. Han Wang, Canwen Xu, and Julian McAuley. 2022. Automatic multi-label prompting: Simple and interpretable few-shot classification. In *NAACL*. Yue Wang, Lijun Wu, Xiaobo Liang, Juntao Li, and Min Zhang. 2021. Are bert families zero-shot learners? a study on their potential and limitations. *OpenReview*, https://openreview.net/pdf?id=YLglAn-USkf. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In *NeurIPS*. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL. Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip S. Yu. 2018. Zero-shot user intent detection via capsule neural networks. In *EMNLP*. Jingjing Xu, Qingxiu Dong, Hongyi Liu, and Lei Li. 2022. Go-tuning: Improving zero-shot learning abilities of smaller language models. *ArXiv*, abs/2212.10461. E. Zelikman, Yuhuai Wu, and Noah D. Goodman. 2022. Star: Bootstrapping reasoning with reasoning. *ArXiv*, abs/2203.14465. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In *NIPS*. ## A Appendix A.1 Experimental Details Table 8 shows all the manual templates of NSP-BERT. We show the prompt templates for NPPrompt-T5 in Table 9. Table 11 summarizes manual designed descriptions of each dataset for Semantic Retrieval. As for GPT-3, we query the OpenAI API1and test with Davinci-001 model. The prompts for GPT-3 are shown in Table 12. We list all templates and label names for NPPrompt of all experiments in Table 13. We also list the related words result in sentiment classification (GOOD/BAD) and NLI (YES/NO)) tasks in Table 14. | Dataset | Template | |-----------|------------------------------------------| | AG News | News: label name. | | DBPedia | News: label name. | | IMDB | This text shows label name sentiment. | | Amazon | The attitude of this text is label name. | Table 8: Prompt templates of NSP-BERT (Sun et al., 2021) in Table 4. | Dataset | Template | |-----------|-----------------------------------------------| | AG News | x In this sentence, the topic is about [MASK] | | DBPedia | x1 x2 In this sentence, x1 is a [MASK] | | IMDB | x In summary, the movie was [MASK] | | Amazon | x All in all, it was [MASK] | Table 9: Prompt template of NPPrompt with T5-base (k = 15) in Tabel 6. ## A.2 What Label Words Do Different Plms Choose? RoBERTa-large RoBERTa-base BERT-large BERT-base ![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) ![11_image_4.png](11_image_4.png) Word Sim Word Sim Word Sim Word Sim " school" 1.00 " school" 1.00 "school" 1.00 "school" 1.00 " School" 0.80 " School" 0.75 "School" 0.69 "School" 0.70 " schools" 0.77 " schools" 0.71 "schools" 0.63 "schools" 0.63 "school" 0.74 "school" 0.70 "college" 0.55 "college" 0.54 " SCHOOL" 0.69 "School" 0.70 "university" 0.50 "university" 0.51 "School" 0.68 " SCHOOL" 0.56 "student" 0.42 "College" 0.40 " university" 0.66 " college" 0.50 "church" 0.41 "church" 0.40 " college" 0.65 " university" 0.50 "house" 0.38 "student" 0.37 " Schools" 0.65 " Schools" 0.49 "education" 0.38 "students" 0.37 " schooling" 0.64 " schooling" 0.45 "students" 0.37 "Schools" 0.37 " preschool" 0.63 " preschool" 0.44 "class" 0.37 "academy" 0.37 " kindergarten" 0.63 " kindergarten" 0.41 "town" 0.37 "class" 0.36 " classroom" 0.60 " student" 0.41 "College" 0.36 "education" 0.36 " student" 0.58 " students" 0.39 "Schools" 0.36 "University" 0.35 " education" 0.58 " classroom" 0.38 "work" 0.35 "house" 0.35 Table 10: The top 15 similar words of SCHOOL category in the DBPedia dataset. Sim: similarity scores. We summarize the label words of different PLMs for SCHOOL category in DBPedia in Table 10. RoBERTa-large and RoBERTa-base share similar sets of label words yet with a minor discrepancy 1https://openai.com/api/ between their similarity scores. RoBERTa-large usually produces larger similarities than RoBERTabase. In contrast, the label words in RoBERTa are quite different from those in BERT. ## A.3 Extension To Multi-Word Expressions Here we extend our method to support multi-word label names like NATURALPLACE, MEANOFTRANSPORTATION and etc. The major part is to obtain related words to a multi-word label name. Once we obtain the related words, the rest non-parametric aggregation step remains identical. We consider two scenarios: The label name is multi-word (i.e., phrase) and related words are still single-words To model the phrase, we use average contextualized embedding instead of word embedding for both label names and related single-words to compute cosine similarity. As suggested in (Su et al., 2021), we whiten the contextualized output of RoBERTa by a linear transformation obtained from the contextualized embedding of all words in vocabulary. To obtain the best result, we select the output of layer 6 of RoBERTa. This extension achieves 61% accuracy on the DBPedia dataset using the original multi-word label names (original label names can be found at https://rdrr.io/cran/textdata/ man/dataset_dbpedia.html). Both the label name and related words are ![11_image_2.png](11_image_2.png) ![11_image_3.png](11_image_3.png) phrases Since the search space of a related phrase is exponentially large in its length, we use another prompt to filter candidate words. The template we use is "[LABEL_NAME] can also be called [MASK]∗n.", where n is the length of the candidate. For example, if the label name is MEANOFTRANS-PORTATION and n = 2, the template will look like "Mean of transportation can also be called [MASK] [MASK].". We feed it to RoBERTa and filter top-k candidate phrases of masked prediction. Since masked prediction is conditionally independent of each mask, we further re-rank the top-k candidate phrases by either the contextualized embedding method mentioned above or another autoregressive LM. For the latter one, we evaluate the perplexity of the template with [MASK] filled by candidate phrases. This generates 71% accuracy on DBPedia if the length of the phrase is two and the re-ranking is performed by GPT-2 (Radford et al., 2019). Descriptions AG News: The politics category is related to politics, government, and law. The sports category is related to sports, competition, and athletics. The business category is related to business, portfolio, economics, and money. The technology category is related to technology, software, system, and science. DBPedia: The company category is related to company, corporation, enterprise, brand, and business. The school category is related to school, academy, university, and college. The artist category is related to artist, art, painter, musician, singer, and creative. The athlete category is related to athletes, sports, Olympic, and gym. The politics category is related to politics, government, and law. The transportation category is related to transportation, transport, vehicle, and traffic. The building category is related to buildings, construction, and structure. The mountain category is related to river, lake, bay, and mountain. The village category is related to village, town, and rural. The animal category is related to animal, wildlife, and nature. The plant category is related to plant, shrub, tree, and forest. The album category is related to album, lyrics, cd, and song. The film category is related to film, movie, cinema, and video. The book category is related to book, novel, and publication. IMDB: The bad category is related to negative and bad reviews. The good category is related to positive and good reviews. Amazon: The bad category is related to negative and bad reviews. The good category is related to positive and good reviews. Table 11: Descriptions for Semantic Retrieval in Table 4. ## Prompts For Gpt-3 And Chatgpt AG News : [Descriptions] Definition: In this task, you are given a sentence. Your job is to classify the following sentence into one of the four different categories. The categories are: "politics", "sports", "business", and "technology". Input: [x]. Output: DBPedia: [Descriptions] Definition: In this task, you are given a sentence. Your job is to classify the following sentence into one of the fourteen different categories. The categories are: "company", "school", "artist", "athlete", "politics", "transportation", "building", "mountain", "village", "animal", "plant", "album", "film", and "book". Input: [x]. Output: IMDB: [Descriptions] Definition: In this task, you are given a sentence. Your job is to classify the following sentence into one of the two categories. The categories are: "bad" and "good". Input: [x]. Output: Amazon: [Descriptions] Definition: In this task, you are given a sentence. Your job is to classify the following sentence into one of the two categories. The categories are: "bad" and "good". Input: [x]. Output: Table 12: Prompts for GPT-3 and ChatGPT with descriptions [Descriptions] from Table 11 and input text [x]. | Dataset | Template | Label Names | k | |---------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|-----| | category 1: world, politics | 12 | | | | category 2: sports category 3: business category 4: technology, science | | | | | AG News | A [MASK] news : x . | category 1: company category 2: school category 3: artist category 4: sports category 5: politics, office category 6: transportation, car, bus, train | | | x1 x2 In this sentence, x1 | category 7: building, construct, room, tower | | | | is a [MASK] . | category 8: river, lake, mountain category 9: village category 10: animal, pet category 11: plant category 12: album category 13: film category 14: book, publication | | | | DBPedia | 7 | | | | IMDB | x All in all, it was [MASK] . | positive: good | 500 | | negative: bad | | | | | Amazon | x All in all, it was [MASK] . | positive: good | 170 | | negative: bad | | | | | SST-2 | x1 It was [MASK] . | positive: great | 9 | | negative: terrible entailment: yes | | | | | MNLI | x1 ? [MASK] , x2 | maybe | 4 | | neutral: contradiction: no entailment: yes | | | | | MNLI-mm | x1 ? [MASK] , x2 | maybe | 4 | | neutral: contradiction: no | | | | | QNLI | x1 ? [MASK] , x2 | entailment: Yes, Indeed, Overall | 3 | | not_entailment: No, Well, However | | | | | RTE | x1 ? [MASK] , x2 | entailment: Yes | 10 | | not_entailment: No | | | | | MRPC | x1 [MASK] , x2 | equivalent: Yes | 9 | | not_equivalent: No | | | | | QQP | x1 [MASK] , x2 | equivalent: Yes | 9 | | not_equivalent: No | | | | | CoLA | x1 This is [MASK] . | grammatical: true | 7 | | not_grammatical: wrong | | | | | Table 13: Templates and label names for NPPrompt. k refers to the best neighborhood number for RoBERTa-large. | | | | GOOD BAD YES NO Word Sim Word Sim Word Sim Word Sim " good" 1.00 " bad" 1.00 " Yes" 1.00 " No" 1.00 " Good" 0.73 " Bad" 0.71 " yes" 0.79 " no" 0.80 " GOOD" 0.72 " terrible" 0.69 " YES" 0.73 "No" 0.74 "good" 0.69 " BAD" 0.69 "Yes" 0.72 " NO" 0.70 " great" 0.66 " horrible" 0.68 " Yeah" 0.72 " Nope" 0.62 " excellent" 0.66 "bad" 0.65 " Yep" 0.65 " Yes" 0.62 " decent" 0.66 " awful" 0.64 " Sure" 0.62 "no" 0.61 "Good" 0.65 "Bad" 0.64 " No" 0.62 " Nobody" 0.59 " nice" 0.64 " good" 0.63 " Indeed" 0.61 " Nos" 0.57 " bad" 0.63 " badly" 0.62 " yeah" 0.60 " The" 0.57 " better" 0.62 " crappy" 0.60 "yes" 0.59 " Yeah" 0.57 " wonderful" 0.58 " lousy" 0.60 " Wow" 0.59 " Nothing" 0.56 " best" 0.58 " worst" 0.60 " Absolutely" 0.58 " Not" 0.56 " terrific" 0.57 " horrendous" 0.60 " Nope" 0.58 " Never" 0.56 " fantastic" 0.57 " worse" 0.59 " Okay" 0.57 " None" 0.55 " mediocre" 0.57 " nasty" 0.59 " Oh" 0.57 " Number" 0.55 " lousy" 0.57 " shitty" 0.59 " Hello" 0.57 " So" 0.54 " satisfactory" 0.56 " dreadful" 0.59 " Hey" 0.57 " Any" 0.54 " marvelous" 0.56 " rotten" 0.58 " Nevertheless" 0.57 " And" 0.54 " GREAT" 0.56 " harmful" 0.58 " However" 0.56 "NO" 0.53 ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section limitations A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✓ A4. Have you used AI writing assistants when working on this paper? ChatGPT in section 6 ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 5.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
chiang-lee-2023-large
Can Large Language Models Be an Alternative to Human Evaluations?
https://aclanthology.org/2023.acl-long.870
Human evaluation is indispensable and inevitable for assessing the quality of texts generated by machine learning models or written by humans. However, human evaluation is very difficult to reproduce and its quality is notoriously unstable, hindering fair comparisons among different natural language processing (NLP) models and algorithms. Recently, large language models (LLMs) have demonstrated exceptional performance on unseen tasks when only the task instructions are provided. In this paper, we explore if such an ability of the LLMs can be used as an alternative to human evaluation. We present the LLMs with the exact same instructions, samples to be evaluated, and questions used to conduct human evaluation, and then ask the LLMs to generate responses to those questions; we dub this LLM evaluation. We use human evaluation and LLM evaluation to evaluate the texts in two NLP tasks: open-ended story generation and adversarial attacks. We show that the result of LLM evaluation is consistent with the results obtained by expert human evaluation: the texts rated higher by human experts are also rated higher by the LLMs.We also find that the results of LLM evaluation are stable over different formatting of the task instructions and the sampling algorithm used to generate the answer. We are the first to show the potential of using LLMs to assess the quality of texts and discuss the limitations and ethical considerations of LLM evaluation.
# Can Large Language Models Be An Alternative To Human Evaluation? Cheng-Han Chiang National Taiwan University, Taiwan dcml0714@gmail.com ## Abstract Human evaluation is indispensable and inevitable for assessing the quality of texts generated by machine learning models or written by humans. However, human evaluation is very difficult to reproduce and its quality is notoriously unstable, hindering fair comparisons among different natural language processing (NLP) models and algorithms. Recently, large language models (LLMs) have demonstrated exceptional performance on unseen tasks when only the task instructions are provided. In this paper, we explore if such an ability of the LLMs can be used as an alternative to human evaluation. We present the LLMs with the exact same instructions, samples to be evaluated, and questions used to conduct human evaluation, and then ask the LLMs to generate responses to those questions; we dub this *LLM evaluation*. We use human evaluation and LLM evaluation to evaluate the texts in two NLP tasks: openended story generation and adversarial attacks. We show that the result of LLM evaluation is consistent with the results obtained by expert human evaluation: the texts rated higher by human experts are also rated higher by the LLMs. We also find that the results of LLM evaluation are stable over different formatting of the task instructions and the sampling algorithm used to generate the answer. We are the first to show the potential of using LLMs to assess the quality of texts and discuss the limitations and ethical considerations of LLM evaluation. ## 1 Introduction Human evaluation is an important method to understand the performance of an NLP model or algorithm (Guzmán et al., 2015; Gillick and Liu, 2010). We rely on human evaluation because there are certain aspects of texts that are hard to evaluate using automatic evaluation metrics; thus, researchers resort to humans to rate the quality of the output of NLP models. While human evaluation is prevalent and indispensable in NLP, it is notoriously Hung-yi Lee National Taiwan University, Taiwan hungyilee@ntu.edu.tw unstable (Gillick and Liu, 2010; Clark et al., 2021). Karpinska et al. (2021) has shown that the quality of workforces in human evaluation can have a detrimental effect on the evaluation result, making it impossible to compare the performance among different systems. Reproducibility is another issue in human evaluation since it is hard to recruit the same human evaluators and rerun the same evaluation. Even if the same workers are recruited, the workers that have seen the task before are likely to produce a different evaluation result the next time because they have already done the task. While human evaluation is used to better assess NLP systems and has some advantages over automatic evaluation metrics, the drawbacks of human evaluation somewhat make it difficult to reliably evaluate NLP systems. To resolve some of the drawbacks, we take advantage of large language models (LLMs). LLMs are large models that are trained to model human languages using self-supervised learning (Brown et al., 2020) and further using special training procedures to improve the performance on unseen tasks and better follow natural language instructions (Sanh et al., 2022; Wei et al., 2022). The ability to perform a task just given the task instructions motivates us to ask if these LLMs can perform what humans do in human evaluation. To answer this question, we feed in the LLM with the same instruction, sample, and question used in human evaluation, and take the sequences generated by the LLM as the LLM's answer to the question. This process is shown in Figure 1, and we call this process **LLM evaluation**. To test if LLM evaluation yields meaningful results, we conduct LLM evaluation on two different NLP tasks: evaluating the quality of stories in openended story generation and the quality of sentences generated by adversarial attacks. We summarize our findings and contribution as follows: - We show that LLM evaluation produces results similar to expert human evaluation, ver15607 ![1_image_1.png](1_image_1.png) ![1_image_0.png](1_image_0.png) Human responses Please **rate the story fragment** ![1_image_2.png](1_image_2.png) 4 ![1_image_3.png](1_image_3.png) ifying the effectiveness of LLM evaluation (§3.3 and §4.3). This paper is **the first** to propose using LLMs as an alternative to human evaluation and show their effectiveness. - We show that LLM evaluation results only slightly vary due to different task instructions and the hyperparameters of the sampling algorithm used to generate the answer. (§3.3.2 and §3.3.3) - We carefully discuss the pros and cons of using LLM evaluation and discuss the ethical considerations of LLM evaluation. (§5) ## 2 Llm Evaluation 2.1 Large Language Models (Llms) Large language models are language models having bulk parameter sizes, typically on the scale of a few billion, and pre-trained on enormous amounts of natural language corpora, including GPT3 (Brown et al., 2020), T5 (Raffel et al., 2020), and BLOOM (Scao et al., 2022). These LLMs show exceptional performance on unseen tasks when only the task instructions are given; this kind of ability is called **zero-shot in-context learning**. To further improve the zero-shot in-context learning performance, special training techniques have been applied to those LLMs after pre-training. For example, T0 (Sanh et al., 2022) and FLAN (Wei et al., 2022) are fine-tuned on a mixture of tasks and can thus achieve better zero-shot performance compared to GPT-3. InstructGPT (Ouyang et al., 2022) is fine-tuned from GPT-3 using reinforcement learning from human feedback (RLHF), and it is shown to better follow the instructions. ChatGPT (OpenAI, 2022) is fine-tuned from InstructGPT with a conversation dataset using RLHF, so ChatGPT can interact with users in a conversational way. ChatGPT is able to answer questions asked by the user and provide comprehensive explanations about its answer. Given the LLMs' ability to follow task instructions and provide feedback, we ask whether LLMs can be used as an alternative to human evaluation and aid NLP researchers in evaluating the quality of texts. ## 2.2 Llm Evaluation To evaluate the quality of texts generated by NLP systems or written by humans using LLM, we present the LLMs with the task instructions, the sample to be evaluated, and a question. The question asks the LLM to rate the sample's quality using a 5-point Likert scale. Given the inputs, the LLM will answer the question by generating some output sentences. We parse the output sentences to get the score rated by the LLM. We call this process LLM evaluation, and this procedure is shown in the lower part of Figure 1. Different tasks use different sets of task instructions, and each task uses different questions to evaluate the quality of the samples. The instructions and questions used in LLM evaluation in our paper are not tailored for the LLMs; we follow those instructions used to conduct human evaluation in prior works. To compare the result of LLM evaluation and show its effectiveness, we compare the result of LLM evaluation with human evaluation conducted by English teachers. To make a fair and meaningful comparison, the instructions, samples, and questions in human evaluation are formatted similarly to those in LLM evaluation. The main difference between LLM evaluation and human evaluation is that in human evaluation, the human evaluators answer the question by choosing the answer from a pre-defined set of options (the 1-5 Likert scale scores), as shown in the upper right in Figure 1. In LLM evaluation, we instead let the LLM freely generate sentences and extract the score from the generated sentences using some simple rules, detailed in Appendix D.2.1. ## 3 Example Task 1: Open-Ended Story Generation We first use open-ended story generation to demonstrate the usefulness of LLM evaluation. ## 3.1 Task Introduction Open-ended story generation is a task to generate a short story based on a given prompt. We use the WritingPrompts dataset (Fan et al., 2018), which is composed of pairs of short prompts and human-written stories collected from the subreddit WritingPrompts. In the WritingPrompts, the users are given a short prompt, and they need to write a story based on the short prompt.1 In this experiment, we use LLM evaluation and human evaluation to rate the stories generated by humans and the stories generated by a story generation model. We select open-ended story generation as an example because Karpinska et al. (2021) show that workers from Amazon Mechanical Turk (AMT) cannot distinguish GPT-2 (Radford et al., 2019) generated and human-written stories, 1The WritingPrompts subreddit explicitly forbids the users to use AI for generating stories, so we consider the stories in the dataset to be human-written. while English teachers show a clear preference for human-written stories over GPT-2-generated stories. We want to see if LLM can rate human-written stories higher than GPT-2-generated ones. Following prior works (Mao et al., 2019; Guan et al., 2020; Karpinska et al., 2021), the story generation model is GPT-2 medium model fine-tuned on the WritingPrompts training dataset. After the model is trained, we randomly select 200 prompts from the testing set of WritingPrompts and make the fine-tuned GPT-2 generate stories based on those prompts using nucleus sampling (Holtzman et al., 2020) with p = 0.9. For the human-written stories to be compared, we use the 200 stories written based on the same 200 prompts. We postprocess the human-written and GPT-2-generated stories and then use them for LLM evaluation and human evaluation. Please find the details on finetuning and data processing in Appendix B. ## 3.2 Llm Evaluation And Human Evaluation We present the LLMs and the human evaluators with a short description, and the story to be evaluated, formatted as shown in Figure 1. Following Karpinska et al. (2021), we evaluate the stories on four different attributes. The four attributes and their corresponding questions are as follows: 1. *Grammaticality*: How **grammatically correct** is the text of the story fragment? 2. *Cohesiveness*: How well do **the sentences** in the story fragment **fit together**? 3. *Likability*: How **enjoyable** do you find the story fragment? 4. *Relevance*: Now read the PROMPT based on which the story fragment was written. Prompt: [PROMPT]. How **relevant** is the **story fragment** to the prompt? Where the [PROMPT] will be filled in with the prompt which the story is based on. Each attribute is evaluated using a 5-point Likert scale; the following description is appended at the end of each question: "*(on a scale of 1-5, with 1 being the lowest)*". We show the interface used in human evaluation and the input format for the LLM evaluation in Appendix C.2 and D.2.2. The LLMs used for LLM evaluation include T0, text-curie-001, text-davinci-003, and ChatGPT. text-curie-001 and text-davinci-003 | Evaluator | Grammaticality | Cohesiveness | Likability | Relevance | | | | | |-------------------------|------------------|----------------|--------------|-------------|----------|----------|----------|----------| | MeanSTD | IAA% | MeanSTD | IAA% | MeanSTD | IAA% | MeanSTD | IAA% | | | Human-written stories | | | | | | | | | | Human | 3.760.95 | 0.3320.5 | 4.290.82 | 0.3227 | 3.781.10 | 0.089.5 | 3.351.48 | 0.058 | | T0 | 2.551.47 | 0.1610 | 2.981.45 | 0.114 | 3.181.53 | 0.127 | 2.931.64 | 0.026 | | curie | 3.190.47 | 0.0746.5 | 2.820.46 | 0.0147.5 | 2.850.37 | 0.110.65 | 3.060.40 | 0.110.64 | | davinci | 4.220.38 | 0.2635 | 4.540.47 | 0.3739.5 | 3.990.38 | 0.4968.5 | 4.400.79 | 0.7148.5 | | ChatGPT | 3.830.60 | 3.550.88 | 2.440.89 | 3.291.50 | | | | | | GPT-2-generated stories | | | | | | | | | | Human | 3.560.91 | 0.1019.5 | 3.191.07 | 0.1417 | 2.591.29 | −0.213.5 | 2.381.40 | −0.038.5 | | T0 | 2.441.49 | 0.059 | 3.021.51 | 0.076 | 3.001.59 | 0.166 | 2.821.61 | 0.046 | | curie | 3.230.51 | 0.0138 | 2.820.45 | 0.0250 | 2.860.37 | 0.0965.5 | 3.010.43 | 0.1161 | | davinci | 4.070.35 | 0.3545.5 | 4.260.45 | 0.4242 | 3.840.42 | 0.5262 | 4.020.74 | 0.6942.5 | | ChatGPT | 2.980.76 | 2.480.71 | 1.590.67 | 2.021.21 | | | | | are two InstructGPT models, and the latter is the stronger model; we will use InstructGPT to refer to these two models. We query the InstructGPT using the official API provided by OpenAI. We use nucleus sampling with p = 0.9 to generate the answer from T0 and InstructGPTs. We **sample three answers** from LLMs to stimulate the result of asking the model to rate the same story three times. We query ChatGPT using the user interface recently released by OpenAI. Unlike InstructGPT, we cannot control the parameters used for generating the response from ChatGPT. Because ChatGPT limits the maximum number of queries per user, we only sample one response for each question. For human evaluation, we do not use the commonly used AMT for human evaluation because Karpinska et al. (2021) has already shown that the results obtained using AMT are highly questionable. Following the recommendation of the prior works, we hire **three certified English** teachers using an online freelancer platform, UpWork. Teachers are familiar with evaluating the essays of students, making them the expert evaluators in our task. The details about recruiting human evaluators are in Appendix C.1. Each LLM and each English teacher rates the 200 human-written stories and 200 GPT-2-generated stories. ## 3.3 Experiment Results The LLM evaluation and human evaluation results of open-ended story generation are presented in Table 1. We report the mean and standard deviation of the Likert scores obtained from LLM evaluation and human evaluation and show the inter-annotator agreement (IAA) using two different metrics: (1) the Krippendorff's α, and (2) the percentage of the stories where three evaluators give the exact same rating.2 The main observations from Table 1 are discussed as follows. Expert human evaluators prefer humanwritten stories: Human evaluation result serves as some kind of *ground truth* of the LLM evaluation. For all four attributes, teachers rate the humanwritten stories higher than GPT-2-generated stories. This indicates that experts are able to distinguish the quality difference between model-generated stories and human-written stories. Based on the IAA, we also find that the agreements among experts are lower on GPT-2-generated texts and on the *likability*. This shows that experts tend to have less agreement on model-generated texts and on a subjective attribute (*likability*), agreeing with the results in Karpinska et al. (2021). T0 and text-curie-001 **do not show clear** preference toward human-written stories: For T0, we can see that T0 rates human-written stories higher than GPT-2-generated stories on grammatically, likability, and relevance. However, the rating differences between the human-written and 2The three evaluators in human evaluation are the three English teachers. In LLM evaluation, we sample the answer generated by LLM three times as an analogy to three different evaluators. model-generated stories do not achieve statistical significance for *grammaticality* and *relevance*; the p-value obtained by Welch's t-test is much larger than 0.05. The result of text-curie-001 is similar to T0: text-curie-001 does not rate humanwritten stories higher than model-generated stories. It can also be observed that for T0, the IAA in terms of the percentage of exact agreement among three different sampled answers is overall very low. This indicates that given the same sample, T0 is likely to give a different rating for the three sampled answers. The result implies that T0 does not assign a high probability to a specific rating, so different scores are all likely to be sampled. This shows that even if LLMs are specifically fine-tuned to better perform zero-shot in-context learning and trained to better follow human instructions, these do not make them capable of assessing open-ended story generation as human experts can. text-davinci-003 **shows clear preference** toward human-written stories just like English teachers: text-davinci-003 rates humanwritten stories much higher than model-generated stories on all four attributes, which is in accordance with the result produced by human experts. By Welch's t-test, we find that the higher ratings on human-written stories are all statistically significant. In prior work, researchers have found that workers recruited on AMT do not rate human-written stories higher than GPT-2-generated ones (Karpinska et al., 2021); combining their result with our result, we can see that LLM evaluation using text-davinci-003 yields more convincing results than using human evaluation on AMT for open-ended story generation. The results show that text-davinci-003 can perform basic evaluations such as checking for grammatical errors in stories. Additionally, the model excels in assessing the relevance of a story to a prompt, which involves more complex reasoning over the connection between the two. We also find the Krippendorff's α of text-davinci-003 is much higher than T0 and text-curie-001, indicating that the rating by text-davinci-003 is more consistent among different samplings of the generated answers. ChatGPT rates like human experts and can explain its own decision well: ChatGPT also shows a clear preference for human-written stories, and the preference toward human written-stories is statistically significant. When we query ChatGPT using the OpenAI user interface, we find several interesting observations: (1): ChatGPT is able to provide a detailed explanation of why it gives a certain rating. It will reference the sentences in the stories and prompts to support its rating. (2): ChatGPT sometimes refuses to rate the likability of the story because "*I am an AI and I do not have the* ability to experience enjoyment". In such cases, we regenerate the response until it gives a rating. (3): we find that ChatGPT tends to rate low likability on violent or impolite stories, which is likely because it is trained to provide safe and unharmful replies, making ChatGPT dislike brutal and profane stories. Experts mostly agree with the ratings and explanations of ChatGPT: We randomly select the answers on four stories by ChatGPT and ask the English teachers if they agree with the reasoning and rating of ChatGPT3. The teachers mostly agree with the rating and consider the explanation from ChatGPT reasonable. Interestingly, one teacher told us she cannot agree with ChatGPT's rating on grammaticality because ChatGPT considers punctuation errors as grammar errors, but she does not think punctuation errors are grammar errors. This shows that individuals have their own standards for ratings and this is also the case for LLMs. text-davinci-003 **tends to give higher ratings and ChatGPT is the opposite:** The rating on the same attribute of the same type of text tends to be higher for text-davinci-003 compared with human rating; contrarily, ChatGPT is more fastidious and prone to give lower scores. This shows that different LLMs have distinct tendencies regarding the rating. While the absolute values of the scores rated by text-davinci-003, ChatGPT, and human differ, they all rate human-written texts higher than GPT-2-generated stories. The absolute number reflects the bias or belief of the evaluator; as long as one uses the same evaluators to assess different systems, the comparison is meaningful. ## 3.3.1 Does Llm And Human Evaluators Agree On The Rating Of Individual Stories? We have found in Table 1 that the ratings of text-davinci-003 and ChatGPT show a strong preference toward human-written stories just like English teachers. However, it is unclear whether those LLMs agree with the teachers' rating on each individual story. Precisely, when English teachers rate a story higher, do LLMs also rate the 3We do not tell the teachers these are responses from an AI model. See the stories and teachers' replies in Appendix C.3.2. | Story Writer | Human | GPT-2 | |----------------|---------|---------| | Grammaticality | 0.14 | 0.12 | | Cohesiveness | 0.18 | 0.14 | | Likability | 0.19 | 0.22 | | Relevance | 0.38 | 0.43 | story higher? To answer this question, we calculate Kendall's τ correlation coefficient between the ratings of text-davinci-003 and English teachers. We choose to use the correlation coefficient instead of the inter-annotator agreement score because IAA mainly cares if two annotators agree on the exact ratings, while the correlation coefficient focus on the question: "when annotator A rates one story higher, does annotator B also rate the story higher?" (Amidei et al., 2019). We calculate Kendall's τ for four rating attributes as follows: For each story and each rating attribute, we calculate the average rating of the three English teachers and calculate the average rating of the three scores given by the text-davinci-003 (which is obtained from three independent samples). For each attribute, we collect the average rating of teachers into a vector A ∈ R 200, where each entry is the average rating of a story; likewise, we construct a vector B ∈ R 200 for the average ratings of davinci. Next, we calculate Kendall's τ correlation coefficient between A and B. The Kendall's τ between teacher ratings and LLM ratings is shown in Table 2. 4 We find that for all four attributes and for both human-written and GPT-2-generated stories, we observe weak to strong positive correlations between teachers' ratings and text-davinci-003's ratings. All the correlations have p-values less than 0.05. Hence, we can say that when teachers rate a story higher, text-davinci-003 also rates it higher to a certain extent. We also observe that Kendall's τ for different attributes are quite different: *relevance* has the strongest correlation while *grammaticality* has the weakest correlation. This is possibly because rating relevance is rather straightforward, which requires checking if the content in the prompt is mentioned in the story. On the contrary, what should be considered when rating *grammaticality* is not clearly stated in our instructions, so the LLM may have a different rubric compared with English teachers. We also calculate the average Kendall's τ between a pair of English teachers, and we find a weak correlation on *grammaticality* between the rating of two teachers, while the correlation of the rating on relevance is much stronger. The result is presented in Table 6 in Appendix. ## 3.3.2 Variance Due To Different Instructions LLMs have been shown to be sensitive to the instructions used to query the LLM sometimes (Zhao et al., 2021; Sanh et al., 2022). To investigate how varying the task instructions and questions can affect the LLM evaluation result for open-ended story generation, we change the instructions and questions and see how the LLM evaluation result changes. We experiment with two different instructions by changing the instruction or question in Figure 1: (1) We prepend the sentence, "*(You are a* human worker hired to rate the story fragment.)", in front of the task instruction in Figure 1. We try to provide the LLM a **persona** for it to better understand its role. This is inspired by previous work that reported GPT-3 can yield different results when giving them a persona (Zeng et al., 2022). (2) We ask the LLMs to **explain** their decision by appending the following sentence after the question: Please also explain your decision. Here, we would like to know if LLM will rate the stories differently when they are asked to justify their decision. This is inspired by zero-shot chain-of-thought (Kojima et al.). We use text-davinci-003 instead of ChatGPT as the LLM in this experiment since it is more accessible than ChatGPT. The results are shown in the upper block in Table 3. We observe that for *grammaticality* and cohesiveness, the scores obtained from different instructions are quite close: the rating changes due to different instructions are less than 0.1. For the other two attributes, the score changes are slightly larger but still in the range of 0.25. Despite that there are small variations due to different instructions, these variances still do not change the conclusion that "LLM rates human-written stories higher than GPT-2-generated stories". Thus, different instructions do not change the relative ranking of GPT-2generated and human-written stories. In summary, as long as the stories are evaluated using the same instructions using LLM evaluation, such evaluation and comparison are meaningful. | Setup | Grammaticality | Cohesiveness | Likability | Relevance | | | | | |--------------------------------------------------|------------------|----------------|--------------|-------------|----------|----------|----------|----------| | Human | GPT-2 | Human | GPT-2 | Human | GPT-2 | Human | GPT-2 | | | Different instructions (Section 3.3.2) | | | | | | | | | | Original | 4.220.38 | 4.070.35 | 4.540.45 | 4.260.45 | 3.990.38 | 3.840.42 | 4.400.79 | 4.020.74 | | (1) + persona | 4.290.45 | 4.010.45 | 4.600.49 | 4.270.50 | 4.050.39 | 3.870.39 | 4.550.70 | 4.250.77 | | (2) + explain | 4.240.42 | 4.050.25 | 4.610.49 | 4.320.51 | 4.150.44 | 3.980.34 | 4.350.75 | 4.030.56 | | Different sampling temperature T (Section 3.3.3) | | | | | | | | | | T = 1.0 | 4.220.38 | 4.070.35 | 4.540.45 | 4.260.45 | 3.990.38 | 3.840.42 | 4.400.79 | 4.020.74 | | T = 0.7 | 4.180.35 | 4.060.33 | 4.520.48 | 4.230.43 | 3.960.34 | 3.820.42 | 4.360.77 | 3.950.72 | | T = 0.3 | 4.130.33 | 3.990.25 | 4.480.49 | 4.140.39 | 3.950.26 | 3.820.41 | 4.340.75 | 3.930.67 | | T = 0 | 4.070.27 | 3.990.18 | 4.490.50 | 4.090.34 | 3.950.25 | 3.820.40 | 4.320.75 | 3.920.66 | ## 3.3.3 Variance Due To Different Sampling Parameters When generating the answers from the LLM, we must choose a set of hyperparameters for generation, including the temperature T and the probability p used in nucleus sampling. To understand whether different sampling parameters change the LLM evaluation result, we modify the temperature used for sampling and keep the p in nucleus sampling fixed to 0.9 when generating the answers from text-davinci-003. We do not simultaneously vary T and p since the two parameters are both used to control the diversity of the output, it is enough to change only one of the two parameters, as recommended in the API documentation. The results of varying T from 1 to 0 are shown in the lower block in Table 3. We observe an interesting trend as T varies from 1 to 0: the average rating slightly drops in most cases. Considering that T = 0 is simply argmax sampling, the result indicates that the response of the LLM with the highest probability tends to give lower scores. Despite this interesting trend, the LLM consistently rates human-written stories higher than GPT-2-generated stories. While not shown in Table 3, we find that the IAA increases as the temperature decreases. This is expected since lower temperature means less diversity during the LLM sampling, causing the sampled ratings to agree more closely. In summary, changing the instructions and temperatures can slightly change the absolute value of the rating given by LLM but does not change the LLM's preference on human-written stories. The overall result in this section shows that LLM evaluation is useful ## In Evaluating Open-Ended Story Generation. 4 Example Task 2: Adversarial Attack As another application, we use LLM evaluation to rate the texts generated by adversarial attacks. ## 4.1 Task Introduction Given a trained text classifier and a *benign* (nonadversarial) testing sample that the text classifier can correctly classify, an adversarial attack aims to craft an *adversarial* sample that makes the classifier make a wrong prediction. A special type of adversarial attack is called *synonym substitution attacks* (SSAs) (Alzantot et al., 2018), where the adversarial sample is created by replacing some words with their synonyms in the benign sample. By replacing words with their synonym, the semantics of the benign sample should be preserved in the adversarial sample and make the adversarial perturbation imperceptible to humans. While conceptually reasonable, it has recently been shown that many SSAs often yield ungrammatical and unnatural adversarial samples that significantly change the meaning of the benign sample (Hauser et al., 2021; Chiang and Lee, 2022). To evaluate the quality of adversarial samples, human evaluation is invaluable and widely used in prior works. In our experiment here, we would like to see whether the LLMs can rate the quality of adversarial samples like human experts. Adversarial samples are not normal texts, so the LLMs may not have seen such abnormal inputs during training. It would be interesting to know how LLMs rate these adversarial samples. | Human evaluate | LLM evaluate | | | | |------------------|----------------|--------|-------|-------| | Fluent | Mean. | Fluent | Mean. | | | Benign | 4.55 | - | 4.32 | 5.00† | | Textfooler | 2.17 | 1.88 | 2.12 | 2.06 | | PWWS | 2.16 | 1.85 | 2.42 | 2.49 | | BAE | 3.01 | 3.02 | 3.71 | 3.71 | ## 4.2 Experiment Setup We select three different classic SSAs: Textfooler (Jin et al., 2020), PWWS (Ren et al., 2019), and BAE (Garg and Ramakrishnan, 2020); these attacks are predominantly used as strong baselines in the literature of SSAs nowadays. We use these three SSAs to attack a BERT-base-uncased model (Devlin et al., 2019) fine-tuned on AG-News (Zhang et al., 2015), a news classification dataset. For each SSA, we randomly select 100 pairs of benign and adversarial samples and use LLMs to evaluate their quality. We show the result of using ChatGPT as LLM here since it can better explain its decision. Following the suggestions of prior works (Morris et al., 2020), we evaluate the quality of the adversarial samples from two aspects: the *fluency* and meaning preservation. For fluency, we present the LLM with a piece of news (either benign or adversarial sample) and the following question: How natural and fluent is the text of the news title? (on a scale of 1-5, with 1 being the lowest). For meaning preserving, we present the LLM with both the benign and the adversarial sample, and prompt the LLM to answer this question: *Do you agree* that the meaning (or semantics) of news title 1 is preserved in news title 2? (on a scale of 1-5, with 1 being the strongly disagree and 5 being strongly agree.) The exact instruction and formatting are presented in Appendix D.2.3. We also ask three English teachers to rate the *fluency* and meaning preserving of the samples. The task instructions and questions are formatted the same as in LLM evaluation. ## 4.3 Experiment Result The results are presented in Table 4. We can see that English teachers rate the adversarial samples generated by SSAs very low in terms of fluency and meaning preserving, this result is in line with recent observations on the quality of adversarial samples (Hauser et al., 2021; Chiang and Lee, 2022). Before interpreting the result of LLM evaluation, we first conduct a sanity check on whether the LLM understands the task. We ask the LLM to rate the meaning preserving of two benign samples that are exactly the same. Ideally, the LLM should always give a score of 5, meaning that it strongly agrees that the meanings are not changed. The result of this sanity check is the entry with † in Table 4, which is a perfect 5.00. ChatGPT often says that "*the two titles are identical so I rate a 5 (strongly* agree)", showing that ChatGPT understands what the task is about. Next, we turn our attention to the LLM evaluation results of the adversarial samples. We observe that ChatGPT tends to rate adversarial samples higher than English teachers, meaning that ChatGPT is less harsh on the unnatural and artificial parts in the adversarial samples. We conduct the same experiment using text-davinci-003 and find similar results. Although ChatGPT rates adversarial samples higher than the teachers, ChatGPT still rates adversarial samples significantly lower than benign samples. ChatGPT also agrees with the English teachers that the adversarial samples generated by BAE are better than the samples generated by Textfooler and PWWS. Interestingly, we find that ChatGPT rates PWWS to be more natural than Textfooler, while such a rating difference is not seen in the expert human evaluation. At first sight, this means that ChatGPT is inconsistent with human evaluation results. However, by scrutinizing the human evaluation results, we find that two teachers rate PWWS higher than Textfooler while one teacher rates PWWS much lower than Textfooler. This indicates that ChatGPT actually agrees with the majority of human experts. Overall, LLM can rank the quality of adversarial texts and benign texts like most human experts. ## 5 Discussions In this paper, we propose to use LLM for evaluating the quality of texts to serve as an alternative to human evaluation. To demonstrate the potential of LLM evaluation, we use LLMs to rate the quality of texts in two distinct tasks: open-ended story generation and adversarial attacks. We show that even if LLMs have exceptional zero-shot in-context learning ability, they are not always suitable to be used for LLM evaluation. Still, we find that the best InstructGPT and ChatGPT can rate the quality of texts like human experts on the two tasks we used as examples. Overall, the results in this paper demonstrate that LLM evaluation has the potential to be used to evaluate NLP systems and algorithms. Pros of LLM evaluation There are several benefits of LLM evaluation, compared to human evaluation. First, LLM evaluation is more **reproducible**. Human evaluation results are hard to reproduce as it is difficult to hire the same group of evaluators, and it is hard to compare the results of similar experiments even if they use the same instructions, recruitment platform, and qualifications for the evaluators. On the contrary, LLM evaluation does not have such a drawback. By specifying the model used for LLM evaluation, the random seed, and the hyperparameters used to generate the answers from the LLM, the LLM evaluation result is more likely to be reproduced. Note that in certain cases, the LLM provider may regularly update the LLM, making the LLM evaluation unreproducible if the LLM is outdated and not accessible. Second, **the evaluation of each sample is independent of each other in LLM evaluation**. Contrarily, in human evaluation, the rating of the current example may more or less be affected by prior samples. Humans tend to compare the current sample to the ones they have previously seen and this affects their ratings. As a piece of evidence, in the interview after rating the 400 stories, the English teachers say it took them some time to calibrate their ratings (Appendix C.3.1). Thus, using LLM evaluation can simplify some experiment designs since one does not need to worry whether the order of the sample being evaluated will change the result. Still, one may also argue that being able to calibrate the rating of different samples is desired and this is why human evaluation might be preferred. Overall, whether the rating of the evaluator (human or LLM) should be affected by a previously rated item is inherently a design choice of the experiment. Third, LLM evaluation is **cheaper and faster** than human evaluation, making it easier and quicker for researchers to evaluate the quality of NLP systems. Hiring an English teacher to rate 200 stories costs us US$140, while LLM evaluation using the best InstructGPT model costs less than US$5. It took us over a week to collect human evaluation results starting from recruitment to collecting the evaluation results, but only a few hours to query InstructGPT and perform LLM evaluation. Finally, utilizing LLM evaluation, rather than human evaluation, can **minimize the need for human exposure to objectionable content**, such as violent, sexual, hateful, or biased material. Such content may cause discomfort for human evaluators while reading and rating these texts. 5 ## Limitations And Ethical Considerations Of Llm evaluation Despite the promising results of LLM evaluation shown in this paper, there are some limitations of this method. First, LLM may possess incorrect factual knowledge (Cao et al., 2021), so it is not suitable to use them in tasks that involve factual knowledge. Next, LLMs trained to behave in a certain way can be biased toward certain responses. Precisely, an LLM that is trained to be safe and non-harmful can result in LLMs preferring to generate more positive and upbeat responses, which is observed throughout our interaction with ChatGPT. Additionally, even with researchers' efforts to make LLMs safer (Bai et al., 2022a,b), LLMs can still generate harmful and biased responses (Ganguli et al., 2022; Perez et al., 2022), which are violative of basic ethics, and LLM evaluation results will be highly doubtful (Hendrycks et al., 2021). However, it is important to note that these limitations and potential harms also apply to human evaluation: the bias of human evaluators can affect the human evaluation result (Lentz and De Jong, 1997; Amidei et al., 2018). Our pioneering idea, LLM evaluation, has the potential to transform the NLP community.6 We encourage future researchers to consider using it while being aware of its limitations. Our paper's goal is not to replace human evaluation but to present an alternative option. Both human and LLM evaluation have their own advantages and disadvantages, and they can be used in conjunction. We recommend using LLM evaluation as a cheap and fast quality judgment when developing a new NLP system, while human evaluation is best used to collect feedback from humans prior to deploying the NLP system in real-world applications. ## Limitations There are additional limitations and potential risks of LLM evaluations that should be noted, and these limitations are actually well-known problems of pre-trained language models. As listed on the Open AI blog for ChatGPT, ChatGPT sometimes generates answers that sound right and plausible but are totally nonsense. OpenAI also admits that the model's response may be sensitive to the prompt used to query the model. While in Section 3.3.2, we find that the overall results among different instructions are not significantly different, we cannot guarantee that this is the case for all kinds of modification on the task instructions. Other than the limitations listed on the OpenAI blog, there are still other limitations. For example, LLMs may not have emotions. Whether AI models have emotion is a more philosophical question and is controversial, so the results of using such models for evaluating emotion-related tasks may be strongly challenged and may even violate research ethics. As we find during our experiments, ChatGPT often replies *"I am an AI system and I* do not have emotions like a human" when asked to rate the *likability* of a story. Another important limitation of LLM evaluation is that LLMs lack the ability to process visual cues in task instructions, unlike human evaluation. Human evaluators can use formattings such as special fonts or text styles to focus on important parts of the instructions. Additionally, the way instructions and questions are formatted can influence how human evaluators approach the task. While using special HTML syntax can serve as an alternative for visual cues, such tags are not used in human evaluation, so we do not use those HTML tags in LLM evaluation to incorporate visual cues in the inputs to the LLMs. However, LLMs can only process raw text input and are unable to take in visual cues. ## Ethics Statement Further ethical considerations of LLM evaluation Aside from the limitations of LLM evaluation mentioned previously, there is a crucial ethical concern at the heart of LLM evaluation. Is it ethical to replace human evaluation with LLM evaluation? Some may question if this paper is suggesting that LLMs are now ready to replace humans and find this idea unsettling. As responsible and ethical NLP researchers, we understand these concerns but want to make it clear that this is not our intent. As our paper title suggests, we aim to offer an *alternative option* to human evaluation with the goal of enhancing the reproducibility of NLP research. Human evaluation is still essential as the ultimate goal of NLP systems is to be used by human users, so it's important to gather feedback from them. We highly enjoy the process of discussing the experiment settings and results with the English teachers we hired. We do not recommend that future researchers completely eliminate human evaluation; rather, we believe that human evaluation should be used in conjunction with LLM evaluation. Both methods have their own advantages and disadvantages, making them both necessary for evaluating NLP systems. We hope the positive results in this paper provide NLP researchers with an alternative method to evaluate systems and encourage further discussions on this topic. Ethical statements on the experiments in the paper All the experiments strictly follow the ACL Code of Ethics. We include comprehensive details about human evaluation in Appendix C.1. To summarize, we include the exact instructions and screenshots of the interface in the human evaluation. We inform the human evaluators what the task is about and tell them that their responses will be used to assess the performance of AI models. We try our best to follow the ethical guidelines of ACL. We use the models and datasets when following their intended usage. Specifically, we follow the OpenAI usage policy when using the InstructGPT models and the ChatGPT model. ## Acknowledgements We want to thank the reviews for providing detailed feedback and actionable suggestions, which help us strengthen our paper. We list the modification based on the reviewers' suggestions in Appendix A. We thank Yung-Sung Chuang for providing valuable feedback on the draft of this paper. We want to thank Tung-En Hsiao, the administrative assistant of our lab, for helping us deal with the payment on Upwork. Cheng-Han Chiang is supported by a Ph.D. scholarship program by Delta Electronics. ## References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics. Jacopo Amidei, Paul Piwek, and Alistair Willis. 2018. Rethinking the agreement in human evaluation tasks. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3318–3329, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Jacopo Amidei, Paul Piwek, and Alistair Willis. 2019. Agreement is overrated: A plea for correlation to assess human evaluation reliability. In Proceedings of the 12th International Conference on Natural Language Generation, pages 344–354, Tokyo, Japan. Association for Computational Linguistics. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. *arXiv preprint* arXiv:2212.08073. R Botsch. 2011. Chapter 12: Significance and measures of association. Scopes and Methods of Political Science. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. Cheng-Han Chiang and Hung-yi Lee. 2022. How far are we from real synonym substitution attacks? arXiv preprint arXiv:2210.02844. Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282–7296, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics. Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. *arXiv preprint arXiv:2209.07858*. Leo Gao. 2021. On the sizes of openai api models. Accessed on January 17, 2023. Siddhant Garg and Goutham Ramakrishnan. 2020. Bae: Bert-based adversarial examples for text classification. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 6174–6181. Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. 2023. Chatgpt outperforms crowd-workers for textannotation tasks. *arXiv preprint arXiv:2303.15056*. Dan Gillick and Yang Liu. 2010. Non-expert evaluation of summarization systems is risky. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 148–151, Los Angeles. Association for Computational Linguistics. Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A knowledge-enhanced pretraining model for commonsense story generation. Transactions of the Association for Computational Linguistics, 8:93–108. Francisco Guzmán, Ahmed Abdelali, Irina Temnikova, Hassan Sajjad, and Stephan Vogel. 2015. How do humans evaluate machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 457–466, Lisbon, Portugal. Association for Computational Linguistics. Jens Hauser, Zhao Meng, Damián Pascual, and Roger Wattenhofer. 2021. Bert is robust! a case against synonym-based adversarial examples in text classification. *arXiv preprint arXiv:2109.07403*. Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2021. Aligning {ai} with shared human values. In International Conference on Learning Representations. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In *International Conference on Learning* Representations. Fan Huang, Haewoon Kwak, and Jisun An. 2023. Is chatgpt better than human annotators? potential and limitations of chatgpt in explaining implicit hate speech. *arXiv preprint arXiv:2302.07736*. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pages 8018–8025. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Marzena Karpinska, Nader Akoury, and Mohit Iyyer. 2021. The perils of using Mechanical Turk to evaluate open-ended text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1265–1285, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In *International Conference on Learning* Representations. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In *Advances in Neural Information Processing Systems*. Leo Lentz and Menno De Jong. 1997. The evaluation of text quality: Expert-focused and reader-focused methods compared. *IEEE transactions on professional communication*, 40(3):224–234. John Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. 2020. Reevaluating adversarial examples in natural language. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3829–3839, Online. Association for Computational Linguistics. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. 2022. Red teaming language models with language models. In *Proceedings of the 2022 Conference on Empirical Methods* in Natural Language Processing, pages 3419–3448, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085– 1097, Florence, Italy. Association for Computational Linguistics. Huanru Henry Mao, Bodhisattwa Prasad Majumder, Julian McAuley, and Garrison Cottrell. 2019. Improving neural story generation by targeted common sense grounding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5988–5993, Hong Kong, China. Association for Computational Linguistics. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´ Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is chatgpt a good nlg evaluator? a preliminary study. *arXiv preprint arXiv:2303.04048*. OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. Accessed on January 10, 2023. Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. KiYoon Yoo, Jangho Kim, Jiho Jang, and Nojun Kwak. 2022. Detection of adversarial examples in text classification: Benchmark and baseline via robust density estimation. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3656–3672, Dublin, Ireland. Association for Computational Linguistics. Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. 2022. Socratic models: Composing zero-shot multimodal reasoning with language. *arXiv preprint arXiv:2204.00598*. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In *International Conference on Machine Learning*, pages 12697–12706. PMLR. ## A Modification Based On The Reviews We list the main differences between this version and the pre-review version of our paper; the modifications are mainly based on the reviewers' suggestions. We thank the reviewers again for those valuable suggestions. - We add Section 3.3.1 to discuss whether the LLM and human evaluators agree on the ratings of individual stories. - We refine the wordings in Section 5 and add relevant references. - We add Table 6 to discuss the correlation between human evaluators. - We conduct supplementary experiments on human evaluation that mixes human-written stories and GPT-2-generated stories when conducting human evaluation and report the results in Table 5. - We correct the typos and include almost all presentation suggestions mentioned by the reviewers. We cannot follow all presentation suggestions due to limited space. ## B Experiment Details For Open-Ended Story Generation B.1 The Writingprompt **Dataset** The training dataset contains 303K pairs of stories and prompts, which our model is trained on. We only use 200 prompt-story pairs from the test set. The dataset is downloaded from https://www.kaggle.com/datasets/ratthachat/writingprompts. ## B.2 Fine-Tuning The Gpt-2 Model We train the model for 3 epochs with a learning rate of 5e−5 and linear learning rate schedule. The trained model eventually reaches a perplexity of 20 on the validation set of WritingPrompts. ## B.3 Data Post-Processing Once the model is trained, we randomly select 200 prompts from the testing set of WritingPrompts, and feed the prompts to the trained model and ask the model to generate stories based on the given prompts. When generating the stories, we adopt nucleus sampling with p = 0.9. Next, we manually truncate the generated stories to less than 150 words and ensure that after the truncation, the story ends with a full sentence.7 After this process, we have 200 pairs of prompts and model-generated stories. As a comparison to model-generated stories, we select the same 200 prompts used for generating model-generated stories and their corresponding human-written stories to form 200 pairs of prompts and human-written stories. For these human-written stories, we also truncate the stories to less than 150 words and end with a full sentence to match the model-generated sentences. We also manually remove some artifacts in the human-written story due to the tokenization of the WritingPrompts dataset. ## C Human Evaluation C.1 Recruiting English Teachers The English teachers hold ESL certificates8; given that they are experienced with correcting essays written by students, they are perfect fits for this task. Each teacher is asked to rate 200 GPT-2-generated stories and 200 human-written stories, and they are 7We truncate the story to 150 words since this is the mean length of the model-generated story. 8English as a Second Language Teaching Certification paid US$140 for rating 200 stories. Considering that the teachers reported that they take at most 5 hours to rate 200 stories, this makes the hourly wage at least US$28. We first ask the teachers to rate the GPT-2-generated stories and then the 200 human-written stories. Different from Karpinska et al. (2021) that take a break between the rating of GPT-2-generated stories and the human-written stories, we do not take a break to avoid the teacher's rating standard to change after taking a long break. The teachers are not told who wrote the stories before they evaluate the stories. We reveal to them what this project aims to study after they finish rating all the stories. The reason we do not mix human-written and GPT-2-generated stories for rating is that in Karpinska et al. (2021), their observation is that (1) when AMT workers rate model-generated and humanwritten stories **separately**, their ratings do not show preference toward human-written stories, but (2) even when rating the model-generated and human-written stories **separately**, English teacher shows clear preference toward human-written stories. We follow their settings and do not mix GPT2-generated/human-written stories. During the reviewing process, we received questions from the reviewers about why not mixing the stories for human evaluation. Thus, we conduct the same experiment by randomly mixing 200 human-written and 200 GPT-2-generated stories and asking three teachers (not the teachers that already rated the stories) to rate them. All other experiment conditions are the same as previously stated. The full result is shown in Table 5. We find that the teacher still shows a clear preference toward human-written stories for all four attributes, similar to the observation in Table 1. The only exception is grammaticality, where English teachers do not show a very clear preference for the grammar of human-written stories. However, when calculating the average rating for individual teachers, we find that two out of three teachers do rate grammaticality higher for human-written stories. It is interesting to note that for LLM evaluation, there is no such problem about whether or not to mix the human-written and GPT-2-generated stories during LLM evaluation as the rating of each story is independent of each other, as discussed in Section 5. For adversarial attack quality evaluation, we also recruit certified teachers on Upwork. The teachers | Writer | Human | GPT-2 | |----------------|----------|----------| | Grammaticality | 3.890.97 | 3.880.84 | | Cohesiveness | 4.350.87 | 3.490.97 | | Likability | 3.461.40 | 2.891.12 | | Relevance | 3.711.20 | 2.371.33 | are asked to rate 100 news titles and are paid US$35 for doing so. They reported that it took them less than 1 hour to complete the rating. ## C.2 Human Evaluation Interface Open-Ended Story Generation We use Google ![13_image_0.png](13_image_0.png) Forms to collect the responses from the teachers. Each form contains 100 stories, and each story is on one page of the Google Form. The interface on one page is shown in Figure 2 and Figure 3; the two figures are from the same page of the Google Form, and we are splitting them because screenshotting the whole interface will cause low resolution. Adversarial Attacks Quality Evaluation In this task, we also use Google Forms to collect the responses from the teachers. We create two different Google Forms, one is used to evaluate the fluency, whose interface is shown in Figure 4. In this form, we mix an equal number of benign news titles, TextFooler-attacked, PWWS-attacked, and BAEattacked news titles. Each page of the Google Form contains one news title. Another Google Form is used to compare the meaning preserving of the news title before and after the adversarial attacks. We highlight the difference between the benign and adversarial samples using **boldface**, as shown in Figure 5. On each page of the Google Form, there is one pair of news titles. ## C.3 Post-Task Interview With English Teachers C.3.1 How English Teachers Rate The Stories After the teachers rate 400 stories, we ask them the following questions: Q1 How long did it take you to rate the 400 sto- ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) ![14_image_2.png](14_image_2.png) ## Ries? Q2 What is your standard on each of the four attributes (grammatical, coherence, likability, relevance) evaluated? For example, in what case do you give a high/low rating for grammatically? What kind of story did you give a low rating on likability? Did your personal preference affect the rating? Q3 How long did it take for you to calibrate your rating on the task? Q4 Did you change your rating on the first three attributes after reading the prompt the story is based on? We briefly summarize the answers from the three teachers. The teachers report that they spent 6 to | Writer | Human | GPT-2 | |----------------|---------|---------| | Grammaticality | 0.25 | 0.15 | | Cohesiveness | 0.26 | 0.18 | | Likability | 0.09 | 0.12 | | Relevance | 0.38 | 0.41 | 10 hours rating 400 stories. For grammar, most teachers check the punctuation9, word choice, and subject-verb agreement. English teachers decrease their rating based on the types and number of grammar errors in the stories. For coherence, the teachers rate it based on whether the sentences in the stories follow a logical sequence to build the narrative. The teachers ask themselves questions such as "*does the story make* sense". This is a more holistic evaluation of the whole story. For likability, some teachers say they try not to be affected by personal preference. One teacher asks herself: Did I personally enjoy it based on the amount of sense it made and whether or not it had stylistic flair, humor, or engaging plotting or characterization? Overall, the teachers all try to use a fair and objective view to rate the likability. For relevance, the teachers simply check if the story is based on the prompt or not. The teachers said that it took them about five to ten stories to calibrate their ratings. Except for one teacher changing the rating on the other three attributes after seeing the prompt on **only one story**, the teachers do not change their rating on the three other attributes after reading the prompts. ## C.3.2 Teachers' Comments On Chatgpt'S Rating After the teachers finished the rating and answered the four questions in Appendix C.3.1, we ask them to check the ratings and explanations of ChatGPT and ask if they agree with ChatGPT. We told the teachers that the rating is done by another teacher we hire; we do not tell the teacher that the rating is done by an AI model to avoid possible stereotypes about AI models. We randomly select four stories 9One of the teachers explicitly told us punctuation is not grammar. By searching the Internet, we find that there is a debate on whether punctuation should be counted as a part of grammar. rated by ChatGPT; two of them are highly-rated and lowly-rated human-written stories and the other two stories are highly-rated and lowly-rated GPT2-generated stories. Story 1 It had been months since they found me dead. I don't know why I stopped walking. Noone else around to take my place, or talk with me. I spent days frozen in shock, never able to talk, unable to move, unable to express myself. It was a strange thing, nobody else would suspect me of wanting this. The more I focused on it, the more it consumed me. Years passed, and I got more and more lost in it. I began to think that perhaps they didn't really care about me. I still had a deep longing for them. All I had was so many steps to walk now. A quiet part of me liked to think that they had passed me by. The loneliness. It never felt like such a loss. They were gone, the ones who saved me. So why did I still keep walking? Prompt "Every person sees the world in a unique way. When someone dies it is, in a way, the death of an entire world." Story 2 "My son is gone. He has been" People stare at you as you give your explanation and say "yes. But. You cant just do it" "He is my son! This is my son! I love him and he is going to be okay" "What do you mean he is okay? He said he was upset with someone, couldnt find his anger, how the hell can you tell, it doesnt fit the bill?! The police called, the doctor said it was an addiction" "You can't trust me! He told me he loves me." "No. No he doesnt. But I can trust you. The evidence against him was overwhelming, now he cant do anything. The evidence against him was all circumstantial" "A man I once was won't be able to marry me" Prompt Everyone is allowed to kill one person in their life. However, you have to fill out a form explaining why this person deserves to be killed and the committee can deny/approve your request as per the rules. You are presenting your form today. Story 3 I held the little black box in the palm of my hand. Pitch black, perfectly cubed and nothing special about it, but it continued to hold my gaze regardless as if there were some deep importance about it. My friend Valhalla appeared out of no where and sat next to me, also staring at the cube. "What do you have there, Heaven?" he asked. I continued to gaze at the box, refusing to look at Valhalla for even a moment as though I would miss something if I did. "This," I said. "Is the secret to the universe." I could tell Valhalla was perturbed by this sort of knowledge, as if there was some evil about the cube. Or perhaps he didn't think such an object could exist. But I made it exist, and so it sits in the palm of my hand. Prompt The Little Black Box Story 4 I stared down the telescopic sight of my l96 sniper rifle. I slowly moved my gaze into each window in the hotel, Many displays of various vice. One couple was violently pleasuring each other. Another was an old man, watching a younger woman strip in front of him. A prostitute no doubt. I inhaled slowly, and exhaled. The air was brisk, atleast 30 degrees Fahrenheit. I so small flakes of snow, float peacefully in front of me. I found the room, i was looking for. Ive been tracking this man for 2 weeks. Man was he elusive. The lights flickered on. The red haired man, was mildly attractive, i can see the appeal women had for him. I followed him into the next room, with my sights. The lights flickered on, i was taken aback by the scene. A man, overweight and balding. Prompt You are the antagonist of the story. However, you aren't sure if you can call yourself that after what the protagonist did. Overall Comments from Teachers on ChatGPT's Rating After the teachers elaborated on their thoughts on the rating of ChatGPT, we ask them to provide an overall comment on how ChatGPT is doing. Again, the teachers are not informed that the ratings are done by an AI model. In summary, teachers all consider the rating and explanations reasonable. They find that the attributes they do not agree with are mainly *Likability* and *Cohesiveness*. However, they think the two attributes are a more holistic evaluation of the story and tend to be more subjective. Even if they do not give the same rating, they still are able to understand the explanation of ChatGPT. In the end, all teachers summarize that rating stories is highly subjective, and it is normal to have disagreements. ## D Llm Evaluation D.1 Details On Llms Used The T0 model we use is called T0pp, which is a variant of the T0 model and has 13B parameters. We will still use T0 to refer to this model. We load the T0 model using the transformers toolkit (Wolf et al., 2020). The two InstructGPT models, text-curie-001 and text-davinci-003, are queried using the OpenAI API. We query ChatGPT using the OpenAI GUI. While we are aware that some online resources provide an API-like tool to query ChatGPT, we think it violates the intended use of ChatGPT so we do not adopt those online resources. The ChatGPT we queried is the Dec. 15 and Jan. 9 version. OpenAI does not reveal the model sizes of any of the GPT models. However, it is estimated that text-curie-001 has 13B parameters and text-davinci-003 has 175B parameters (Gao, 2021). ## D.2 Details On Querying The Llms D.2.1 Parsing The Llm Outputs After the T0 and InstructGPT generate the answer, which is composed of several sentences in almost all cases, we parse the generated sentence to get the model's score. We use some rules to parse the output, and the rule is established after manually looking into the output of the generated sequences. First, we remove the string 1-5 from the output since we observe that LLM will sometimes say "... on a scale of 1-5 ...", and we remove the string out of 5 since LLM sometimes say it "*give a* score of x *out of 5*". We also remove the string /5 since LLM sometimes uses "x/5" to express that it gives a score of x out of 5. For rating the adversarial samples, we remove the title 1 and title 2 in the output sentences. Last, we use the regular expression to extract the first number in the sequence. We find that the above rules are sufficient to parse the output and get the rating of the LLMs. In a few rare cases, the LLM gives the rating with a 0.5 incremental, meaning that it gives scores like 4.5. In such cases, we do not drop the 0.5 since in our instruction, we do not explicitly tell it the rating should be 1-incremental. We find the LLM results merely difference between dropping | Attribute | Rater | Response | |-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ChatGPT | I would rate the grammatical correctness of the text of the story fragment as a 4. There are a few minor errors and awkward phrasings, but overall the text is well-constructed and easy to understand. | | | T1 | Agree | | | T2 | Agree | | | T3 | Agree | | | Grammar | ChatGPT | I would rate the fit of the sentences in the story fragment as a 5. The sentences flow well together, building a clear and cohesive narrative, and providing a sense of progression and development in the main character's thoughts and emotions. | | T1 | Disagree: I think it's a 4 for me, not a 5, because there was one sentence that didn't seem to fit with the others. | | | T2 | Agree | | | T3 | Disagree: I find disjointedness in the passage, which serves to confuse me as a reader. The opening sentence is about the first-person narrator being found dead, but then the next sentences are about the narrator's inner thoughts. Also, this passage: "The loneliness. It never felt like such a loss." So what is the 'it' that felt like a loss? The loneliness? If so, how can that feel like a loss? If not, what is this 'it'? So, its juxtaposition to the previous sentence makes it confusing. | | | Cohesive | ChatGPT | I would rate the story fragment as a 2 in terms of enjoyment. The fragment is written in a somber and introspective tone, and it deals with themes of isolation and loneliness, which may not be enjoyable for some readers. The protagonist's state of mind is also portrayed as being in a state of shock and confusion, which adds to the overall sense of unease and lack of enjoyment. | | T1 | Disagree:I would rate it a 3, not a 2, since the enjoyment piece is subjective. | | | T2 | Disagree: While the rating of the freelancer find that the overall theme while somber has a glimmer of hope to it. It is this that makes it more enjoyable to me. The underdog story. Death is not an issue that should be shied away from in writing. We all face it and I enjoyed the honesty in the piece. :"Why did I keep walking?". Something I feel many of us face and may have asked ourselves. I believe that this would be relevant to many readers. | | | T3 | Agree | | | Likable | ChatGPT | I would rate the story fragment as a 5. It is highly relevant to the prompt as it explores the theme of death and how it can be seen as the death of an entire world. The story fragment also touches on the ideas of loneliness and longing, which are related to the loss of someone or something. The concept of the protagonist being frozen in shock and unable to express themselves, also supports the idea of how death can be seen as the death of an entire world. | | T1 | Agree | | | T2 | Agree | | | T3 | Disagree: | I do not see the link between the prompt and the story. | | Relevant | Table 7: Story 1: a GPT-2-generated story that is highly-rated by ChatGPT. | | | Attribute | Rater | Response | |-------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ChatGPT | I would rate the grammatical correctness of the story fragment as a 3. The text is generally understandable, but there are a few punctuation errors and awkward phrasing that make it somewhat difficult to follow. | | | T1 | Disagree:I would rate the grammar as a 2, not a 3–there were significantly more grammatical errors than the average within this set of story fragments. | | | T2 | Disagree: The issues with the grammar and coherence creates a stop start feeling when reading. I am unsure as to who is speaking in the text and this leaves me with many questions. | | | T3 | Disagree: The grammar is poorer than a 3, as is evident in for example what is supposed to be the second sentence: "He has been ... (incomplete, and another segment is there "People stare ...); cant; couldnt; doesnt; and sentences that do not make sense like this one: " A man I once was won't be able to marry me " | | | Grammar | ChatGPT | I would rate the cohesiveness of the sentences in the story fragment as a 2. The sentences in the fragment are disconnected and jump between different thoughts and ideas, making it difficult to understand the overall story or message. There is no clear progression of events or logical flow between the sentences. It appears that the fragment may be missing some important information or context that would help tie the sentences together. | | T1 | Agree | | | T2 | Agree | | | T3 | Disagree: very incoherent passage in which it is not very clear who says what, what the topic is, thus rendering a disjointed narrative. | | | Cohesive | ChatGPT | I would rate the enjoyability of the story fragment as a 1. The fragment is confusing and difficult to understand, the sentences are disconnected and jumps between different thoughts and ideas. The story seems to be incomplete and lacks context, making it hard to follow the plot or characters. Additionally, the fragment has a sad and negative tone and it does not seem to be enjoyable to read. | | T1 | Agree | | | T2 | Agree | | | T3 | Agree | | | Likable | ChatGPT | I would rate the relevance of the story fragment to the prompt as a 1. While the story fragment does mention someone being gone, it doesn't seem to be directly related to the idea of filling out a form to justify killing someone. It's unclear if the son is the person being proposed to be killed, or if the story is even taking place in the context of the prompt. There is no clear indication that the fragment is related to the prompt. | | T1 | Agree | | | T2 | Agree | | | T3 | Agree | | | Relevant | Table 8: Story 2: a GPT-2-generated story that is lowly-rated by ChatGPT. | | | Attribute | Rater | Response | |-------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ChatGPT | The text of the story fragment appears to be grammatically correct, with no major errors or awkward phrasing. On a scale of 1-5, with 1 being the lowest, I would rate the grammatical correctness of the text as a 5. | | | T1 | Agree | | | T2 | Agree | | | T3 | Disagree: I would rate this a 4 because of for example 'no where' = nowhere, although it might seem like a minor error, it does take away some correctness of the text. | | | Grammar | ChatGPT | The sentences in the story fragment fit together well, creating a clear and coherent image of the scene and the characters' actions and thoughts. The language is descriptive and evocative, and the dialogue effectively conveys the characters' emotions and motivations. On a scale of 1-5, with 1 being the lowest, I would rate how well the sentences fit together as a 4. | | T1 | Agree | | | T2 | Agree | | | T3 | Partially agree: To me, all the sentences fit well together to create a captivating narrative. I will give a 4 or 5. | | | Cohesive | ChatGPT | Based on the structure, writing style and plot of the story fragment, I would rate it as 3 in terms of enjoyability. The fragment sets up an intriguing situation with the mysterious little black box and the protagonist's belief that it holds the "secret to the universe," but it ends abruptly and leaves many questions unanswered, which may not be satisfying for readers. | | T1 | Disagree: I liked it and it was very clear, so I'd rate it a 4 or 5. | | | T2 | Disagree: While the story ends suddenly, I am remined that it is just a fragment and that their may be more to this piece. I felt the piece written about a little black box was intriguing and enjoyed the introduction of Norse mythology. This would attract the interest of many readers. | | | T3 | Disagree: I enjoyed the fragment, as it piqued my interest to want to know more about how the story evolves and what it actually was about. | | | Likable | ChatGPT | I would rate the story fragment as a 5 in relevance to the prompt "The Little Black Box." The story fragment centers around a little black box and its perceived importance, which aligns with the prompt. Additionally, the main character, Heaven, is holding the box and describing it as "the secret to the universe," further solidifying the connection to the prompt. | | T1 | Agree | | | T2 | Agree | | | T3 | Agree: the story is totally relevant to the prompt. | | | Relevant | Table 9: Story 3: a human-written story that is highly-rated by ChatGPT. | | | Attribute | Rater | Response | |-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ChatGPT | I would rate the grammatical correctness of the text as a 3. There are a few errors such as "Many displays of various vice" which should be "Many displays of various vices" and "I so small flakes of snow" which should be "I saw small flakes of snow." Additionally, some of the sentences are slightly awkward and could be rephrased for better clarity. Overall, the text is understandable, but there are a few errors and awkward phrasings that could be improved. | | | T1 | Partially agree:I rated the grammar lower than this–maybe a 2 or a 3. There were above-average numbers of errors. | | | T2 | Disagree: While the text is understandable, I found myself being distracted by the errors in the text and coherency. | | | T3 | Agree | | | Grammar | ChatGPT | I would rate the cohesiveness of the sentences in the story fragment as a 2. The story fragment jumps abruptly between different scenes and characters without clear transition, making it difficult to follow and understand the narrative. Additionally, the sentences are often disjointed and lack a clear flow. The story fragment also lacks a clear plot or purpose, making it difficult to understand the overall story or the main character's intentions. The story fragment can be improved by providing more context, and a clearer narrative arc. | | T1 | Agree: it was jumpy | | | T2 | Agree | | | T3 | Disagree: The story is about a sniper looking through his rifle's telescope, thus seeing changing scenes based on each room he is seeing through the windows, which are then subsequently described. These descriptions account for the seeming lack of cohesion. | | | Cohesive | ChatGPT | I would rate the enjoyability of the story fragment as a 1. The fragment lacks a clear plot, purpose or character development, making it difficult to become invested in the story or the characters. Additionally, the fragment includes disturbing and graphic content such as violence and prostitution, which may make it uncomfortable for some readers. Additionally, the fragment contains several grammatical errors and awkward phrasings that can make it difficult to follow the story. Overall, the story fragment is not enjoyable to read. | | T1 | Agree: : it was not just graphic, but also showed really harmful and negative representations of characters based on their weight, gender, etc. | | | T2 | Agree | | | T3 | Disagree: The story did hold my attention to a degree because I wanted to know whom he wanted to target and why. | | | Likable | ChatGPT | I would rate the relevance of the story fragment to the prompt as a 1. The story fragment doesn't contain any elements that relate to the prompt such as the protagonist and antagonist, their actions, or the moral ambiguities the antagonist faces. The story fragment mainly describes a person with a sniper rifle looking into windows of a hotel and doesn't present any clear plot or purpose. Additionally, the story fragment does not offer any indication of moral ambiguity on the part of the person with the sniper rifle. Overall, the story fragment is not relevant to the prompt. | | T1 | Agree | | | T2 | Agree | | | T3 | Agree | | | Relevant | Table 10: Story 4: a human-written story that is lowly-rated by ChatGPT. | | the 0.5 and keeping the 0.5, and the results in the paper do not drop the 0.5. Again, these cases are really rare. We do not parse the output of ChatGPT using any rules since we the authors read the response and extract the score by ourselves. During the experiments, ChatGPT refuses to answer the questions about 2 stories and 3 news titles since ChatGPT find those contents to violate the OpenAI content policy. We find that those samples contain discrimination to some protected groups, or contain sexual or violent descriptions. Hence, the results of ChatGPT are calculated without those samples. ## D.2.2 Open-Ended Story Generation For T0 and the two InstructGPT models, we query the four attributes **separately** using the queries shown as follows: ## Grammaticality Please rate the story fragment The goal of this task is to rate story fragment. Note: Please take the time to fully read and understand the story fragment. We will reject submissions from workers that are clearly spamming the task. Story fragment: [STORY] (End of story fragment) How grammatically correct is the text of the story fragment? (on a scale of 1-5, with 1 being the lowest) ## Cohesiveness Please rate the story fragment The goal of this task is to rate story fragment. Note: Please take the time to fully read and understand the story fragment. We will reject submissions from workers that are clearly spamming the task. Story fragment: [STORY] (End of story fragment) How well do the sentences in the story fragment fit together? (on a scale of 1-5, with 1 being the lowest) ## Likability Please rate the story fragment The goal of this task is to rate story fragment. Note: Please take the time to fully read and understand the story fragment. We will reject submissions from workers that are clearly spamming the task. Story fragment: [STORY] (End of story fragment) How enjoyable do you find the story fragment? (on a scale of 1-5, with 1 being the lowest) ## Relevance Please rate the story fragment The goal of this task is to rate story fragment. Note: Please take the time to fully read and understand the story fragment. We will reject submissions from workers that are clearly spamming the task. Story fragment: [STORY] (End of story fragment) Now read the PROMPT based on which the story fragment was written. PROMPT: [PROMPT] (End of PROMPT) How relevant is the story fragment to the prompt? (on a scale of 1-5, with 1 being the lowest) The [STORY] and [PROMPT] are to be filled in with the story and the prompt. We show the newlines for better readability. When we query the models, we use the token \n to represent the new line. When querying ChatGPT, we query the four attributes of the same story in one conversation; this is similar to asking the teachers to rate the same story on the same page of the Google Form. We use the same queries shown above to query ChatGPT and the order of queries is the same as the order shown above. ## D.2.3 Adversarial Attack Quality Evaluation When querying all the LLMs in this task, we query the *fluency* and the *meaning preserving* of the same news title independently. This means that each conversation with ChatGPT will only have one question, asking about the fluency or the meaning preserving of news title(s). All the parameters for generation are the same as the default parameters in Section 3.2. The exact query we use are: Fluency You are given a news title. Please read the news title and answer the question. News title: [NEWS_TITLE] (End of news title) Question: How natural and fluent is the text of the news title? (on a scale of 1-5, with 1 being the lowest The [NEWS_TITLE] will be filled in with either a benign or adversarial-attacked news title. Meaning Preserving You are given two news titles. Please read the news titles and answer the question. News title 1: [BENIGN_TITLE] (End of news title 1) News title 2: [ADVERSARIAL_TITLE] (End of news title 2) Question: Do you agree that the meaning (or semantics) of news title 1 is preserved in news title 2? (on a scale of 1-5, with 1 being the strongly disagree and 5 being strongly agree.) The [BENIGN_TITLE] will be filled in with the news title before the attack and the [ADVERSARIAL_TITLE] will be filled in with the news title after an adversarial attack. ## E Experiment Details On Adversarial Attacks The adversarial samples used in Section 4 are from Yoo et al. (2022). Yoo et al. (2022) generates different sets of adversarial samples using different adversarial attacks against different victim models. We use the adversarial samples generated against a bert-base-uncased text classifier trained on AG-News, using three different adversarial attacks: Textfooler, PWWS, and BAE. The intent of the dataset is to facilitate the research in SSA, which we do not violate. Here, we show the supplementary results of using text-davinci-003 as the LLM evaluation for evaluating the quality of adversarial samples in Table 11. We can see that the result of using text-davinci-003 is similar to ChatGPT in the sense that text-davinci-003 also rates adversarial samples higher than humans while still signifi- | Human evaluate | LLM evaluate | | | | |------------------|----------------|--------|-------|-------| | Fluent | Mean. | Fluent | Mean. | | | Benign | 4.55 | - | 4.33 | 4.56† | | Textfooler | 2.17 | 1.88 | 3.71 | 2.37 | | PWWS | 2.16 | 1.85 | 3.62 | 3.21 | | BAE | 3.01 | 3.02 | 4.16 | 3.69 | Table 12: The rating on three adversarial attacks of the three teachers T1, T2, and T3. | Rater | Textfooler | PWWS | BAE | |---------|--------------|--------|-------| | T1 | 3.36 | 3.68 | 4.2 | | T2 | 1.80 | 1.40 | 2.96 | | T3 | 1.36 | 1.40 | 1.88 | cantly lower than the benign samples. As already seen in Section 3.3, text-davinci-003 tends to give a higher rating. As mentioned in Section 4.3, one teacher rates the *fluency* of Textfooler significantly higher than PWWS while the other two teachers do not. We show the rating on *fluency* on the three adversarial attacks by each teacher in Table 12. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Sec 5 and Limitation on page 10 ✓ A2. Did you discuss any potential risks of your work? Sec 5 and Limitation on page 10 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.2, Appendix B.1 And E ✓ B1. Did you cite the creators of artifacts you used? Section 4.2, Appendix B.1 and E ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The datasets we use do not include a license ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix E and Ethical statement ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Removing the names in AG-News will make news titles to be nonsensical. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.2, Appendix B.1 and E ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.2, Appendix B.1 and E ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 3, 4, Appendix C ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix C.1 ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix C.1, C.2 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Ethical statement D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. We do not ahve ethic review board in our institute. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? This is not related to our task. We report the certification of the workers in Appendix C.1.
mai-etal-2023-hypermixer
{H}yper{M}ixer: An {MLP}-based Low Cost Alternative to Transformers
https://aclanthology.org/2023.acl-long.871
Transformer-based architectures are the model of choice for natural language understanding, but they come at a significant cost, as they have quadratic complexity in the input length, require a lot of training data, and can be difficult to tune. In the pursuit of lower costs, we investigate simple MLP-based architectures. We find that existing architectures such as MLPMixer, which achieves token mixing through a static MLP applied to each feature independently, are too detached from the inductive biases required for natural language understanding. In this paper, we propose a simple variant, HyperMixer, which forms the token mixing MLP dynamically using hypernetworks. Empirically, we demonstrate that our model performs better than alternative MLP-based models, and on par with Transformers. In contrast to Transformers, HyperMixer achieves these results at substantially lower costs in terms of processing time, training data, and hyperparameter tuning.
# Hypermixer: An Mlp-Based Low Cost Alternative To Transformers Florian Mai†♠ Arnaud Pannatier†♠ Fabio Fehr†♠ **Haolin Chen**†♠ François Marelli†♠ **François Fleuret**♣♠† **James Henderson**† †Idiap Research Institute, Martigny, Switzerland ♠EPFL, Lausanne, Switzerland ♣University of Geneva, Geneva, Switzerland ## Abstract evaluation of a result R as following: ## Cost(R) ∝ E · D · H, Transformer-based architectures are the model of choice for natural language understanding, but they come at a significant cost, as they have quadratic complexity in the input length, require a lot of training data, and can be difficult to tune. In the pursuit of lower costs, we investigate simple MLP-based architectures. We find that existing architectures such as MLPMixer, which achieves token mixing through a static MLP applied to each feature independently, are too detached from the inductive biases required for natural language understanding. In this paper, we propose a simple variant, *HyperMixer*, which forms the token mixing MLP dynamically using hypernetworks. Empirically, we demonstrate that our model performs better than alternative MLP-based models, and on par with Transformers. In contrast to Transformers, HyperMixer achieves these results at substantially lower costs in terms of processing time, training data, and hyperparameter tuning. ## 1 Introduction Attention-based architectures, such as the Transformer (Vaswani et al., 2017), have accelerated the progress in many natural language understanding tasks. Part of their success is a result of a parallelizable training scheme over the input length. This improves training times and allows for larger volumes of data which makes these models amenable to pretraining (Radford et al., 2018; Devlin et al., 2019). Therefore, many current state-of-the-art models are fine-tuned extensions of large pretrained Transformers (Bommasani et al., 2021). However, these models come at a significant computational cost. They require considerable resources for pretraining and fine-tuning, which induces high energy consumption (Strubell et al., 2019) and limits access to research (Bommasani et al., 2021). Subsequently, Schwartz et al. (2020) argue the need for *"Green AI"*. They propose a cost where E is the computational cost measured in floating point operations (FPO) of a single example, D is the dataset size, and H is the number of hyperparameter configurations required during tuning. To achieve a cost reduction, this paper proposes a simpler alternative to Transformers. We take inspiration from the computer vision community, which has recently seen a surge of research on Multi-Layer Perceptrons (MLPs). Most prominently, MLPMixer (Tolstikhin et al., 2021), which is a simple architecture based on two MLPs: one for token mixing and one for feature mixing. However, the token mixing MLP learns a *fixed-size* set of *position-specific* mappings, arguably making MLPMixer's architecture too detached from the inductive biases needed for natural language understanding, in contrast to Transformers (Henderson, 2020). In this paper, we propose a simple variant, *HyperMixer* (Figure 1), which creates a token mixing MLP dynamically using hypernetworks (Ha et al., 2016). This variant is more appropriate, as it learns to generate a *variable-size* set of mappings in a *position-invariant* way, similar to the attention mechanism in Transformers (Vaswani et al., 2017). In contrast to Transformer's quadratic complexity, HyperMixer's complexity is linear in the input length. This makes it a competitive alternative for training on longer inputs. Empirically, we demonstrate that HyperMixer works substantially better on natural language understanding tasks than the original MLPMixer and related alternatives. In comparison to Transformers, HyperMixer achieves competitive or improved results at a substantially lower cost *Cost*(R) ∝ E · D · H: improved inference speeds (E), especially for long inputs; favorable performance in the 15632 ![1_image_0.png](1_image_0.png) low-resource regime (D); and efficient tuning for hyperparameters (H). We attribute HyperMixer's success to its ability to approximate an attentionlike function. Further experiments on a synthetic task demonstrate that HyperMixer indeed learns to attend to tokens in similar pattern to the attention mechanism. In summary, our contributions can be enumerated as follows: 1. A novel all-MLP model, HyperMixer, with inductive biases similar to Transformers. (Section: 2) 2. A performance analysis of HyperMixer against alternative token mixing methods based on controlled experiments on the GLUE benchmark. (Section: 4.3) 3. A comprehensive comparison of the cost Cost(R) of HyperMixer and Transformers. (Sections: 4.4, 4.5, 4.6) 4. An ablation demonstrating that HyperMixer learns attention patterns similar to Transformers. (Section: 4.7) ## 2 Method 2.1 Inductive Biases In Nlp Models In machine learning, the inductive biases of a model reflect implicit modeling assumptions which are key to facilitate learning and improve generalization on specific tasks. In NLP, well-known models with strong inductive biases include: recurrent neural networks (Elman, 1990), which assume the input to be a sequence; and recursive neural networks (Socher et al., 2013), which assume a treestructure. While both these inductive biases are reasonable, empirically, Transformers have been more successful in recent years. Furthermore, we reiterate the arguments of Henderson (2020) for inductive biases in language and apply them to our model design. Henderson (2020) attributes the Transformer's success to two concepts: *variable binding* and *systematicity*. Variable binding refers to the model's ability to represent multiple entities at once. This is arguably challenging in single-vector representations such as recurrent neural networks. However, Transformers represent each token with its own vector which accounts for variable binding as each token can be interpreted as an entity. Systematicity refers to the models ability to learn generalizable rules that reflect the structural relationship between entities (Fodor and Pylyshyn, 1988). Transformers achieve systematicity through the attention mechanism which is a learnable set of functions that determines the interaction between entities by matching query representations to key representations (as shown in Figure 1). The mechanism *modulates*, for every position in the sequence, how to functionally process any other position. Moreover, these function parameters are learnable and shared across all entities. ## 2.2 Mlpmixer A general layer of MLPMixer is shown in Figure 1. Similarly to Transformers, each token is represented as a vector of features, which undergo (nonlinear) transformations in multiple layers. MLPMixer employs two MLPs at each layer, one for feature mixing and one for *token mixing*. The feature mixing component is applied to each token vector independently, which models the interactions between features. The Token Mixing MLP (TM-MLP) is applied to each feature independently (i.e. its vector of values across tokens), which models the interactions between spatial locations or positions. This could be interpreted as a global attention mechanism which is static and position-modulated. Practically, this is achieved by transposing the dimension representing the features and the dimension representing the positions. Each vector x T i ∈ R N , representing feature i ≤ d, of some input of fixed length N, is input into TM-MLP, which has the following form: $$\mathrm{TM-MLP}(\mathbf{x}_{i}^{T})=\mathbf{W}_{1}(\sigma(\mathbf{W}_{2}^{T}\mathbf{x}_{i}^{T})),$$ i)), (1) where W1, W2 ∈ R N×d′, and σ represents the GELU non-linearity (Hendrycks and Gimpel, 2016). Finally, to facilitate learning, layer normalization (Ba et al., 2016) and skip connections (He et al., 2016) are added around each MLP, respectively. How to best arrange these components is still an open question (Wang et al., 2019; Bachlechner et al., 2021). We experiment with different variants in Appendix F. Considerations for NLP The token mixing MLP assumes an input of fixed dimension, which is necessary as the parameters need to be shared across all examples. However, unlike images, textual input is generally of a variable dimension. Therefore, to apply MLPMixer to texts of variable length, a simplistic approach is to assume a maximum length (e.g. the maximum in the dataset). Thereafter, all inputs are padded to the maximum length and masks are applied in the token mixing MLP. This model is able to do variable binding, since each token is represented by its own vector. However, this model lacks systematicity because the rules learned to model interactions between tokens (i.e. the MLP's weights) are not shared across positions. ## 2.3 Hypermixer Algorithm 1 HyperMixer pseudo-code class **HyperMixing(nn.Module):** def **__init__(self, d, d'):** # learnable **parameters** ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) ![2_image_2.png](2_image_2.png) ![2_image_3.png](2_image_3.png) ![2_image_4.png](2_image_4.png) ![2_image_5.png](2_image_5.png) ![2_image_6.png](2_image_6.png) ![2_image_7.png](2_image_7.png) ![2_image_8.png](2_image_8.png) self.hypernetwork_in = MLP([d, d, d']) self.hypernetwork_out = MLP([d, d, d']) # layer normalization improves training **stability** self.layer_norm = LayerNorm(d) def forward(self, queries, keys, values): # queries: [B, M, d] # keys / values: [B, N, d] # add token information (e.g. position **embeddings)** hyp_in = add_token_information(keys) hyp_out = add_token_information(queries) W1 = self.hypernetwork_in(keys) # [B, N, d'] W2 = self.hypernetwork_out(queries) # [B, M, d'] # TM-MLP(x) = W_2 ( GELU ( **W_1^T** x) ) ![2_image_9.png](2_image_9.png) HyperMixer includes systematicity into the MLPMixer architecture by introducing a novel token mixing mechanism, *HyperMixing*1, which can be regarded as a drop-in replacement for attention. For ease of understanding, we provide pseudo-code in Algorithm 1. While the queries, keys, and values in HyperMixing need not be the same, we will assume they are identical in the following formulation. HyperMixing relies on the use of hypernetworks, which are used to generate the weights 1HyperMixing is to HyperMixer what self-attention is to Transformer encoders. W1, W2 of TM-MLP (Equation 1) dynamically as a function of the input. Let xj ∈ R d, j ≤ N, where N is the (variable) dimension of the input, represent token j (i.e., query, key, and value). W1 and W2 are generated by parameterized functions h1, h2 : R N×d → R N×d′. Theoretically, h1 and h2 could be any function, including sophisticated networks that consider non-linear interactions between tokens, such as the attention mechanism. However, this would defeat the purpose of our model, which is simplicity. Therefore, we choose to generate the rows of the weight matrices from each token independently via another MLP. Concretely, a hypernetwork function can be defined as $$h_{i}(\mathbf{x})={\left(\begin{array}{l}{\mathrm{MLP}^{\mathbf{W}_{i}}(\mathbf{x}_{1}+\mathbf{p}_{1})}\\ {\vdots}\\ {\mathrm{MLP}^{\mathbf{W}_{i}}(\mathbf{x}_{N}+\mathbf{p}_{N})}\end{array}\right)}\in\mathbb{R}^{N\times d^{\prime}},$$ where MLPW1, MLPW2: R d → R d′are themselves multi-layer perceptrons with GELU nonlinearity. pj ∈ R dis a vector that can encode additional information such as the position via absolute position embeddings (Vaswani et al., 2017). Intuitively, for each token xj , h1 decides which information to send to the hidden layer of TM-MLP, where the information from all tokens are mixed, and h2 decides for each token how to extract information from the hidden layer. Note that, even though h1 and h2 only consider one token at once, non-linear interactions between tokens are still modeled through the hidden layer of TM-MLP. Finally, layer normalization (Ba et al., 2016) can be applied to the output of TM-MLP. We found this helpful to facilitate training with a wide variety of Transformer layouts (Appendix F). Tying h1 and h2 In order to reduce the number of parameters and operations in the model, and thereby the complexity, we found it useful to tie h1 and h2 by setting W2 = W1. Considerations for NLP In comparison to the MLPMixer defined in Section 2.2, the use of hypernetworks overcomes two challenges. Firstly, the input no longer has to be of fixed dimensionality. The hypernetwork generates a token mixing MLP of appropriate dimension as a function of the input. Secondly, the hypernetwork models the interaction between tokens with shared weights across all positions in the input. Hence, systematicity is ensured. ## 3 Related Work Research on all-MLP models like MLPMixer (Tolstikhin et al., 2021) is widespread in the computer vision community (Tu et al., 2022; Yu et al., 2022; Wang et al., 2022, among many others). However, they lack some desirable inductive biases for NLP, which we discuss in length in Appendix A.2. Specifically, in contrast to HyperMixer, none of the previously proposed methods simultaneously provide i) *position invariance*, which is important for generalization, ii) *adaptive size* for variable-length inputs, iii) a *global receptive field*, which allows interactions to not be limited to small token neighborhoods, iv) *learnabilty* allowing for universal applicablility to various tasks, and v) *dynamicity*, which means that token mixing is a function of the input. Consequently, only a few works have used MLP-based models as their backbone in NLP tasks. gMLP (Liu et al., 2021) serves as one of our baselines and pnlp-mixer (Fusco et al., 2022) employs standard MLPMixer on top of a novel token embedding method. Apart from all-MLP models, there is an abundance of research on efficient alternatives to standard attention layers (Katharopoulos et al., 2020; Bello, 2021, et cetera). While they don't qualify as all-MLP models, they have close connections to our work (see Appendix E) and aim at lowering the cost of AI, albeit it on fewer dimensions than our work (Appendix A.1). We employ FNet (Lee-Thorp et al., 2021) and Linear Transformers (Katharopoulos et al., 2020) as representatives of these as a baseline. ## 4 Experiments Our experiments are designed to test the following three hypotheses. H1 (Section 4.3): Since HyperMixer reflects more inductive biases that are adequate for NLP, our hypothesis is that HyperMixer performs better at NLP tasks than MLPMixer and similar MLP-based alternatives, specifically at those tasks that require to model the interactions between tokens. H2: Since HyperMixer has similar inductive biases as transformers but is considerably simpler conceptually and in terms of computational complexity, it can be seen as a low cost alternative to Transformers, reducing the cost in terms of single example processing time (Section 4.4), required dataset size (Section 4.5), and hyperparameter tuning (Section 4.6). H3 (Section 4.7): Due to its inductive biases mirroring those of Transformers, HyperMixer also learns similar patterns as the attention mechanism. ## 4.1 Datasets We evaluate on four sentence-pair classification tasks and one single-sentence classification task. The sentence-pair tasks are QQP (Iyer et al., 2017), QNLI (Rajpurkar et al., 2016), MNLI (Williams et al., 2018) and SNLI (Bowman et al., 2015). For uniformity, datasets are formatted as in the GLUE benchmark (Wang et al., 2018). We choose these tasks for two properties: firstly, they have large training datasets (Table 2, appendix) enabling reasonable performances without pretraining; secondly, solving these tasks requires good modeling of the interactions between tokens from different sentences, which is the main focus of this paper. As a control, we experiment on the single-input dataset SST2 (Socher et al., 2013), which is a sentiment classification task. Many examples in this dataset can be solved by identifying key sentiment words, rather than modeling the token interaction. ## 4.2 Baselines The following baselines can be categorized into MLP-based (to support H1) and *not MLP-based* (e.g., Transformers, to support H2). Note that our study is about the design of the *token mixing* module. Therefore, we only compare to models that fit into the general framework displayed in Figure 1, where there is a feature mixing module and a token mixing module for textual inputs. As a result, models such as RNNs are excluded. To enable a controlled experiment, we use the same feature mixing module in all models; the models only differ in their token mixing module. MLP-based The conceptually closest baseline is MLPMixer (Tolstikhin et al., 2021), which combines both token and feature mixing using fixed dimensional MLPs, as described in Section 2.2. Concurrently, (Liu et al., 2021) proposed **gMLP**, in which token mixing is achieved through weighted summation of all other inputs, similar to the attention mechanism. However, rather than computing weights as function of the inputs like in attention, in gMLP the weights are fixed learnable parameters. Additionally, linear gating initialized close to one is introduced to facilitate training. The original gMLP method does not employ feature mixing modules, as their token mixing module is capable of modeling feature interactions as well in a single gMLP block. However, for comparability we inject gMLP blocks as token mixing modules in our general architecture and keep feature mixing modules as well. Non MLP-based **Transformers** (Vaswani et al., 2017) are used in the current state of the art in virtually all NLP tasks. Their key component is the *softmax*-based self-attention module, which we use for token mixing. Linear Transformer (Katharopoulos et al., 2020) replaces softmax attention with a *featuremap based dot-product attention*. Finally, FNet (Yu et al., 2021) replaces the self-attention part of Transformers with a fixed, non-learnable set of Fourier transforms for token mixing. ## 4.3 Performance Initially we compare the performance of HyperMixer in comparison to our baselines. Thereafter, we further explore the model's benefits with respects to its cost. For comparability, we adjust the size of the token mixing components such that all models have the same number of parameters (11M). FNet is an exception since it has no learnable parameters in its token mixing component. We tune the learning rate of each model via grid-search, and report the performance of the best configuration. Further experimental details on all experiments can be found in Appendix B. Results Validation and test set results are shown in Table 1. On the test and the validation set, HyperMixer performs the best among MLP-based models on all datasets, although for SST the difference on the validation set is smaller than one standard deviation. MLPMixer generally achieves good performances, outperforming Transformers on two datasets. Comparing to non-MLP-based methods, HyperMixer also outperforms vanilla Transformers on all datasets. The differences are generally small (≤ 2 points), except on QNLI, where the difference is 3.9 points. We suspect that this discrepancy is due to the relatively small training set of QNLI. We investigate low-resource behavior of Transformers in comparison to HyperMixer in Section 4.5. FNet performs substantially worse than the other methods, particularly on SNLI and QQP. Linear Transformers achieve excellent performance on MNLI and SNLI, but perform poorly on QNLI and QQP. ![5_image_0.png](5_image_0.png) ## 4.4 Time Per Example In order to assess the efficiency of our model, we measure the wallclock-time of processing a single input (repeated 1,000 times) through the token mixing stages of HyperMixer and Transformer, respectively. As Schwartz et al. (2020) point out, wallclock time has the downside of being dependent on the specific implementation, and they therefore recommend reporting the number of floating point operations (FOPs) required by one forward pass. In Figure 2, we show wallclock time and theoretical FOPs as a function of the input length N. For short input sequences, the number of FOPs is dominated by the size of the hidden layer and hence slightly lower for Transformers than for HyperMixer. However, in practical terms we observe that HyperMixer is still faster than Transformers. At longer input sequences, the size of N starts to dominate the total complexity of Transformers, so that it becomes exceedingly slower than HyperMixer. ## 4.5 Low Resource Performance Like MLPMixer, HyperMixer is a conceptually simple architecture, as it only applies multi-layer perceptrons at its core. Simpler architectures often make for better performance on smaller scale datasets. We investigate this by varying the number of examples used for training on the three large datasets MNLI, SNLI, and QQP. For these experiments, we use the best performing learning rate found in the grid search from Section 4.3. In Figure 3, we plot the relative performance change of HyperMixer compared to Transformers as a function of subsample size. On all datasets, the relative improvement of HyperMixer over Transformers is larger when training with 10% of the dataset than with the full dataset. While the effect is small on QQP, it is particularly large on SNLI and MNLI, where HyperMixer performs almost 12-14% better with 10% of the data, while the relative improvement with the full dataset is less than 2%. ## 4.6 Ease Of Hyperparameter Tuning MLP-based token mixing has the advantage that it is conceptually simpler than self-attention, and that it is well-known how to facilitate training via mechanisms such as skip-connections and layer normalization. Both these aspects suggest that it might be easier to find hyperparameter configurations that yield good performances. In these experiments, we compare HyperMixer (with tied hypernetworks) to Transformers in this regard. As recommended in Schwartz et al. (2020), we perform a random search to tune hyperparameters and compute the expected validation performance (Dodge et al., 2019, 2021). Specifically, we tune the learning rate, whose logarithm is drawn from U(−8, −1), and the dropout probability drawn from U(0, 0.5) for 20 trials. Results In Figure 4, we show the *relative* expected validation performance, i.e., the relative performance change of HyperMixer compared to Transformer, for all five datasets. With the notable exception of QNLI, the relative improvement of HyperMixer is higher at smaller budgets than at larger budgets on all datasets. The effect is par- | Model | MNLI | SNLI | QQP | QNLI | SST | # Params | |-------------------|------------------------------------------------------------------------------|-------------|-------------|-------------|-------------|------------| | Baselines | Validation set results (average accuracy / standard deviation over 10 seeds) | | | | | | | FNet | 59.7 (0.27) | 75.3 (0.46) | 79.4 (0.28) | 59.9 (0.46) | 79.7 (0.71) | 9.5 M | | Lin. Transformer | 66.9 (0.48) | 82.7 (0.22) | 81.7 (0.28) | 61.3 (0.29) | 80.5 (0.46) | 11 M | | Transformer | 65.4 (0.51) | 80.9 (0.40) | 82.8 (0.22) | 67.3 (2.03) | 79.0 (0.86) | 11 M | | MLPMixer | 63.9 (0.34) | 79.6 (0.11) | 83.7 (0.42) | 68.1 (2.10) | 80.1 (0.67) | 11 M | | gMLP | 60.8 (0.95) | 80.5 (0.55) | 82.8 (0.21) | 60.5 (0.49) | 78.7 (0.74) | 11 M | | HyperMixer (ours) | 66.2 (0.21) | 81.9 (0.27) | 85.6 (0.20) | 78.0 (0.19) | 80.7 (0.84) | 11 M | | Baselines | Test set results (best model) | | | | | | | FNet | 59.8 | 75.3 | 78.4 | 59.6 | 80.0 | 9.5 M | | Lin. Transformer | 66.9 | 83.0 | 82.3 | 61.7 | 80.8 | 11 M | | Transformer | 65.8 | 80.7 | 82.4 | 73.2 | 79.4 | 11 M | | MLPMixer | 62.9 | 80.1 | 83.5 | 70.5 | 81.2 | 11 M | | gMLP | 61.2 | 80.9 | 82.5 | 60.2 | 79.5 | 11 M | | HyperMixer (ours) | 66.1 | 81.7 | 84.1 | 77.1 | 81.4 | 11 M | ![6_image_0.png](6_image_0.png) ticularly strong on SNLI, where HyperMixer is 6.5% better at small tuning budgets, but less than 2% better at high budgets. These results indicate that HyperMixer is substantially easier to tune than Transformers. ## 4.7 Hypermixer Learns Attention Patterns We hypothesized that the token mixing layer of HyperMixer offers a mechanism similar to attention. To show this, we consider a toy problem with 1d sequences composed of shape pairs of different heights as described in Fleuret (2019). The target value is the average height in each pair of shapes. An example input is shown in Figure 5a. To solve the task well, for each position, the model must attend to other positions with the same shape. Models We compare the token mixing layer of HyperMixer to three other models: i) *None* does not model token interactions. All predictions are thus only made based on local information. This model should thus fail. ii) *MLPMixer* does model token interactions. Still, since its token mixing weights are position-specific, each position has to learn to recognize each shape, which we expect to be difficult, especially with little data. iii) *Selfattention* can be considered the upper bound, as it models the interaction between every two positions explicitly. Results Figure 5b shows the mean squared error on the test examples depending on the number of training examples. As expected, *None* fails on this task. While all other models are able to solve the task with enough training data, MLPMixer is considerably less data-efficient than the other two models, requiring 5-10 times more data to reach the same performance. This is expected, since in contrast to HyperMixer and self-attention, MLPMixer's token mixing module is not positioninvariant. HyperMixer and self-attention reach approximately the same performance when training on 100k examples. However, HyperMixer is more data-efficient than self-attention, which we attribute to the simpler model architecture. ![7_image_0.png](7_image_0.png) We can measure the interactions between two tokens by computing the gradient of an output token with respect to an input token (pseudo-attention). Figures 5d and 5c show the pseudo-attention maps of HyperMixer in comparison to attention. We observe that the pseudo-attention weights of HyperMixer and attention are similar. This indicates that HyperMixer indeed learns an attention-like function. In contrast, we find these patterns to be weaker in MLPMixer (Figure 6, appendix). ## 5 Discussion In the following, we first discuss the merits of our proposed model, which are the core contributions of our paper. We then discuss the scope of our analysis. ## 5.1 Impact Best all-MLP model HyperMixer was designed as an MLP-based architecture with similar inductive biases as Transformers, which are beneficial for natural language understanding. Our hypothesis (H1) is that this leads to improvements over other MLP-based methods. Our experimental results support this hypothesis, as we find HyperMixer to outperform all MLP-based baselines on all datasets (Section 4.3). Low cost model The main motivation for an MLP-based architecture is the efficiency benefits induced by its simplicity. Therefore, we hypothesized (H2) that HyperMixer would reduce the cost Cost(R) ∝ E · D · H to obtain an AI result R. This hypothesis is supported by our experiments. While HyperMixer yields results that are on par with Transformer's results, it reduces the cost of all three cost factors: i) The cost of processing a single example (E) is lower, particularly for long inputs due to its linear complexity compared to the quadratic complexity of self-attention (Section 4.4). ii) The number of required training examples (D) is reduced, as HyperMixer's relative performance improvement is larger in the low-resource scenario (Section 4.5). iii) HyperMixer requires less hyperparameter tuning than Transformers to reach good results, which is demonstrated by HyperMixer's higher expected relative improvements at low tuning budgets (Section 4.6). Attention-like model Finally, our experiments on a synthetic task indicate that HyperMixer can learn very similar attention patterns as the selfattention mechanism in Transformers (Section 4.7), supporting hypothesis H3. While MLPMixer can also learn similar patterns given enough training data, we believe that it is the introduction of adequate biases that allows HyperMixer to learn these patterns efficiently. These biases were chosen based on an analysis of Transformer's success by Henderson (2020). HyperMixer's own success hence supports that analysis. In summary, in our study, HyperMixer is the bestperforming MLP-based architecture, and shows comparable performance and behavior as selfattention at substantially lower cost. HyperMixer can thus be considered a low cost alternative to Transformers. ## 5.2 Scope Small resource scenario It is important to note that our study is limited to the small resource scenario: Our models are small, not pretrained on large general-purpose corpora, and trained on datasets with fewer than 1 million examples. It is unclear if our results will also hold on larger scale. For example, while gMLP and FNet perform poorly in the low-resource scenario as demonstrated in our experiments, both models are able to narrow the gap to Transformer-based models as the resources for pretraining increase (Liu et al., 2021; Lee-Thorp et al., 2021). We hypothesize that with enough resources, these models are able to overcome their shortcomings in terms of inductive biases. However, there is no reason to believe that HyperMixer, being equipped with useful inductive biases, wouldn't perform on par with Transformers in high-resource scenarios while retaining its lower overall cost. Quite the contrary, HyperMixer's linear complexity in sequence length perhaps makes it more appropriate for large-scale pretraining on long contexts than vanilla Transformers. Versatility One of the most impressive qualities of Transformers is their versatility: Not only are they now the standard architecture for all NLP tasks, but over the years they have also become ubiquitous in a wide range of applications domains outside of NLP. Of course, the present study cannot determine whether HyperMixer is as versatile as Transformers. However, subsequent studies have shown that HyperMixer has uses in speech recognition (Mai et al., 2023) and neural combinatorial optimization (Drakulic et al., 2023). Still, some modeling advancements are needed. For example, HyperMixing is not yet applicable for decoder models that make use of causal masking. As decoderonly language models have become widely studied, this constitutes promising future work. ## 6 Conclusion While large pretrained Transformer language models have led to impressive progress, they require so much resources that many research labs are excluded from participation, leading to calls for *Green AI*. We have proposed an MLP-based method, HyperMixer, that, in contrast to previous MLP-based methods, is equipped with the same inductive biases that made Transformers so successful for natural language understanding. While it performs on par with Transformers, it incurs substantially lower cost in terms of processing time, training data, and hyperparameter tuning. Hence, we believe our study demonstrates the merits of MLP-based models for natural language understanding as an alternative to attention-based models, and we hope that the community pursues this direction further. Avenues for future work include large-scale pretraining, evaluation on a wider range of tasks and domains, and the model's adaptation to text generation. ## Limitations Many limitations of our study are already discussed in Section 5.2, however, we repeat and add to them explicitly here. Small resource scenario Our study investigates MLP-based architectures for text classification tasks and finds competitive performance with vanilla Transformers while having lower cost in terms of the Green AI equation. However, the scope of our findings is naturally limited to the testing scenario, which is low-resource: Our models are relatively small, not pretrained on large generalpurpose corpora, and trained on datasets with fewer than 1 million examples. We may not say with certainty that our results will also hold on larger scale. For the sake of hypothesis-driven research we consider it more valuable to run many controlled small-scale experiments rather than few large-scale experiments. Nonetheless, scaling up should certainly be part of future research directions, as this is essential for optimal task performance. Limitation to English pairwise sentence classification tasks Since token mixing is the independent variable in our study, we put our main focus on English sentence-pair classification tasks with textual input only, which we presume (and provide some evidence for) to be most useful to assess differences between token mixing models. Of course, vanilla Transformers are very flexible in the sense that, over the course of many studies, they have been shown to be very effective for a wide range of tasks, languages and data modalities. Whether or not the proposed HyperMixer model possesses similar flexibility cannot be answered in this study. The HyperMixer encoder arguably possesses similar inductive biases as Transformers. We thus expect it to be straight-forward to apply to tasks that are also solved well by Transformer encoders (e.g., span classification). For tasks such as language modeling, which involve a Transformer decoder, significant modeling advancements are required to obtain a HyperMixer equivalent. We consider this a very promising direction for future work. Limitation to MLP-based baselines Similar to a trend in the computer vision community, our study investigates the suitability of MLP-based architectures for NLP. Due to their conceptual simplicity, these models promise to be easier to train, potentially leading to reduced Green AI costs. To this end we compare our proposed HyperMixer model to a range of other MLP-based models, and Transformers. Apart from FNet and Linear Transformers, which are efficient Transformer alternatives, we do not attempt an exhaustive comparison to non-MLP-based efficient NLP models. Hence, the scope of our claims does not extend to all *efficient* Transformer models. However, these models are of course very relevant to this study, as they are targeted towards one of the factors of Green AI cost (single forward pass complexity). Therefore, we regard a comprehensive comparison as valuable future work. ## Acknowledgements Florian Mai was supported by the Swiss National Science Foundation under the project LAOS, grant number 200021_178862. Arnaud Pannatier was supported by the Swiss Innovation Agency Innosuisse under the project MALAT, grant number "32432.1 IP-ICT". Fabio Fehr was supported by the Swiss National Centre of Competence in Research (NCCR) under the project Evolving Language, grant number "51NF40_180888". Haolin Chen was supported by the Swiss National Science Foundation under the project NAST, grant number "185010". François Marelli was supported by the Swiss National Science Foundation under the project COMPBIO, grant number "179217". ## References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. *arXiv preprint* arXiv:1607.06450. Thomas Bachlechner, Bodhisattwa Prasad Majumder, Henry Mao, Gary Cottrell, and Julian McAuley. 2021. Rezero is all you need: Fast convergence at large depth. In *Uncertainty in Artificial Intelligence*, pages 1352–1361. PMLR. Irwan Bello. 2021. Lambdanetworks: Modeling longrange interactions without attention. In *International* Conference on Learning Representations. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(10):281–305. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Jiaao Chen, Derek Tam, Colin Raffel, Mohit Bansal, and Diyi Yang. 2021. An empirical survey of data augmentation for limited data learning in nlp. arXiv preprint arXiv:2106.07499. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. *arXiv preprint* arXiv:1904.10509. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A. Smith. 2019. Show your work: Improved reporting of experimental results. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2185– 2194, Hong Kong, China. Association for Computational Linguistics. Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A. Smith. 2021. Expected validation performance and estimation of a random variable's maximum. In *Findings of the Association* for Computational Linguistics: EMNLP 2021, pages 4066–4073, Punta Cana, Dominican Republic. Association for Computational Linguistics. Darko Drakulic, Sofia Michel, Florian Mai, Arnaud Sors, and Jean-Marc Andreoli. 2023. Bqnco: Bisimulation quotienting for generalizable neural combinatorial optimization. *arXiv preprint* arXiv:2301.03313. Jeffrey L Elman. 1990. Finding structure in time. *Cognitive science*, 14(2):179–211. François Fleuret. 2019. Attention mechanisms. Deep Learning Course - Chapter 13.2. Jerry A Fodor and Zenon W Pylyshyn. 1988. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2):3–71. Francesco Fusco, Damian Pascual, and Peter Staar. 2022. pnlp-mixer: an efficient all-mlp architecture for language. *arXiv preprint arXiv:2202.04350*. Jianyuan Guo, Yehui Tang, Kai Han, Xinghao Chen, Han Wu, Chao Xu, Chang Xu, and Yunhe Wang. 2021. Hire-mlp: Vision mlp via hierarchical rearrangement. David Ha, Andrew Dai, and Quoc V Le. 2016. Hypernetworks. *arXiv preprint arXiv:1609.09106*. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Michael A Hedderich, Lukas Lange, Heike Adel, Jannik Strötgen, and Dietrich Klakow. 2020. A survey on recent approaches for natural language processing in low-resource scenarios. *arXiv preprint* arXiv:2010.12309. James Henderson. 2020. The unstoppable rise of computational linguistics in deep learning. pages 6294– 6306. Association for Computational Linguistics. Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pages 2790–2799. PMLR. Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. 2017. First quora dataset release: Question pairs. Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, and James Henderson. 2021. Parameterefficient multi-task fine-tuning for transformers via shared hypernetworks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 565–576, Online. Association for Computational Linguistics. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. 2020. Transformers are rnns: Fast autoregressive transformers with linear attention. In *International Conference on Machine* Learning, pages 5156–5165. PMLR. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, and Santiago Ontanon. 2021. Fnet: Mixing tokens with fourier transforms. Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, et al. 2021. Datasets: A community library for natural language processing. *arXiv* preprint arXiv:2109.02846. Dongze Lian, Zehao Yu, Xing Sun, and Shenghua Gao. 2022. AS-MLP: An axial shifted MLP architecture for vision. In International Conference on Learning Representations. Hanxiao Liu, Zihang Dai, David So, and Quoc V Le. 2021. Pay attention to mlps. In *Advances in Neural* Information Processing Systems, volume 34, pages 9204–9215. Curran Associates, Inc. Florian Mai, Juan Zuluaga-Gomez, Titouan Parcollet, and Petr Motlicek. 2023. Hyperconformer: Multihead hypermixer for efficient speech recognition. In Interspeech 2023. ISCA. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Timo Schick and Hinrich Schütze. 2020. It's not just size that matters: Small language models are also few-shot learners. *arXiv preprint arXiv:2009.07118*. Roy Schwartz, Jesse Dodge, Noah A Smith, and Oren Etzioni. 2020. Green ai. Communications of the ACM, 63(12):54–63. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In *In Workshop at International Conference* on Learning Representations. Citeseer. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in nlp. *arXiv preprint arXiv:1906.02243*. Chuanxin Tang, Yucheng Zhao, Guangting Wang, Chong Luo, Wenxuan Xie, and Wenjun Zeng. 2021. Sparse mlp for image recognition: Is self-attention really necessary? Yuki Tatsunami and Masato Taki. 2021. Raftmlp: How much can be done without attention and with less spatial locality? Yi Tay, Zhe Zhao, Dara Bahri, Don Metzler, and DaCheng Juan. 2021. Hypergrid transformers: Towards a single model for multiple tasks. In *ICLR 2021*. Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. 2021. Mlp-mixer: An all-mlp architecture for vision. Advances in Neural Information Processing Systems, 34. Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. 2022. Maxim: Multi-axis mlp for image processing. *arXiv* preprint arXiv:2201.02973. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Ben Wang and Aran Komatsuzaki. 2021. Gpt-j-6b: A 6 billion parameter autoregressive language model. Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F Wong, and Lidia S Chao. 2019. Learning deep transformer models for machine translation. In *Proceedings of the 57th Annual* Meeting of the Association for Computational Linguistics, pages 1810–1822. Ziyu Wang, Wenhao Jiang, Yiming Zhu, Li Yuan, Yibing Song, and Wei Liu. 2022. Dynamixer: A vision mlp architecture with dynamic mixing. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Tan Yu, Xu Li, Yunfeng Cai, Mingming Sun, and Ping Li. 2022. S2-mlp: Spatial-shift mlp architecture for vision. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)*, pages 297–306. Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, and Shuicheng Yan. 2021. Metaformer is actually what you need for vision. Andrey Zhmoginov, Mark Sandler, and Max Vladymyrov. 2022. Hypertransformer: Model generation for supervised and semi-supervised few-shot learning. arXiv preprint arXiv:2201.04182. ## Appendix A Extended Related Work A.1 Green Ai Schwartz et al. (2020) challenges the current pursuit for higher accuracy at the cost of larger computation with the notion of "Green AI". Moreover, Strubell et al. (2019) estimated the monetary and environmental cost of large model pretraining. Apart from being problematic environmentally, they argue that the monetary cost of pretraining is too high to be widely accessible for most researchers. In a research community that focuses on task performance, low resourced researchers would be disadvantaged. Therefore, metrics that take the cost of reaching a result are important to consider (Schwartz et al., 2020). The metric Cost(R) ∝ E · D · H, is proposed and discussed in Section 1. However, reporting a single metric Cost(R) is often ambiguous. Therefore, in our experiments, we consider the factors E, D, and H. To measure the computational cost per example E, Schwartz et al. (2020) propose a count of the floating point operations (FPOs) required. In our experiments, we adopt this metric and further include wall-clock time for a practical application. The component D evaluates the quantity of training data needed to reach a given accuracy or the performance of a model in a low-resource scenario (Hedderich et al., 2020; Chen et al., 2021). Finally, the component H measures the cost associated with hyperparameter tuning. This is reported using *expected validation performance* introduced by Dodge et al. (2019, 2021), which computes the validation performance one would yield in expectation after k hyperparameter trials of random search (Bergstra and Bengio, 2012). Current literature does not focus on all facets of Green AI as formalized as *Cost*(R). Typically, improving efficiency involves making existing models more accessible. For example, improving accessibility through model distillation (Sanh et al., 2019) or adapter modules (Houlsby et al., 2019). Another avenue involves reducing the computational complexity, with examples: prompttuning (Schick and Schütze, 2020), self-attention in Transformers (Child et al., 2019; Beltagy et al., 2020; Katharopoulos et al., 2020, et cetera). The latter approach is similar to our work. However, they focus the processing time of a single example E and do not consider the other facets of Green AI. In our paper, we focus on MLP-based approaches, which we argue will have improvements in all facets of Green AI due to their simplicity. ## A.2 Mlp-Based Models The vision domain has seen promising results with purely MLP-based models (Tolstikhin et al., 2021), however, they lack the desired inductive biases for NLP. Some desirable properties for modeling language include: i) *position invariance*, which is important for generalization, ii) *adaptive size* for variable-length inputs, iii) a *global receptive field*, which allows interactions to not be limited to small token neighborhoods, iv) *learnabilty* allowing for universal applicablility to various tasks, and v) *dynamicity* which implies that output is conditioned on the input. MLP-based models are typically not used for NLP as including the inductive biases of position invariance, adaptive size and global receptive field are non-trivial for MLPs. Several methods try to overcome the lack of adaptivity to size by introducing shifting operations and local windows. Yu et al. (2022) and Lian et al. (2022) uses spatial shifting to pass the information of adjacent tokens through an MLP. (Tang et al., 2021) uses a circular shifting operator. However, the position invariance is violated because positional information is required in the decision of which tokens are included in the neighborhood. The aggregation of local information itself is done via a (relative) position-specific MLP. Global interactions are modeled only through the inclusion of enough layers or through a hierarchical layout (Yu et al., 2022; Guo et al., 2021). For vision tasks it can be useful to exploit the fact that 2D images consist of two axes. Tatsunami and Taki (2021) make use of this fact by integrating a respective inductive bias. (Tu et al., 2022) achieve linear complexity by applying a gMLP (Liu et al., 2021) to only a single axis. A global receptive field in MLP-based models is achieved through token mixing and a weighted summation of the inputs, similar to self-attention. This allows for interaction between tokens. Liu et al. (2021) propose the model gMLP, where the mixing weights are determined by a fixed learnable interaction matrix between positions. However, this comes at the cost of violating position-invariance, size adaptivity, and dynamicity. DynaMixer (Wang et al., 2022) enables dynamicity by estimating the mixing weights from the concatenation of the inputs via a linear layer. This is efficient due to a dimensionality reduction step, but the concatenation still implies position-dependence and fixed-sized inputs. (Lee-Thorp et al., 2021) proposes the model FNet to use static Fourier transformations to model token interactions. This model made significant improvements in computation cost, although the functions lack learnability and are position dependent. ## A.3 Hypernetworks A hypernetwork uses a network to generate the weights for another, often larger, network (Ha et al., 2016). Tay et al. (2021) leveraged task-conditioned hypernetworks for the GLUE benchmark. They achieved paralleled performance to the state-ofthe-art at the time, whilst being more parameter efficient. Karimi Mahabadi et al. (2021) applied hypernetworks to Transformers to allow for parameter sharing in multitask learning. Their results showed parameter efficiencies and improved out of domain generation. Zhmoginov et al. (2022) combine hypernetworks and transformers in the vision domain for few shot generalization. LambdaNets are strongly related to our work, as they generate linear functions from context, in a similar capacity to a hypernetwork (Bello, 2021). Their model is similar to the standard attention mechanism where the weights of three matrices *Q, K, V* are learned. In contrast, HyperMixer uses the inputs to create non-linear transformations by generating an MLP. Features are combined based on their locations - a comparison can be found in Appendix E. Combining MLPMixer and hypernetworks allows for an efficient and simple MLP-based model to have all the necessary inductive biases for NLP. The MLPMixer provides a simple token interaction backbone. By deploying hypernetworks to build the weights of the token mixing MLP, the missing inductive biases of position invariance and size adaptation are obtained. ## B Experimental Details B.1 General Information Implementation We implemented all models within the same general framework based on PyTorch (Paszke et al., 2019). We provide the code in the supplementary material. For tokenization, we use the pretrained tokenizer from BERT-Base (Devlin et al., 2019). Datasets are downloaded directly from HuggingFace Datasets (Lhoest et al., 2021). As such, they are directly downloaded by our training script. We apply no further preprocessing. For computing expected validation performance, we use the public implementation by Dodge et al. (2019). We run our experiments on single-GPU servers available to us as part of a computation grid, ranging between GeForce GTX Titan X and RTX 3090. Apart from Transformers on SNLI and MNLI, which take about 4 hours on slower GPUs, all experiments finished within 3 hours. Hyperparameters We provide CSV files detailing all parameters of every run alongside their results in the supplementary material, ensuring reproducibility of our study. Note that the computation environment (e.g., type of GPU) might lead to small differences. ## B.2 Peak Performance To ensure a fair comparison, we aim to compare models of approximately the same number of parameters (≈11 M parameters). All models have 6 layers with token embedding size d = 256 and hidden size d′ = 512. For MLPMixer and gMLP we set the size of the token mixing modules to N = 250 and N = 100, respectively. These lengths are chosen to match the number of parameters of the other models (11 M). The hidden layer size is set to 512 in all models. We use dropout at the input to each layer with a probability of 0.1. For all models, including the ablations, we first tune the learning rate of Adam (Kingma and Ba, 2014) using a logarithmically spaced grid of 7 values α ∈ {0.001, 0.0005, 0.0002, 0.0001, 0.00005, 0.00002, 0.00001} on the validation set. For our baselines, we then evaluate 10 different seeds and report the mean accuracy and standard deviation on the validation set. On the test set, we only report the results of the model yielding the best results on the validation set, as the GLUE benchmark (Wang et al., 2018) has a hidden test set with limited access. Ablations are evaluated on the validation set with a single seed. ## B.3 Time Per Example Due to the lack of reliable software to measure FOPs in PyTorch, we calculate these numbers manually. Our process is described in Appendix D. For the measurement of wallclock time, we measured the time of 1,000 batches through a single layer of | Dataset | # Train | # Valid | # Test | |-----------|-----------|-----------|----------| | MNLI | 392,702 | 9,815 | 9,796 | | SNLI | 549,367 | 9,842 | 9,824 | | QQP | 363,846 | 40,430 | 390,965 | | QNLI | 104,743 | 5,463 | 5,463 | | SST | 67,349 | 872 | 1,821 | Table 2: Number of examples in each dataset. each token mixing module with d = 256, d′ = 512 (as used in our experiments). ## B.4 Toy Task (Section **4.7)** This section gives more detail about how we set up the synthetic example (Fleuret, 2019) for evaluating whether the different models were able to learn some attention-like transformation. We have a dataset made of 1D sequences that contain two rectangular and two triangular shapes. Each of these shapes has a different height taken at random in the input sequence. The output sequence has the same shapes in the same positions, but the heights of triangular shapes should be the mean of the two triangular shapes in the input sequence. Similarly, the height of the rectangular shapes in the output sequence is the mean of the height of the two rectangular shapes in the input sequence. So the model should be able to see across the sequence and compute the mean of the two different shapes to succeed at the task. All the models considered for this task have a similar structure: they consist of a particular layer (MLPMixer, HyperMixer, or Attention) surrounded by two pairs of 1D-convolutional layers with kernels of size five and a symmetric zero-padding of size two so that the output shape is constant. We made an ablation to ensure that this layer was mandatory by changing it with another similar 1D convolutional layer, which corresponds to None in the figure 5b. Before visualizing the pseudo-attention maps, all models were trained on 25,000 training examples. We use input-gradients (Simonyan et al., 2014) to evaluate whether models could « attend » to the different shapes. This method computes the gradient of the output sequence with respect to the input sequence, giving the corresponding saliency map, which can then be recombined into a pseudo-attention matrix where the i-th column corresponds to the saliency maps of the i-th output token. A large value in the (*i, j*) entries of the pseudo-attention matrix means that the output token i strongly depends on the input j, and we can thus compare it to an attention matrix 6a. Figure 6 represents the pseudo-attention matrices for the different models. We can notice that it indeed approximates the true attention matrix 6a and that the model with no special layer cannot attend to the correct part of the sequence, as expected. Finally, we can see that the pseudo-attention of the Mixer layer is not as peaked as the one corresponding to the Attention or HyperMixer layer. ## C Further Results C.1 Validation Set Results In Table 3, we show the best scores on the validation set that we obtained from the grid search (using a fixed seed), alongside the learning rate that yielded that score. In Section 4.3, we reported the test set results of all models when using the best-performing seed. In Table 4, we show test set results when using the median seed. ## C.2 Ablations We first describe the ablation models before we discuss their results. Feature Mixing Only The most simplistic MLP architecture is one that doesn't use token mixing, i.e., the token mixing module is set to the identity function. The outputs at the last layer are aggregated via average pooling before plugged into the linear classifier. This allows a baseline where the token interactions are not modeled. Therefore, this architecture serves as a control for how important token mixing is in any given task. Token Mixing Only A simplistic single layer MLP architecture ablation. This model consists of a variable dimension MLP where the weights are generated using a hypernetwork which only allows for location interaction. This model is included to argue that the best simple model requires both location and feature mixing to efficiently model textual inputs. Shared Weight-Vector A simple way to obtain a variable size location-mixing MLP is by weightsharing. Concretely, we use a single learnable weight vector w1 ∈ R d′, which we copy N times to create a weight matrix W1 ∈ R N×d′. Analogously, we create W2 from a separate vector w2. Note that this baseline does not support dynamicity, as the | Model | MNLI | SNLI | QQP | QNLI | SST | # Params | |--------------------------------------|-------------|--------------|-------------|-------------|-------------|------------| | Baselines (accuracy / learning rate) | | | | | | | | FNet | 59.6 / 5e-4 | 75.1 / .001 | 79.7 / .001 | 59.2 / 5e-4 | 80.4 / .001 | 9.5 M | | Linear Transformer | 66.2 / .001 | 82.2 / 0.001 | 81.7 / 5e-4 | 61.1 / 1e-4 | 80.7 / 2e-4 | 11M | | Transformer | 66.0 / 2e-4 | 81.2 / 2e-4 | 82.9 / 2e-4 | 65.4 / 5e-4 | 78.9 / 5e-4 | 11 M | | MLPMixer | 64.2 / .001 | 80.5 / .001 | 83.6 / .001 | 68.7 / 5e-5 | 82.3 / .001 | 11 M | | gMLP | 61.5 / .001 | 80.9 / 2e-4 | 83.0 / 5e-4 | 61.1 / 5e-5 | 79.2 / 1e-4 | 11 M | | HyperMixer (tied) | 66.5 / 1e-4 | 81.8 / 2e-4 | 85.4 / 1e-4 | 77.5 / 5e-5 | 81.3 / 5e-4 | 11 M | | Ablations (accuracy / learning rate) | | | | | | | | Feature Mixing only | 54.4 / .001 | 67.2 / 5e-4 | 75.9 / .001 | 61.0 / .001 | 81.8 / 5e-4 | 9 M | | Token Mixing only | 59.5 / 2e-4 | 73.6 / 2e-4 | 81.7 / 2e-4 | 61.8 / 2e-4 | 80.1 / 5e-4 | 9 M | | Shared Weight-Vector | 53.7 / 5e-4 | 68.1 / .001 | 83.0 / .001 | 66.4 / 5e-5 | 80.5 / .001 | 9.5 M | | HyperMixer (untied) | 66.0 / .001 | 82.3 / .001 | 84.6 / .001 | 72.2 / 5e-5 | 81.3 / .001 | 12 M | Table 3: Best validation set results on natural language understanding tasks after tuning the learning rate on a grid. Model MNLI SNLI QQP QNLI SST # **Params** Baselines FNet 58.8 75.2 78.4 59.0 80.2 9.5 M Lin. Transformer 67.0 81.9 82.3 61.0 82.5 11 M Transformer 64.9 81.1 82.1 67.1 77.7 11 M MLPMixer 62.6 79.7 83.2 69.1 80.8 11 M gMLP 62.9 79.9 82.3 60.0 78.5 11 M HyperMixer (tied) 64.9 81.0 83.9 76.8 80.9 11 M Table 4: Test set results on natural language understanding tasks, when using the median seed. weight vector is independent of the inputs. This baseline thus shows the importance of dynamicity in our model. Results Results are shown in Table 5. Untying the hypernetworks in HyperMixer leads to slightly decreased performance on all datasets. We hypothesize that without pretraining, the model cannot benefits from more capacious token interaction modeling introduced by untying. Nonetheless, the untied model still performs or a little better than vanilla Transformers. While the introduction of MLPMixer and similar models follows a trend towards conceptually more simplistic models, our ablations show, perhaps unsurprisingly, that simplicity is not better when it leads to discarding information, as both the FeatureMixing only and Location-Mixing only models perform substantially worse than the full HyperMixer model. Moreover, it is not enough to use the same learnable weight vector for all positions (Shared Weight-Vector), indicating the importance of generating the MLP based on the input. The simplistic Feature-Mixing only model performs poorly on all datasets except SST, where it performs as well as the other models. This indicates that many instances in SST can be solved by looking at individual tokens alone, rather than modeling their interactions. ## C.3 Visualizing Attention Patterns Figure 6 shows the pseudo-attention of all models (except 'None') alongside the true attention weights of attention. First, it should be noted that pseudo-attention weights offer a somewhat blurry version of true attention weights, where high weights occur at positions that correspond to the same shape (cmp. 6a to 6b). Second, we observe that the pseudo-attention weights of HyperMixer and attention (cmp. Figure 6d to 6b) are similar. This indicates that HyperMixer indeed learns an attention-like function. Third, MLPMixer also shows a similar pattern, but the relevant positions have weak connections (Figure 6c). This confirms our finding that MLPMixer requires substantially more training data to learn strong connections. | Model | MNLI | SNLI | QQP | QNLI | SST | # Params | |----------------------|------------------------------------------------------------------------------|-------------|-------------|-------------|-------------|------------| | Ablations | Validation set results (average accuracy / standard deviation over 10 seeds) | | | | | | | Feature Mixing only | 54.5 (0.25) | 67.0 (0.14) | 75.9 (0.06) | 60.8 (0.42) | 79.7 (0.64) | 9 M | | Token Mixing only | 59.0 (0.79) | 74.5 (5.53) | 79.5 (4.63) | 61.8 (1.29) | 76.3 (4.94) | 10 M | | Shared Weight-Vector | 57.1 (2.38) | 74.3 (1.96) | 82.9 (0.10) | 65.9 (0.42) | 79.8 (0.52) | 9.5 M | | HyperMixer (untied) | 65.8 (0.46) | 81.7 (0.30) | 84.8 (0.23) | 73.3 (0.53) | 80.3 (0.35) | 12 M | | HyperMixer (tied) | 66.2 (0.21) | 81.9 (0.27) | 85.6 (0.20) | 78.0 (0.19) | 80.7 (0.84) | 11 M | Table 5: Mean and standard deviation of HyperMixer ablations on the validation set. ![17_image_0.png](17_image_0.png) ## D Comparison Of #Fop We want to compute the number of floating-point operations needed in self-attention vs. HyperMixing for a single example. Let N be the sequence length, d be the embedding size of each token, and d′the hidden dimension. For simplicity, we will assume basic mathematical operators like exp,tanh, √x and division to be equal to one floating operation. However, their actual cost is higher but depends on implementation and hardware. ## D.1 Basic Building Blocks We first compute the number of operations infrequently occurring in basic building blocks of neural networks. Matrix Multiplication Multiplying matrix A ∈ R N×d A ∈ R d×M takes 2d(NM) operations, as 2d operations are needed for a single dot-product and there are NM entries in the resulting matrix. Linear Layer Passing a single vector of size d through a linear layer without bias of size (*d, d*′) is the multiplication of a single vector with a matrix, i.e., incurs 2dd′ operations in total. GELU GELU is usually approximated as $$\operatorname{GELU}(x)=0.5x\left[1+\operatorname{tanh}\left({\sqrt{2/\pi}}(x+c x^{3})\right)\right]$$ So in total, GELU is computed for every of the d features and every of the N vectors, meaning the GELU activation layer takes 9dN operations. MLP (input = output size) Given hidden size d′and input/output size d, we have two linear layers of size (*d, d*′) and (d′, d), respectively, plus a GELU layer on d′ dimensions, incurring 4dd′+9d′. MLP (input /= output size) Given hidden size d′, input size d and output size d′′, we have two linear layers of sizes (*d, d*′) and (d′, d′′), incurring 2dd′ + 2d′d′′ + 9d′. Softmax Softmax is applied over N values, each of which goes through an exp and a division by the normalization value. The normalization value requires N additions. So in total, the number of operations is 3N. ## D.2 Hypermixer HyperNetwork (tied case) In the tied case, we have one MLP that generates an output for each vector, so the number of operations needed for an MLP of input and hidden size d and output sizes d′: N(2d 2 + 2dd′ + 9d) Mixing MLP The mixing MLP has input and output size N and hidden size d′, which is applied to each of the d embedding dimensions (i.e., after transposition), incurring d(4d′N + 9′) operations in total. Total: The total number of operations in HyperMixer is d(4N d′ + 9d′) + N(2d 2 + 2d′d + 9d) ## D.3 Self-Attention Multi-head self-attention with h heads applies selfattention independently to each head consisting of vectors of size d/h, respectively. Self-attention consists of - 3 linear layers to transform queries, keys, and values: 6h(d/h) 2 - h matrix multiplications with sizes N(d/h), totalling 2h(d/h)N2 operations - softmax: 3N - a weighted average for each of the inputs, consisting of (2dN2) operations. In total: 6h(d/h) 2+hN22(d/h)+3N +(2dN2) ## E Connection With Lambda Layers And Linear Transformer We saw in Section 4.7 that HyperMixer was able to allow a form of attention without computing an attention matrix directly and thus scaling only linearly with the input length. In that regard, this method is similar to other methods such as (Bello, 2021) or (Katharopoulos et al., 2020). We will describe here the difference between these approaches and our method. Let us write the standard attention formula and the HyperMixer layer under the following form: Attention(Q, K,V ) = softmax(QKT)V (2) $\mathbf{T}$ ) $\mathbf{U}$ $$,\,V\,)=$$ HyperMixer(X) = $W_{1}\sigma(W_{2}^{T}X)$ (3) where Q, K,V ,W1,W2 ∈ R N×d′, X ∈ R N×d and W1,W2 are the weights generated by the hypernetwork. We can notice that the two operations differ mainly in the non-linearity location and the uses of linear or non-linear projection of the input. Indeed, attention applies a non-linearity to QKT and uses linear projection of the input (Q, K,V ) to construct the attention map. On the contrary, HyperMixer uses two linear mapping of the input (W1,W2) and applies a non-linearity to WT 2 X, which is similar in a way to KTV . The quadratic cost of the attention layer comes from the place of the non-linearity as it requires the explicit computation of QKT ∈ R N×N which is quadratic with respect to the input size. Most of the strategies used to overcome this quadratic cost generally find a way of moving this non-linearity. This is the case of (Katharopoulos et al., 2020) which applies non-linearities ϕ independently to Q and K and (Bello, 2021) that applies softmax only to K. In that regard, these two methods can be compared with HyperMixer as they all scale linearly with the input size due to the non-linearity location. Still, HyperMixer is conceptually different because it uses a non-linear transformation of the input and because it uses, in our opinion, a simpler and more understandable design entirely based on MLPs. ## F Ablations On Transformer Layout While all Transformer layouts have a feature mixing and a token mixing component in each layer, the arrangement and connection of these components through skip connections and normalization layers remains an open question. The original Transformer paper (Vaswani et al., 2017) uses what is now known as the "post-norm" layout: x $$\mathbf{x}^{o u t}$$ = $\text{LayerNorm}(\mathbf{x}+\text{token\_mixing}(\mathbf{x}))$ = $\text{LayerNorm}(\mathbf{x^1}+\text{feature\_mixing}(\mathbf{x^1}))$ . where x ∈ R N×dis the input to the layer, and x out ∈ R N×dis the output of the layer. (Wang et al., 2019) proposes the "pre-norm" layout: x 1 = x + token_mixing(LayerNorm(x)) x out = x 1 + feature_mixing(LayerNorm(x 1)) $\mathbf{\textcolor{red}{\text{-}}c}$ means. (Bachlechner et al., 2021) proposes the "ReZero" normalization, which introduces a learnable scalar α ∈ R, initialized to zero: $\Theta(-\pi/2)=\Theta(\pi/2)$. $$\begin{array}{r l}{\mathbf{x}^{1}}&{{}=\mathbf{x}+\alpha_{1}\cdot{\mathrm{token\_mixing}}(\mathbf{x})}\\ {\mathbf{x}^{o u t}}&{{}=\mathbf{x}^{1}+\alpha_{2}\cdot{\mathrm{feature\_mixing}}(\mathbf{x}^{1})}\end{array}$$ (Wang and Komatsuzaki, 2021) observe that a speed-up can be obtained by parallelizing the two components: $$\begin{array}{c}{{\mathbf{x}^{o u t}=\mathbf{x}+\mathrm{token\_mixing}(\mathrm{LayerNorm}(\mathbf{x}))}}\\ {{\qquad\qquad+\mathrm{feature\_mixing}(\mathrm{LayerNorm}(\mathbf{x}))}}\end{array}$$ . Finally, (Chowdhery et al., 2022) call the following the "standard serialized" formulation: x 1 = x + token_mixing(LayerNorm(x)) x out = x + feature_mixing(LayerNorm(x 1)). As Figure 1 shows, this is the model we have fixed for all previous experiments. In the following, we combine each of the presented layouts with self-attention and HyperMixing, respectively. Since we noticed early that the training with HyperMixing is not stable with some of the layouts, we also experimented with adding two different kinds of normalization to HyperMixer: layer normalization applied after TM-MLP, as shown in Algorithm 1, and length normalization. For the latter, we simply scale the generated weight matrices by 1M , where M is the number of keys. The intuition is that this keeps the magnitude of activations in the hidden layer of TM-MLP approximately the same across different input lengths. Results Table 6 shows the best validation set results after tuning the learning rate using a logarithmically spaced grid of 7 values α ∈ {0.001, 0.0005, 0.0002, 0.0001, 0.00005, 0.00002, 0.00001}. The results show that self-attention is relatively insensitive with respect to the type of layout, as all models except for ReZero attain an accuracy of 7677% on average. In contrast, HyperMixer without normalization performs substantially worse with prenorm, ReZero, and the parallel layout. Length normalization mitigates this problem to some degree, but the addition of layer normalization yields the overall best results, where all models achieve between 77 and 78% of accuracy on average. We, therefore, recommend adding layer normalization by default when using HyperMixing in a new context. | Layout | MNLI | SNLI | QQP | QNLI | SST | Average | |------------------------------------|--------|--------|-------|--------|-------|-----------| | Multi-head self-attention | | | | | | | | serialized | 65.71 | 80.88 | 82.99 | 69.67 | 79.70 | 75.79 | | post-norm | 66.13 | 81.70 | 84.31 | 71.54 | 79.70 | 76.68 | | pre-norm | 66.60 | 80.59 | 82.96 | 73.13 | 80.73 | 76.80 | | ReZero | 56.83 | 70.85 | 77.72 | 63.44 | 78.10 | 69.39 | | parallel | 66.30 | 81.46 | 83.12 | 71.55 | 79.70 | 76.43 | | HyperMixing (no normalization) | | | | | | | | serialized | 66.18 | 81.63 | 85.59 | 78.4 | 81.65 | 78.69 | | post-norm | 62.59 | 79.49 | 82.37 | 76.75 | 80.39 | 76.32 | | pre-norm | 56.62 | 78.49 | 82.88 | 64.18 | 81.08 | 72.65 | | ReZero | 35.45 | 33.82 | 63.18 | 49.46 | 49.08 | 46.20 | | parallel | 60.37 | 79.71 | 83.62 | 65.24 | 80.16 | 73.82 | | HyperMixing (length normalization) | | | | | | | | serialized | 65.91 | 81.27 | 85.27 | 77.80 | 81.88 | 78.43 | | post-norm | 62.67 | 79.46 | 82.61 | 76.53 | 80.39 | 76.33 | | pre-norm | 64.83 | 80.71 | 84.41 | 76.31 | 81.65 | 77.58 | | ReZero | 35.45 | 33.82 | 63.18 | 70.31 | 54.13 | 51.38 | | parallel | 65.37 | 81.12 | 84.44 | 76.77 | 80.28 | 77.60 | | HyperMixing (layer normalization) | | | | | | | | serialized | 66.47 | 81.36 | 85.74 | 77.72 | 80.50 | 78.36 | | post-norm | 64.26 | 80.05 | 83.81 | 76.62 | 80.85 | 77.12 | | pre-norm | 64.72 | 81.05 | 83.81 | 76.11 | 81.54 | 77.45 | | ReZero | 65.64 | 80.74 | 84.45 | 74.41 | 81.08 | 77.26 | | parallel | 65.49 | 80.59 | 84.43 | 76.53 | 81.65 | 77.74 | Table 6: Best validation set results on natural language understanding tasks after tuning the learning rate on a grid. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
inaguma-etal-2023-unity
{U}nit{Y}: Two-pass Direct Speech-to-speech Translation with Discrete Units
https://aclanthology.org/2023.acl-long.872
Direct speech-to-speech translation (S2ST), in which all components can be optimized jointly, is advantageous over cascaded approaches to achieve fast inference with a simplified pipeline. We present a novel two-pass direct S2ST architecture, UnitY, which first generates textual representations and predicts discrete acoustic units subsequently. We enhance the model performance by subword prediction in the first-pass decoder, advanced two-pass decoder architecture design and search strategy, and better training regularization. To leverage large amounts of unlabeled text data, we pre-train the first-pass text decoder based on the self-supervised denoising auto-encoding task. Experimental evaluations on benchmark datasets at various data scales demonstrate that UnitY outperforms a single-pass speech-to-unit translation model by 2.5-4.2 ASR-BLEU with 2.83x decoding speed-up. We show that the proposed methods boost the performance even when predicting spectrogram in the second pass. However, predicting discrete units achieves 2.51x decoding speed-up compared to that case.
# Unity: Two-Pass Direct Speech-To-Speech Translation With Discrete Units Hirofumi Inaguma♡, Sravya Popuri♡, Ilia Kulikov♡**, Peng-Jen Chen**♡, Changhan Wang♡, Yu-An Chung♡, **Yun Tang**♡, Ann Lee♡, Shinji Watanabe♣, **Juan Pino**♡ FAIR, Meta AI♡, Carnegie Mellon University♣ {hirofumii,juancarabina}@meta.com ## Abstract Direct speech-to-speech translation (S2ST), in which all components can be optimized jointly, is advantageous over cascaded approaches to achieve fast inference with a simplified pipeline. We present a novel two-pass direct S2ST architecture, UnitY, which first generates textual representations and predicts discrete acoustic units subsequently. We enhance the model performance by subword prediction in the first-pass decoder, advanced two-pass decoder architecture design and search strategy, and better training regularization. To leverage large amounts of unlabeled text data, we pre-train the firstpass text decoder based on the self-supervised denoising auto-encoding task. Experimental evaluations on benchmark datasets at various data scales demonstrate that UnitY outperforms a single-pass speech-to-unit translation model by 2.5-4.2 ASR-BLEU with 2.83× decoding speed-up. We show that the proposed methods boost the performance even when predicting spectrogram in the second pass. However, predicting discrete units achieves 2.51× decoding speed-up compared to that case. ## 1 Introduction Automatic speech translation to another language is an indispensable technology for international communications, with the spread of social media and virtual communications nowadays. A traditional approach of speech-to-speech translation (S2ST) is to cascade automatic speech recognition (ASR), machine translation (MT), and textto-speech (TTS) components, each of which is optimized separately on different data (Lavie et al., 1997; Nakamura et al., 2006; Wahlster, 2013). With the emergence of sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015), however, it is getting prevailing to adopt a direct approach1. This approach consists in translating input speech into the other language based on a single architecture with fewer components than the cascaded systems (Jia et al., 2019b; Tjandra et al., 2019; Zhang et al., 2021). The direct approach is attractive for building a lowlatency system with a simplified pipeline, thus reducing developing costs. However, direct S2ST models suffer from poor performance due to data scarcity, similar to direct speech-to-text translation (S2TT) models (Bérard et al., 2016). In the field of S2TT, data shortage has been addressed by leveraging pre-training (Bérard et al., 2018; Wang et al., 2021c; Tang et al., 2022), multi-task learning (Weiss et al., 2017; Tang et al., 2021), pseudo labeling (Jia et al., 2019a; Pino et al., 2020), knowledge distillation (Liu et al., 2019; Inaguma et al., 2021b). Consequently, the translation quality of direct S2TT models is approaching that of cascaded S2TT models (Ansari et al., 2020; Anastasopoulos et al., 2021). These techniques have also shown the effectiveness for direct S2ST models and led to a decent performance (Kano et al., 2021; Dong et al., 2022; Jia et al., 2022a; Popuri et al., 2022). Recent works (Lee et al., 2022a,b) propose to model discrete acoustic units, extracted from HuBERT (Hsu et al., 2021), instead of a continuous speech signal that enables usage of a standard crossentropy loss during training. This speech-to-unit translation (S2UT) model significantly shortens the target sequence length and thus makes training and inference more efficient. The discrete units are directly converted to the waveform with a unit-based neural vocoder (Polyak et al., 2021) bypassing spectrogram representation. On the other hand, Translatotron2 (Jia et al., 2022b) decomposes the target representations into linguistic and acoustic counterparts explicitly. The former predicts a phoneme 1Lee et al. (2022a) defines a direct S2ST model as a model that does not use intermediate text representations while Jia et al. (2022b) defines it as a model that directly predicts the target spectrogram. In this paper, we use a more general definition that the entire architecture is optimized jointly and the translation is conducted in a more direct way. We do not include a vocoder in the training pipeline of all direct models. 15655 sequence first, and the latter synthesizes the target spectrogram conditioned on the continuous representation of the linguistic part. This paper presents a novel two-pass direct S2ST architecture, dubbed *UnitY*, which takes the best of both worlds of the S2UT model and Translatotron2. Unlike Translatotron2, UnitY models linguistic sequences using subwords (*first pass*) instead of phonemes, and it models speech as a discrete sequence of acoustic units (*second pass*). To achieve better translation quality and decoding efficiency, UnitY consists of a deep text decoder and a shallow unit decoder and enables better generalization to the first-pass decoder. We further introduce a textto-unit (T2U) encoder between the two decoders to bridge the gap between textual and acoustic representations. Following the success of large-scale pre-training, we leverage unlabeled text effectively to pre-train the first pass text decoder with multilingual BART (mBART) (Liu et al., 2020) at the subword level. Extensive experiments show the superiority of the UnitY S2ST system measured by both translation quality and runtime efficiency. First, UnitY achieves 4.2, 3.7, and 2.5 ASR-BLEU improvements over the S2UT model on the Fisher Es→En (Post et al., 2013), CVSS-C (Jia et al., 2022c), and multi-domain En↔Es (Popuri et al., 2022) corpora, respectively. The improvement holds even with high-resource data and pre-training. In addition, our proposed design improves Translatotron2 as well, indicating its versatility for twopass direct S2ST architectures regardless of the choice of the target. Second, UnitY achieves 2.83× and 2.51× decoding speed-ups over the S2UT and improved Translatotron2 models, respectively. A combination of the aforementioned improvements suggests the UnitY design as a starting point for further improvements in direct S2ST. 2 ## 2 Unity In this section, we propose *UnitY*, a two-pass direct S2ST model that generates subwords and discrete acoustic units subsequently. Hereafter, we refer to discrete acoustic units as discrete units for brevity. Let X denote a source speech input, and Y = (y1*,..., y*M) and U = (u1*,..., u*L) be the corresponding reference text translation and discrete unit sequences in the target language, respectively. Note that there is no duration information 2Code will be available upon the paper acceptance. ![1_image_0.png](1_image_0.png) ## 2.1 Architecture The overall architecture of UnitY is shown in Figure 1. UnitY consists of four modules: speech encoder, first-pass text decoder, text-to-unit (T2U) encoder, and second-pass unit decoder. We build the speech encoder based on Conformer (Gulati et al., 2020), which augments Transformer (Vaswani et al., 2017) with a convolution module, while implementing the rest three modules based on Transformer. UnitY has five major architecture modifications from Translatotron2 (Jia et al., 2022b), (1) generating subwords instead of phonemes in the first pass, (2) generating discrete units instead of spectrograms in the second pass to bypass duration modeling, (3) replacing Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) layers with Transformer layers in both decoders, (4) introducing a T2U encoder between the two decoders, and (5) assigning more model capacities to the first pass. First-pass text decoder The first-pass text decoder TDec generates a sequence of subwords Y autoregressively by attending the speech encoder output H. The training objective of the first pass is to minimize the direct S2TT loss Ls2t as: $$\begin{split}\mathcal{L}_{\text{s2t}}(Y|X)&=-\frac{1}{M}\sum_{i=1}^{M}\log P_{\text{s2t}}(y_{i}|X,Y_{<i})\\ &=-\frac{1}{M}\sum_{i=1}^{M}\log P_{\text{s2t}}(y_{i}|D_{i}^{\text{text}})\\ D_{i}^{\text{text}}&=\texttt{TDec}(H,Y_{<i}),\end{split}$$ where Dtext i ∈ R dmodel is the i-th continuous decoder state right before projecting it to the logit. We consider that Dtext contains rich acoustic information in addition to contextual information thanks to multiple multi-head cross-attention over H. There are five advantages of generating subwords instead of phonemes. First, the sequence length is considerably reduced, leading to better training and inference efficiencies (Cherry et al., 2018). Second, using large vocabularies improves the translation quality of the first pass (Gowda and May, 2020). Third, the text output helps the audience understand the translation content while listening to the audio. Fourth, our approach can easily scale to more target languages, as it is unnecessary to prepare separate grapheme-to-phoneme (G2P) models for each target language. Last, readable text can be generated without any complicated post-processing such as WFST (Mohri et al., 2002; Bahdanau et al., 2016). T2U encoder A bidirectional T2U encoder T2UEnc transforms the continuous states of the first-pass decoder Dtext ∈ RM×dmodel into Z ∈ RM×dmodel as Z = T2UEnc(Dtext). The T2U encoder bridges the gap in representations between text and unit decoders without changing the sequence length. Second-pass unit decoder The second-pass unit decoder UDec generates a sequence of discrete units U autoregressively by attending to only the T2U encoder output Z. The training objective of the second pass is to minimize Ls2u similar to the S2UT task while being conditioned on Y as: $$\begin{array}{c}{{{\mathcal L}_{\mathrm{s2u}}(U|X,Y)=-\frac{1}{L}\sum_{i=1}^{L}\log P_{\mathrm{s2u}}(u_{i}|X,Y,U_{<i})}}\\ {{=-\frac{1}{L}\sum_{i=1}^{L}\log P_{\mathrm{s2u}}(u_{i}|D_{i}^{\mathrm{unit}})}}\\ {{D_{i}^{\mathrm{unit}}=\mathrm{UDec}(Z,U_{<i})=\mathrm{UDec}(H,Y,U_{<i}),}}\end{array}$$ where Dunit i ∈ R dmodel is the i-th continuous decoder state right before projecting it to the logit. The unit decoder does not attend to H to synchronize the text and unit outputs, similar to the motivation in (Jia et al., 2022b). In other words, we do not expect that the second-pass decoder corrects translation errors from the first-pass decoder.3 Once the unit generation finishes, a separate unit-based vocoder (Polyak et al., 2021) converts the discrete units to the waveform with duration prediction of each discrete unit (Lee et al., 2022a). The total training objective of UnitY, Ltotal, is formulated 3We also investigate attending to the speech encoder output with an additional cross-attention, but it does not lead to an improvement in ASR-BLEU. We discuss this in §5.1 as follows: $${\mathcal{L}}_{\mathrm{total}}={\mathcal{L}}_{\mathrm{s2u}}(U|X,Y)+w_{\mathrm{s2t}}{\mathcal{L}}_{\mathrm{s2t}}(Y|X),\quad(1)$$ for the $\mathbf{S2T}\mathbf{T}$ loss . where ws2t is a weight for the S2TT loss. ## 2.2 Text Decoder Pre-Training Similar to ASR and S2TT studies (Baevski et al., 2020; Li et al., 2021), S2ST models also benefit from self-supervised pre-training (Jia et al., 2022a; Popuri et al., 2022), especially for the speech encoder. In addition to the speech encoder pretraining with wav2vec2.0 (Baevski et al., 2020), Popuri et al. (2022) initializes the unit decoder of the single-pass S2UT model with a unit-based mBART (u-mBART), an encoder-decoder model pre-trained with discrete units converted from a large amount of unlabeled speech data. However, unlabeled text data cannot be leveraged for the single-pass decoder pre-training, although it is more accessible in many written languages. To fully leverage the unlabeled text data, we initialize the first-pass decoder of UnitY with a text-based mBART (t-mBART) pre-trained with unlabeled text data. Following Li et al. (2021); Popuri et al. (2022), we freeze parameters in the feed-forward network (FFN) of the text decoder during S2ST fine-tuning. We initialize the T2U encoder and second-pass unit decoder randomly. ## 2.3 Search Algorithm During inference, we perform two-pass beam search decoding. First, we find the most probable text hypothesis Yˆ in the first-pass decoder using beam search with a beam size of B1st. We then feed continuous decoder states Dtext corresponding to Yˆ to the T2U encoder. Next, we generate the most probable discrete unit sequence Uˆ in the second-pass decoder by another beam search with a beam size of B2nd. Finally, Uˆ is taken as input to a separate unit-based vocoder to generate the waveform. We find it more effective to assign a larger beam size to the first pass, *i.e.*, B1st > B2nd, because there is more diversity among beam candidates than the second pass. The computation time is also reduced since the sequence length of text is much shorter than that of discrete units. Therefore, we use B2nd = 1 unless otherwise noted. We present the pseudo algorithm in Appendix A. ## 2.4 Deep-Shallow Two-Pass Decoders By increasing the number of layers, we assign more model capacities to the first-pass decoder than the ![3_image_0.png](3_image_0.png) second-pass decoder. We refer to this as *deepshallow two-pass decoders*. This capacity assignment improves translation quality and inference efficiency simultaneously because of a shorter sequence length in the first pass. A practical capacity assignment for the MT task is studied in Kasai et al. (2021) by trading the number of layers between the encoder and decoder. In this work, we focus on the two-pass decoders for the S2ST task. ## 3 Experimental Setting 3.1 Data We use three datasets: Fisher Es→En (Post et al., 2013) (170 hours), CVSS-C (Jia et al., 2022c) (547 hours), and mutli-domain En↔Es (Popuri et al., 2022) (20k hours for En→Es, 14k hours for Es→En) corpora. We combine all 21 language directions to English in the CVSS-C corpus to train a single X-to-En multilingual model. The En→Es part in the multi-domain corpora consists of Europarl-ST (Iranzo-Sánchez et al., 2020), MustC (Di Gangi et al., 2019), TEDLIUM3 (Rousseau et al., 2012), Librispeech (Panayotov et al., 2015), and Common Voice (Ardila et al., 2020). The Es→En part consists of CoVoST2 (Wang et al., 2021b), Europarl-ST, and mTEDx (Elizabeth et al., 2021), Common Voice, and multilingual Librispeech (MLS) (Pratap et al., 2020). More details are described in Appendix D. ## 3.2 Pre-Processing We follow the same pre-processing as (Lee et al., 2022a,b; Popuri et al., 2022) for acoustic feature extraction, discrete unit extraction, and text normalization. We also discarded over-generated target speech/unit by TTS/T2U models. More details are described in Appendix E. ## 3.3 Pre-Training We use the same En/Es wav2vec2.0 and En-Es umBART models as Popuri et al. (2022). We train a multilingual w2v-BERT (Chung et al., 2021) model trained on 51 languages with the same setting as Jia et al. (2022a). For text decoder pre-training, we use the same En-Es and 50-language t-mBART models as Wang et al. (2022) and Tang et al. (2020), respectively. We describe the training details and list model URLs in Appendix F. ## 3.4 Baseline We build two cascaded S2ST systems and four direct S2ST systems. All speech encoders are based on Conformer. When pre-training the speech encoder of direct S2ST systems with wav2vec2.0/w2v-BERT, we pre-train ASR and S2TT models in the cascaded systems with the same wav2vec2.0/w2v-BERT for a fair comparison. We also pre-train the text decoder of the ASR and S2TT models with t-mBART in that case. Cascaded (ASR→MT→**TTS)** We combine a Conformer ASR, a Transformer MT, and a Transformer TTS model. We set the reduction factor of TTS models to 4. Cascaded (S2TT→**TTS)** We combine a Conformer direct S2TT model and a Transformer TTS model. S2SpecT We build a direct S2ST model that predicts spectrogram with a single Transformer decoder, similar to Lee et al. (2022a) (Figure 2a). We refer to it as S2SpecT hereafter. We set the reduction factor of the spectrogram decoder to 3. S2SpecT2 We train S2SpecT2, an improved version of Translatotron2, by enhancing the architecture and training with the proposed methods for UnitY. First, we replace phoneme targets with subwords in the first pass (Figure 2b). Second, we replace LSTM decoders with Transformer decoders. Third, we introduce an additional textto-spectrogram (T2S) encoder between text and spectrogram decoders. The second-pass decoder attends to the T2S encoder output only. Fourth, we use an autoregressive Transformer decoder instead of a non-attentive Tacotron (NAT) (Shen et al., 2020) for the second-pass decoder. Last, we apply R-Drop to the first-pass decoder. We use the same reduction factor as S2SpecT. S2UT We train a direct S2ST model that predicts discrete units with a single Transformer decoder (Lee et al., 2022a) (Figure 2c). ## 3.5 Architecture Let N1st, N2nd, and Nt2u be the depth of the first-pass decoder, second-pass decoder, and T2U encoder of UnitY, respectively. We set (N1st,N2nd,Nt2u) to (4, 2, 2) on Fisher and CVSSC. On the multi-domain corpus, we use (12, 2, 2) when pre-training the first-pass decoder with tmBART. Otherwise, we use (6, 6, 2). We describe the other configurations in Appendix G. ## 3.6 Training We apply R-Drop (Wu et al., 2021) regularization to all tasks that predict discrete symbols, except the MT task. The training objective of each model with R-Drop is defined in Appendix C. We implement our models based on the Fairseq toolkit (Ott et al., 2019; Wang et al., 2020). The detailed training hyperparameters are described in Appendix H. ## 3.7 Decoding We use a beam width of 10 for ASR, S2TT, and S2UT models. For UnitY, we set B1st and B2nd to 10 and 1, respectively. We use a beam width of 10 for the first-pass decoder in S2SpecT2. ## 3.8 Vocoder We use a HiFi-GAN vocoder (Kong et al., 2020) to convert spectrograms to the waveform for TTS and direct speech-to-spectrogram models. We use a unit-based HiFi-GAN vocoder (Polyak et al., 2021) to convert discrete units to the waveform for direct speech-to-unit models. Both the vocoders are trained separately. | ID | Model | ASR-BLEU (↑) | | | | |------------------------------------------------------------|------------------------|----------------|------|------|------| | Avg. | High | Mid | Low | | | | B0 | Synthetic target♢ | 91.1 | 88.4 | 89.5 | 93.0 | | Cascaded systems B1 S2TT → TTS♢ | 10.6 | 28.8 | 15.5 | 2.4 | | | B2 | + ASR pre-training | 12.7 | 30.7 | 18.3 | 4.4 | | B3 | S2TT → TTS | 7.8 | 18.2 | 11.9 | 2.6 | | B4 | + w2v-BERT + t-mBART | 14.9 | 21.1 | 18.2 | 11.5 | | Direct speech-to-spectrogram systems B5 Translatotron♢ 3.4 | 11.9 | 3.5 | 0.3 | | | | B6 | S2SpecT | 7.6 | 21.8 | 10.6 | 1.5 | | B7 | + S2TT pre-training | 9.6 | 23.9 | 13.8 | 3.2 | | B8 | + w2v-BERT | 16.6 | 30.5 | 21.9 | 9.8 | | B9 | Translatotron2♢ | 8.7 | 25.4 | 12.6 | 1.5 | | B10 | + Transformer decoder♠ | 10.1 | 26.9 | 14.2 | 2.8 | | B11 | + S2TT pre-training♢ | 12.0 | 29.7 | 16.6 | 4.2 | | B12 | + w2v-BERT♠ | 17.9 | 32.5 | 22.9 | 10.9 | | B13 | + mSLAM♠ | 19.3 | 33.2 | 24.6 | 12.5 | | B14 | ++ TTS augmentation♠ | 22.0 | 33.5 | 25.8 | 16.5 | | B15 | S2SpecT2 | 11.3 | 29.1 | 16.9 | 3.1 | | B16 | + S2TT pre-training | 13.1 | 29.8 | 18.8 | 5.2 | | B17 | + w2v-BERT + t-mBART | 18.6 | 32.1 | 24.7 | 11.6 | | Direct speech-to-unit systems B18 S2UT | 9.1 | 25.9 | 12.9 | 1.9 | | | B19 | + S2TT pre-training | 11.4 | 27.2 | 16.4 | 4.0 | | B20 | + w2v-BERT + u-mBART | 20.8 | 31.6 | 25.4 | 15.4 | | B21 | UnitY | 12.0 | 29.0 | 17.8 | 4.0 | | B22 | + S2TT pre-training | 13.0 | 30.4 | 18.7 | 4.8 | | B23 | + w2v-BERT + t-mBART | 24.5 | 34.6 | 28.9 | 19.3 | ## 3.9 Evaluation Following Lee et al. (2022a), we use a pre-trained ASR model to transcribe the generated target speech and calculate BLEU scores (Papineni et al., 2002), referred to as ASR-BLEU. The ASR model is fine-tuned from a wav2vec2.0 with the connectionist temporal classification (CTC) objective (Graves et al., 2006). We use the sacrebleu toolkit (Post, 2018) to calculate the BLEU scores. ## 4 Experimental Results In this section, we present the experimental results on three corpora. We study various modeling choices from the perspective of target representation (spectrogram v.s. discrete unit) and decoder architectures (single pass v.s. two pass) in supervised and semi-supervised settings. We also benchmark the decoding efficiency of direct S2ST models. | ASR-BLEU (↑) | | | | | | | | | |---------------------------------------------------------------|--------------------|-------|----------|-------------|-------|------|------|------| | ID | Model | En→Es | Es→En | | | | | | | Europarl-ST | MuST-C | Avg. | CoVoST-2 | Europarl-ST | mTEDx | Avg. | | | | Cascaded systems C1 ASR→MT→TTS♢ | 28.8 | 34.2 | 31.5 | 33.8 | 29.1 | 32.4 | 31.5 | | | C1' | ASR→MT→TTS | 36.8 | 30.8 | 33.8 | 32.9 | 34.2 | 30.3 | 32.5 | | C2 | S2TT→TTS♢ | 32.6 | 30.1 | 31.4 | 28.4 | 23.6 | 21.5 | 24.5 | | C2' | S2TT→TTS | 36.4 | 33.4 | 34.9 | 37.2 | 34.0 | 32.5 | 34.6 | | Direct speech-to-spectrogram systems C3 S2SpecT2 (6L→6L) 35.6 | 33.5 | 34.6 | 37.0 | 23.4 | 31.3 | 30.6 | | | | C4 | + t-mBART (12L→6L) | 36.9 | 34.3 | 35.6 | 37.2 | 23.7 | 31.7 | 30.9 | | Direct speech-to-unit systems C5 S2UT + u-mBART♢ | 32.7 | 32.1 | 32.4 | 33.5 | 28.6 | 29.1 | 30.4 | | | C5' | S2UT + u-mBART | 33.5 | 33.3 | 33.4 | 34.5 | 29.9 | 29.9 | 31.4 | | C6 | UnitY (6L→6L) | 35.1 | 33.7 | 34.4 | 35.4 | 30.8 | 31.3 | 32.5 | | C7 | + t-mBART (12L→2L) | 35.3 | 34.1 | 34.7 | 36.4 | 33.1 | 32.2 | 33.9 | ## 4.1 Cvss-C The results on CVSS-C are listed in Table 1. We first compared four direct systems trained from scratch (B6, B15, B18, B21), and UnitY (B21) achieved the best ASR-BLEU. The encoder pretraining with the S2TT model in the cascaded system (B3) improved ASR-BLEU of all the direct S2ST models (B7, B16, B19, B22), similar to Jia et al. (2022c).4In this case, S2SpecT2 (B16) also achieved similar translation quality to UnitY (B22). Still, UnitY outperformed the S2UT model (B19) by 1.6 ASR-BLEU on average, indicating that the two-pass decoding was the main factor of the improvements. S2SpecT2 (B16) outperformed Translatotron2 (Jia et al., 2022b) (B11) by 1.1 ASRBLEU on average, from which we can confirm that parts of the proposed methods can generalize to the other S2ST architecture.5 Compared to the best cascaded system (B2), the two-pass models (B16, B19) showed better translation quality. We also pre-trained the speech encoder of all models with multilingual w2v-BERT, the first-pass text decoder of two-pass models with text-based mBART (t-mBART), and the decoder of the S2UT model with unit-based mBART (u-mBART), respectively. Among them (B4, B8, B12, B20, B23), UnitY (B23) showed the best ASR-BLEU. UnitY still outperformed Translatotron2 with a joint speech-text pre-training with mSLAM (Bapna et al., 2022) (B13) and TTS augmentation (B14) by 5.2 and 2.5 ASR-BLEU on average, respectively. The full results in each language direction are presented in Appendix I. 4.2 Multi-domain En↔Es We present results on the multi-domain corpora (Popuri et al., 2022) in Table 2. C1', C2', and C5' are our improved models of C1, C2, and C5, respectively.6 We observed that UnitY with first-pass decoder pre-training with t-mBART (C7) improved the S2UT model with decoder pretraining with u-mBART (C5') by 1.3 and 2.5 ASRBLEU on average in En→Es and Es→En, respectively. This confirms the effectiveness of the twopass modeling in the high-resource scenario. Furthermore, UnitY without decoder pre-training (C6) already outperformed C5' and degraded from C7 only slightly. Comparing UnitY and S2SpecT2, we cannot spot a clear winner. UnitY outperformed S2SpecT2 in Es→En on Europarl-ST and mTEDx, but S2SpecT2 performed better in En→Es. The proposed text decoder pre-training helped S2SpecT2 performance too, especially in En→Es (C4). Finally, we also confirmed that UnitY approached the performance of a strong cascaded system and even outperformed it on Must-C. ![6_image_0.png](6_image_0.png) ## 4.3 Decoding Efficiency We evaluated the decoding efficiency of direct S2ST models. We measured the runtime and total number of floating point operations (FLOPs) on an Intel® Xeon® Gold 6230 CPU. We randomly sampled 500 utterances from the multi-domain Es→En dev set while keeping the ratio of the number of samples per domain. Note that we also took the vocoder inference into account. The results in Figure 3 showed that UnitY achieved 2.51× and 2.83× decoding speed-ups over S2SpecT2 and S2UT models, respectively. These confirms the efficiency of discrete unit prediction and two-pass decoding, thanks to reduced output sequence lengths. Deep-shallow two-pass decoders also improved the decoding speed a lot. We found that the translation quality of the twopass models improved by increasing the beam width of the first-pass decoder up to 10. On the other hand, the quality did not degrade significantly by decreasing the beam width of the second-pass decoder down to 1, *i.e.* greedy decoding. This indicates that the first pass involves more challenges in the modeling pipeline. Therefore, we can obtain better translation quality and decoding speed by assigning more computation time to the first pass. We also present the results of FLOPs in Appendix I. To summarize, UnitY achieved 1.65× and 3.19× FLOPs reduction over S2SpecT2 and S2UT models, respectively. ## 4.4 Fisher We also show the results on Fisher in Appendix I. Although the trend was consistent with CVSS-C, a notable exception was that S2SpecT2 outperformed UnitY when pre-training the speech encoder with wav2vec2.0. However, UnitY has an advantage of decoding efficiency over S2SpecT2. ![6_image_1.png](6_image_1.png) (ASR-)BLEU (↑) Text Speech D1 S2SpecT2 **35.0 30.8** D2 + w/o T2S encoder 34.9 25.0 D3 + w/o R-Drop 34.8 30.3 D5 UnitY **38.3 33.2** D6 + w/o T2U encoder 38.1 30.7 D7 + w/o R-Drop 37.7 32.1 D8 + Cross-attn to speech enc (sequential) 38.2 **33.2** D9 + Cross-attn to speech enc (parallel) 38.1 33.1 ## 5 Analysis In this section, we conduct analyses to shed light on the source of improvements in UnitY. We also study whether the same techniques used for UnitY are helpful for S2SpecT2. We use the multi-domain Es→En corpus, but pseudo-labeled ASR data is excluded for quick exploration, resulting in 196hour source speech. We report average dev scores over three runs with different random seeds.7 ## 5.1 Ablation Study We first conducted an ablation study for two-pass direct S2ST models in Table 3. We evaluated the translation quality of outputs from both decoders. An additional T2U/T2S encoder was essential for bridging the gap in representations between the first-pass and second-pass decoders, especially for S2SpecT2 (D2, D6). We attribute this to the fact that the gap in representations between text and spectrogram is larger than between text and discrete units. R-Drop was also beneficial for boosting the translation quality of the first-pass decoder, which improved the final performance accordingly (D3, D7). Moreover, we investigated adding another cross-attention over the speech encoder output to the unit decoder, as discussed in §2.1. We expected that the first-pass decoder output lost useful information to generate target speech faithful to source speech. We explored parallel (*parallel*, D8) and sequential (*sequential*, D9) cross-attention, similar to (Zhu et al., 2019), but neither showed any improvement. The first-pass decoder already extracted source acoustic information well via multiple cross-attention modules. We also show the results on Fisher in Appendix I. | (ASR-)BLEU (↑) | Speed-up | | | | |------------------|------------|-------------|------|------| | ID | Model | Output unit | (×) | | | Text | Speech | | | | | E1 | Phoneme | - | 29.4 | 1.00 | | E2 | Character | 31.7 | 28.9 | 0.89 | | S2SpecT2 | | | | | | E3 | Subword | 33.0 | 30.0 | 1.12 | | Phoneme | - | 27.8 | 2.31 | | | E4 E5 | Character | 33.2 | 29.6 | 2.06 | | UnitY | | | | | | E6 | Subword | 34.1 | 30.1 | 2.86 | | Decoder depth | (ASR-)BLEU (↑) | Speed-up | | | | | |-----------------|------------------|------------|--------|------|------|------| | #Params | | | | | | | | First | Second | | | | | | | pass | (unit) | Text | Speech | | | | | pass | (Billion) | (×) | | | | | | (text) | | | | | | | | G1 | 2 | 6 | 0.79 | 34.5 | 30.3 | 1.24 | | G2 | 4 | 6 | 0.82 | 34.5 | 30.5 | 1.20 | | G3 | 6 | 2 | 0.79 | 34.3 | 30.3 | 1.47 | | G4 | 6 | 4 | 0.82 | 33.9 | 29.9 | 1.19 | | G5 | 6 | 6 | 0.86 | 34.8 | 30.7 | 1.00 | | G6 | 6 | 8 | 0.89 | 34.2 | 30.2 | 0.69 | | G7 | 6 | 12♢ | 0.96 | 33.7 | 29.8 | 0.68 | | G8 | 12 | 2 | 0.95 | 34.9 | 30.7 | 1.44 | | G9 | 12♠ | 2 | 0.95 | 38.3 | 33.2 | 1.19 | | G10 | 12♠ | 4 | 0.98 | 38.0 | 33.0 | 1.09 | | G11 | 12♠ | 6 | 1.00 | 38.1 | 33.1 | 0.84 | | G12 | 12♠ | 12♢ | 1.12 | 36.2 | 32.2 | 0.60 | ## 5.2 Output Unit For First-Pass Decoder We studied optimal granularity of the output unit for the first-pass decoder in two-pass direct S2ST models. We explored phonemes, characters, and 2k subwords units. The results in Table 4 showed that the subword unit (E6) was the most effective for the first-pass decoder in both UnitY and S2SpecT2 thanks to a better translation quality. Moreover, it gained the largest decoding speed-up. We also show the results on Fisher in Appendix I. ## 5.3 **Capacity Assignment To Two-Pass Decoders** We sought to effectively assign the model capacity to the two decoders in UnitY to obtain a better translation quality. The results in Table 5 showed that a 12-layer text decoder with a two-layer unit decoder (G8) was the best in translation quality and decoding speed when initializing the first-pass decoder randomly (G1-G6,G8). Pre-training the first-pass decoder with t-mBART (G9) brought a ![7_image_0.png](7_image_0.png) large ASR-BLEU gain with a slight speed degradation compared to G8. 8It was sufficient to have a two-layer unit decoder in that case (G9-G11). We also pre-trained the second-pass decoder with u-mBART while initializing the text decoder randomly (G7) or with t-mBART (G12), but neither improved the performance further. Therefore, it is most effective to pre-train the deep text decoder only and keep the unit decoder shallow. Note that G8 is faster than G9 because of the smaller subword vocabulary size (2k v.s. 65k). ## 5.4 Data Scale Improving the translation quality of S2ST models on low-resource data is crucial since collecting a large amount of training data is challenging. We compared translation quality of direct S2ST models at various training data scales in Figure 4. We observed that UnitY consistently outperformed the S2SpecT2 and S2UT models when the data size was no less than 50 hours. The text decoder pre-training became less effective as the data size increased, consistent with an observation in §4.2, where the improvement in En→Es (+1.3) was smaller than Es→En (+2.5). However, pretraining the text decoder of UnitY was essential for obtaining decent performances in the low-resource settings (≤ 50 hours). ## 6 Related Works Two-pass sequence generation Two-pass decoding has advantages of maintaining the end-to-end optimization capability while inheriting the benefits of a cascading approach. Xia et al. (2017); Hu et al. (2020) incorporate an additional search process to find a better output. Dalmia et al. (2021) 8We set the depth of the first-pass decoder to 12 because of the availability of the off-the-shelf t-mBART model. reranks the intermediate hypotheses using an external module such as a language model. Zhao et al. (2019) injects specific information in the intermediate decoder to bias the output toward the desired domain. Sainath et al. (2019) provides an intermediate output to users before generating the final output for streaming applications. The two-pass approach makes the optimization tractable, which has advanced performance of speech translation models (Anastasopoulos and Chiang, 2018; Sperber et al., 2019; Sung et al., 2019; Dalmia et al., 2021; Inaguma et al., 2021a; Yan et al., 2022; Jia et al., 2022b). Direct speech-to-spectrogram translation Translatotron (Jia et al., 2019b) is the first direct S2ST model but suffered from poor performance even with auxiliary ASR and S2TT tasks. Kano et al. (2021) subsequently pre-trains the components with ASR and S2TT models, which is more effective for distant language pairs. Translatotron2 (Jia et al., 2022b) significantly improves Translatotron by incorporating two-pass decoding. We showed that our methods further improved Translatotron2. Direct speech-to-unit translation Direct speechto-unit translation models predict discrete units rather than spectrogram. Tjandra et al. (2019) uses vector-quantized variational autoencoder (Van Den Oord et al., 2017) while Lee et al. (2022a) used HuBERT (Hsu et al., 2021) to extract target discrete units. Lee et al. (2022b) normalizes speaker identity of real target speech using a CTC-based speech-to-unit model. Huang et al. (2022) further improves the normalization by considering rhythm, pitch, and energy. ## 7 Conclusion We proposed UnitY, a novel efficient two-pass direct S2ST model that subsequently generates both text and discrete unit outputs. We improved the model performance by predicting subwords in the first pass, bridging decoder representations by an additional encoder, deep-shallow two-pass decoders, regularizing the training with R-Drop, and pre-training the first-pass decoder with text-based mBART. Experimental evaluations demonstrated that UnitY outperformed a single-pass S2UT model consistently in translation quality and inference speed. We showed that the proposed methods improve the two-pass direct speech-to-spectrogram model as well, confirming their versatility. Still, UnitY achieved 2.51× decoding speed-up over the case. ## 8 Limitation Since two-pass direct S2ST models require linguistic units as the target for the first-pass decoder, they cannot be used when the target language is unwritten. Compared to cascaded S2ST systems, direct S2ST systems require more data preparation steps, including training a HuBERT model, synthesizing target speech with a TTS model, extracting discrete units with the HuBERT model, and training a unit-based vocoder, etc. Moreover, the target audio quality of direct speech-to-unit systems relies on the quality of discrete units generated by selfsupervised discrete models. It further depends on the availability of speech data to train HuBERT models for the target languages. Because S2ST systems could generate speech that does not necessarily represent the source speech's content, there is a potential risk of conveying wrong information. ## Acknowledgement We would like to thank Justine Kao and Carleigh Wood for the help on human evaluation. ## References Antonios Anastasopoulos, Ondˇrej Bojar, Jacob Bremerman, Roldano Cattoni, Maha Elbayad, Marcello Federico, Xutai Ma, Satoshi Nakamura, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Sebastian Stüker, Katsuhito Sudoh, Marco Turchi, Alexander Waibel, Changhan Wang, and Matthew Wiesner. 2021. FINDINGS OF THE IWSLT 2021 EVALUATION CAMPAIGN. In *Proceedings of IWSLT*, pages 1–29. Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In *Proceedings of NAACL-HLT*, pages 82–91. Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondˇrej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alexander Waibel, and Changhan Wang. 2020. FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN. In *Proceedings of IWSLT*, pages 1–34. Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. Common Voice: A massivelymultilingual speech corpus. In *Proceedings of LREC*, pages 4218–4222. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. *arXiv preprint arXiv:1907.05019*. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. In *Proceedings of NeurIPS*, volume 33, pages 12449– 12460. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. 2016. End-toend attention-based large vocabulary speech recognition. In *Proceedings of ICASSP*, pages 4945–4949. Ankur Bapna, Colin Cherry, Yu Zhang, Ye Jia, Melvin Johnson, Yong Cheng, Simran Khanuja, Jason Riesa, and Alexis Conneau. 2022. mSLAM: Massively multilingual joint pre-training for speech and text. arXiv preprint arXiv:2202.01374. Alexandre Bérard, Laurent Besacier, Ali Can Kocabiyikoglu, and Olivier Pietquin. 2018. End-to-end automatic speech translation of audiobooks. In *Proceedings of ICASSP*, pages 6224–6228. IEEE. Alexandre Bérard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text translation. In *Proceedings of NIPS 2016 End-to-end* Learning for Speech and Audio Processing Workshop. William Chan, Daniel Park, Chris Lee, Yu Zhang, Quoc Le, and Mohammad Norouzi. 2021. Speechstew: Simply mix all available speech recognition data to train one large neural network. *arXiv preprint* arXiv:2104.02133. Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting characterbased neural machine translation with capacity and compression. In *Proceedings of EMNLP*, pages 4295– 4305. Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. *arXiv preprint* arXiv:1406.1078. Yu-An Chung, Yu Zhang, Wei Han, Chung-Cheng Chiu, James Qin, Ruoming Pang, and Yonghui Wu. 2021. w2v-BERT: Combining contrastive learning and masked language modeling for self-supervised speech pre-training. In *Proceedings of ASRU*. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of ACL*, pages 8440–8451. Siddharth Dalmia, Brian Yan, Vikas Raunak, Florian Metze, and Shinji Watanabe. 2021. Searchable hidden intermediates for end-to-end models of decomposable sequence tasks. In *Proceedings of NAACLHLT*, pages 1882–1896. Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. MuST-C: a Multilingual Speech Translation Corpus. In *Proceedings of NAACL-HLT*, pages 2012–2017. Qianqian Dong, Fengpeng Yue, Tom Ko, Mingxuan Wang, Qibing Bai, and Yu Zhang. 2022. Leveraging pseudo-labeled data to improve direct speech-tospeech translation. *arXiv preprint arXiv:2205.08993*. Salesky Elizabeth, Wiesner Matthew, Bremerman Jacob, Roldano Cattoni, Matteo Negri, Marco Turchi, Douglas W Oard, and Post Matt. 2021. The multilingual TEDx corpus for speech recognition and translation. In *Proceedings of Interspeech*, pages 3655–3659. Mark JF Gales, Kate M Knill, Anton Ragni, and Shakti P Rath. 2014. Speech recognition and keyword spotting for low-resource languages: Babel project research at CUED. In *Proceedings of SLTU*, pages 16–23. Thamme Gowda and Jonathan May. 2020. Finding the optimal vocabulary size for neural machine translation. In *Findings of EMNLP*, pages 3955–3964. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In *Proceedings of* ICML, pages 369–376. Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang. 2020. Conformer: Convolution-augmented Transformer for speech recognition. In *Proceedings of* Interspeech, pages 5036–5040. Mary Harper et al. IARPA Babel Program. https: //www.iarpa.gov/research-programs/ babel. [Online]. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Computation*, 9(8):1735– 1780. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–3460. Ke Hu, Tara N Sainath, Ruoming Pang, and Rohit Prabhavalkar. 2020. Deliberation model based two-pass end-to-end speech recognition. In *Proceedings of* ICASSP, pages 7799–7803. Rongjie Huang, Zhou Zhao, Jinglin Liu, Huadai Liu, Yi Ren, Lichao Zhang, and Jinzheng He. 2022. TranSpeech: Speech-to-speech translation with bilateral perturbation. *arXiv preprint arXiv:2205.12523*. Hirofumi Inaguma, Siddharth Dalmia, Brian Yan, and Shinji Watanabe. 2021a. Fast-MD: Fast multidecoder end-to-end speech translation with nonautoregressive hidden intermediates. In *Proceedings* of ASRU, pages 922–929. Hirofumi Inaguma, Tatsuya Kawahara, and Shinji Watanabe. 2021b. Source and target bidirectional knowledge distillation for end-to-end speech translation. In *Proceedings of NAACL-HLT*, pages 1872– 1881. Javier Iranzo-Sánchez, Joan Albert Silvestre-Cerda, Javier Jorge, Nahuel Roselló, Adria Giménez, Albert Sanchis, Jorge Civera, and Alfons Juan. 2020. Europarl-ST: A multilingual corpus for speech translation of parliamentary debates. In *Proceedings of* ICASSP, pages 8229–8233. Keith Ito and Linda Johnson. 2017. The lj speech dataset. https://keithito.com/ LJ-Speech-Dataset/. Ye Jia, Yifan Ding, Ankur Bapna, Colin Cherry, Yu Zhang, Alexis Conneau, and Nobuyuki Morioka. 2022a. Leveraging unsupervised and weaklysupervised data to improve direct speech-to-speech translation. In *Proceedings of Interspeech*, pages 1721–1725. Ye Jia, Melvin Johnson, Wolfgang Macherey, Ron J Weiss, Yuan Cao, Chung-Cheng Chiu, Naveen Ari, Stella Laurenzo, and Yonghui Wu. 2019a. Leveraging weakly supervised data to improve end-to-end speech-to-text translation. In *Proceedings of ICASSP*, pages 7180–7184. Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, and Roi Pomerantz. 2022b. Translatotron 2: High-quality direct speech-to-speech translation with voice preservation. In *Proceedings of ICML*. Ye Jia, Michelle Tadmor Ramanovich, Quan Wang, and Heiga Zen. 2022c. CVSS corpus and massively multilingual speech-to-speech translation. In *Proceedings of LREC*, pages 6691–6703. Ye Jia, Ron J Weiss, Fadi Biadsy, Wolfgang Macherey, Melvin Johnson, Zhifeng Chen, and Yonghui Wu. 2019b. Direct speech-to-speech translation with a sequence-to-sequence model. In *Proceedings of Interspeech*, pages 1123–1127. Jacob Kahn, Morgane Rivière, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, et al. 2020. Libri-Light: A benchmark for asr with limited or no supervision. In *Proceedings of ICASSP*, pages 7669–7673. Takatomo Kano, Sakriani Sakti, and Satoshi Nakamura. 2021. Transformer-based direct speech-to-speech translation with transcoder. In *Proceedings of SLT*, pages 958–965. Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah Smith. 2021. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In *Proceedings of ICLR*. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In *Proceedings of* Machine Translation Summit X: Papers, pages 79– 86. Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis. In *Proceedings of NeurIPS*, volume 33, pages 17022–17033. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In *Proceedings of ACL*, pages 66–75. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In *Proceedings of EMNLP: System Demonstrations*, pages 66–71. Alon Lavie, Alex Waibel, Lori Levin, Michael Finke, Donna Gates, Marsal Gavalda, Torsten Zeppenfeld, and Puming Zhan. 1997. JANUS-III: Speech-tospeech translation in multiple languages. In *Proceedings of ICASSP*, pages 99–102. Ann Lee, Peng-Jen Chen, Changhan Wang, Jiatao Gu, Xutai Ma, Adam Polyak, Yossi Adi, Qing He, Yun Tang, Juan Pino, et al. 2022a. Direct speech-tospeech translation with discrete units. In Proceedings of ACL, pages 3327–3339. Ann Lee, Hongyu Gong, Paul-Ambroise Duquenne, Holger Schwenk, Peng-Jen Chen, Changhan Wang, Sravya Popuri, Juan Pino, Jiatao Gu, and Wei-Ning Hsu. 2022b. Textless speech-to-speech translation on real data. In *Proceedings of NAACL-HLT*, pages 860–872. Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. 2019. Neural speech synthesis with Transformer network. In *Proceedings of AAAI*, volume 33, pages 6706–6713. Xian Li, Changhan Wang, Yun Tang, Chau Tran, Yuqing Tang, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021. Multilingual speech translation from efficient finetuning of pretrained models. In *Proceedings of ACL*, pages 827–838. Xinjian Li, Ye Jia, and Chung-Cheng Chiu. 2022. Textless direct speech-to-speech translation with discrete speech representation. arXiv preprint arXiv:2211.00115. Daniel Licht, Cynthia Gao, Janice Lam, Francisco Guzman, Mona Diab, and Philipp Koehn. 2022. Consistent human evaluation of machine translation across language pairs. In *Proceedings of AMTA*, pages 309– 321. Pierre Lison, Jörg Tiedemann, and Milen Kouylekov. 2018. OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In *Proceedings of LREC*. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019. End-to-end speech translation with knowledge distillation. In *Proceedings of Interspeech*, pages 1128– 1132. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed precision training. In *Proceedings of ICLR*. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. *Computer Speech & Language*, 16(1):69–88. Satoshi Nakamura, Konstantin Markov, Hiromi Nakaiwa, Gen-ichiro Kikui, Hisashi Kawai, Takatoshi Jitsuhiro, J-S Zhang, Hirofumi Yamamoto, Eiichiro Sumita, and Seiichi Yamamoto. 2006. The ATR multilingual speech-to-speech translation system. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 14(2):365–376. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. *arXiv preprint arXiv:1904.01038*. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In *Proceedings* of ICASSP, pages 5206–5210. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of ACL*, pages 311–318. Kyubyong Park and Thomas Mulc. 2019. CSS10: A collection of single speaker speech datasets for 10 languages. In *Proceedings of Interspeech*, pages 1566–1570. Juan Pino, Qiantong Xu, Xutai Ma, Mohammad Javad Dousti, and Yun Tang. 2020. Self-training for endto-end speech translation. In *Proceedings of Interspeech*, pages 1476–1480. Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. Speech resynthesis from discrete disentangled selfsupervised representations. In *Proceedings of Interspeech*, pages 3615–3619. Sravya Popuri, Peng-Jen Chen, Changhan Wang, Juan Pino, Yossi Adi, Jiatao Gu, Wei-Ning Hsu, and Ann Lee. 2022. Enhanced direct speech-to-speech translation using self-supervised pre-training and data augmentation. In *Proceedings of Interspeech*, pages 5195–5199. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191. Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, and Sanjeev Khudanpur. 2013. Improved speech-to-text translation with the Fisher and Callhome Spanish–English speech translation corpus. In *Proceedings of IWSLT*. Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, and Ronan Collobert. 2020. MLS: A largescale multilingual dataset for speech research. In Proceedings of Interspeech, pages 2757–2761. Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In *Proceedings of EMNLP*, pages 4512–4525. Anthony Rousseau, Paul Deléglise, and Yannick Estève. 2012. TED-LIUM: An automatic speech recognition dedicated corpus. In *Proceedings of LREC*, pages 125–129. Tara N Sainath, Ruoming Pang, David Rybach, Yanzhang He, Rohit Prabhavalkar, Wei Li, Mirkó Visontai, Qiao Liang, Trevor Strohman, Yonghui Wu, et al. 2019. Two-pass end-to-end speech recognition. In *Proceedings of Interspeech*, pages 2773–2777. Elizabeth Salesky, Julian Mäder, and Severin Klinger. 2021. Assessing evaluation metrics for speech-tospeech translation. In *Proceedings of ASRU*, pages 733–740. Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021. CCMatrix: Mining billions of high-quality parallel sentences on the web. In Proceedings of ACL, pages 6490–6500. Jonathan Shen, Ye Jia, Mike Chrzanowski, Yu Zhang, Isaac Elias, Heiga Zen, and Yonghui Wu. 2020. NonAttentive Tacotron: Robust and controllable neural TTS synthesis including unsupervised duration modeling. *arXiv preprint arXiv:2010.04301*. Raivis Skadin,š, Jörg Tiedemann, Roberts Rozis, and Daiga Deksne. 2014. Billions of parallel words for free: Building and using the EU bookshop corpus. In Proceedings of LREC, pages 1850–1855. Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2019. Attention-passing models for robust and data-efficient end-to-end speech translation. Transactions of the Association for Computational Linguistics, 7:313–325. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958. Tzu-Wei Sung, Jun-You Liu, Hung-yi Lee, and Linshan Lee. 2019. Towards end-to-end speech-to-text translation with two-pass decoding. In *Proceedings* of ICASSP, pages 7175–7179. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In *Proceedings of NIPS*, volume 27. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of CVPR, pages 2818–2826. Yun Tang, Hongyu Gong, Ning Dong, Changhan Wang, Wei-Ning Hsu, Jiatao Gu, Alexei Baevski, Xian Li, Abdelrahman Mohamed, Michael Auli, and Juan Pino. 2022. Unified speech-text pre-training for speech translation and recognition. In Proceedings of ACL, pages 1488–1499. Yun Tang, Juan Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. 2021. Improving speech translation by understanding and learning from the auxiliary text translation task. In *Proceedings of ACL*, pages 4252– 4261. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401. Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. 2019. Speech-to-speech translation between untranscribed unknown languages. In Proceedings of ASRU, pages 593–600. Jörgen Valk and Tanel Alumäe. 2021. VoxLingua107: a dataset for spoken language recognition. In *Proceedings of SLT*, pages 652–658. Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. In Proceedings of NIPS, volume 30. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proceedings of NIPS*, volume 30. Wolfgang Wahlster. 2013. Verbmobil: foundations of speech-to-speech translation. Springer Science & Business Media. Changhan Wang, Hirofumi Inaguma, Peng-Jen Chen, Ilia Kulikov, Yun Tang, Wei-Ning Hsu, Michael Auli, and Juan Pino. 2022. Simple and effective unsupervised speech translation. arXiv preprint arXiv:2210.10191. Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, and Emmanuel Dupoux. 2021a. VoxPopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. In *Proceedings of ACL*, pages 993– 1003. Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, and Juan Pino. 2020. Fairseq S2T: Fast speech-to-text modeling with Fairseq. In Proceedings of AACL: System Demonstrations, pages 33–39. Changhan Wang, Anne Wu, Jiatao Gu, and Juan Pino. 2021b. CoVoST 2 and massively multilingual speech translation. In *Proceedings of Interspeech*, pages 2247–2251. Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, and Alexis Conneau. 2021c. Largescale self- and semi-supervised learning for speech translation. In *Proceedings of Interspeech*, pages 2242–2246. Ron J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to-sequence models can directly translate foreign speech. In *Proceedings of Interspeech*, pages 2625–2629. Krzysztof Wołk and Krzysztof Marasek. 2014. Building subject-aligned comparable corpora and mining it for truly parallel sentence pairs. *Procedia Technology*, 18:126–132. Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, et al. 2021. RDrop: Regularized dropout for neural networks. In Proceedings off NeurIPS, volume 34, pages 10890– 10905. Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In *Proceedings of NIPS*, volume 30. Brian Yan, Patrick Fernandes, Siddharth Dalmia, Jiatong Shi, Yifan Peng, Dan Berrebbi, Xinyi Wang, Graham Neubig, and Shinji Watanabe. 2022. CMU's IWSLT 2022 dialect speech translation system. In Proceedings of IWSLT, pages 298–307. Chen Zhang, Xu Tan, Yi Ren, Tao Qin, Kejun Zhang, and Tie-Yan Liu. 2021. Uwspeech: Speech to speech translation for unwritten languages. In *Proceedings* of AAAI, pages 14319–14327. Ding Zhao, Tara N. Sainath, David Rybach, Pat Rondon, Deepti Bhatia, Bo Li, and Ruoming Pang. 2019. Shallow-fusion end-to-end contextual biasing. In Proceedings of Interspeech, pages 1418–1422. Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. 2019. Incorporating BERT into neural machine translation. In *Proceedings of ICLR*. Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations parallel corpus v1.0. In *Proceedings of LREC*, pages 3530–3534. ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) ![14_image_3.png](14_image_3.png) ![14_image_4.png](14_image_4.png) ![14_image_2.png](14_image_2.png) 4: // First-pass beam search 5: Ω1st *← {}* 6: Ω1st ← BeamSearch1(*H, B*1st,Ω1st) 7: Yˆ ← argmax(Ω1st) ▷ |Ω1st| = B1st 8: 9: D text ← *GetHiddenStateFromCache*(Yˆ ) ▷ D text : (*M, d*model) 10: Z ← T2UEnc(D text) ▷ Z : (*M, d*model) 11: 12: // Second-pass beam search 13: Ω2nd *← {}* 14: Ω2nd ← BeamSearch2(*Z, B*2nd,Ω2nd) 15: Uˆ ← argmax(Ω2nd) ▷ |Ω2nd| = B2nd 16: 17: // Convert discrete units to waveform 18: Wˆ ← UnitVocoder(Uˆ) 19: **return** Wˆ 20: **end function** we apply a more effective regularization based on R-Drop (Wu et al., 2021) to the first-pass decoder in addition to standard regularization such as dropout (Srivastava et al., 2014) and label smoothing (Szegedy et al., 2016). Theoretically, R-Drop reduces the inconsistency of model predictions between training and inference by dropout, thus improving the generalization ability. R-Drop duplicates the network input during training and calculates two output probability distributions with different dropout masks. Then, a constraint is introduced by minimizing the Kullback–Leibler (KL) divergence loss between the two probability distributions. We apply R-Drop to both text and unit decoders. The total training objective of UnitY with R-Drop, Ltotal, is modified from Eq. (1) as follows: $$\mathcal{L}_{\mathrm{total}}=\sum_{i=1}^{2}\mathcal{L}_{\mathrm{s2u}}(U|X_{i},Y)+\alpha\mathcal{L}_{\mathrm{kl}}^{\mathrm{s2u}}(X_{1},X_{2})$$ $$+w_{\mathrm{s2t}}(\sum_{i=1}^{2}\mathcal{L}_{\mathrm{s2t}}(Y|X_{i})+\beta\mathcal{L}_{\mathrm{kl}}^{\mathrm{s2t}}(X_{1},X_{2})),\tag{2}$$ where Xiis a duplicated input from X, L s2u kl and L s2t kl are KL losses for the unit and text decoders, ws2t is a weight for the S2TT loss, and α and β are weights for the KL losses, respectively. We implement R-Drop by duplicating inputs instead of feeding them to the network twice. Given a set of unique inputs X, the general KL loss Lkl in R-Drop is formulated as follows: $$\begin{array}{c}{{{\mathcal L}_{\mathrm{kl}}({\bf X}_{1},{\bf X}_{2})=\frac{1}{2}({\mathcal D}_{\mathrm{kl}}(P(\cdot|{\bf X}_{1})||P(\cdot|{\bf X}_{2})}}\\ {{\qquad\qquad+{\mathcal D}_{\mathrm{kl}}(P(\cdot|{\bf X}_{2}))||P(\cdot|{\bf X}_{1}))),}}\end{array}$$ where Xiis a duplicated input from X, Dkl is a KL divergence, and P is a categorical probability distribution. ## C Training Objective In this section, we describe training objectives for the baseline S2ST models. In addition to the primary S2ST/S2UT task, we introduce auxiliary S2TT and ASR tasks. We adopted an auxiliary character-level ASR task for the direct S2ST models trained from scratch on Fisher, regardless of the choice of the output unit in the first-pass decoder. We did not use the ASR task in the rest settings. ## B Training With R-Drop UnitY introduces an intermediate S2TT sub-task to make the optimization tractable while maintaining the end-to-end differentiability. However, the easier S2TT task is more likely to overfit than the primary S2UT task. To tackle this problem, Algorithm 1 shows the two-pass beam serach decoding algorithm of UnitY as discussed in §2.3. We first encode a source speech X with the speech encoder SpeechEnc and map it to the encoder output H. The first-pass decoder takes H as input and generates a text sequence. *BeamSearch*1 is a first-pass beam search function that takes an empty hypothesis set Ω1st and returns the beam candidates. We take the best text hypothesis Yˆ and get the corresponding decoder output Dtext from a cache via the *GetHiddenStateFromCache* function. Next, the T2U encoder T2UEnc takes Dtext as input and maps it to the output Z. The second-pass decoder takes H and Z as inputs and generates a discrete unit sequence. BeamSearch2 is a second-pass beam search function that takes an empty hypothesis set Ω2nd and returns the beam candidates. We take the best discrete unit hypothesis Uˆ. Finally, the unit-based vocoder UnitVocoder converts Uˆ to the waveform Wˆ . ## A Pseudo Algorithm For Two-Pass Beam Search Decoding S2SpecT The architecture of S2SpecT is shown in Figure 2a. Given the target spectrogram S, translation Y , and transcription Ysrc, corresponding to a source speech X, the training objective of S2SpecT is formulated as: $$\mathcal{L}_{\rm total}=\mathcal{L}_{\rm s2s}(S|X)$$ $$+w_{\rm s2t}\mathcal{L}_{\rm s2t}(Y|X)+w_{\rm asr}\mathcal{L}_{\rm asr}(Y_{\rm src}|X),\tag{3}$$ where Ls2s is the primary S2ST loss, Ls2t is the auxiliary S2TT loss, Lasr is the auxiliary ASR loss, ws2t is a weight for the S2TT loss, and wasr is a weight for the ASR loss, respectively. Note that RDrop is not used because the output of the primary S2ST task is continuous. We adopt the autoregressive decoder of Transformer TTS (Li et al., 2019) as the spectrogram decoder. Therefore, Ls2s is defined as a sum of the L1 loss L1, L2 loss L2, and end-of-sentence (EOS) prediction loss Leos as follows: $${\mathcal{L}}_{\mathrm{s2s}}(S|X)={\mathcal{L}}_{1}+{\mathcal{L}}_{2}+{\mathcal{L}}_{\mathrm{eos}}.$$ S2SpecT2 The architecture of S2SpecT2 is shown in Figure 2b. The training objective of S2SpecT2 is formulated as: $$\mathcal{L}_{\text{total}}=\sum_{i=1}^{2}\mathcal{L}_{\text{s2s}}(S|X_{i},Y)$$ $$+w_{\text{s2t}}(\sum_{i=1}^{2}\mathcal{L}_{\text{s2t}}(Y|X_{i})+\beta\mathcal{L}_{\text{kl}}^{\text{s2t}}(X_{1},X_{2}))$$ $$+w_{\text{asr}}(\sum_{i=1}^{2}\mathcal{L}_{\text{asr}}(Y_{\text{src}}|X_{i})+\gamma\mathcal{L}_{\text{kl}}^{\text{asr}}(X_{1},X_{2})),$$ where Xiis a duplicated input from X, L s2t kl is the R-Drop's KL loss for the first-pass decoder, L s2t kl is the R-Drop's KL loss for the auxiliary ASR decoder, and β and γ are the corresponding weights for the R-Drop's KL losses, respectively. Unlike Eq. (3), the primary S2ST task depends on the output from the first-pass decoder. We apply RDrop to the S2TT and ASR tasks only. We also investigated applying R-Drop to the second-pass spectrogram decoder by minimizing the difference of two outputs in the continuous space, but the training was unstable. S2UT The architecture of S2UT is shown in Figure 2c. In addition to the primary S2UT loss and auxiliary S2TT and ASR losses, we use a CTC loss on top of the unit decoder following Lee et al. (2022a). The training objective of the S2UT model is formulated as: $$\mathcal{L}_{\text{total}}=\sum_{i=1}^{2}\mathcal{L}_{\text{s2u}}(U|X_{i})+\alpha\mathcal{L}_{\text{kl}}^{\text{s2u}}(X_{1},X_{2})$$ $$+w_{\text{ctc}}\sum_{i=1}^{2}\mathcal{L}_{\text{ctc}}(Y|D_{i}^{\text{unit}})$$ $$+w_{\text{s2t}}(\sum_{i=1}^{2}\mathcal{L}_{\text{s2t}}(Y|X_{i})+\beta\mathcal{L}_{\text{kl}}^{\text{s2t}}(X_{1},X_{2}))$$ $$+w_{\text{arsr}}(\sum_{i=1}^{2}\mathcal{L}_{\text{arsr}}(Y_{\text{src}}|X_{i})+\gamma\mathcal{L}_{\text{kl}}^{\text{asr}}(X_{1},X_{2})),$$ where Ls2u is the primary S2UT loss, L s2u kl is the R-Drop's KL loss for the unit decoder, Lctc is the CTC loss, Dunit iis the unit decoder output for the i-th forward pass, α is a weight for the R-Drop's KL loss, and wctc is a weight for the CTC loss, respectively. Unlike Eq. (2), there is no dependency between the primary S2UT task and auxiliary S2TT task except for sharing the same encoder. S2TT, ASR We also apply R-Drop to S2TT and ASR tasks. The training objective of the S2TT model is formulated as: $${\mathcal{L}}_{\mathrm{total}}=\sum_{i=1}^{2}{\mathcal{L}}_{\mathrm{s2t}}(Y|X_{i})+\beta{\mathcal{L}}_{\mathrm{kl}}^{\mathrm{s2t}}(X_{1},X_{2}).\quad(6)$$ Similarly, the training objective of the ASR model is formulated as: $${\mathcal{L}}_{\mathrm{total}}=\sum_{i=1}^{2}{\mathcal{L}}_{\mathrm{asr}}(Y_{\mathrm{src}}|X_{i})+\gamma{\mathcal{L}}_{\mathrm{kl}}^{\mathrm{asr}}(X_{1},X_{2}).\eqno(7)$$ ## D Data Fisher Es→En (Post et al., **2013)** This corpus contains 170-hour Spanish conversational telephone speech with the corresponding transcriptions as well as the English translations. The target speech is synthesized by a high-quality inhouse TTS model trained with a single female speaker (Lee et al., 2022a). CVSS-C (Jia et al., **2022c)** CVSS is a public multilingual S2ST corpus based on CoVoST2 (Wang et al., 2021b). It covers 21 language | Corpus | Language direction | | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------| | En→Es | Es→En | | | CoVoST2 [112 hours] (Wang et al., 2021b) | | | | S2TT | Europarl-ST [75.6 hours] (Iranzo-Sánchez et al., 2020) | Europarl-ST [20.6 hours] | | Must-C [495 hours] (Di Gangi et al., 2019) | mTEDx [63.4 hours] (Elizabeth et al., 2021) | | | Librispeech [960 hours] (Panayotov et al., 2015) | MLS [918 hours] (Pratap et al., 2020) | | | TEDLIUM3 [452 hours] (Rousseau et al., 2012) | | | | ASR | Common Voice v7 [290 hours] | | | Common Voice v7 [1203 hours] (Ardila et al., 2020) | | | | MT Supervised MT1 | CCMatrix [86.3M sentences] (Schwenk et al., 2021) | - | | OpenSubtitle2018 [60M sentences] (Lison et al., 2018) UNCorpus [21.8M sentences] (Ziemski et al., 2016) EUBookshop v2 [5.2M sentences] (Skadin,š et al., 2014) Europarl v10 [1.9M sentences] (Koehn, 2005) Wikipedia v1.0 [1.8M sentences] (Wołk and Marasek, 2014) TED2020 v1 [0.4M sentences] (Reimers and Gurevych, 2020) Europarl-ST [32k sentences] Must-C [260k sentences] mTEDx [3.6k sentences] CoVosST2 [79k sentences] | | | | T2U/TTS | CSS100 [23.8 hours] (Park and Mulc, 2019) | LJSpeech [24 hours] (Ito and Johnson, 2017) | | Unlabeled text t-mBART | CC100 [5.6B tokens] (Conneau et al., 2020) | | | Unlabeled speech wav2vec2.0 | Libri-Light [60k hours] (Kahn et al., 2020) | VoxPopuli Es [16k hours] (Wang et al., 2021a) | | Supervised MT2 (Cascaded S2ST) | VoxPopuli En [14k hours] VoxPopuli Es [16k hours] | | | u-mBART | Libri-Light [60k hours] VoxPopuli En [4.5k hours] VoxPopuli Es [4.5k hours] | | | mHuBERT | VoxPopuli Fr [4.5k hours] | | | Table 6: Statistics for the multi-domain En↔Es corpora | | | | Model | URL | | | En wav2vec2.0 | https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/enhanced_direct_s2st_discrete_units.md#wav2vec-20 | | | Es wav2vec2.0 | https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/enhanced_direct_s2st_discrete_units.md#wav2vec-20 | | | En HuBERT | https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/direct_s2st_discrete_units.md | | | mHuBERT | https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/textless_s2st_real_data.md | | | En-Es u-mBART | https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/s2st_finetuning/unit_mBART/checkpoint.pt | | | En Transformer TTS | https://huggingface.co/facebook/tts_transformer-en-ljspeech | | | Es Transformer TTS | https://huggingface.co/facebook/tts_transformer-es-css10 Table 7: Links to pre-trained self-supervised models and TTS models | | directions to English. We use the CVSS-C part of the CVSS corpus, in which a single-speaker female TTS synthesizes the target speech. We combine all language directions to train a single X-to-En multilingual model. Multi-domain En↔Es (Popuri et al., **2022)** Following Popuri et al. (2022), we use all samples from multiple public S2TT corpora in each direction to improve the robustness of model training (Jia et al., 2022b; Chan et al., 2021). We also use all samples from validation sets in all domains for checkpoint selection. We further augment the S2ST training data by pseudo-labeling ASR corpora with MT and T2U/TTS models. We use the TTS model in the cascaded system to synthesize the target speech for direct speech-to-spectrogram models. For direct speech-to-unit models, we use a T2U model (Lee et al., 2022b) to generate discrete units on the ASR corpora and the TTS+HuBERT pipeline for the S2T corpora. Both T2U and TTS models are based on Transformer. We train En and Es T2U/TTS models on the LJSpeech (Ito and Johnson, 2017) and CSS10 (Park and Mulc, 2019) corpora, respectively. For En→Es, we use all samples from EuroparlST (Iranzo-Sánchez et al., 2020) and MustC (Di Gangi et al., 2019) and augment the training data by TEDLIUM3 (Rousseau et al., 2012), Librispeech (Panayotov et al., 2015), and Common Voice (Ardila et al., 2020), resulting in 3180hour source speech. We removed samples overlapped with mtedx dev/test sets from TEDLIUM3. For Es→En, we use all samples from CoVoST2, Europarl-ST, and mTEDx (Elizabeth et al., 2021), and augment the training data by Common Voice and multilingual Librispeech (MLS) (Pratap et al., 2020), resulting in 1404-hour source speech. In Table 6, we list all the datasets used in each task. ## E Pre-Processing Speech We convert source audio to 16kHz and generate target speech with 22kHz. When extracting discrete units, we downsample the target speech to 16kHz. For filterbank features, we extract 80dimensional coefficients on both the source and target sides. We apply utterance-level cepstral meanvariance normalization to both inputs. Discrete units We extract discrete units with an English HuBERT trained on Librispeech after performing k-means clustering with 100 clusters on Fisher (Lee et al., 2022a). For the rest corpora, we extract discrete units with a multilingual HuBERT (mHuBERT) (Popuri et al., 2022) trained on En, En, and Fr parts of VoxPopuli (Wang et al., 2021a) with the number of k-means clusters of 1000. Text We lowercase text data and remove all punctuation marks except for apostrophes. When initializing the text decoder in two-pass direct S2ST models randomly, we build vocabularies of 1k, 6k, and 2k unigram subword units (Kudo, 2018) with the SentencePiece toolkit (Kudo and Richardson, 2018) for the Fisher, CVSS-C, and multi-domain corpora, respectively. When pre-training the text decoder with t-mBART, we use the same vocabulary as t-mBART. The reference target translation to calculate ASR-BLEU is normalized with lowercasing, removal of punctuation marks, conversion of digits to spoken forms, and removal of non-verbal words in parentheses like "(Applause)" or "(Music)." Data filtering For discrete unit generation with a T2U model, we found that target discrete units were over-generated in long-form samples. We filtered out such samples by thresholding with a ratio of the sequence length of the discrete units over the number of corresponding source speech frames. We used a threshold of 0.7 for the multidomain En→Es corpus while using ∞ for the rest. We used the same number of samples for all direct S2ST models for a fair comparison. ## F Pre-Training In Table 7, we list all the pre-trained self-supervised models and TTS models used in §4. wav2vec2.0 We use 24-layer Conformer wav2vec2.0 (Baevski et al., 2020) models trained on Libri-Light (Kahn et al., 2020) for En and VoxPopuli for Es, respectively. w2v-BERT Same as Jia et al. (2019a), we pretrain the w2v-BERT (Chung et al., 2021) on approximately 430k hours of unlabeled speech data in 51 languages spanning from VoxPopuli, Common Voice, MLS, BABEL (Harper et al.; Gales et al., 2014), and VoxLingua107 (Valk and Alumäe, 2021). The w2v-BERT was composed of 24 Conformer layers with 0.6 billions of parameters. Text-based mBART (t-mBART) We train a tmBART model with En and Es unlabeled text on CC100 (Conneau et al., 2020). We use of a 65k unigram subword unit for the vocabulary. For multilingual experiments on CVSS-C, we use mBART50 (Tang et al., 2020) with multilingual fine-tuning to En. The vocabulary size is a 250k subword unit. Unit-based mBART (u-mBART) We use a umBART model trained with En and Es unlabeled speech on VoxPopuli. The unit vocabulary is the same as that of the mHuBERT model. ## G Architecture Details Let dmodel be a model dimension of Transformer, dff be an inner dimension of the FFN layers, and Nhead be the number of attention heads. Speech encoder We used a 16-layer Conformer encoder stacked on 2-dimensional convolution blocks when training models from scratch. The convolution blocks reduced the input sequence length by a factor of 4. We set (dmodel, dff,Nhead) to (256, 2048, 4). We set the kernel size of the depthwise convolution in the convolution module of each Conformer block to 31. When pre-training the encoder with wav2vec2.0 and w2v-BERT, we used a 24-layer Conformer encoder and stacked a one-layer length adaptor (Li et al., 2021) on it. Because an output frame of wav2vec2.0 and w2vBERT corresponds to 20ms and the length adaptor halved the sequence length, the frame rate of every final encoder output corresponds to 40ms in both | ID | #GPU | # of frames × | Learning rate Warmup | Dropout | Label | | | | | | | |-----------------------|--------|-----------------|------------------------|-----------|---------|-----|------|-----|-----|------|-----| | gradient accumulation | | | | | | | | | | | | | A6 | 4 | 40k×1 | 1.3e-3 | | | | | | | | | | A7 | 16 | 2k×4 | 1.0e-3 | 0.1 | - | - | - | - | 8.6 | - | | | A11 | 16 | 20k×1 | 1.0e-3 | 0.3 | 0.1 | 0.1 | - | 0.0 | 0.0 | - | | | A12 | 16 | 4k×2 | 1.0e-3 | 0.1 | - | - | - | - | - | - | | | A15 | 16 | 20k×1 | 1.5e-3 | 0.3 | 0.1 | 0.1 | - | 3.0 | 3.0 | - | | | A16 | 16 | 4k×2 | 1.0e-3 | 0.1 | - | 0.1 | - | - | 3.0 | - | | | A18 | 4 | 20k×1 | 8.6e-4 | 0.3 | 8.0 | 8.0 | 1.6 | 1.0 | 1.0 | 1.0 | | | A19 | 16 | 2k×4 | 1.0e-3 | 0.1 | - | - | - | - | - | 1.0 | | | A20 | 4 | 20k×1 | 6.0e-4 | 0.3 | 8.0 | 8.0 | - | 3.0 | 3.0 | 1.0 | | | A21 | 16 | 2k×4 | 1.0e-3 | 0.1 | - | 8.0 | - | - | 3.0 | 1.0 | | | B3 | 8 | 35k×4 | 2.1e-3 | 0.2 | | | | | | | | | 10k | 0.2 | | | | | | | | | | | | B4 | 32 | 2k×24 | 1.0e-3 | 0.1 | 0.2 | 0.0 | - | - | 5.0 | 5.0 | - | | B6 | 32 | 40k×1 | 1.0e-3 | 0.1 | 0.2 | - | 0.1 | - | - | 0.0 | - | | B7 | 32 | 40k×1 | 1.0e-3 | 0.1 | 0.2 | - | 0.1 | - | - | 0.0 | - | | B8 | 32 | 2k×24 | 1.0e-3 | 0.1 | 0.2 | - | - | - | - | - | - | | B15 | 32 | 40k×1 | 1.1e-3 | 0.1 | 0.2 | - | 0.1 | - | - | 10.0 | - | | B16 | 32 | 40k×1 | 1.0e-3 | 0.1 | 0.2 | - | 0.1 | - | - | 10.0 | - | | B17 | 32 | 2k×24 | 1.0e-3 | 0.1 | 0.2 | - | 0.1 | - | - | 5.0 | - | | B18 | 32 | 20k×2 | 8.6e-4 | 0.3 | 0.2 | - | 8.0 | 1.6 | - | 0.5 | 0.5 | | B19 | 32 | 20k×2 | 7.0e-4 | 0.3 | 0.2 | - | 8.0 | 1.6 | - | 0.5 | 0.5 | | B20 | 32 | 2k×24 | 1.0e-3 | 0.1 | 0.2 | - | - | - | - | - | 0.5 | | B21 | 32 | 20k×2 | 1.5e-3 | 0.3 | 0.2 | - | 8.0 | - | - | 1.5 | 1.5 | | B22 | 32 | 20k×2 | 7.0e-4 | 0.3 | 0.2 | - | 8.0 | - | - | 5.0 | 1.5 | | B23 | 32 | 2k×24 | 1.0e-3 | 0.1 | 0.2 | - | 8.0 | - | - | 5.0 | 1.5 | | C1' | 10k | | | | | | | | | | | | C2' | 1k | 0.2 | - | - | - | - | 10.0 | - | | | | | C3 | 5k | 0.2 | - | 8.0 | - | - | 10.0 | - | | | | | C4 | 5k | 0.2 | - | 8.0 | - | - | 10.0 | - | | | | | C5' | 1k | 0.2 | - | - | - | - | - | 0.0 | | | | | C6 | 1k | 0.2 | - | 8.0 | - | - | 10.0 | 0.0 | | | | | C7 | 1k | 0.2 | - | 8.0 | - | - | 10.0 | 0.0 | | | | | 1k | | | | | | | | | | | | | 32 | 2k×30 | 5.0e-4 | 0.1 | | | | | | | | | ![18_image_0.png](18_image_0.png) S2SpecT We used a six-layer Transformer spectrogram decoder. We set (dmodel, dff,Nhead) to (512, 2048, 8). When pre-training the speech encoder with wav2vec2.0 or w2v-BERT, we doubled these three values. We set the pre-net dimension and reduction factor of the spectrogram decoder to 32 and 3, respectively. S2SpecT2 Let Nt2s be the depth of the T2S encoder. We set (N1st,N2nd,Nt2s) to (4, 6, 2) on Fisher and CVSS-C. On the multi-domain corpus, we set (N1st,N2nd,Nt2s) to (12, 6, 2) when pretraining the first-pass decoder with t-mBART. Otherwise, we set (N1st,N2nd,Nt2s) to (6, 6, 2). We used the same dmodel, dff, and Nhead as S2SpecT in all the settings. S2UT We used a six-layer Transformer unit decoder. When training models from scratch on Fisher, we set (dmodel, dff,Nhead) to (256, 2048, 4). We set (dmodel, dff,Nhead) to (512, 2048, 8) on CVSS-C. When pre-training the speech encoder with wav2vec2.0 or w2v-BERT, we set (dmodel, dff,Nhead) to (1024, 4096, 16). UnitY We used the same first-pass decoder as S2SpecT2 in all the settings. We set (N2nd,Nt2u) to (2, 2). We used the same dmodel, dff, and Nhead as the S2UT model in all the settings. S2TT We used a six-layer Transformer decoder. When initializing it with t-mBART, we set the depth to 12. ASR We used the same architecture as the S2TT model except for the vocabulary in all the settings. ## H Training Details We optimized all models with the mixed precision training using 32GB V100 GPUs (Micikevicius et al., 2018). When fine-tuning the speech encoder from wav2vec2.0 and w2v-BERT, we updated all parameters in the speech encoder. For multilingual training with speech encoder pre-training with w2vBERT on CVSS-C, we over-sampled training data of low-resource directions with an inverse temperature of 0.6, following (Arivazhagan et al., 2019). We list the training hyperparameters in Table 8. The training of A*, B*, and C* models converged within approximately 1, 3, and 5 days, respectively. | ID | Model | Encoder | ASR-BLEU (↑) dev dev2 test | | | |------------------------------------------------------------------------------------|--------------------------------------|-------------------------------|------------------------------|------|------| | A0 | Synthetic target (Lee et al., 2022a) | 88.5 | 89.4 | 90.5 | | | Cascaded systems A1 ASR → MT → TTS | LSTM (Lee et al., 2022a) | 42.1 | 43.5 | 43.9 | | | A2 | LSTM (Jia et al., 2019b) | 39.4 | 41.2 | 41.4 | | | A3 | LSTM (Jia et al., 2022b) | - | - | 43.3 | | | A4 | LSTM (Lee et al., 2022a) | 38.5 | 39.9 | 40.2 | | | A5 | Transformer (Dong et al., 2022) | 44.3 | 45.4 | 45.1 | | | A6 | Conformer | 47.8 | 48.9 | 48.3 | | | A7 | Conformer wav2vec2.0 | 51.0 | 52.2 | 52.1 | | | Direct speech-to-spectrogram systems A8 S2TT → TTS Transformer (Jia et al., 2019b) | 30.1 | 31.5 | 31.1 | | | | A9 | Transformer (Lee et al., 2022a) | - | - | 33.2 | | | A10 | Transformer (Dong et al., 2022) | 42.4 | 43.3 | 43.6 | | | A11 | Conformer | 43.9 | 44.4 | 43.8 | | | A12 | Conformer wav2vec2.0 | 45.5 | 47.6 | 46.3 | | | A13 | Translatotron2 | Conformer (Jia et al., 2022b) | - | - | 42.4 | | A14 | Conformer w2v-BERT (Li et al., 2022) | - | - | 52.2 | | | A15 | S2SpecT2 | Conformer | 50.4 | 51.1 | 50.8 | | A16 | Conformer wav2vec2.0 | 58.4 | 59.5 | 58.6 | | | Direct speech-to-unit systems A17 Transformer (Lee et al., 2022a) | - | - | 39.9 | | | | A18 | Conformer | 46.2 | 47.6 | 47.4 | | | S2UT | | | | | | | A19 | Conformer wav2vec2.0 | 53.4 | 53.9 | 53.7 | | | A20 | UnitY | Conformer | 50.5 | 51.6 | 51.4 | | A21 | Conformer wav2vec2.0 | 55.1 | 56.5 | 55.9 | | | S2SpecT | | | | | | ![19_image_0.png](19_image_0.png) ## I Additional Experimental Results In this section, we present additional experimental results in §4. FLOPs In Figure 5, we show the results of FLOPs measured with a subset of the multi-domain Es→En dev set, as discussed in §4.3. UnitY achieved 1.65× and 3.19× FLOPs reduction over S2SpecT2 and S2UT models, respectively. Fisher Es→En The results on Fisher are shown in Table 9. We report average scores over three runs with different random seeds. Among our four direct systems trained from scratch (A11, A15, A18, A20), UnitY (A20) achieved the best ASRBLEU. Our S2UT (A18) and S2SpecT2 (A15) outperformed the previous studies (A13, A17) by a large margin.9 Because S2SpecT2 outperformed S2UT, the two-pass decoding was the main factor of the improvements although it was complementary to targeting discrete units. Moreover, the two-pass direct models (A15, A20) outperformed a cascaded system (A6). Next, we pre-trained the speech encoder with wav2vec2.0 (A12, A16, A19, A21).10 We confirmed that all the models benefited from the pretraining, but the gain was small for S2SpecT. Unlike when training the models from scratch, S2SpecT2 gained the most and achieved the best test ASR-BLEU, 58.3. To the best of our knowledge, this is the new state-of-the-art S2ST result 9A15 predicts phonemes while A16 predicts subwords in the first pass. 10We did not pre-train the text decoder with t-mBART because it was not helpful on this corpus. This is because Fisher is a conversational domain, which is very different from text data used for t-mBART pre-training. We could make the text decoder pre-training effective by including conversational data during t-mBART pre-training, which we leave future work. | ID | Model | ASR-BLEU (↑) | | | | | | | | | | | | | | | | | | | | | | |------------------------------------------------------------|----------------------|----------------|------|------|------|------|------|----------------|------|----------------|------|------|------|------|------|------|------|------|------|------|------|------|------| | Avg. | High | Mid | Low | | | | | | | | | | | | | | | | | | | | | | fr | de | ca | es | fa | it | ru | zh | pt | nl | tr | et | mn | ar | lv | sl | sv | cy | ta | ja | id | | | | | B0 | Synthetic target♢ | 91.1 | 84.6 | 88.4 | 92.0 | 88.6 | 91.7 | 89.5 | 94.0 | 77.8 | 93.1 | 90.6 | 92.7 | 89.3 | 92.4 | 94.2 | 94.8 | 94.9 | 94.1 | 92.0 | 90.6 | 95.3 | 92.6 | | Cascaded systems B1 S2TT → TTS♢ | 10.6 | 31.2 | 23.9 | 26.8 | 33.3 | 3.4 | 28.1 | 24.4 | 6.8 | 14.8 | 9.8 | 5.1 | 1.7 | 0.3 | 4.1 | 2.3 | 0.6 | 1.4 | 2.1 | 0.2 | 0.7 | 0.9 | | | B2 | + ASR pre-training | 12.7 | 32.9 | 26.2 | 28.6 | 34.9 | 5.6 | 30.2 | 27.1 | 8.7 | 19.8 | 14.4 | 10.7 | 3.2 | 0.6 | 7.8 | 2.8 | 2.0 | 3.4 | 5.0 | 0.2 | 0.9 | 1.6 | | B3 | S2TT → TTS | 7.8 | 18.3 | 16.1 | 18.5 | 19.9 | 4.2 | 18.1 | 17.6 | 3.7 | 15.8 | 11.5 | 6.5 | 2.1 | 0.2 | 2.2 | 1.3 | 2.3 | 1.0 | 2.9 | 0.2 | 0.3 | 0.3 | | B4 | + w2v-BERT + t-mBART | 14.9 | 20.5 | 20.0 | 21.6 | 22.1 | 8.5 | 21.8 | 27.6 | 5.5 | 27.6 | 21.6 | 13.6 | 13.2 | 1.7 | 12.7 | 10.6 | 17.4 | 18.5 | 11.5 | 1.3 | 3.7 | 12.0 | | Direct speech-to-spectrogram systems B5 Translatotron♢ 3.4 | 15.5 | 6.9 | 11.0 | 14.1 | 1.4 | 9.3 | 4.3 | 1.5 | 2.2 | 2.1 | 1.2 | 0.1 | 0.1 | 0.1 | 0.2 | 0.3 | 0.4 | 0.3 | 0.1 | 0.2 | 0.1 | | | | B6 | S2SpecT | 7.6 | 24.1 | 17.8 | 20.3 | 25.1 | 1.8 | 20.3 | 18.7 | 2.5 | 9.8 | 9.0 | 3.8 | 0.5 | 0.1 | 0.5 | 1.2 | 0.8 | 1.3 | 0.7 | 0.1 | 0.2 | 0.2 | | B7 | + S2TT pre-training | 9.6 | 26.3 | 20.1 | 21.8 | 27.5 | 6.2 | 22.3 | 21.9 | 5.7 | 12.6 | 11.4 | 9.1 | 2.7 | 0.3 | 4.3 | 1.3 | 1.8 | 1.5 | 4.0 | 0.3 | 0.6 | 0.7 | | B8 | + w2v-BERT | 16.6 | 31.8 | 27.3 | 28.4 | 34.4 | 8.9 | 30.0 | 34.1 | 5.0 | 31.6 | 23.3 | 11.5 | 10.0 | 0.3 | 10.8 | 14.4 | 14.5 | 22.4 | 4.8 | 0.1 | 0.6 | 5.3 | | B9 | Translatotron2♢ | 8.7 | 28.3 | 19.7 | 23.5 | 30.1 | 2.4 | 24.1 | 19.6 | 4.5 | 12.5 | 6.5 | 3.8 | 0.6 | 0.2 | 1.7 | 1.5 | 0.4 | 1.3 | 0.9 | 0.1 | 0.5 | 0.4 | | B10 + Transformer decoder♠ | 10.1 | 29.5 | 22.3 | 25.0 | 30.8 | 3.4 | 26.0 | 21.7 | 5.5 | 14.3 | 10.5 | 6.6 | 1.1 | 0.2 | 3.8 | 3.0 | 2.3 | 2.8 | 1.6 | 0.1 | 0.5 | 0.8 | | | B11 + S2TT pre-training♢ | 12.0 | 32.4 | 24.8 | 28.2 | 33.4 | 6.3 | 28.6 | 23.2 | 6.3 | 18.3 | 15.8 | 10.6 | 2.5 | 0.4 | 5.4 | 2.3 | 3.1 | 3.2 | 4.5 | 0.1 | 1.0 | 1.0 | | | B12 + w2v-BERT♠ | 17.9 | 33.6 | 30.6 | 30.1 | 35.9 | 6.0 | 32.5 | 38.9 | 5.2 | 31.9 | 29.3 | 9.2 | 16.0 | 0.2 | 10.4 | 15.6 | 17.8 | 25.9 | 4.2 | 0.3 | 0.9 | 1.5 | | | B13 + mSLAM♠ | 19.3 | 33.9 | 31.5 | 30.6 | 36.8 | 7.2 | 33.7 | 41.6 | 6.4 | 34.1 | 31.1 | 16.1 | 17.1 | 0.3 | 10.0 | 14.4 | 22.9 | 28.4 | 5.4 | 0.2 | 1.3 | 2.5 | | | B14 | ++ TTS augmentation♠ | 22.0 | 34.5 | 32.0 | 30.7 | 37.1 | 8.2 | 33.8 42.6 10.6 | 34.0 | 31.8 23.9 17.2 | 1.1 | 22.4 | 15.6 | 23.3 | 31.1 | 7.6 | 0.6 | 5.5 | 18.5 | | | | | | B15 S2SpecT2 | 11.3 | 31.7 | 25.9 | 27.4 | 32.8 | 4.6 | 28.4 | 27.5 | 7.0 | 18.0 | 15.4 | 9.2 | 1.7 | 0.3 | 1.7 | 2.5 | 1.3 | 1.8 | 1.9 | 0.2 | 0.7 | 1.0 | | | B16 + S2TT pre-training | 13.1 | 31.9 | 26.1 | 28.0 | 33.3 | 7.9 | 28.8 | 28.6 | 8.5 | 20.3 | 17.8 | 13.9 | 4.6 | 0.6 | 6.4 | 2.6 | 4.8 | 2.4 | 7.4 | 0.4 | 0.6 | 1.2 | | | B17 + w2v-BERT + t-mBART | 18.6 | 32.5 | 30.9 | 31.0 | 34.1 | 13.9 | 30.7 | 36.9 | 10.6 | 31.2 | 26.1 | 18.4 | 11.6 | 1.9 | 14.7 | 10.4 | 15.1 | 16.2 | 10.6 | 1.1 | 3.9 | 9.7 | | | Direct speech-to-unit systems B18 S2UT | 9.1 | 28.3 | 21.7 | 24.6 | 29.0 | 2.5 | 25.2 | 21.7 | 4.0 | 11.1 | 10.2 | 4.9 | 0.8 | 0.1 | 0.9 | 1.8 | 1.4 | 1.2 | 0.5 | 0.1 | 0.4 | 0.7 | | | B19 + S2TT pre-training | 11.4 | 29.4 | 23.3 | 25.7 | 30.5 | 7.4 | 26.5 | 24.6 | 6.9 | 16.7 | 15.6 | 10.6 | 3.3 | 0.5 | 4.6 | 2.2 | 2.6 | 1.4 | 4.7 | 0.3 | 0.9 | 1.0 | | | B20 + w2v-BERT + u-mBART | 20.8 | 32.7 | 28.5 | 30.6 | 34.8 | 12.8 | 31.7 | 37.5 | 7.6 | 37.2 | 27.2 | 18.2 | 15.0 | 1.8 | 18.6 | 18.5 | 20.5 | 29.8 | 13.1 | 1.3 | 4.0 | 16.2 | | | B21 UnitY | 12.0 | 30.9 | 25.5 | 27.2 | 32.3 | 5.1 | 28.2 | 28.2 | 7.2 | 20.3 | 17.1 | 9.1 | 2.5 | 0.4 | 2.2 | 3.7 | 6.1 | 1.8 | 2.3 | 0.1 | 1.2 | 1.0 | | | B22 + S2TT pre-training | 13.0 | 32.1 | 26.8 | 29.1 | 33.4 | 8.3 | 29.4 | 27.6 | 7.9 | 20.3 | 19.7 | 12.1 | 3.5 | 0.6 | 4.6 | 2.5 | 4.9 | 1.9 | 5.8 | 0.3 | 1.0 | 1.0 | | | B23 + w2v-BERT + t-mBART 24.5 | 35.2 | 32.6 | 33.3 | 37.2 | 14.9 | 35.0 | 42.3 | 10.8 | 41.7 | 32.5 | 22.2 | 18.7 | 2.7 | 24.6 | 21.3 | 26.6 | 34.1 | 16.5 | 1.8 | 8.0 | 22.9 | | | | F1 | UnitY | |------|---------| | (ASR-)BLEU (↑) | | | | | |--------------------|-----------------|---------|------|----| | first-pass decoder | Text | Speech | | | | Random | 34.8 | 30.7 | | | | F2 | t-mBART | 38.3 | 33.2 | | | F3 | Unsupervised MT | 38.2 | 33.2 | | | F4 | Supervised MT1 | 36.6 | 33.0 | | | F5 | Supervised MT2 | 37.5 | 33.3 | | | F6 | S2TT (F7) | 37.8 | 32.5 | | | F7 | S2TT | t-mBART | 38.0 | - | on this corpus. However, UnitY has an advantage of decoding efficiency over S2SpecT2 as discussed in §4.3. All direct models (A16, A19, A21) except for S2SpecT outperformed the corresponding cascaded system (A7). CVSS-C We show the full results of each language direction on CVSS-C in Table 10. Pre-training first-pass decoder We explored a better pre-training strategy for the first-pass text decoder in UnitY. We investigated pre-training it with an MT model trained with bitext data from scratch (Supervised MT1, *Supervised MT2*). Supervised MT1 used CCMatrix (Schwenk et al., 2021) while Supervised MT2 is the MT model in the cascaded system11. Moreover, we fine-tuned the t-mBART model to the MT task in an unsupervised MT way via online back translation (Liu et al., 2020) on CC100 (*unsupervised MT*). Furthermore, we studied initializing the speech encoder and the text decoder with a separate direct S2TT model. The S2TT model was fine-tuned from wav2vec2.0 and t-mBART models on the same corpus. After the initialization, we fine-tuned the whole parameters of UnitY except FFN layers in the first-pass text decoder (*S2TT*). The results in Table 11 showed that pre-training the first-pass decoder with the vanilla t-mBART (F2) or the unsupervised MT model (F3) was the most effective. Pre-training with supervised MT models (F4, F5) did not improve performance, even for the first pass. This is consistent with a finding in Jia et al. (2022a) although they pre-train the first-pass phoneme decoder of Translatotron2 with a phoneme-based supervised MT model. Therefore, leveraging a separate MT system is effective for generating weak supervisions (Popuri et al., 2022) rather than parameter initialization. Pretraining a part of UnitY with an independent S2TT model (F7) was not helpful either. Surprisingly, the BLEU score from the text decoder in UnitY was 11We used OpenSubtitle2018, UNCorpus, EUBookshop v2, Europarl v10, Wikipedia v1.0, and TED2020 v1 for training. better than that of F7. Therefore, training signals from the unit decoder never affect the text decoder. Ablation study In Table 12, we show full results of the ablation study presented in §5.1. An auxiliary CTC objective for the unit decoder, as used for the S2UT model (Lee et al., 2022a), was not helpful for UnitY (D10). This was because the introduction of the first-pass decoder already eased for the second-pass decoder to learn monotonic alignments. Output unit for first-pass decoder In Table 13, we show full results of the comparison of the output units for the first-pass decoder in two-pass direct S2ST models presented in §5.2. The results showed that the subword unit was the best for UnitY regardless of pre-training the speech encoder with wav2vec2.0. In contrast, in the case of S2SpecT2, the best output unit differed according to whether we pre-trained the speech encoder or not. The phoneme unit was best when training the model from scratch (E1) while the subword unit was best when pre-training the encoder (E3'). However, predicting subwords in the first pass led to the best BLEU score for the text output in all the settings. ASR-chrF Following a finding that ASR-chrF is a more robust evaluation metric than ASR-BLEU in Salesky et al. (2021), we also calculated ASRchrF on Fisher, CVSS-C, and multi-domain corpora in Table 14, Table 15, and Table 16, respectively. Overall, we confirmed the similar trends to ASRBLEU. ## I.1 Human Evaluation Finally, we conducted an audio-only human evaluation to assess the translation quality while removing the necessity of ASR systems. We adopted crosslingual semantic textual similarity (XSTS) (Licht et al., 2022) and percent acceptable translations. Mean translation score We used XSTS, which emphasizes adequacy rather than fluency, as the most appropriate human evaluation protocol. Annotators judged the semantic similarity between the source and the translated sentence. As a result, whether a translation conveys the original meaning is more important than whether it has perfect syntax, wording, and grammar. Annotators assigned each item a score from one to five. A score of no less than three means the meaning is at least "mostly equivalent." We treat a translation that re- ![21_image_0.png](21_image_0.png) ceived a score of no less than three as having "acceptable" quality. Annotators need to be bilingual, as they compare the source and translated sentences directly. Since XSTS is an audio-only evaluation metric, it also considers the audio quality. For each system, we computed the average XSTS score across items. We set a target of over four average XSTS for systems where we expect or desire high-quality translations. We set a target of over three average XSTS for systems where we expect a medium level of quality. Percent acceptable translations For each system, we also computed the percentage of items that received an XSTS score of three or above. We refer to this as the percent acceptable translations. This metric helps us understand what percentage of translations produced by the system can preserve meaning adequately and what percentage has very low and unacceptable quality. This metric tends to be more stable and less sensitive to annotator agreement than the average XSTS score. Evaluation setting We used the mTEDx test set (989 samples) and generated the target audio from the S2ST systems. Moreover, we randomly sampled 495 samples and generated the target audio from the reference translation followed by TTS. The reference translations serve as a reference point and a ceiling against which to compare our systems. Three bilingual annotators evaluated each item and assigned it a score from one to five. The median score was taken per item. Results The results are presented in Figure 6. 12 We confirmed that UnitY consistently outperformed the cascaded and S2UT models in both metrics. 12The models used here are early versions and slightly different from the models in Table 2. | (ASR-)BLEU (↑) | | | | | | |------------------|--------------------------------------------------|--------|--------------------|------|------| | ID | Model | Fisher | Multi-domain Es→En | | | | Text | Speech | Text | Speech | | | | D1 | S2SpecT2 | 54.4 | 49.2 | 35.0 | 30.8 | | D2 | + w/o T2S encoder | 54.3 | 17.4 | 34.9 | 25.0 | | D3 | + w/o R-Drop | 51.6 | 45.9 | 34.8 | 30.3 | | D5 | UnitY | 55.4 | 50.5 | 38.3 | 33.2 | | D6 | + w/o T2U encoder | 55.0 | 49.1 | 38.1 | 30.7 | | D7 | + w/o R-Drop | 53.2 | 48.2 | 37.7 | 32.1 | | D8 | + Cross-attention to speech encoder (sequential) | 55.4 | 50.3 | 38.2 | 33.2 | | D9 | + Cross-attention to speech encoder (parallel) | 55.3 | 50.4 | 38.1 | 33.1 | | D10 | + CTC on unit decoder | 55.3 | 50.2 | n/a | n/a | Fisher Multi-domain Es→En Text Speech Text Speech E1 S2SpecT2 Phoneme - **50.4** - – E2 Character 54.0 50.2 - – E3 Subword **54.4** 49.2 - – E1' ✓ S2SpecT2 Phoneme - 58.1 - 29.4 E2' Character 61.5 58.1 31.7 28.9 E3' Subword **62.0 58.4 33.0 30.0** E4 UnitY Phoneme - 49.8 - – E5 Character 53.7 48.9 - – E6 Subword **55.4 50.5** - – E4' ✓ UnitY Phoneme - 54.7 - 27.8 E5' Character 60.9 55.0 33.2 29.6 E6' Subword **61.2 55.1 34.1 30.1** | ID | Encoder | | |--------------|-----------|----------| | pre-training | Model | Output | | E1 | S2SpecT2 | | | E1' | ✓ | S2SpecT2 | | E4 | UnitY | | | E4' | ✓ | UnitY | | (ASR-)BLEU (↑) | | | | | |------------------|--------------------|------|--------|------| | Fisher | Multi-domain Es→En | | | | | Text | Speech | Text | Speech | | | Phoneme | - | 50.4 | - | - | | Phoneme | - | 58.1 | - | 29.4 | | Phoneme | - | 49.8 | - | - | | Phoneme | - | 54.7 | - | 27.8 | | ID | Model | Encoder | ASR-chrF (↑) | | | |------------------------------------------------------------|----------------------|-----------|----------------|-------|-------| | dev | dev2 | test | | | | | Cascaded systems A6 S2TT → TTS | Conformer | 0.642 | 0.652 | 0.649 | | | A7 | Conformer wav2vec2.0 | 0.671 | 0.684 | 0.680 | | | Direct speech-to-spectrogram systems A11 S2SpecT Conformer | 0.612 | 0.621 | 0.618 | | | | A12 | Conformer wav2vec2.0 | 0.638 | 0.655 | 0.649 | | | A15 | S2SpecT2 | Conformer | 0.649 | 0.661 | 0.657 | | A16 | Conformer wav2vec2.0 | 0.695 | 0.708 | 0.702 | | | Direct speech-to-unit systems A18 S2UT Conformer | 0.626 | 0.642 | 0.643 | | | | A19 | Conformer wav2vec2.0 | 0.677 | 0.688 | 0.685 | | | A20 | UnitY | Conformer | 0.646 | 0.658 | 0.658 | | A21 | Conformer wav2vec2.0 | 0.678 | 0.692 | 0.687 | | Table 14: ASR-chrF on Fisher Es→En corpus. The decoder in all the models is initialized randomly. S2SpecT2 is our improved version of Translatotron2. | ID | Model | ASR-chrF (↑) | | | | |-------------------------------------------------------|----------------------|----------------|-------|-------|-------| | Avg. | High | Mid | Low | | | | Cascaded systems B3 S2TT → TTS | 0.304 | 0.504 | 0.384 | 0.204 | | | B4 | + w2v-BERT + t-mBART | 0.420 | 0.533 | 0.463 | 0.365 | | Direct speech-to-spectrogram systems B6 S2SpecT 0.273 | 0.498 | 0.328 | 0.175 | | | | B7 | + S2TT pre-training | 0.311 | 0.521 | 0.377 | 0.213 | | B8 | + w2v-BERT | 0.395 | 0.582 | 0.461 | 0.306 | | B15 | S2SpecT2 | 0.306 | 0.560 | 0.389 | 0.187 | | B16 | + S2TT pre-training | 0.336 | 0.566 | 0.417 | 0.226 | | B17 | + w2v-BERT + t-mBART | 0.419 | 0.592 | 0.492 | 0.331 | | Direct speech-to-unit systems B18 S2UT | 0.294 | 0.536 | 0.356 | 0.188 | | | B19 | + S2TT pre-training | 0.329 | 0.550 | 0.405 | 0.224 | | B20 | + w2v-BERT + u-mBART | 0.445 | 0.588 | 0.495 | 0.377 | | B21 | UnitY | 0.312 | 0.564 | 0.396 | 0.192 | | B22 | + S2TT pre-training | 0.333 | 0.572 | 0.415 | 0.220 | | B23 | + w2v-BERT + t-mBART | 0.474 | 0.607 | 0.521 | 0.410 | Table 15: ASR-chrF on CVSS-C corpus. We use the S2TT model in B3 for S2TT pre-training. t-mBART and u-mBART stand for text-based mBART and unit-based mBART, respectively. All w2v-BERT encoders have 0.6B parameters. | ASR-chrF (↑) | | | | | | | | | |----------------------------------------------------------------|--------------------|-------|----------|-------------|-------|-------|-------|-------| | ID | Model | En→Es | Es→En | | | | | | | Europarl-ST | MuST-C | Avg. | CoVoST-2 | Europarl-ST | mTEDx | Avg. | | | | Cascaded systems C1' ASR→MT→TTS | 0.634 | 0.587 | 0.611 | 0.611 | 0.618 | 0.569 | 0.599 | | | C2' | S2TT→TTS | 0.639 | 0.613 | 0.626 | 0.642 | 0.620 | 0.588 | 0.620 | | Direct speech-to-spectrogram systems C3 S2SpecT2 (6L→6L) 0.634 | 0.606 | 0.620 | 0.642 | 0.484 | 0.578 | 0.568 | | | | C4 | + t-mBART (12L→6L) | 0.642 | 0.611 | 0.627 | 0.642 | 0.485 | 0.583 | 0.570 | | Direct speech-to-unit systems C5' S2UT + u-mBART | 0.610 | 0.615 | 0.613 | 0.621 | 0.587 | 0.568 | 0.592 | | | C6 | UnitY (6L→6L) | 0.643 | 0.618 | 0.631 | 0.628 | 0.591 | 0.575 | 0.598 | | C7 | + t-mBART (12L→2L) | 0.641 | 0.622 | 0.632 | 0.633 | 0.606 | 0.583 | 0.607 | Table 16: ASR-chrF on multi-domain En↔Es. The encoder in all the models is pre-trained with wav2vec2.0. t-mBART and u-mBART stand for text-based mBART and unit-based mBART, respectively. N1stL→ N2ndL stands for an N1st-layer first-pass decoder with an N2nd-layer second-pass decoder. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 ✓ A2. Did you discuss any potential risks of your work? Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3.1 And Appendix D ✓ B1. Did you cite the creators of artifacts you used? Section 3.1 and Appendix D ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We used public corpora only. All the corpora are open-sourced except Fisher Spanish.The license is described in the corpus-related papers we cited. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We didn't modify any contents in the public corpora we used. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? They are described in the corpus-related papers we cited. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3.1 and Appendix D ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We followed the same setting as previous works. ## C ✓ **Did You Run Computational Experiments?** Section 4.3 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3, Appendix E, F, G, H ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We reported average scores over 3 runs for Fisher and scores with a single run for the rest large corpora. We didn't run multiple same jobs for the latter because they are computationally expensive. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3, Appendix E, F, G, H ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We didn't hire any human annotators in our experiments. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? We didn't hire any human annotators in our experiments. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? We didn't hire any human annotators in our experiments. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? We didn't hire any human annotators in our experiments. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? We didn't hire any human annotators in our experiments.
wu-etal-2023-estimating
Estimating the Uncertainty in Emotion Attributes using Deep Evidential Regression
https://aclanthology.org/2023.acl-long.873
In automatic emotion recognition (AER), labels assigned by different human annotators to the same utterance are often inconsistent due to the inherent complexity of emotion and the subjectivity of perception. Though deterministic labels generated by averaging or voting are often used as the ground truth, it ignores the intrinsic uncertainty revealed by the inconsistent labels. This paper proposes a Bayesian approach, deep evidential emotion regression (DEER), to estimate the uncertainty in emotion attributes. Treating the emotion attribute labels of an utterance as samples drawn from an unknown Gaussian distribution, DEER places an utterance-specific normal-inverse gamma prior over the Gaussian likelihood and predicts its hyper-parameters using a deep neural network model. It enables a joint estimation of emotion attributes along with the aleatoric and epistemic uncertainties. AER experiments on the widely used MSP-Podcast and IEMOCAP datasets showed DEER produced state-of-the-art results for both the mean values and the distribution of emotion attributes.
# Estimating The Uncertainty In Emotion Attributes Using Deep Evidential Regression Wen Wu1, Chao Zhang2**, Philip C. Woodland**1 1 Department of Engineering, University of Cambridge, Cambridge, UK 2 Department of Electrical Engineering, Tsinghua University, Beijing, China {ww368, pcw}@eng.cam.ac.uk; cz277@tsinghua.edu.cn ## Abstract In automatic emotion recognition (AER), labels assigned by different human annotators to the same utterance are often inconsistent due to the inherent complexity of emotion and the subjectivity of perception. Though deterministic labels generated by averaging or voting are often used as the ground truth, it ignores the intrinsic uncertainty revealed by the inconsistent labels. This paper proposes a Bayesian approach, deep evidential emotion regression (DEER), to estimate the uncertainty in emotion attributes. Treating the emotion attribute labels of an utterance as samples drawn from an unknown Gaussian distribution, DEER places an utterance-specific normal-inverse gamma prior over the Gaussian likelihood and predicts its hyper-parameters using a deep neural network model. It enables a joint estimation of emotion attributes along with the aleatoric and epistemic uncertainties. AER experiments on the widely used MSP-Podcast and IEMOCAP datasets showed DEER produced state-of-theart results for both the mean values and the distribution of emotion attributes1. ## 1 Introduction Automatic emotion recognition (AER) is the task that enables computers to predict human emotional states based on multimodal signals, such as audio, video and text. An emotional state is defined based on either categorical or dimensional theory. The categorical theory claims the existence of a small number of basic discrete emotions ( i.e. anger and happy) that are inherent in our brain and universally recognised (Gunes et al., 2011; Plutchik, 2001). Dimensional emotion theory characterises emotional states by a small number of roughly orthogonal fundamental continuousvalued bipolar dimensions (Schlosberg, 1954; Nicolaou et al., 2011) such as valence-arousal and approach–avoidance (Russell and Mehrabian, 1977; 1Code available: https://github.com/W-Wu/DEER Russell, 1980; Grimm et al., 2007). These dimensions are also known as emotion attributes, which allow us to model more subtle and complex emotions and are thus more common in psychological studies. As a result, AER includes a classification approach based on emotion-class-based labels and a regression approach based on attribute-based labels. This paper focuses on attribute-based AER with speech input. Emotion annotation is challenging due to the inherent ambiguity of mixed emotion, the personal variations in emotion expression, the subjectivity in emotion perception, etc. Most AER datasets use multiple human annotators to label each utterance, which often results in inconsistent labels, either as emotion categories or attributes. This is also a typical manifestation of the intrinsic data uncertainty, also referred to as aleatoric uncertainty (Matthies, 2007; Der Kiureghian and Ditlevsen, 2009), that arises from the natural complexity of emotion data. It is common to replace such inconsistent labels with deterministic labels obtained by majority voting (Busso et al., 2008, 2017) or (weighted) averages (Ringeval et al., 2013; Lotfian and Busso, 2019; Kossaifi et al., 2019; Grimm and Kroschel, 2005). However, this causes a loss of data samples when a majority agreed emotion class doesn't exist (Majumder et al., 2018; Poria et al., 2018; Wu et al., 2021) and also ignores the discrepancies between annotators and the aleatoric uncertainty in emotion data. In this paper, we propose to model the uncertainty in emotion attributes with a Bayesian approach based on deep evidential regression (Amini et al., 2020), denoted deep evidential emotion regression (DEER). In DEER, the inconsistent human labels of each utterance are considered as observations drawn independently from an unknown Gaussian distribution. To probabilistically estimate the mean and variance of the Gaussian distribution, a normal inverse-gamma (NIG) prior is introduced, 15681 which places a Gaussian prior over the mean and an inverse-gamma prior over the variance. The AER system is trained to predict the hyper-parameters of the NIG prior for each utterance by maximising the per-observation-based marginal likelihood of each observed label under this prior. As a result, DEER not only models the distribution of emotion attributes but also learns both the aleatoric uncertainty and the epistemic uncertainty (Der Kiureghian and Ditlevsen, 2009) without repeating the inference procedure for sampling. Epistemic uncertainty, also known as model uncertainty, is associated with uncertainty in model parameters that best explain the observed data. Aleatoric and epistemic uncertainty are combined to induce the total uncertainty, also called predictive uncertainty, that measures the confidence of attribute predictions. As a further improvement, a novel regulariser is proposed based on the mean and variance of the observed labels to better calibrate the uncertainty estimation. The proposed methods were evaluated on the MSP-Podcast and IEMOCAP datasets. The rest of the paper is organised as follows. Section 2 summarises related work. Section 3 introduces the proposed DEER approach. Sections 4 and 5 present the experimental setup and results respectively, followed by the conclusion. ## 2 Related Work There has been previous work by AER researchers to address the issue of inconsistent labels. For emotion categories, a single ground-truth label can be obtained as either a continuous-valued mean vector representing emotion intensities (Fayek et al., 2016; Ando et al., 2018), or as a multi-hot vector obtained based on the existence of emotions (Zhang et al., 2020; Ju et al., 2020). Recently, distribution-based approaches have been proposed, which consider the labels as samples drawn from emotion distributions (Chou et al., 2022; Wu et al., 2022b). For emotion attributes, annotators often assign different values to the same attribute of each utterance. Davani et al. (2022) proposed a multiannotator model which contains multiple heads to predict each annotator's judgement. This approach is computationally viable only when the number of annotators is relatively small. The method requires sufficient annotations from each annotator to be effective. Deng et al. (2012) derived confidence measures based on annotator agreement to build emotion-scoring models. Han et al. (2017, 2021) proposed predicting the standard deviation of the attribute label values as an extra task in the multitask training framework. Dang et al. (2017, 2018) included annotator variability as a representation of uncertainty in a Gaussian mixture regression model. These techniques take the variance of human annotations either as an extra target or as an extra input. More recently, Bayesian deep learning has been introduced to the task, which models the uncertainty in emotion annotation without explicitly using the variance of human annotations. These include the use of Gaussian processes (Atcheson et al., 2018, 2019), variational auto-encoders (Sridhar et al., 2021), Bayesian neural networks (Prabhu et al., 2021), Monte-Carlo dropout (Sridhar and Busso, 2020b) and sequential Monte-Carlo methods (Markov et al., 2015; Wu et al., 2022a). So far, these methods have not distinguished aleatoric uncertainty from epistemic uncertainty which are defined in the introduction. Our proposed DEER approach can simultaneously model these two uncertainties. In addition, our approach is more generic. It has no limits on the number of annotators, the number of annotators per utterance, and the number of annotations per annotator, and thus can cope with large crowd-sourced datasets. ## 3 Deep Evidential Emotion Regression 3.1 Problem Setup In contrast to Bayesian neural networks that place priors on model parameters (Blundell et al., 2015; Kendall and Gal, 2017), evidential deep learning (Sensoy et al., 2018; Malinin and Gales, 2018; Amini et al., 2020) places priors over the likelihood function. Every training sample adds support to a learned higher-order prior distribution called the evidential distribution. Sampling from this distribution gives instances of lower-order likelihood functions from which the data was drawn. Consider an input utterance x with M emotion attribute labels y (1)*, . . . , y*(M) provided by multiple annotators. Assuming y (1)*, . . . , y*(M)are observations drawn i.i.d. from a Gaussian distribution with unknown mean µ and unknown variance σ 2, where µ is drawn from a Gaussian prior and σ 2is drawn from an inverse-gamma prior: $$\begin{array}{c}{{y^{(1)},\ldots,y^{(M)}\sim{\mathcal{N}}(\mu,\sigma^{2})}}\\ {{\mu\sim{\mathcal{N}}(\gamma,\sigma^{2}v^{-1}),\quad\sigma^{2}\sim\Gamma^{-1}(\alpha,\beta)}}\\ {{\sigma=\quad\sigma\sim\Gamma(\gamma)\;\mathrm{for}\;\;\sigma\sim\Gamma^{-1}.}}\end{array}$$ where γ ∈ R, υ > 0, and Γ(·) is the gamma function with α > 1 and β > 0. Denote {*µ, σ*2} and {*γ, υ, α, β*} as Ψ and Ω. The posterior p(Ψ|Ω) is a NIG distribution, which is the Gaussian conjugate prior: $$\begin{array}{c}{{\operatorname{p}(\Psi|\Omega)=\operatorname{p}(\mu|\sigma^{2},\Omega)\operatorname{p}(\sigma^{2}|\Omega)}}\\ {{=\mathcal{N}(\gamma,\sigma^{2}v^{-1})\,\Gamma^{-1}(\alpha,\beta)}}\\ {{=\frac{\beta^{\alpha}\sqrt{v}}{\Gamma(\alpha)\sqrt{2\pi\sigma^{2}}}\left(\frac{1}{\sigma^{2}}\right)^{\alpha+1}}}\\ {{\cdot\exp\left\{-\frac{2\beta+v(\gamma-\mu)^{2}}{2\sigma^{2}}\right\}}}\end{array}$$ Drawing a sample Ψi from the NIG distribution yields a single instance of the likelihood function N (µi, σ2 i ). The NIG distribution therefore serves as the higher-order, evidential distribution on top of the unknown lower-order likelihood distribution from which the observations are drawn. The NIG hyper-parameters Ω determine not only the location but also the uncertainty associated with the inferred likelihood function. By training a deep neural network model to output the hyper-parameters of the evidential distribution, evidential deep learning allows the uncertainties to be found by analytic computation of the maximum likelihood Gaussian without the need for repeated inference for sampling (Amini et al., 2020). Furthermore, it also allows an effective estimate of the aleatoric uncertainty computed as the expectation of the variance of the Gaussian distribution, as well as the epistemic uncertainty defined as the variance of the predicted Gaussian mean. Given an NIG distribution, the prediction, aleatoric, and epistemic uncertainty can be computed as: $$\begin{array}{l}{{\mathrm{Prediction}\colon\mathbb{E}[\mu]=\gamma}}\\ {{\mathrm{A}e\mathrm{t}o\mathrm{r}\mathrm{c}\colon\mathbb{E}[\sigma^{2}]=\frac{\beta}{\alpha-1},\quad\forall\,\alpha>1}}\\ {{\mathrm{Epistemic}\colon\mathrm{Var}[\mu]=\frac{\beta}{v(\alpha-1)},\quad\forall\,\alpha>1}}\end{array}$$ The training of DEER is structured as fitting the model to the data while enforcing the prior to calibrate the uncertainty when the prediction is wrong. 3.2.1 Maximising the data fit The likelihood of an observation y given the evidential distribution hyper-parameters Ω is computed by marginalising over the likelihood parameters Ψ: $$\begin{split}\text{p}(y|\mathbf{\Omega})&=\int_{\Psi}\text{p}(y|\mathbf{\Psi})\text{p}(\mathbf{\Psi}|\mathbf{\Omega})\text{d}\mathbf{\Psi}\\ &=\mathbb{E}_{\text{p}(\mathbf{\Psi}|\mathbf{\Omega})}[\text{p}(y|\mathbf{\Psi})]\end{split}\tag{1}$$ An analytical solution exists in the case of placing an NIG prior on the Gaussian likelihood function: $$\mathrm{p}(y|\Omega)=\frac{\Gamma(1/2+\alpha)}{\Gamma(\alpha)}\sqrt{\frac{v}{\pi}}\left(2\beta(1+v)\right)^{\alpha}$$ $$\cdot\left(v(y-\gamma)^{2}+2\beta(1+v)\right)^{-(\frac{1}{2}+\alpha)}$$ $$=\mathrm{St}_{2\alpha}\left(y|\gamma,\frac{\beta(1+v)}{v\,\alpha}\right)\tag{2}$$ where Stν (t|*r, s*) is the Student's t-distribution evaluated at t with location parameter r, scale parameter s, and ν degrees of freedom. The predicted mean and variance can be computed analytically as $$\mathbb{E}[y]=\gamma,\quad\mathrm{Var}[y]={\frac{\beta(1+v)}{v(\alpha-1)}}$$ $$(3)$$ υ(α − 1) (3) Var[y] represents the total uncertainty of model prediction, which is equal to the summation of the aleatoric uncertainty E[σ 2] and epistemic uncertainty Var[µ] according to the law of total variance: $$\begin{array}{r l}{\operatorname{Var}[y]=\mathbb{E}[\operatorname{Var}[y|\Psi]]+\operatorname{Var}[\mathbb{E}[y|\Psi]]}\\ {=\mathbb{E}[\sigma^{2}]+\operatorname{Var}[\mu]}\end{array}$$ To fit the NIG distribution, the model is trained by maximising the sum of the marginal likelihoods of each human label y (m). The negative log likelihood (NLL) loss can be computed as $$\mathcal{L}^{\mathrm{NLL}}(\pmb{\Theta})=-\frac{1}{M}\sum_{m=1}^{M}\log\mathrm{p}(y^{(m)}|\pmb{\Omega})\qquad\qquad(4)$$ $$=-\frac{1}{M}\sum_{m=1}^{M}\log\left[\mathrm{St}_{2\alpha}\left(y^{(m)}|\gamma,\frac{\beta(1+v)}{v\,\alpha}\right)\right]$$ This is our proposed per-observation-based NLL loss, which takes each observed label into consideration for AER. This loss serves as the first part of the objective function for training a deep neural network model Θ to predict the hyper-parameters {*γ, υ, α, β*} to fit all observed labels of x. ## 3.2.2 Calibrating The Uncertainty On Errors The second part of the objective function regularises training by calibrating the uncertainty based on the incorrect predictions. A novel regulariser is formulated which contains two terms: L µand L σthat respectively regularises the errors on the estimation of the mean µ and the variance σ 2 of the Gaussian likelihood. 15683 The first term L µis proportional to the error between the model prediction and the average of the observations: $${\mathcal{L}}^{\mu}(\mathbf{\Theta})=\Phi\left|{\bar{y}}-\mathbb{E}[\mu]\right|$$ where *| · |* is L1 norm, y¯ =1M $\pm\;\bar{u}\:=\:\frac{1}{m}\sum_{m=-n}^M(m)$ :. m=1 y (m)is the averaged label which is usually used as the ground truth in regression-based AER, and Φ is an uncertainty measure associated with the inferred posterior. The reciprocal of the total uncertainty is used as Φ in this paper, which can be calculated as $$\Phi={\frac{1}{\mathrm{Var}[y]}}={\frac{v(\alpha-1)}{\beta(1+v)}}$$ The regulariser imposes a penalty when there's an error in prediction and dynamically scales it by dividing by the total uncertainty of inferred posterior. It penalises the cases where the model produces an incorrect prediction with a small uncertainty, thus preventing the model from being over-confident. For instance, if the model produces an error with a small predicted variance, Φ is large, resulting in a large penalty. Minimising the regularisation term enforces the model to produce accurate prediction or increase uncertainty when the error is large. In addition to imposing a penalty on the mean prediction as in Amini et al. (2020), a second term L σis proposed in order to calibrate the estimation of the aleatoric uncertainty. As discussed in the introduction, aleatoric uncertainty in AER is shown by the different emotional labels given to the same utterance by different human annotators. This paper uses the variance of the observations to describe the aleatoric uncertainty in the emotion data. The second regularising term is defined as: $$\mathcal{L}^{\sigma}(\pmb{\Theta})=\Phi\left|\bar{\sigma}^{2}-\mathbb{E}[\sigma^{2}]\right|$$ where $\bar{\sigma}^{2}=\frac{1}{M}\sum_{m=1}^{M}(y^{(m)}-\bar{y})^{2}$. ## 3.3 Summary And Implementation Details For an AER task that consists of N emotion attributes, DEER trains a deep neural network model to simultaneously predict the hyperparameters {Ω1*, . . . ,* ΩN } associated with the N attribute-specific NIG distributions, where Ωn = {γn, υn, αn, βn}. A DEER model thus has 4N output units. The system is trained by minimising the total loss w.r.t. Θ as: Ltotal(Θ) = X N n=1 ϵnLn(Θ) (5) Ln(Θ) = L NLL n(Θ) + λn [L µ n(Θ) + L σ n(Θ)] (6) where ϵn is the weight satisfying PN satisfying $\sum_{n=1}^N\epsilon_n=1$, $\lambda_n$ is the scale coefficient that trades off the training between data fit and uncertainty regulation. At test-time, the predictive posteriors are N separate Student's t-distributions p(y|Ω1), p(y|Ω2) , . . . , p(y|ΩN ), each of the same form as derived in Eqn. (2) 2. Apart from obtaining a distribution over the emotion attribute of the speaker, DEER also allows analytic computation of the uncertainty terms, as summarised in Table 1. | Term | Expression | | |----------------------------------------|------------------------|-------| | Predicted mean | E[y] = E[µ] = γ | | | Predicted variance (Total uncertainty) | Var[y] = β(1+υ) υ(α−1) | | | Aleatoric uncertainty | E[σ 2 ] = | β α−1 | | Epistemic uncertainty | Var[µ] = | β | | υ(α−1) | | | Table 1: Summary of the uncertainty terms. ## 4 Experimental Setup 4.1 Dataset The MSP-Podcast (Lotfian and Busso, 2019) and IEMOCAP datasets (Busso et al., 2008) were used in this paper. The annotations of both datasets use N = 3 with valence, arousal (also called activation), and dominance as the emotion attributes. MSP-Podcast contains natural English speech from podcast recordings and is one of the largest publicly available datasets in speech emotion recognition. A seven-point Likert scale was used to evaluate valence (1-negative vs 7-positive), arousal (1-calm vs 7-active), and dominance (1-weak vs 7-strong). The corpus was annotated using crowd-sourcing. Each utterance was labelled by at least 5 human annotators and has an average of 6.7 annotations per utterance. Ground-truth labels were defined by the average value. Release 1.8 was used in the experiments, which contains 73,042 utterances 2Since NIG is the Gaussian conjugate prior, the posterior is in the same parametric family as the prior. Therefore, the predictive posterior has the same form as the marginal likelihood. Detailed derivations see Appendix A. from 1,285 speakers amounting to more than 110 hours of speech. The average variance of the labels assigned to each sentence is 0.975, 1.122, 0.889 for valence, arousal, and dominance respectively. The standard splits for training (44,879 segments), validation (7,800 segments) and testing (15,326 segments) were used in the experiments. The IEMOCAP corpus is one of the most widely used AER datasets. It consists of approximately 12 hours of English speech including 5 dyadic conversational sessions performed by 10 professional actors with a session being a conversation between two speakers. There are in total 151 dialogues including 10,039 utterances. Each utterance was annotated by three human annotators using a fivepoint Likert scale. Again, ground-truth labels were determined by taking the average. The average variance of the labels assigned to each sentence is 0.130, 0.225, 0.300 for valence, arousal, and dominance respectively. Unless otherwise mentioned, systems on IEMOCAP were evaluated by training on Session 1-4 and testing on Session 5. ## 4.2 Model Structure The model structure used in this paper follows the upstream-downstream framework (wen Yang et al., 2021), as illustrated in Figure 1. WavLM (Chen et al., 2022) was used as the upstream model, which is a speech foundation model pre-trained by selfsupervised learning. The BASE+ version3 of the model was used in this paper which has 12 Transformer encoder blocks with 768-dimensional hidden states and 8 attention heads. The parameters of the pre-trained model were frozen and the weighted sum of the outputs of the 12 Transformer encoder blocks was used as the speech embeddings and fed into the downstream model. The downstream model consists of two 128dimensional Transformer encoder blocks with 4head self-attention, followed by an evidential layer that contains four output units for each of the three attributes, which has a total of 12 output units. The model contains 0.3M trainable parameters. A Softplus activaton4 was applied to {*υ, α, β*} to ensure υ, α, β > 0 with an additional +1 added to α to ensure α > 1. A linear activation was used for γ ∈ R. The proposed DEER model was trained to simultaneously learn three evidential distributions for the three attributes. The weights in Eqn. (5) were set as ![4_image_0.png](4_image_0.png) ϵv = ϵa = ϵd = 1/3. The scale coefficients were set to λv = λa = λd = 0.1 for Eqn. (6) 5. A dropout rate of 0.3 was applied to the transformer parameters. The system was implemented using PyTorch and the SpeechBrain toolkit (Ravanelli et al., 2021). The Adam optimizer was used with an initial learning rate set to 0.001. Training took ∼ 8 hours on an NVIDIA A100 GPU. ## 4.3 Evaluation Metrics 4.3.1 Mean Prediction Following prior work in continuous emotion recognition (Ringeval et al., 2015, 2017; Sridhar and Busso, 2020a; Leem et al., 2022), the concordance correlation coefficient (CCC) was used to evaluate the predicted mean. CCC combines the Pearson's correlation coefficient with the square difference between the mean of the two compared sequences: $$\rho_{\mathrm{ccc}}={\frac{2\rho\,\sigma_{\mathrm{ref}}\sigma_{\mathrm{hyp}}}{\sigma_{\mathrm{ref}}^{2}+\sigma_{\mathrm{hyp}}^{2}+\left(\mu_{\mathrm{ref}}-\mu_{\mathrm{hyp}}\right)^{2}}},$$ where ρ is the Pearson correlation coefficient between a hypothesis sequence (system predictions) and a reference sequence, where µhyp and µref are the mean values, and σ 2 hyp and σ 2 ref are the variance values of the two sequences. Hypotheses that are well correlated with the reference but shifted in value are penalised in proportion to the deviation. The value of CCC ranges from -1 (perfect disagreement) to 1 (perfect agreement). 5The values were manually selected from a small number of candidates. | CCC ↑ | RMSE ↓ | NLL(avg) ↓ | NLL(all) ↓ | | | | | | | | | | |----------------|----------|--------------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | MSP-Podcast | v | a | d | v | a | d | v | a | d | v | a | d | | L in Eqn. (6) | 0.506 | 0.698 | 0.613 | 0.772 | 0.680 | 0.576 | 1.334 | 1.285 | 1.156 | 1.696 | 1.692 | 1.577 | | σ = 0 | 0.451 | 0.687 | 0.607 | 0.784 | 0.679 | 0.580 | 1.345 | 1.277 | 1.159 | 1.706 | 1.705 | 1.586 | | L LNLL = L¯NLL | 0.473 | 0.682 | 0.609 | 0.808 | 0.673 | 0.566 | 1.290 | 1.060 | 0.899 | 2.027 | 2.089 | 1.969 | | IEMOCAP | v | a | d | v | a | d | v | a | d | v | a | d | | L in Eqn. (6) | 0.596 | 0.755 | 0.569 | 0.755 | 0.457 | 0.638 | 1.070 | 0.795 | 1.035 | 1.275 | 1.053 | 1.283 | | σ = 0 | 0.582 | 0.752 | 0.553 | 0.772 | 0.466 | 0.655 | 1.180 | 0.773 | 1.061 | 1.408 | 1.069 | 1.294 | | L NLL = L¯NLL | 0.585 | 0.759 | 0.555 | 0.786 | 0.444 | 0.633 | 1.001 | 0.727 | 1.036 | 1.627 | 1.329 | 1.441 | | L | | | | | | | | | | | | | The root mean square error (RMSE) averaged over the test set is also reported. Since the average of the human labels, y¯, is defined as the ground truth in both datasets, y¯ were used as the reference in computing the CCC and RMSE. However, using y¯ also indicates that these metrics are less informative when the aleatoric uncertainty is large. ## 4.3.2 Uncertainty Estimation It is common to use NLL to measure the uncertainty estimation ability (Gal and Ghahramani, 2016; Amini et al., 2020). NLL is computed by fitting data to the predictive posterior q(y). In this paper, NLL(avg) defined as − log q(¯y) and NLL(all) defined as − 1 M PM m=1 log q(y (m)) are both used. NLL(avg) measures how much the averaged label y¯ fits into the predicted posterior distribution, and NLL(all) measures how much every single human label y (m) fits into the predicted posterior. A lower NLL indicates better uncertainty estimation. ## 5 Experiments And Results 5.1 Effect Of The Aleatoric Regulariser L Σ First, by setting L σ = 0 in the total loss, an ablation study of the effect of the proposed extra regularising term L σis performed. The results are given in the 'L σ = 0' rows in Table 2. In this case, only L µis used to regularise L NLL and the results are compared to those trained using the complete loss defined in Eqn. (6), which are shown in the 'L in Eqn. (6)' rows. From the results, L σimproves the performance in CCC and NLL(all), but not in NLL(avg), as expected. ## 5.2 Effect Of The Per-Observation-Based L Nll Next, the effect of our proposed per-observationbased NLL loss defined in Eqn. (4), L NLL, is compared to an alternative. Instead of using L NLL, ## L¯Nll = − Log P(¯Y|Ω) is used to compute the total loss during training, and the results are given in the 'L NLL = L¯NLL' rows in Table 2. While L NLL considers the likelihood of fitting each individual observation into the predicted posterior, L¯NLL only considers the averaged observation. Therefore, it is expected that using L¯NLL instead of L NLL yields a smaller NLL(avg) but larger NLL(all), which have been validated by the results in the table. ## 5.3 Baseline Comparisons Three baseline systems were built: - A Gaussian Process (GP) with a radial basis function kernel, trained by maximising the per-observation-based marginal likelihood. - A Monte Carlo dropout (MCdp) system with a dropout rate of 0.4. During inference, the system was forwarded 50 times with different dropout random seeds to obtain 50 samples. - An ensemble of 10 systems initialised and trained with 10 different random seeds. The MCdp and ensemble baselines used the same model structure as the DEER system, except that the evidential output layer was replaced by a standard fully-connected output layer with three output units to predict the values of valence, arousal and | CCC ↑ | RMSE ↓ | NLL(avg) ↓ | NLL(all) ↓ | | | | | | | | | | |-------------|----------|--------------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | MSP-Podcast | v | a | d | v | a | d | v | a | d | v | a | d | | DEER | 0.506 | 0.698 | 0.613 | 0.772 | 0.680 | 0.576 | 1.334 | 1.285 | 1.156 | 1.696 | 1.692 | 1.577 | | GP | 0.342 | 0.595 | 0.486 | 0.811 | 0.673 | 0.566 | 1.447 | 1.408 | 1.297 | 1.727 | 1.808 | 1.592 | | MCdp | 0.476 | 0.667 | 0.594 | 0.874 | 0.702 | 0.623 | 1.680 | 1.300 | 1.071 | 2.050 | 2.027 | 1.776 | | Ensemble | 0.511 | 0.679 | 0.608 | 0.855 | 0.692 | 0.615 | 1.864 | 1.384 | 1.112 | 2.096 | 2.066 | 1.795 | | IEMOCAP | v | a | d | v | a | d | v | a | d | v | a | d | | DEER | 0.596 | 0.756 | 0.569 | 0.755 | 0.457 | 0.638 | 1.070 | 0.795 | 1.035 | 1.275 | 1.053 | 1.283 | | GP | 0.535 | 0.717 | 0.512 | 0.763 | 0.479 | 0.657 | 1.209 | 0.791 | 1.047 | 1.295 | 1.205 | 1.380 | | MCdp | 0.539 | 0.724 | 0.568 | 0.786 | 0.561 | 0.702 | 1.291 | 0.849 | 1.133 | 1.549 | 1.325 | 1.747 | | Ensemble | 0.580 | 0.754 | 0.560 | 0.778 | 0.476 | 0.686 | 1.296 | 0.864 | 1.110 | 1.584 | 1.218 | 1.749 | dominance respectively. Following prior work (AlBadawy and Kim, 2018; Atmaja and Akagi, 2020b; Sridhar and Busso, 2020b), the CCC loss, $${\mathcal{L}}_{\mathrm{cccc}}=1-\rho_{\mathrm{ccc}}$$ was used for training the MCdp and ensemble baselines. The CCC loss was computed based on the sequence within each mini-batch of training data. The CCC loss has been shown by previous studies to improve the continuous emotion predictions compared to the RMSE loss (Povolny et al., 2016; Trigeorgis et al., 2016; Le et al., 2017). For MCdp and ensemble, the predicted distribution of the emotion attributes were estimated based on the obtained samples by kernel density estimation. The results are listed in Table 3. The proposed DEER system outperforms the baselines on most of the attributes and the overall values. In particular, DEER outperforms all baselines consistently in the NLL(all) metric. ## 5.4 Cross Comparison Of Mean Prediction Table 4 compares results obtained with those previously published in terms of the CCC value. Previous papers have reported results on both version 1.6 and 1.8 of the MSP-Podcast dataset. For comparison, we also conducted experiments on version 1.6 for comparison. Version 1.6 of MSP-Podcast database is a subset of version 1.8 and contains 34,280 segments for training, 5,958 segments for validation and 10,124 segments for testing. For IEMOCAP, apart from training on Session 1-4 and testing on Session 5 (Ses05), we also evaluated the proposed system by a 5-fold cross-validation (5CV) based on a "leave-one-session-out" strategy. In each fold, one session was left out for testing and ![6_image_0.png](6_image_0.png) the others were used for training. The configuration is speaker-exclusive for both settings. As shown in Table 4, our DEER systems achieved state-of-theart results on both versions of MSP-Podcast and both test settings of IEMOCAP. ## 5.5 Analysis Of Uncertainty Estimation 5.5.1 Visualisation Based on a randomly selected subset test set of MSP-Podcast version 1.8, the aleatoric, epistemic and total uncertainty of the dominance attribute predicted by our proposed DEER system are shown ![7_image_0.png](7_image_0.png) | Paper | Version | v | a | d | Average | |--------------------------|-----------|-------|-------|-------|-----------| | Ghriss et al. (2022) | 1.6 | 0.412 | 0.679 | 0.564 | 0.552 | | Mitra et al. (2022) | 1.6 | 0.57 | 0.75 | 0.67 | 0.663 | | Srinivasan et al. (2022) | 1.6 | 0.627 | 0.757 | 0.671 | 0.685 | | DEER | 1.6 | 0.629 | 0.777 | 0.684 | 0.697 | | Leem et al. (2022) | 1.8 | 0.212 | 0.572 | 0.505 | 0.430 | | DEER | 1.8 | 0.506 | 0.698 | 0.613 | 0.606 | | Paper | Setting | v | a | d | Average | | Atmaja and Akagi (2020a) | Ses05 | 0.421 | 0.590 | 0.484 | 0.498 | | Atmaja and Akagi (2021) | Ses05 | 0.553 | 0.579 | 0.465 | 0.532 | | DEER | Ses05 | 0.596 | 0.756 | 0.569 | 0.640 | | Srinivasan et al. (2022) | 5CV | 0.582 | 0.667 | 0.545 | 0.598 | | DEER | 5CV | 0.625 | 0.720 | 0.548 | 0.631 | ## In Figure 2. Figure 2 (a) shows the predicted mean ± square root of the predicted aleatoric uncertainty ( p E[µ] ± E[σ 2]) and the average label ± the standard deviation of the human labels (y¯ ± σ¯). It can be seen that the predicted aleatoric uncertainty (blue) overlaps with the label standard deviation (grey) and the overlapping is more evident when the mean predictions are accurate ( i.e. samples around index 80-100). Figure 2 (b) shows the predicted mean ± square root of the predicted epistemic uncertainty ( p E[µ] ± Var[µ]). The epistemic uncertainty is high when the predicted mean deviates from the target ( i.e. samples around index 40-50) while low then the predicted mean matches the target ( i.e. samples around index 80-100). Figure 2 (c) shows the predicted mean ± square root of the total epistemic uncertainty ( p E[y] ± Var[y]) which combines the aleatoric and epistemic uncertainty. The total uncertainty is high either when the input utterance is complex or the model is not confident. ## 5.5.2 Reject Option A reject option was applied to analyse the uncertainty estimation performance, where the system has the option to accept or decline a test sample based on the uncertainty prediction. Since the evaluation of CCC is based on the whole sequence rather than individual samples, its computation would be affected when the sequence is modified ![7_image_1.png](7_image_1.png) by rejection (Wu et al., 2022a). Therefore, the reject option is performed based on RMSE. Confidence is measured by the total uncertainty given in Eqn. (3). Figure 3 shows the performance of the proposed DEER system with a reject option on MSP-Podcast and IEMOCAP. A percentage of utterances with the largest predicted variance were rejected. The results at 0% rejection corresponds to the RMSE achieved on the entire test data. As the percentage of rejection increases, test coverage decreases and the average RMSE decreases showing the predicted variance succeeded in confidence estimation. The system then trades off between the test coverage and performance. ## 6 Conclusions Two types of uncertainty exist in AER: (i) aleatoric uncertainty arising from the inherent ambiguity of emotion and personal variations in emotion expression; (ii) epistemic uncertainty associated with the estimated network parameters given the observed data. This paper proposes DEER for estimating those uncertainties in emotion attributes. Treating observed attribute-based annotations as samples drawn from a Gaussian distribution, DEER places a normal-inverse gamma (NIG) prior over the Gaussian likelihood. A novel training loss is proposed which combines a per-observation-based NLL loss with a regulariser on both the mean and the variance of the Gaussian likelihood. Experiments on the MSP-Podcast and IEMOCAP datasets show that DEER can produce state-of-the-art results in estimating both the mean value and the distribution of emotion attributes. The use of NIG, the conjugate prior to the Gaussian distribution, leads to tractable analytic computation of the marginal likelihood as well as aleatoric and epistemic uncertainty associated with attribute prediction. Uncertainty estimation is analysed by visualisation and a reject option. Beyond the scope of AER, DEER could also be applied to other tasks with subjective evaluations yielding inconsistent labels. ## Limitations The proposed approach (along with other methods for estimating uncertainty in inconsistent annotations) is only viable when the raw labels from different human annotators for each sentence are provided by the datasets. However, some multipleannotated datasets only released the majority vote or averaged label for each sentence ( i.e. Poria et al., 2019). The proposed method made a Gaussian assumption on the likelihood function for the analytic computation of the uncertainties. The results show that this modelling approach is effective. Despite the effectiveness of the proposed method, other distributions could also be considered. Data collection processes for AER datasets vary in terms of recording conditions, emotional elicitation scheme, and annotation procedure, etc. This work was tested on two typical datasets: IEMOCAP and MSP-Podcast. The two datasets are both publicly available and differ in various aspects: - IEMOCAP contains emotion acted by professional actors while MSP-Podcast contains natural emotion. - IEMOCAP contains dyadic conversations while MSP-Podcast contains Podcast recordings. - IEMOCAP contains 10 speakers and MSPPodcast contains 1285 speakers. - IEMOCAP contains about 12 hours of speech and MSP-Podcast contains more than 110 hours of speech. - IEMOCAP was annotated by six professional evaluators with each sentence being annotated by three evaluators. MSP-Podcast was annotated by crowd-sourcing where a total of 11,799 workers were involved and each work annotated 41.5 sentences on average. The proposed approach has been shown effective over both datasets. We believe the proposed technique should be generic. Furthermore, although validated only for AER, the proposed method could also be applied to other tasks with disagreements in subjective annotations such as hate speech detection and language assessment. ## Ethics Statement In tasks involving subjective evaluations such as emotion recognition, it is common to employ multiple human annotators to give multiple annotations to each data instance. When annotators disagree, majority voting and averaging are commonly used to derive single ground truth labels for training supervised machine learning systems. However, in many subjective tasks, there is usually no single "correct" answer. By enforcing a single ground truth, there's a potential risk of ignoring the valuable nuance in each annotator's evaluation and their disagreements. This can cause minority views to be under-represented. The DEER approach proposed in this work could be beneficial to this concern as it models uncertainty in annotator disagreements and provides some explainability of the predictions. While our method helps preserve minority perspectives, misuse of this technique might lead to ethical concerns. Emotion recognition is at risk of exposing a person's inner state to others and this information could be abused. Furthermore, since the proposed approach takes each annotation into consideration, it is important to protect the anonymity of annotators. ## Acknowledgements Wen Wu is supported by a Cambridge International Scholarship from the Cambridge Trust. This work has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service (www.hpc.cam.ac.uk) funded by EPSRC Tier-2 capital grant EP/T022159/1. The MSP-Podcast data was provided by The University of Texas at Dallas through the Multimodal Signal Processing Lab. This material is based upon work supported by the National Science Foundation under Grants No. IIS-1453781 and CNS-1823166. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or The University of Texas at Dallas. ## References Ehab A AlBadawy and Yelin Kim. 2018. Joint discrete and continuous emotion prediction using ensemble and end-to-end approaches. In *Proc. ICMI*, Boulder. Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. 2020. Deep evidential regression. In *Proc. NeurIPS*, Vancouver. Atsushi Ando, Satoshi Kobashikawa, Hosana Kamiyama, Ryo Masumura, Yusuke Ijima, and Yushi Aono. 2018. Soft-target training with ambiguous emotional utterances for DNN-based speech emotion classification. In *Proc. ICASSP*, Brighton. Mia Atcheson, Vidhyasaharan Sethu, and Julien Epps. 2018. Demonstrating and modelling systematic timevarying annotator disagreement in continuous emotion annotation. In *Proc. Interspeech*, Hyderabad. Mia Atcheson, Vidhyasaharan Sethu, and Julien Epps. 2019. Using Gaussian processes with LSTM neural networks to predict continuous-time, dimensional emotion in ambiguous speech. In *Proc. ACII*, Cambridge. Bagus Tris Atmaja and Masato Akagi. 2020a. Improving valence prediction in dimensional speech emotion recognition using linguistic information. In *Proc. OCOCOSDA*, Yangon. Bagus Tris Atmaja and Masato Akagi. 2020b. Multitask learning and multistage fusion for dimensional audiovisual emotion recognition. In *Proc. ICASSP*, Conference held virtually. Bagus Tris Atmaja and Masato Akagi. 2021. Twostage dimensional emotion recognition by fusing predictions of acoustic and text networks using svm. Speech Communication, 126:9–21. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. Wav2Vec 2.0: A framework for self-supervised learning of speech representations. In *Proc. NeurIPS*, Conference held virtually. Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. 2015. Weight uncertainty in neural network. In *Proc. ICML*, Lille. C. Busso, M. Bulut, C.-C. Lee, A. Kazemzadeh, E.M. Provost, S. Kim, J.N. Chang, S. Lee, and S.S. Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. *Language Resources and Evaluation*, 42:335–359. Carlos Busso, Srinivas Parthasarathy, Alec Burmania, Mohammed AbdelWahab, Najmeh Sadoughi, and Emily Mower Provost. 2017. MSP-IMPROV: An acted corpus of dyadic interactions to study emotion perception. *IEEE Transactions on Affective Computing*, 8(1):67–80. Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. 2022. WavLM: Large-scale self-supervised pre-training for full stack speech processing. *IEEE Journal of Selected Topics in Signal Processing*. Huang-Cheng Chou, Wei-Cheng Lin, Chi-Chun Lee, and Carlos Busso. 2022. Exploiting annotators' typed description of emotion perception to maximize utilization of ratings for speech emotion recognition. In *Proc. ICASSP*, Singapore. Ting Dang, Vidhyasaharan Sethu, and Eliathamby Ambikairajah. 2018. Dynamic multi-rater Gaussian mixture regression incorporating temporal dependencies of emotion uncertainty using Kalman filters. In *Proc.* ICASSP, Calgary. Ting Dang, Vidhyasaharan Sethu, Julien Epps, and Eliathamby Ambikairajah. 2017. An investigation of emotion prediction uncertainty using gaussian mixture regression. In *Proc. Interspeech*, Stockholm. Aida Mostafazadeh Davani, Mark Díaz, and Vinodkumar Prabhakaran. 2022. Dealing with disagreements: Looking beyond the majority vote in subjective annotations. *Transactions of the Association for Computational Linguistics*, 10:92–110. Jun Deng, Wenjing Han, and Björn Schuller. 2012. Confidence measures for speech emotion recognition: A start. In *Speech Communication; 10. ITG Symposium*, pages 1–4. VDE. Armen Der Kiureghian and Ove Ditlevsen. 2009. Aleatory or epistemic? does it matter? Structural Safety, 31(2):105–112. H.M. Fayek, M. Lech, and L. Cavedon. 2016. Modeling subjectiveness in emotion recognition with deep neural networks: Ensembles vs soft labels. In Proc. IJCNN, Vancouver. Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *Proc. ICML*, New York City. Ayoub Ghriss, Bo Yang, Viktor Rozgic, Elizabeth Shriberg, and Chao Wang. 2022. Sentiment-aware automatic speech recognition pre-training for enhanced speech emotion recognition. In Proc. ICASSP, Singapore. Michael Grimm and Kristian Kroschel. 2005. Evaluation of natural emotions using self assessment manikins. In *Proc. ASRU*, Cancun. Michael Grimm, Kristian Kroschel, Emily Mower, and Shrikanth Narayanan. 2007. Primitives-based evaluation and estimation of emotions in speech. *Speech* Communication, 49(10-11):787–800. Hatice Gunes, Björn Schuller, Maja Pantic, and Roddy Cowie. 2011. Emotion representation, analysis and synthesis in continuous space: A survey. In Proc. FG, Santa Barbara. Jing Han, Zixing Zhang, Zhao Ren, and Björn Schuller. 2021. Exploring perception uncertainty for emotion recognition in dyadic conversation and music listening. *Cognitive Computation*, 13(2):231–240. Jing Han, Zixing Zhang, Maximilian Schmitt, Maja Pantic, and Björn Schuller. 2017. From hard to soft: Towards more human-like emotion recognition by modelling the perception uncertainty. In Proc. ACM MM, Mountain View. Xincheng Ju, Dong Zhang, Junhui Li, and Guodong Zhou. 2020. Transformer-based label set generation for multi-modal multi-label emotion detection. In Proc. ACM MM, Seattle. Alex Kendall and Yarin Gal. 2017. What uncertainties do we need in bayesian deep learning for computer vision? In *Proc. NeurIPS*, Long Beach. Jean Kossaifi, Robert Walecki, Yannis Panagakis, Jie Shen, Maximilian Schmitt, Fabien Ringeval, Jing Han, Vedhas Pandit, Antoine Toisoul, Björn Schuller, et al. 2019. SEWA DB: A rich database for audiovisual emotion and sentiment research in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(3):1022–1040. Duc Le, Zakaria Aldeneh, and Emily Mower Provost. 2017. Discretized continuous speech emotion recognition with multi-task deep recurrent neural network. In *Proc. Interspeech*, Stockholm. Seong-Gyun Leem, Daniel Fulford, Jukka-Pekka Onnela, David Gard, and Carlos Busso. 2022. Not all features are equal: Selection of robust features for speech emotion recognition in noisy environments. In *Proc. ICASSP*, Singapore. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*. R. Lotfian and C. Busso. 2019. Building naturalistic emotionally balanced speech corpus by retrieving emotional speech from existing podcast recordings. *IEEE Transactions on Affective Computing*, 10(4):471–483. N. Majumder, D. Hazarika, A. Gelbukh, E. Cambria, and S. Poria. 2018. Multimodal sentiment analysis using hierarchical fusion with context modeling. Knowledge-Based Systems, 161:124–133. Andrey Malinin and Mark Gales. 2018. Predictive uncertainty estimation via prior networks. In Proc. NeurIPS, Montréal. Konstantin Markov, Tomoko Matsui, Francois Septier, and Gareth Peters. 2015. Dynamic speech emotion recognition with state-space models. In *Proc. EUSIPCO*, Nice. Hermann G Matthies. 2007. Quantifying uncertainty: Modern computational representation of probability and applications. In Extreme man-made and natural hazards in dynamics of structures, pages 105–135. Springer. Vikramjit Mitra, Hsiang-Yun Sherry Chien, Vasudha Kowtha, Joseph Yitan Cheng, and Erdrin Azemi. 2022. Speech Emotion: Investigating Model Representations, Multi-Task Learning and Knowledge Distillation. In *Proc. Interspeech*, Incheon. Mihalis A Nicolaou, Hatice Gunes, and Maja Pantic. 2011. Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space. *IEEE Transactions on Affective Computing*, 2(2):92–105. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In *Proc.* ICASSP, South Brisbane. Robert Plutchik. 2001. The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. *American Scientist*, 89(4):344–350. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proc. ACL, Florence. Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Erik Cambria, Alexander Gelbukh, and Amir Hussain. 2018. Multimodal sentiment analysis: Addressing key issues and setting up the baselines. IEEE Intelligent Systems, 33(6):17–25. Filip Povolny, Pavel Matejka, Michal Hradis, Anna Popková, Lubomír Otrusina, Pavel Smrz, Ian Wood, Cecile Robin, and Lori Lamel. 2016. Multimodal emotion recognition for avec 2016 challenge. In *Proc.* ACM MM, Amsterdam. Navin Raj Prabhu, Guillaume Carbajal, Nale LehmannWillenbrock, and Timo Gerkmann. 2021. End-toend label uncertainty modeling for speech emotion recognition using bayesian neural networks. *arXiv* preprint arXiv:2110.03299. Mirco Ravanelli, Titouan Parcollet, Peter Plantinga, Aku Rouhe, Samuele Cornell, Loren Lugosch, Cem Subakan, Nauman Dawalatabad, Abdelwahab Heba, Jianyuan Zhong, Ju-Chieh Chou, Sung-Lin Yeh, Szu-Wei Fu, Chien-Feng Liao, Elena Rastorgueva, François Grondin, William Aris, Hwidong Na, Yan Gao, Renato De Mori, and Yoshua Bengio. 2021. SpeechBrain: A general-purpose speech toolkit. ArXiv:2106.04624. Fabien Ringeval, Björn Schuller, Michel Valstar, Roddy Cowie, and Maja Pantic. 2015. AVEC 2015: The 5th international audio/visual emotion challenge and workshop. In *Proc. ACM MM*, Brisbane. Fabien Ringeval, Björn Schuller, Michel Valstar, Jonathan Gratch, Roddy Cowie, Stefan Scherer, Sharon Mozgai, Nicholas Cummins, Maximilian Schmitt, and Maja Pantic. 2017. AVEC 2017: Reallife depression, and affect recognition workshop and challenge. In *Proc. ACM MM*, Mountain View. Fabien Ringeval, Andreas Sonderegger, Jürgen Sauer, and Denis Lalanne. 2013. Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions. In *Proc. FG*, Shanghai. James A Russell. 1980. A circumplex model of affect. *Journal of Personality and Social Psychology*, 39(6):1161. James A Russell and Albert Mehrabian. 1977. Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11(3):273–294. Harold Schlosberg. 1954. Three dimensions of emotion. Psychological Review, 61(2):81. Murat Sensoy, Lance Kaplan, and Melih Kandemir. 2018. Evidential deep learning to quantify classification uncertainty. In *Proc. NeurIPS*, Montréal. Kusha Sridhar and Carlos Busso. 2020a. Ensemble of students taught by probabilistic teachers to improve speech emotion recognition. In *Proc. Interspeech*, Shanghai. Kusha Sridhar and Carlos Busso. 2020b. Modeling uncertainty in predicting emotional attributes from spontaneous speech. In *Proc. ICASSP*, Conference held virtually. Kusha Sridhar, Wei-Cheng Lin, and Carlos Busso. 2021. Generative approach using soft-labels to learn uncertainty in predicting emotional attributes. In Proc. ACII, Chicago. Sundararajan Srinivasan, Zhaocheng Huang, and Katrin Kirchhoff. 2022. Representation learning through cross-modal conditional teacher-student training for speech emotion recognition. In *Proc. ICASSP*, Singapore. Andreas Triantafyllopoulos, Johannes Wagner, Hagen Wierstorf, Maximilian Schmitt, Uwe Reichel, Florian Eyben, Felix Burkhardt, and Björn W. Schuller. 2022. Probing speech emotion recognition transformers for linguistic knowledge. In *Proc. Interspeech*, Incheon. George Trigeorgis, Fabien Ringeval, Raymond Brueckner, Erik Marchi, Mihalis A Nicolaou, Björn Schuller, and Stefanos Zafeiriou. 2016. Adieu features? endto-end speech emotion recognition using a deep convolutional recurrent network. In *Proc. ICASSP*, Shanghai. Shu wen Yang, Po-Han Chi, Yung-Sung Chuang, ChengI Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, TzuHsien Huang, Wei-Cheng Tseng, Ko tik Lee, DaRong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, and Hung yi Lee. 2021. SUPERB: Speech processing universal performance benchmark. In *Proc. Interspeech*, Brno. Jingyao Wu, Ting Dang, Vidhyasaharan Sethu, and Eliathamby Ambikairajah. 2022a. A novel sequential Monte Carlo framework for predicting ambiguous emotion states. In *Proc. ICASSP*, Singapore. Wen Wu, Chao Zhang, and Philip C. Woodland. 2021. Emotion recognition by fusing time synchronous and time asynchronous representations. In *Proc. ICASSP*, Toronto. Wen Wu, Chao Zhang, Xixin Wu, and Philip C. Woodland. 2022b. Estimating the uncertainty in emotion class labels with utterance-specific dirichlet priors. IEEE Transactions on Affective Computing, Early access. Dong Zhang, Xincheng Ju, Junhui Li, Shoushan Li, Qiaoming Zhu, and Guodong Zhou. 2020. Multimodal multi-label emotion detection with modality and label dependence. In *Proc. EMNLP*, Conference held virtually. ## A Derivation Of The Predictive Posterior Since NIG is the Gaussian conjugate prior, $$\begin{array}{c}{{\mathrm{p}(\mathbf{\Psi}|\mathbf{\Omega})=\mathcal{N}(\gamma,\sigma^{2}v^{-1})\,\Gamma^{-1}(\alpha,\beta)}}\\ {{=\frac{\beta^{\alpha}\sqrt{v}}{\Gamma(\alpha)\sqrt{2\pi\sigma^{2}}}\left(\frac{1}{\sigma^{2}}\right)^{\alpha+1}}}\\ {{\cdot\exp\left\{-\frac{2\beta+v(\gamma-\mu)^{2}}{2\sigma^{2}}\right\}}}\end{array}$$ its posterior p(Ψ|D) is in the same parametric family as the prior p(Ψ|Ω). Therefore, given a test utterance x∗, the predictive posterior p(y∗|D) has 15692 ![12_image_0.png](12_image_0.png) the same form as the marginal likelihood p(y|Ω), where D denotes the training set. $$\mathrm{p}(y_{*}|{\mathcal{D}})=\int\mathrm{p}(y_{*}|\,\Psi)\mathrm{p}(\Psi|{\mathcal{D}})\,\mathrm{d}\Psi\qquad(7)$$ $$\mathrm{p}(y|\Omega)=\int\mathrm{p}(y|\Psi)\mathrm{p}(\Psi|\Omega)\,\mathrm{d}\Psi\qquad(8)$$ In DEER, the predictive posterior and posterior are both conditioned on Ω, written as p(y∗|D, Ω) and p(Ψ|D, Ω) to be precise. Also, the information of D is contained in Ω∗ since Ω∗ = fΘˆ (x∗) and Θˆ is the optimal model parameters obtained by training on D. Then the predictive posterior can be written as p(y∗|Ω∗). Given the conjugate prior, the predictive posterior in DEER can be computed by directly substituting the predicted Ω∗ into the expression of marginal likelihood derived in Eqn. (2), skipping the step of calculating the posterior. ## B Fusion With Text Modality This appendix presents bi-modal experiments that incorporate text information into the DEER model. Transcriptions were obtained from a publicly available automatic speech recognition (ASR) model "wav2vec2-base-960h" 6 which fine-tuned the wav2vec 2.0 (Baevski et al., 2020) model on 960 hours Librispeech data (Panayotov et al., 2015). Transcriptions were first encoded by a RoBERTa model (Liu et al., 2019) and fed into another twolayer Transformer encoder. As shown in Figure 4, outputs from the text Transformer were concatenated with the outputs from the audio Transformer encoder and fed into the evidential output layer. Results are shown in Table 5. Incorporating text 6https://huggingface.co/facebook/wav2vec2-base-960h | Modality | v | a | d | | |-------------|-------|-------|-------|-------| | MSP-podcast | A | 0.506 | 0.698 | 0.613 | | A+T | 0.559 | 0.699 | 0.614 | | | Modality | v | a | d | | | IEMOCAP | A | 0.596 | 0.756 | 0.569 | | A+T | 0.609 | 0.754 | 0.575 | | information improves the estimation of valence but not necessarily for arousal and dominance. Similar phenomena were observed by (Triantafyllopoulos et al., 2022). A possible explanation is that text is effective for sentiment analysis (positive or negative) but may not be as informative as audio to determine a speaker's level of excitement. CCC for dominance improves more for IEMOCAP than MSP-Podcast possibly because IEMOCAP is an acted dataset and the emotion may be exaggerated compared with MSP-Podcast which contains natural emotion. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? "Limitations" section ✓ A2. Did you discuss any potential risks of your work? "Ethics statement" section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, introduction, and conclusions ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 & 5 ✓ B1. Did you cite the creators of artifacts you used? Section 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 4 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The resources used are aligned with their intended use (in Section 4). ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No known offensive content and identifiers. The database providers ensured the datasets are suitable for research use. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.1 ## C ✓ **Did You Run Computational Experiments?** Section 4 & 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.2 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Human annotators were used in the corpora provided but no new human annotations collected. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
liu-strube-2023-annotation
Annotation-Inspired Implicit Discourse Relation Classification with Auxiliary Discourse Connective Generation
https://aclanthology.org/2023.acl-long.874
Implicit discourse relation classification is a challenging task due to the absence of discourse connectives. To overcome this issue, we design an end-to-end neural model to explicitly generate discourse connectives for the task, inspired by the annotation process of PDTB. Specifically, our model jointly learns to generate discourse connectives between arguments and predict discourse relations based on the arguments and the generated connectives. To prevent our relation classifier from being misled by poor connectives generated at the early stage of training while alleviating the discrepancy between training and inference, we adopt Scheduled Sampling to the joint learning. We evaluate our method on three benchmarks, PDTB 2.0, PDTB 3.0, and PCC. Results show that our joint model significantly outperforms various baselines on three datasets, demonstrating its superiority for the task.
# Annotation-Inspired Implicit Discourse Relation Classification With Auxiliary Discourse Connective Generation ## Wei Liu And **Michael Strube** Heidelberg Institute for Theoretical Studies gGmbH {wei.liu, michael.strube}@h-its.org ## Abstract Implicit discourse relation classification is a challenging task due to the absence of discourse connectives. To overcome this issue, we design an end-to-end neural model to explicitly generate discourse connectives for the task, inspired by the annotation process of PDTB. Specifically, our model jointly learns to generate discourse connectives between arguments and predict discourse relations based on the arguments and the generated connectives. To prevent our relation classifier from being misled by poor connectives generated at the early stage of training while alleviating the discrepancy between training and inference, we adopt Scheduled Sampling to the joint learning. We evaluate our method on three benchmarks, PDTB 2.0, PDTB 3.0, and PCC. Results show that our joint model significantly outperforms various baselines on three datasets, demonstrating its superiority for the task. ## 1 Introduction Discourse relations, such as *Cause* and *Contrast*, describe the logical relation between two text spans (Pitler et al., 2009). Recognizing discourse relations is beneficial for various NLP tasks, including coherence modeling (Lin et al., 2011), reading comprehension (Mihaylov and Frank, 2019), argumentation mining (Habernal and Gurevych, 2017; Hewett et al., 2019), and machine translation (Meyer, 2015; Longyue, 2019). Discourse connectives (e.g., but, *as a result*) are words or phrases that signal the presence of a discourse relation (Pitler and Nenkova, 2009). They can be explicit, as in (1), or implicit, as in (2): (1) [I refused to pay the cobbler the full $95]Arg1 because [he did poor work.]Arg2 (2) [They put the treasury secretary back on the board.]Arg1 (**Implicit=However**) [There is doubt that the change would accomplish much.]Arg2 When discourse connectives are explicitly present between arguments, classifying the sense of a discourse relation is straightforward. For example, Pitler and Nenkova (2009) proved that using only connectives in a text as features, the accuracy of 4-way explicit discourse relation classification on PDTB 2.0 can reach 85.8%. However, for implicit cases, there are no connectives to explicitly mark discourse relations, which makes implicit discourse relation classification challenging (Zhou et al., 2010; Shi et al., 2017). Existing work attempts to perform implicit discourse relation classification directly from arguments. They range from designing linguistically informed features from arguments (Lin et al., 2009; Pitler et al., 2009) to modeling interaction between arguments using neural networks (Lei et al., 2017; Guo et al., 2018). Despite their impressive performance, the absence of explicit discourse connectives makes the prediction extremely hard and hinders further improvement (Lin et al., 2014; Qin et al., 2017). The huge performance gap between explicit and implicit classification (85.8% vs. 57.6%) (Liu and Li, 2016) motivates recent studies to utilize implicit connectives for the training process of implicit relation classifiers. For instance, Qin et al. (2017) developed an adversarial model to transfer knowledge from the model supplied with implicit connectives to the model without such information, while Kishimoto et al. (2020) proposed a multi-task learning framework to incorporate implicit connectives prediction as another training objective. However, we argue that these methods are suboptimal since connectives are still not explicitly present in input texts. This is demonstrated by Kishimoto et al. (2020), concluding that adding implicit connective prediction as a training objective provides only negligible gain for implicit relation classification on PDTB 2.0 (we empirically found the conclusion also held on the adversarial model). In this paper, we design a novel end-to-end 15696 model to leverage discourse connectives for the task of implicit discourse relation classification. The key inspiration is derived from the annotation process of implicit discourse relations in PDTB, which consists of inserting a connective that best conveys the inferred relation, and annotating the relation label based on both the inserted implicit connectives and contextual semantics (Prasad et al., 2008). We imitate this process by explicitly generating discourse connectives for the implicit relation classifier. Specifically, our model jointly learns to generate discourse connectives between arguments and predict discourse relations based on the arguments and the generated connectives. A potential drawback of this joint model is that the poorly generated connectives at the early stage of joint training may mislead the relation classifier. One possible solution is always feeding true connectives to the implicit relation classifier for training. But it leads to severe discrepancies between training and inference (Sporleder and Lascarides, 2008), since manually-annotated connectives are unavailable during evaluation (Prasad et al., 2008). To address this issue, we adopt Scheduled Sampling (Bengio et al., 2015) into our method. To be more specific, our relation classifier is first trained with hand-annotated implicit connectives and then gradually shifts to use generated connectives. We evaluate our model1 on two English corpora, PDTB 2.0 (Prasad et al., 2008), PDTB 3.0 (Webber et al., 2019), and a German corpus, PCC (Bourgonje and Stede, 2020), and compare it with other connective-enhanced approaches and existing stateof-the-art works. Results show that our method significantly outperforms those connective-enhanced baselines on three datasets while offering comparable performance to existing sota models. In addition, we perform the first systematic analysis of different connective-enhanced models to investigate why our method works better. Our studies show that: (1) models learn to use connectives more effectively when putting connectives in the input rather than using them as training objectives; (2) end-to-end training can improve models' robustness to incorrectly-predicted connectives; (3) our method shows a better balance between arguments and connectives for relation prediction than other baselines. Finally, we show that connectives can effectively improve the predictive performance on frequent relations while failing on those with ## 2 Related Work Implicit discourse relation classification, as a challenging part of shallow discourse parsing, has drawn much attention since the release of PDTB 2.0 (Prasad et al., 2008). Most of the work focused on predicting implicit relations directly from input arguments. For example, early statistical methods have put much effort into designing linguistically informed features from arguments (Pitler et al., 2009; Pitler and Nenkova, 2009; Lin et al., 2009; Rutherford and Xue, 2014). More recently, neural networks (Zhang et al., 2015; Kishimoto et al., 2018; Liu et al., 2020; Wu et al., 2022; Long and Webber, 2022) have been applied to learning useful semantic and syntactic information from arguments due to their strength in representation learning. Despite achieving impressive results, the absence of connectives makes their performance still lag far behind explicit discourse parsing. The question of how to leverage discourse connectives for implicit discourse relation classification has received continued research attention. Zhou et al. (2010) proposed a pipeline method to investigate the benefits of connectives recovered from an n-gram language model for implicit relation recognition. Their results show that using recovered connectives as features can achieve comparable performance to a strong baseline. This pipeline-based method is further improved by following efforts, including integrating pre-trained models (Kurfalı and Östling, 2021; Jiang et al., 2021) and using prompt strategies (Xiang et al., 2022; Zhou et al., 2022). However, some works (Qin et al., 2017; Xiang and Wang, 2023) pointed out that pipeline methods suffer cascading errors. Recent studies have shifted to using end-to-end neural networks. Qin et al. (2017) proposed a feature imitation framework in which an implicit relation network is driven to learn from another neural network with access to connectives. Shi and Demberg (2019) designed an encoder-decoder model that generates implicit connectives from texts and learns a relation classifier using the representation of the encoder. Kishimoto et al. (2020) investigated a multi-task learning approach to predict connectives and discourse relations simultaneously. Our method is in line with those recent approaches exploiting connectives with an end-to-end neural network. The main difference is that those models 1https://github.com/liuwei1206/ConnRel ![2_image_0.png](2_image_0.png) focus on using implicit connectives in a non-input manner (i.e. they do not input implicit connectives as features but utilize them as another training signal), whereas our method explicitly generates connectives and inputs both arguments and the generated connectives into the relation classifier. Our method can be viewed as a joint learning framework. Such a framework has been used to learn information exchange and reduce error propagation between related tasks (Zhang, 2018). Collobert et al. (2011) designed a unified neural model to perform tagging, chunking, and NER jointly. Søgaard and Goldberg (2016) refined this unified framework by putting low-level tasks supervised at lower layers. Miwa and Bansal (2016) presented an LSTM-based model to extract entities and the relations between them. Strubell et al. (2018) proposed a joint model for semantic role labeling (SRL), in which dependency parsing results were used to guide the attention module in the SRL task. Compared with these works, our joint learning framework is different in both motivation and design. For example, instead of simply sharing an encoder between tasks, we input the results of connective generation into the relation classifier. ## 3 Method Inspired by the annotation process of PDTB, we explicitly generate discourse connectives for implicit relation classification. Following previous work (Lin et al., 2009), we use the gold standard arguments and focus on relation prediction. Figure 1 shows the overall architecture of our proposed model. It consists of two components: (1) generating a discourse connective between arguments; (2) predicting discourse relation based on arguments and the generated connective. In this section, we describe these two components in detail and show the challenges during training and our solutions. Formally, let X1 = {x1*, ..., x*n} and X2 = {xn+1*, ..., x*n+m} be the two input arguments (Arg1 and Arg2) of implicit relation classification, where xi denotes the i-th word in Arg1 and xn+j denotes the j-th word in Arg2. We denote the relation between those two arguments as y. Similar to the setup in existing connective enhanced methods, each training sample (X1, X2*, c, y*) also includes an annotated implicit connective c that best expresses the relation. During the evaluation, only arguments (X1, X2) are available to the model. ## 3.1 Connective Generation Connective generation aims to generate a discourse connective between two arguments (shown in the left part of Figure 1). We achieve this by using bidirectional masked language models (Devlin et al., 2019), such as RoBERTa. Specifically, we insert a [MASK] token between two arguments and generate a connective on the masked position. Given a pair of arguments Arg1 and Arg2, we first concatenate a [CLS] token, argument Arg1, a [MASK] token, argument Arg2, and a [SEP] token into Xe = {[CLS] X1 [MASK] X2 [SEP]}. For each token x˜iin Xe, we convert it into the vector space by adding token, segment, and position embeddings, thus yielding input embeddings E ∈ R (n+m+3)×d, where d is the hidden size. Then we input E into L stacked Transformer blocks, and each Transformer layer acts as follows: $$\begin{array}{l}{{G=\mathrm{LN}(H^{l-1}+\mathrm{MHAttn}(H^{l-1}))}}\\ {{H^{l}=\mathrm{LN}(G+\mathrm{FFN}(G))}}\end{array}\quad\mathrm{(1)}$$ where Hl denotes the output of the l-th layer and H0 = E; LN is layer normalization; MHAttn is the multi-head attention mechanism; FFN is a twolayer feed-forward network with ReLU as hidden activation function. To generate a connective on the masked position, we feed the hidden state of the [MASK] token after L Transformer layers into a language model head (LMHead): $$\mathbf{p}^{c}=\mathrm{LMHead}(h_{[\mathrm{MASK}]}^{L})$$ [MASK]) (2) where p c denotes the probabilities over the whole connective vocabulary. However, a normal LMHead can only generate one word without the capacity to generate multi-word connectives, such as "for instance". To overcome this shortcoming, we create several special tokens in LMHead's vocabulary to represent those multi-word connectives, and initialize their embedding with the average embedding of the contained single words. Taking "for instance" as an example, we create a token [for_instance] and set its embedding as Average(embed("for"), embed("instance")). We choose cross-entropy as loss function for the connective generation module: $${\mathcal{L}}_{C o n n}=-\sum_{i=0}^{N}\sum_{j=0}^{C N}C_{i j}\log(P_{i j}^{c})\qquad(3)$$ where Ciis the annotated implicit connective of the i-th sample with a one-hot scheme, CN is the total number of connectives. ## 3.2 Relation Classification The goal of relation classification is to predict the implicit relation between arguments. Typically, it is solved using only arguments as input (Zhang et al., 2015; Kishimoto et al., 2018). In this work, we propose to predict implicit relations based on both input arguments and the generated connectives (shown in the right part of Figure 1). First, we need to obtain a connective from the connective generation module. A straightforward way to do so is to apply the arg max operation on the probabilities output by LMHead, i.e. Conn = arg max(p c). However, it is a non-differentiable process, which means the training signal of relation classification can not be propagated back to adjust the parameters of the connective generation module. Hence, we adopt the Gumbel-Softmax technique (Jang et al., 2017) for the task. The Gumbel-Softmax technique has been shown to be an effective approximation to the discrete variable (Shi et al., 2021). Therefore, we use $$\begin{array}{l}{{g=-\log(-\log(\xi)),\ \xi\sim\mathrm{U}(0,1)}}\\ {{{\bf c}_{i}=\frac{\exp((\log(p_{i}^{c})+g_{i})/\tau)}{\sum_{j}\exp((\log(p_{j}^{c})+g_{j})/\tau)}}}\end{array}\quad\quad(4)$$ $$(2)$$ as the approximation of the one-hot vector of the generated connective on the masked position (denoted as Conn in Figure 1), where g is the Gumbel distribution, U is the uniform distribution, p c i is the probability of i-th connective output by the LMHead, τ ∈ (0, ∞) is a temperature parameter. After we have obtained the generated connective "Conn", we concatenate it with arguments and construct a new input as X¯ = {[CLS] X1 Conn X2 [SEP]}. This new form of input is precisely the same as the input in explicit discourse relation classification. We argue that the key to fully using connectives is to insert them into the input texts instead of treating them simply as a training objective. Like the connective generation module, we feed X¯ into an Embedding layer and L stacked Transformer blocks. Note that we share the Embedding Layer and Transformers between connective generation and relation classification modules. Doing so can not only reduce the total memory for training the model but also prompt the interaction between two tasks. Finally, we feed the outputs of the L-th Transformer at [CLS] position to a relation classification layer: $$\mathbf{p}^{r}=\mathrm{softmax}(\mathbf{W}_{r}h_{[\mathrm{CLS}]}^{L}+\mathbf{b}_{r})$$ [CLS] + br) (5) where Wr and br are learnable parameters. Similarly, we use cross-entropy for training, and the loss is formulated as: loss is formulated as: $$\mathcal{L}_{Rel}=-\sum_{i=0}^{N}\sum_{j=0}^{RN}Y_{ij}\log(P_{ij}^{r})\tag{6}$$ where $Y_{i}$ is the ground truth relation of the $i$-th $$(5)$$ sample with a one-hot scheme, RN is the total number of relations. ## 3.3 Training And Evaluation To jointly train those two modules, we use a multitask loss: $${\mathcal{L}}={\mathcal{L}}_{C o m n}+{\mathcal{L}}_{R e l}$$ L = LConn + LRel (7) A potential issue of this training is that poorly generated connectives at an early stage of joint training ## Algorithm 1 Scheduled Sampling In Training $$\mathbf{a}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{b}\cdot\mathbf{c}$$ . Input: relation classifier RelCls, arguments X1, X2, annotated connective true_conn, generated connective gene_conn, training step t, hyperparameter in decay k Output: logits 1: p = random() ▷ [0.0, 1.0) 2: ϵt =k $\frac{\rlap{/}k}{\rlap{/}k+\exp(t/k)}\\ \leq\ \epsilon_1$ then ... 3: if p < ϵt **then** 4: logits = RelCls(X1, X2,true_conn) 5: **else** 6: logits = RelCls(X1, X2, gene_conn) 7: **end if** may mislead the relation classifier. One possible solution is always providing manually annotated implicit connectives to the relation classifier, similar to Teacher Forcing (Ranzato et al., 2016). But this might lead to a severe discrepancy between training and inference since manually annotated connectives are not available during inference. We solve those issues by introducing Scheduled Sampling (Bengio et al., 2015) into our method. Scheduled Sampling is designed to sample tokens between gold references and model predictions with a scheduled probability in seq2seq models. We adopt it into our training by sampling between manuallyannotated and the generated connectives. Specifically, we use the inverse sigmoid decay (Bengio et al., 2015), in which probability of sampling manually annotated connectives at the t-th training step is calculated as follows: $$\epsilon_{t}=\frac{\kappa}{k+\exp(t/k)}$$ k + exp(t/k)(8) where k ≥ 1 is a hyper-parameter to control the convergence speed. In the beginning, training is similar to Teacher Forcing due to ϵt ≈ 1. As the training step t increases, the relation classifier gradually uses more generated connectives, and eventually uses only generated ones (identical to the evaluation setting) when ϵt ≈ 0. We show the sampling process during training in Algorithm 1. During inference, we generate a connective Conn through arg max(p c), feed the generated Conn and arguments into the relation classifier, and choose the relation type that possesses the maximum value in p r. ## 4 Experiments We carry out a set of experiments to investigate the effectiveness of our method across different corpora and dataset splittings. In addition, we perform analyses showing that our model learns a better balance between using connectives and arguments than baselines. ## 4.1 Experimental Settings Datasets. We evaluate our model on two English corpora, PDTB 2.0 (Prasad et al., 2008), PDTB 3.0 (Webber et al., 2019), and a German corpus, PCC (Bourgonje and Stede, 2020). In PDTB, instances are annotated with senses from a three-level sense hierarchy. We follow previous works (Ji and Eisenstein, 2015; Kim et al., 2020) to use top-level 4-way and second-level 11-way classification for PDTB 2.0, and top-level 4-way and second-level 14-way for PDTB 3.0. As for the dataset split, we adopt two different settings for both PDTB 2.0 and PDTB 3.0. The first one is proposed by Ji and Eisenstein (2015), where sections 2-20, sections 0-1, and sections 21-22 are used as training, development, and test set. The second one is called section-level cross-validation (Kim et al., 2020), in which 25 sections are divided into 12 folds with 2 validation, 2 test, and 21 training sections. There are over one hundred connectives in PDTB (e.g., 102 in PDTB 2.0), but some rarely occur (e.g., only 7 for "next" in PDTB 2.0). To reduce the complexity of connective generation and ensure each connective has sufficient training data, we only consider connectives with a frequency of at least 100 in the experiments. PCC is a German corpus following the annotation guidelines of PDTB. For this corpus, we only use the second-level 8-way classification since the distribution of top-level relations is highly uneven (Bourgonje, 2021). A more detailed description and statistics of the datasets are given in Appendix A. Implementation Details. We implement our model using the Pytorch library. The bidirectional masked language model used in our work is RoBERTabase, which is initialized with the pretrained checkpoint from Huggingface. For hyperparameter configurations, we mainly follow the settings in RoBERTa (Liu et al., 2019). We use the AdamW optimizer with an initial learning rate of 1e-5, a batch size of 16, and a maximum epoch number of 10 for training. Considering the training variability in PDTB, we report the mean performance of 5 random restarts for the "Ji" splits and that of the section-level cross-validation (Xval) like Kim et al. (2020). For PCC, we conduct a 5-fold | Level1 4-way | Level2 11-way | | | | | | | | |------------------------|-----------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | Ji | Xval | Ji | Xval | | | | | | | Models | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | | Liu et al. (2020) | 69.060.43 | 63.390.56 | - | - | 58.130.67 | - | - | - | | Kim et al. (2020) | 66.30 | 56.00 | - | - | 54.730.79 | - | 52.980.29 | - | | Wu et al. (2022) | 71.18 | 63.73 | - | - | 60.33 | 40.49 | - | - | | Zhou et al. (2022) | 70.84 | 64.95 | - | - | 60.54 | 41.55 | - | - | | Long and Webber (2022) | 72.18 | 69.60 | - | - | 61.69 | 49.66 | - | - | | RoBERTa | 68.610.73 | 60.890.19 | 68.661.29 | 60.491.86 | 58.840.48 | 39.310.83 | 55.401.65 | 36.512.75 | | RoBERTaConn | 55.340.39 | 37.472.27 | 54.282.12 | 34.712.75 | 31.972.75 | 17.102.81 | 32.122.63 | 17.912.12 | | Adversarial | 69.430.70 | 62.440.61 | 69.131.14 | 60.631.47 | 57.631.10 | 38.812.25 | 54.431.79 | 36.792.24 | | Multi-Task | 70.820.72 | 63.790.82 | 70.021.40 | 62.191.84 | 60.210.94 | 39.750.70 | 56.851.13 | 36.832.42 | | Pipeline | 71.010.89 | 64.651.03 | 69.121.03 | 61.650.89 | 59.420.54 | 40.840.39 | 55.241.72 | 37.032.83 | | Our Model | 74.590.44 | 68.640.67 | 71.331.25 | 63.841.96 | 62.750.59 | 42.360.38 | 57.981.22 | 39.053.53 | Table 1: Results on PDTB 2.0. Subscripts are the standard deviation of the mean performance. cross-validation (Xval) on this corpus due to its limited number of data. We use standard accuracy (Acc, %) and F1-macro (F1, %) as evaluation metrics. We show more detailed settings and hyperparameters in Appendix B. Baselines. To demonstrate the effectiveness of our model, we compare it with state-of-the-art connective-enhanced methods and several variants of our model: - **RoBERTa**. Finetune RoBERTa for implicit relation classification. Only arguments (Arg1, Arg2) are input for training without using any implicit discourse connective information. - **RoBERTaConn**. A variant of the RoBERTa baseline. During training, we feed both arguments and annotated connectives, i.e., (Arg1, Arg2, true_conn), to RoBERTa. During inference, only arguments (Arg1, Arg2) are input to the model. - **Adversarial**. An adversarial-based connective enhanced method (Qin et al., 2017), in which an implicit relation network is driven to learn from another neural network with access to connectives. We replace its encoder with RoBERTabase for a fair comparison. - **Multi-Task**. A multi-task framework for implicit relation classification (Kishimoto et al., 2020), in which connective prediction is introduced as another training task. We equip it with the same RoBERTabase as our method. - **Pipeline**. A pipeline variant of our method, in which we first train a connective generation model, then learn a relation classifier with arguments and the generated connectives. Note that these two modules are trained separately. Further, we compare our method against previous state-of-the-art models on each corpus. ## 4.2 Overall Results PDTB 2.0. Table 1 shows the experimental results on PDTB 2.0. RoBERTaConn shows a much worse performance than the RoBERTa baseline on this corpus, indicating that simply feeding annotated connectives to the model causes a severe discrepancy between training and evaluation. This is also somewhat in accord with Sporleder and Lascarides (2008), which shows that models trained on explicitly-marked examples generalize poorly to implicit relation identification. Discourse connective-enhanced models, including Adversarial, Multi-Task, Pipeline and Our Method, achieve better performance than the RoBERTa baseline. This demonstrates that utilizing the annotated connectives information for training is beneficial for implicit relation classification. The improvement of Adversarial and Multi-task over the RoBERTa baseline is limited and unstable. We argue this is because they do not exploit connectives in the way of input features but treat them as training objectives, thus limiting connectives' contributions to implicit relation classification. Pipeline also shows limited performance gain over the baseline. We speculate that this is due to its pipeline setting (i.e. connective generation → relation classification), which propagates errors in connective generation to relation classification (Qin et al., 2017). Compared to the above connectiveenhanced models, our method's improvement over the RoBERTa baseline is bigger, which suggests that our approach is more efficient in utilizing connectives. To further show the efficiency of our method, we compare it against previous state-of- ![6_image_0.png](6_image_0.png) the-art models on PDTB 2.0 (Liu et al., 2020; Kim et al., 2020; Wu et al., 2022; Zhou et al., 2022; Long and Webber, 2022). The first block of Table 1 shows the results of those models, from which we observe that our model outperforms most of them, especially on accuracy, achieving the best results on this corpus. The only exception is that the F1-score of our method lags behind Long and Webber (2022), particularly on level2 classification. This is because our method cannot predict several fine-grained relations (see Section 4.4), such as Comparison.Concession, which leads to the low averaged F1 at the label-level. PDTB 3.0 / PCC. Results on PDTB 3.0 and PCC are shown in Table 2. Similar to the results on the PDTB 2.0 corpus, simply feeding connectives for training (RoBERTaConn) hurts the performance, especially on the Level2 classification of PDTB 3.0. Adversarial and Multi-Task perform better than the RoBERTa baseline, although their improvement is limited. Despite suffering cascading errors, Pipeline shows comparative and even better results than Adversarial and Multi-Task on the two corpora. This indicates the advantage of utilizing connectives as input features rather than a training objective, particularly on PCC. Consistent with the results on PDTB 2.0, our method outperforms Adversarial, Multi-task, and Pipeline on both datasets, demonstrating the superiority of inputting connectives to the relation classifier in an end-toend manner and also showing that it works well on different languages. We further compare our method with three existing sota models on PDTB 3.0, Kim et al. (2020), Xiang et al. (2022), and Long and Webber (2022). Results in Table 2 show that our approach performs better than these three models. ## 4.3 Performance Analysis To figure out why our model works well, we first perform analyses on its behavior answering two ![6_image_1.png](6_image_1.png) questions: (1) whether it really benefits from discourse connectives; (2) whether it can also make correct predictions when connectives are missing. We then investigate the relation classifier's performance in the different models when connectives are correctly and incorrectly generated (or predicted). We perform the first analysis by replacing the generated connectives in our model with manuallyannotated ones2, and compare its performance before and after this setup. Intuitively, if our model benefits from discourse connectives, accuracy and F1-macro should increase after the change. For comparison, we apply the same setup to other connective-enhanced models. We conduct experiments3 on the Level1 classification of PDTB 2.0 (Ji split), and show the accuracy results in Figure 2. As expected, our model's performance shows a substantial improvement, demonstrating that it does learn to use discourse connectives for implicit relation classification. Other connective-enhanced models also perform better in such a setup but with 2In PDTB 2.0 and PDTB 3.0, each instance contains annotated implicit connectives, making this analysis possible. 3We show more detailed results and also case studies in Appendix C. ![7_image_0.png](7_image_0.png) a different degree of gain. Specifically, models that use connectives as input features during training (RoBERTaConn, Pipeline, and Our Method) show more increase and have higher upper bounds than models that use connectives as training objectives (Adversarial and Multi-Task). This aligns with our assumption that putting connectives in the input is more efficient for a model learning to use discourse connectives for implicit relation classification than treating them as training objectives. However, inputting connectives for training can lead to another severe issue, i.e., the model relies too much on connectives for prediction. For instance, the RoBERTaConn's performance will drop from 96.69% to 55.34% when manually-annotated connectives are not available. To probe whether our model suffers such an issue, we perform the second analysis by removing the generated connectives in our model and observing changes in its performance. The same setting is applied to Pipeline for comparison. Figure 3 shows the Level1 classification results3 on PDTB 2.0 (Ji split). Both models see a performance drop but still outperform RoBERTaConn. This is because these two models' relation classifiers input the generated connectives rather than the annotated ones for training, alleviating their reliance on connectives. The decrease of Our Method (74.59% → 72.27%) is much smaller than that of Pipeline (71.01% → 58.15%). We speculate that the end-to-end training enables our model to learn a good balance between arguments and discourse connectives for relation classification. By contrast, Pipeline fails to do so due to the separate training of connectives generation and relation classification. | Models | Correct Group | Incorrect Group | |----------------|-----------------|-------------------| | BaseMulti-Task | 83.67 | 59.82 | | Multi-Task | 90.60(+6.93) | 59.88(+0.06) | | BasePipeline | 78.87 | 61.46 | | Pipeline | 89.29(+10.4) | 59.81(-1.64) | | BaseOur Model | 80.28 | 60.56 | | Our Model | 94.04(+13.8) | 62.22(+1.66) | method4 on PDTB 2.0 when connectives are correctly and incorrectly generated or predicted. Note that these three models' results are not directly comparable in the correct and incorrect groups since their predictions on connectives are different3(not overlap). To solve this, we calculate the performance gain of each model over the RoBERTa baseline and compare them from the gain perspective. When connectives are correctly generated, Pipeline and Our Model outperform the RoBERTa baseline by more than 10% in accuracy, while Multitask's improvement is only 6.9%. This suggests that Pipeline and Our Model utilize connectives more efficiently than Multi-Task. On the other hand, when the connectives' prediction is incorrect, Pipeline's performance is worse than the RoBERTa baseline by 1.64%. Compared to it, Multi-task and Our Method achieve comparable performance to RoBERTa, showing good robustness when exposed to incorrect connectives. Despite achieving better results than baselines in both groups, our model performs significantly worse in the incorrect connective group than in the correct one. This indicates that its major performance bottleneck originates from the incorrectly generated connectives. A possible improvement is first pre-training our model on a large explicit connectives corpus, like Sileo et al. (2019). By doing so, the connective generation module may generate more correct connectives, thus improving classification performance, which we leave for future work. ## 4.4 Relation Analysis We investigate which relations benefit from the joint training of connective generation and relation classification and compare it with other baselines. Table 4 shows different models' F1-score for each second-level sense of PDTB 2.0 (Ji split). Generally, connectives benefit the prediction of most Finally, we show in Table 3 the results of relation classifiers in Multi-Task, Pipeline, and Our 4This analysis is not performed on other models (e.g., Adversarial) because they don't generate or predict connectives. | Labels | RoBERTa | Adversarial | Multi-Task | Pipeline | Our Model | |-----------------------------|-----------|---------------|--------------|------------|-------------| | Temporal.Asynchronous | 54.62 | 55.01 | 58.37 | 55.69 | 59.48 | | Temporal.Synchrony | 00.00 | 06.03 | 00.00 | 04.00 | 00.00 | | Contingency.Cause | 60.03 | 59.00 | 64.24 | 65.40 | 66.35 | | Contingency.Pragmatic cause | 00.00 | 05.00 | 00.00 | 00.00 | 00.00 | | Comparison.Contrast | 60.44 | 58.20 | 61.73 | 60.78 | 65.75 | | Comparison.Concession | 00.00 | 01.14 | 00.00 | 01.82 | 00.00 | | Expansion.Conjunction | 56.03 | 53.26 | 58.94 | 54.79 | 57.04 | | Expansion.Instantiation | 74.07 | 72.85 | 74.12 | 70.76 | 73.87 | | Expansion.Restatement | 57.87 | 56.94 | 59.68 | 57.75 | 60.94 | | Expansion.Alternative | 49.06 | 44.76 | 54.82 | 43.96 | 51.13 | | Expansion.List | 18.07 | 11.68 | 11.43 | 29.96 | 25.47 | | PDTB 2.0 | PDTB 3.0 | | | | |-------------|------------|-------|-------|-------| | Models | Acc | F1 | Acc | F1 | | Our Model | 74.59 | 68.64 | 76.23 | 71.15 | | - SS | 73.42 | 66.68 | 75.87 | 70.68 | | - SS, LConn | 70.63 | 63.43 | 74.58 | 69.17 | | RoBERTa | 68.61 | 60.89 | 73.51 | 67.98 | Table 4: F1 results for each second-level relation of PDTB 2.0. Table 5: Ablation study for Scheduled Sampling and connective generation loss L*Conn*. relation types, especially in Multi-Task, Pipeline, and Our Method. For example, these three models outperform the RoBERTa baseline by more than 4% in the F1-score on the Contingency.Cause relation. On some relations, such as Expansion.Instantiation, connective-enhanced models show different tendencies, with some experiencing improvement while others drop. Surprisingly, all models fail to predict Temporal.Synchrony, Contingency.Pragmatic cause, and Comparison.Concession despite using manually-annotated connectives during training. We speculate this is caused by their limited number of training instances, making models tend to predict other frequent labels. One feasible solution to this issue is Contrastive Learning (Chen et al., 2020), which has been shown to improve the predictive performance of these three relations (Long and Webber, 2022). We leave integrating Contrastive Learning with our method to future work. ## 4.5 Ablation Study We conduct ablation studies to evaluate the effectiveness of Scheduled Sampling (SS) and the Connective generation loss L*Conn*. To this end, we test the performance of our method by first removing SS and then removing L*Conn*. Note that removing L*Conn* means that our whole model is trained with only gradients from LRel. Table 5 shows the Level1 classification results on PDTB 2.0 and PDTB 3.0 (Ji split). We can observe from the table that eliminating any of them would hurt the performance, showing their essential to achieve good performance. Surprisingly, our model training with only LRel performs much better than the RoBERTa baseline. This indicates that the performance gain of our full model comes not only from the training signals provided by manually-annotated connectives but also from its well-designed structure inspired by PDTB's annotation (i.e. the connective generation module and relation prediction module). ## 5 Conclusion In this paper, we propose a novel connectiveenhanced method for implicit relation classification, inspired by the annotation of PDTB. We introduce several key techniques to efficiently train our model in an end-to-end manner. Experiments on three benchmarks demonstrate that our method consistently outperforms various baseline models. Analyses of the models' behavior show that our approach can learn a good balance between using arguments and connectives for implicit discourse relation prediction. ## 6 Limitations Despite achieving good performance, there are some limitations in our study. The first is how to handle ambiguous instances in the corpus. 3.45% of the implicit data in PDTB 2.0 and 5% in PDTB 3.0 contains more than one label. Currently, we follow previous work and simply use the first label for training. But there might be a better solution to handle those cases. Another is the required time for training. To mimic the annotation process of PDTB, our model needs to pass through the embedding layer and transformers twice, so it takes more time to train than the RoBERTa baseline. However, our training time is shorter than Pipeline and Adversarial due to those two models' pipeline setup and adversarial training strategy. Also, note that our method has a similar number of parameters to the RoBERTa baseline since we share embedding layers and transformers between the connection generation and relation classification modules in our approach. Therefore, the memory required to train our model is not much different from that required to train the RoBERTa baseline. ## Acknowledgements The authors would like to thank the three anonymous reviewers for their comments. We also thank Xiyan Fu for her valuable feedback on earlier drafts of this paper. This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a Heidelberg Institute for Theoretical Studies Ph.D. scholarship. ## References Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In *Advances in Neural Information Processing Systems*, volume 28. Peter Bourgonje. 2021. Shallow Discourse Parsing for German. Doctoral Thesis, Universität Potsdam. Peter Bourgonje and Manfred Stede. 2020. The Potsdam commentary corpus 2.2: Extending annotations for shallow discourse parsing. In *Proceedings of the* Twelfth Language Resources and Evaluation Conference, pages 1061–1066, Marseille, France. European Language Resources Association. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning*, pages 1597–1607. PMLR. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. *Journal of Machine Learning Research*, 12:2493–2537. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Fengyu Guo, Ruifang He, Di Jin, Jianwu Dang, Longbiao Wang, and Xiangang Li. 2018. Implicit discourse relation recognition using neural tensor network with interactive attention and sparse learning. In *Proceedings of the 27th International Conference* on Computational Linguistics, pages 547–558, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Ivan Habernal and Iryna Gurevych. 2017. Argumentation mining in user-generated web discourse. *Computational Linguistics*, 43(1):125–179. Freya Hewett, Roshan Prakash Rane, Nina Harlacher, and Manfred Stede. 2019. The utility of discourse parsing features for predicting argumentation structure. In *Proceedings of the 6th Workshop on Argument Mining*, pages 98–103, Florence, Italy. Association for Computational Linguistics. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, ICLR 2017. Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. *Transactions of the Association for Computational Linguistics*, 3:329–344. Congcong Jiang, Tieyun Qian, Zhuang Chen, Kejian Tang, Shaohui Zhan, and Tao Zhan. 2021. Generating pseudo connectives with mlms for implicit discourse relation recognition. In *PRICAI 2021: Trends* in Artificial Intelligence, pages 113–126, Cham. Springer International Publishing. Najoung Kim, Song Feng, Chulaka Gunasekara, and Luis Lastras. 2020. Implicit discourse relation classification: We need to talk about evaluation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5404– 5414, Online. Association for Computational Linguistics. Yudai Kishimoto, Yugo Murawaki, and Sadao Kurohashi. 2018. A knowledge-augmented neural network model for implicit discourse relation classification. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 584–595, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Yudai Kishimoto, Yugo Murawaki, and Sadao Kurohashi. 2020. Adapting BERT to implicit discourse relation classification with a focus on discourse connectives. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1152– 1158, Marseille, France. European Language Resources Association. Murathan Kurfalı and Robert Östling. 2021. Let's be explicit about that: Distant supervision for implicit discourse relation classification via connective prediction. In *Proceedings of the 1st Workshop on Understanding Implicit and Underspecified Language*, pages 1–10, Online. Association for Computational Linguistics. Wenqiang Lei, Xuancong Wang, Meichun Liu, Ilija Ilievski, Xiangnan He, and Min-Yen Kan. 2017. Swim: A simple word interaction model for implicit discourse relation recognition. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 4026–4032. Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. In *Proceedings of the 2009 Conference on Empirical Methods in Natural Language* Processing, pages 343–351, Singapore. Association for Computational Linguistics. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 997–1006, Portland, Oregon, USA. Association for Computational Linguistics. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-styled end-to-end discourse parser. *Natural* Language Engineering, 20(2):151–184. Xin Liu, Jiefu Ou, Yangqiu Song, and Xin Jiang. 2020. On the importance of word and sentence representation learning in implicit discourse relation classification. In *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20*, pages 3830–3836. International Joint Conferences on Artificial Intelligence Organization. Main track. Yang Liu and Sujian Li. 2016. Recognizing implicit discourse relations via repeated reading: Neural networks with multi-level attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1224–1233, Austin, Texas. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT Pretraining Approach. *CoRR*, abs/1907.11692. Wanqiu Long and Bonnie Webber. 2022. Facilitating contrastive learning of discourse relational senses by exploiting the hierarchy of sense relations. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 10704– 10716, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Wang Longyue. 2019. *Discourse-aware neural machine* translation. Ph.D. thesis, Dublin City University. Thomas Meyer. 2015. *Discourse-level features for statistical machine translation*. Ph.D. thesis, École polytechnique fédérale de Lausanne (EPFL). Todor Mihaylov and Anette Frank. 2019. Discourseaware semantic self-attention for narrative reading comprehension. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 2541–2552, Hong Kong, China. Association for Computational Linguistics. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 1105–1116, Berlin, Germany. Association for Computational Linguistics. Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 683–691, Suntec, Singapore. Association for Computational Linguistics. Emily Pitler and Ani Nenkova. 2009. Using syntax to disambiguate explicit discourse connectives in text. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 13–16, Suntec, Singapore. Association for Computational Linguistics. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. In *Proceedings of the Sixth International Conference* on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA). Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, and Eric Xing. 2017. Adversarial connective-exploiting networks for implicit discourse relation classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1006–1017, Vancouver, Canada. Association for Computational Linguistics. Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016. Attapol Rutherford and Nianwen Xue. 2014. Discovering implicit discourse relations through brown cluster pair representation and coreference patterns. In *Proceedings of the 14th Conference of the European* Chapter of the Association for Computational Linguistics, pages 645–654, Gothenburg, Sweden. Association for Computational Linguistics. Jihao Shi, Xiao Ding, Li Du, Ting Liu, and Bing Qin. 2021. Neural natural logic inference for interpretable question answering. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 3673–3684, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Wei Shi and Vera Demberg. 2019. Learning to explicitate connectives with Seq2Seq network for implicit discourse relation classification. In Proceedings of the 13th International Conference on Computational Semantics - Long Papers, pages 188–199, Gothenburg, Sweden. Association for Computational Linguistics. Wei Shi, Frances Yung, Raphael Rubino, and Vera Demberg. 2017. Using explicit discourse connectives in translation for implicit discourse relation classification. In *Proceedings of the Eighth International Joint* Conference on Natural Language Processing (Volume 1: Long Papers), pages 484–495, Taipei, Taiwan. Asian Federation of Natural Language Processing. Damien Sileo, Tim Van De Cruys, Camille Pradel, and Philippe Muller. 2019. Mining discourse markers for unsupervised sentence representation learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3477–3486, Minneapolis, Minnesota. Association for Computational Linguistics. Anders Søgaard and Yoav Goldberg. 2016. Deep multitask learning with low level tasks supervised at lower layers. In *Proceedings of the 54th Annual Meeting of* the Association for Computational Linguistics (Volume 2: Short Papers), pages 231–235, Berlin, Germany. Association for Computational Linguistics. Caroline Sporleder and Alex Lascarides. 2008. Using automatically labelled examples to classify rhetorical relations: an assessment. *Natural Language Engineering*, 14(3):369–416. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguisticallyinformed self-attention for semantic role labeling. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 5027–5038, Brussels, Belgium. Association for Computational Linguistics. Bonnie Webber, Rashmi Prasad, Alan Lee, and Aravind Joshi. 2019. The Penn Discourse TreeBank 3.0 annotation manual. *Philadelphia, University of Pennsylvania*, 35:108. Changxing Wu, Liuwen Cao, Yubin Ge, Yang Liu, Min Zhang, and Jinsong Su. 2022. A label dependenceaware sequence generation model for multi-level implicit discourse relation recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):11486–11494. Wei Xiang and Bang Wang. 2023. A survey of implicit discourse relation recognition. *ACM Computing Surveys*, 55(12):1–34. Wei Xiang, Zhenglin Wang, Lu Dai, and Bang Wang. 2022. ConnPrompt: Connective-cloze prompt learning for implicit discourse relation recognition. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 902–911, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Attapol Rutherford, Bonnie Webber, Chuan Wang, and Hongmin Wang. 2016. CoNLL 2016 shared task on multilingual shallow discourse parsing. In *Proceedings of* the CoNLL-16 shared task, pages 1–19, Berlin, Germany. Association for Computational Linguistics. Biao Zhang, Jinsong Su, Deyi Xiong, Yaojie Lu, Hong Duan, and Junfeng Yao. 2015. Shallow convolutional neural network for implicit discourse relation recognition. In *Proceedings of the 2015 Conference on* Empirical Methods in Natural Language Processing, pages 2230–2235, Lisbon, Portugal. Association for Computational Linguistics. Yue Zhang. 2018. Joint models for NLP. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, Melbourne, Australia. Association for Computational Linguistics. Hao Zhou, Man Lan, Yuanbin Wu, Yuefeng Chen, and Meirong Ma. 2022. Prompt-based connective prediction method for fine-grained implicit discourse relation recognition. In *Findings of the Association* for Computational Linguistics: EMNLP 2022, pages 3848–3858, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Zhi Min Zhou, Man Lan, Zheng Yu Niu, Yu Xu, and Jian Su. 2010. The effects of discourse connectives prediction on implicit discourse relation recognition. In *Proceedings of the SIGDIAL 2010 Conference*, pages 139–146, Tokyo, Japan. Association for Computational Linguistics. ![12_image_1.png](12_image_1.png) ![12_image_2.png](12_image_2.png) | PDTB 2.0 | PDTB 3.0 | |-----------------------------|------------------------------------------------------------------------------------| | Comparison | Comparison | | Contingency | Contingency | | Expansion | Expansion | | Temporal | Temporal | | Comparison.Concession | Comparison.Concession | | Comparison.Contrast | Comparison.Contrast | | Contingency.Cause | Contingency.Cause | | Contingency.Pragmatic cause | Contingency.Cause+Belief | | Expansion.Conjunction | Contingency.Condition | | Expansion.Instantiation | Contingency.Purpose | | Expansion.Alternative | Expansion.Conjunction | | Expansion.List | Expansion.Equivalence | | Expansion.Restatement | Expansion.Instantiation | | Temporal.Asynchronous | Expansion.Level-of-detail | | Temporal.Synchrony | Expansion.Manner Expansion.Substitution Temporal.Asynchronous Temporal.Synchronous | ![12_image_0.png](12_image_0.png) Table 6: Top-level (L1) and second-level (L2) relations of PDTB 2.0 and PDTB 3.0 used in our experiments. Train Dev Test PDTB 2.0 12632 1183 1046 PDTB 3.0 17085 1653 1474 Table 7: Dataset statistics for the "Ji" split. ## A Data Description The Penn Discourse TreeBank (PDTB) is the most common corpus for the task of implicit discourse relation classification. The annotation of this corpus follows a specific strategy, which consists of inserting a connective that best conveys the inferred relation, and annotating the relation label based on both the inserted implicit connectives and contextual semantics. Prasad et al. (2008) claimed that this annotation strategy significantly improves the inter-annotator agreement. PDTB has two widely used versions, PDTB 2.0 (Prasad et al., 2008) and PDTB 3.0 (Webber et al., 2019). In both versions, instances are annotated with senses5from a threelevel sense hierarchy. We follow previous work (Ji and Eisenstein, 2015; Kim et al., 2020) to use top-level 4-way and second-level 11-way classification for PDTB 2.0, and top-level 4-way and second-level 14-way for PDTB 3.0, and show these relations in Table 6. We show the statistics information of Ji and Eisenstein (2015) and Kim et al. (2020) in Tables 7 and 8, respectively. The Potsdam Commentary Corpus (PCC) is a German corpus constructed following the annotation guideline of PDTB (Bourgonje and Stede, 2020). In this dataset, relations are also organized 5Some instances in PDTB have more than one label. We follow previous work to use the first label for training. While evaluating, a prediction is regarded as correct if it matches one of the annotated labels (Xue et al., 2016). fold splitting PDTB 2.0 PDTB 3.0 1 0-1 / 2-22 / 23-24 1183 / 13678 / 1192 1653 / 18559 / 1615 2 2-3 / 4-24 / 0-1 1154 / 13716 / 1183 1579 / 18595 / 1653 3 4-5 / 6-1 / 2-3 1527 / 13372 / 1154 2039 / 18209 / 1579 4 6-7 / 8-3 / 4-5 1247 / 13279 / 1527 1730 / 18058 / 2039 5 8-9 / 10-5 / 6 -7 881 / 13925 / 1247 1138 / 18959 / 1730 6 10-11 / 12-7 / 8-9 1452 / 13720 / 881 1944 / 18745 / 1138 7 12-13 / 14-9 / 10-11 1589 / 13012 / 1452 2203 / 17680 / 1944 8 14-15 / 16-11 / 12-13 1434 / 13030 / 1589 1940 / 17684 / 2203 9 16-17 / 18-13 / 14-15 1480 / 13139 / 1434 2011 / 17876 / 1940 10 18-19 / 20-15 / 16-17 1241 / 13332 / 1480 1667 / 18149 / 2011 11 20-21 / 22-17 / 18-19 1151 / 13661 / 1241 1585 / 18575 / 1667 12 22-23 / 24-19 / 20-21 1291 / 13611 / 1151 1733 / 18509 / 1585 Table 9: Second-level (L2) relations of PCC used in our experiments. in a three-level hierarchy structure. However, this corpus is relatively small, containing only 905 implicit data, and the distribution of its relations is highly uneven, especially the top-level relations. For example, the "Expansion" (540) and "Contingency" (246) account for more than 86% of the data among all top-level relations. Bourgonje (2021) concluded that two of four relations were never predicted in his classifier due to the highly uneven distribution of the top-level relation data. Therefore, we only use the second-level relations in our experiments. Furthermore, we use a similar setup to PDTBs for PCC, considering only relations whose frequency is not too low (over 10 in our setting). The final PCC used in our experiments contains 891 data covering 8 relations (shown in Table 9). As for connectives, here, we only consider connectives with a frequency of at least 5 due to the limited size of this corpus. ## B Implementation Details | Comparison.Concession | Comparison.Contrast | |---------------------------|-------------------------| | Contingency.Cause | Expansion.Conjunction | | Expansion.Equivalence | Expansion.Instantiation | | Expansion.Level-of-detail | Temporal.Asynchronous | Table 10 shows the hyperparameter values for our model, most of which follow the default settings of RoBERTa (Liu et al., 2019). The value of temperature τ adopts from the default setting in Gumbel-Softmax. The k in inverse sigmoid decay is set to 100 for PDTB 2.0, 200 for PDTB 3.0, and 10 for PCC. We use different k for the three datasets because of their different sizes, and bigger datasets are assigned larger values. For a fair comparison, we equip baseline models with the same | Hyperparam | Value | Hyperparam | Value | |-------------------|---------|-------------------|--------------| | Learning Rate | 1e-5 | Batch Size | 16 | | Weigh Decay | 0.1 | Max Epochs | 10 | | LR Decay | Linear | Warmup Ratio | 0.06 | | Gradient Clipping | 2.0 | Max Seq Length | 256 | | τ in Equation (4) | 1.0 | k in Equation (8) | 100, 200, 10 | Table 10: Hyperparameters for training our model. | PDTB 2.0 | | | |-------------|--------------|--------------| | Models | Acc | F1 | | RoBERTaConn | 96.69(+41.3) | 95.58(+58.1) | | Adversarial | 74.93(+5.50) | 68.62(+6.18) | | Multi-Task | 76.53(+5.71) | 70.65(+6.86) | | Pipeline | 90.13(+19.1) | 89.13(+24.5) | | Our Model | 83.71(+9.12) | 79.25(+10.6) | | PDTB 2.0 | | | |------------|--------------|--------------| | Models | Acc | F1 | | Pipeline | 58.15(-12.9) | 46.68(-17.9) | | Our Model | 72.27(-2.32) | 65.49(-3.15) | Table 12: Level1 classification results on PDTB 2.0 (Ji split) when generated connectives are removed from Pipeline and Our Method. The numbers in brackets are performance drops compared to the default settings. RoBERTabase 67as our method and apply the same experimental settings (e.g. GPU, optimizer, learning rate, batch size, etc.) to them. For baselines that contain model-specific hyperparameters, such as the adversarial model (Qin et al., 2017), we follow their default setting described in the paper. Considering the variability of training on PDTB, we report the mean performance of 5 random restarts for the "Ji" split (Ji and Eisenstein, 2015) and that of section-level cross-validation (Xval) like Kim et al. (2020). For PCC, we perform a 5-fold cross-validation on this corpus due to its limited number of data and report the mean results. We conduct all experiments on a single Tesla P40 GPU with 24GB memory. It takes about 110 minutes to train our model on every fold of PDTB 2.0, 150 minutes on every fold of PDTB 3.0, and 5 minutes on every fold of PCC. For evaluation, we follow previous work (Ji and Eisenstein, 2015) to use accuracy (Acc, %) and F1-macro (F1, %) as metrics in our experiments. ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) ![13_image_2.png](13_image_2.png) - ## C Performance Analysis Table 11 shows the Level1 classification results on PDTB 2.0 (Ji split) when manually-annotated connectives are fed to connective-enhanced models. Note that for models that do not use generated connectives, we insert the true connectives into their input in this setup. We also show the F1 results in Figure 4. Table 12 shows the Level1 classification results on PDTB 2.0 when the generated connectives are removed from the inputs of relation classifiers in Pipeline and Our Method. This setting is not applied to other baselines, such as Multi-Task, because they either don't generate connectives or don't input the generated connectives into the relation classifiers. We also show the F1 results in Figure 5. We investigate relation classifiers' performance of Multi-Task, Pipeline, and Our Model when connectives are correctly and incorrectly generated | !$%&!'( !"# | 78#3-9'(,: | | | | | | | | | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|-----------------------|---------------------------------|----------------------------------------------------------------------|-----------|----|-------------|----|-------------| | !"# | 2$++ | !"# | | | | | | | | | 2$+3-+4"+56 | %"5(8," 2$+3-+4"+56 | | | | | | | | | | ./0"1,(1-(# | ;-*"#-+" | <81 7$/"# | | | | | | | | | '")3 | =(>"# | %"5(8," 2$+3-+4"+56 | | | | | | | | | 2$++ | !"# | 2$++ | !"# | | | | | | | | ?@ A(, 316-+4 3$ B"#* :-/, -+ (+ 8+C(-1 3",3-+4 ,-38(3-$+DE!"#$ F@G*#-5-3H!"#$%&"I | 2$+3-+4"+56 | | | | | | | | | | ?<+#6 C-0" $C 3B" JK L8",3-$+, A"1" 4"$41(*B6 L8",3-$+,D E!"#% ?'B" MNO >8/4"3 B(, /1$**"/ >6 G$1" 3B(+ PKQ ,-+5" RSTKDE!"#$ F@G*#-5-3H'(I | 2$+3-+4"+56 | &)*(+,-$+ | &)*(+,-$+ | W$ | &)*(+,-$+ | W$ | 2$+3-+4"+56 | W$ | 2$+3-+4"+56 | | ?U"V0" 3(:"+ G$1" 3B(+ $81 C(-1 ,B(1"D E!"#% | &)*(+,-$+ | %"5(8," 2$+3-+4"+56 | | | | | | | | | ?@+ B-, #(A,8-3X 71D '18/"(8 ,(6, 3B" ,31-:" -##"4(##6 -+5#8/"/ O(1:B$1,"DE!"#$ F@G*#-5-3H)* +"&,(*&"I ?. ,*$:",G(+ C$1 3B" 48-#/ ,(-/ 3B" 8+-$+V, #(A6"1, (1" | &)*(+,-$+ | 2$G*(1-,$+ 2$G*(1-,$+ | &)*(+,-$+ | | | | | | | | M$A"0"1 | M$A"0"1 2$G*(1-,$+ M$A"0"1 | &)*(+,-$+ | | | | | | | | | 1"0-"A-+4 3B" ,8-3DE!"#% ?Y(*(+Z, ,A"##-+4 -+0",3G"+3 -+ W$83B"(,3 .,-( -, *(13 $C -3, "5$+$G-5 "0$#83-$+DE!"#$ F@G*#-5-3H)* ,$-./#%0$-I ?@+ 3B" *(,3 /"5(/"X Y(*(+"," G(+8C(5381"1, 5$+5"+31(3"/ $+ /$G",3-5 *1$/853-$+ C$1 ")*$13DD @+ 3B" RSSK,X ,*811"/ >6 1-,-+4 #(>$1 5$,3, (+/ 3B" ,31$+4 6"+X 3B"," 5$G*(+-", A-## -+51"(,-+4#6 381+ 3B"G,"#0", -+3$ G8#3-+(3-$+(#, A-3B *#(+3, (1$8+/ 3B" A$1#/DE!"#% | &)*(+,-$+ | 2$+3-+4"+56 | &)*(+,-$+ | W*"5-C-5(##6 &)*(+,-$+ W*"5-C-5(##6 &)*(+,-$+ W*"5-C-5(##6 &)*(+,-$+ | | | | | | | ?W(+ [1(+5-,5$ \-(+3, $A+"1 %$> =81-" B$*", 3$ B(0" ( +"A B$G" C$1 3B"GDE!"#$ F@G*#-5-3H'(I ?M" -, (+ (0-/ C(+ $C ( *1$*$,-3-$+ $+ +")3 A"":V, >(##$3 3$ B"#* 2$+3-+4"+56 2$+3-+4"+56 | &)*(+,-$+ | @+ C(53 | &)*(+,-$+ [$1 ")(G*#" &)*(+,-$+ | &)*(+,-$+ | | | | | | | .+/ | | | | | | | | | | | >8-#/ ( 1"*#(5"G"+3 C$1 2(+/#",3-5: ;(1:DE!"#% | | | | | | | | | | (or predicted). Other baselines, such as Adversarial, are not included in this analysis because they don't predict or generate connectives. We mentioned in Section 4.3 that Multi-task, Pipeline, and Our Model's prediction on connective are different. Specifically, their predictions do not overlap and show different performances, with a mean accuracy of 31.30%, 33.21%, and 32.83% for Multi-Task, Pipeline, and Our Model, on PDTB 2.0, respectively. Here, we show both good and bad cases of all models from correct and incorrect connective prediction groups in Figure 6. For comparison, we also show results from the RoBERTa and Adversarial baselines. In the first example, connective enhanced models, including Adversarial, MultiTask, Pipeline, and Our Model, make the correct prediction on implicit relation with the help of connective information, while the RoBERTa baseline gives the wrong prediction. In the second example, Multi-Task, Pipeline, and Our Model all make the correct prediction on connectives. However, only the latter two correctly predict the implicit relations. We speculate this is because treating connectives as training objectives can not make full use of connectives. In the third example, all three models incorrectly predict the connective as "However". As a result, Pipeline incorrectly predicts the relation as "Comparison" due to the connective "However". Compared to it, both Multi-Task and Our Model correctly predict the relation "Expansion", showing better robustness. In the fourth example, all three models predict the connective as "Specifically", which is wrong but semantically similar to the manually-annotated connective "In particular". Consequently, those models all correctly predict the relation as "Expansion". In the final example, Multi-Task, Pipeline, and Our Model wrongly predict the connective as "In fact", "For example", and "And", respectively. And all three models are misled by the incorrect connectives, predicting the relation as "Expansion". ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In Section 6. ✗ A2. Did you discuss any potential risks of your work? Our paper is an entirely technical work. We don't think it has any risk of bias or otherwise. ✓ A3. Do the abstract and introduction summarize the paper's main claims? In Abstract section and Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section 4.1. ✓ B1. Did you cite the creators of artifacts you used? In Section 4.1. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The datasets and tools we use are allowed for research purposes. For example, we are the member of LDC, so we can use the PDTB dataset for research. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Because the artifacts we used are produced for research purpose. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Those corpora are extracted from news domain, and have been widely used in the field for a long time. We don't think it contains any offensive content. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? In Section 4.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In Appendix A ## C ✓ **Did You Run Computational Experiments?** In Section 4. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In Section 4.1 and Appendix B. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Appendix B. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In Section 4.2, 4.3, 4.4. ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
xiao-etal-2023-plug
Plug-and-Play Document Modules for Pre-trained Models
https://aclanthology.org/2023.acl-long.875
Large-scale pre-trained models (PTMs) have been widely used in document-oriented NLP tasks, such as question answering. However, the encoding-task coupling requirement results in the repeated encoding of the same documents for different tasks and queries, which is highly computationally inefficient. To this end, we target to decouple document encoding from downstream tasks, and propose to represent each document as a plug-and-play document module, i.e., a document plugin, for PTMs (PlugD). By inserting document plugins into the backbone PTM for downstream tasks, we can encode a document one time to handle multiple tasks, which is more efficient than conventional encoding-task coupling methods that simultaneously encode documents and input queries using task-specific encoders. Extensive experiments on 8 datasets of 4 typical NLP tasks show that PlugD enables models to encode documents once and for all across different scenarios. Especially, PlugD can save 69{\%} computational costs while achieving comparable performance to state-of-the-art encoding-task coupling methods. Additionally, we show that PlugD can serve as an effective post-processing way to inject knowledge into task-specific models, improving model performance without any additional model training. Our code and checkpoints can be found in \url{https://github.com/thunlp/Document-Plugin}.
## Plug-And-Play Document Modules For Pre-Trained Models Chaojun Xiao1,2,3**, Zhengyan Zhang**1,2,3**, Xu Han**1,2,3∗**, Chi-Min Chan**1,2,3**, Yankai Lin**4,5 Zhiyuan Liu1,2,3∗, Xiangyang Li6, Zhonghua Li6, Zhao Cao6**, Maosong Sun**1,2,3∗ 1NLP Group, DCST, IAI, BNRIST, Tsinghua University, Beijing 2International Innovation Center of Tsinghua University, Shanghai 3Quan Cheng Laboratory 4Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 5Beijing Key Laboratory of Big Data Management and Analysis Methods 6Huawei Technologies Co., Ltd. xiaocj20@mails.tsinghua.edu.cn, {hanxu2022,liuzy,sms}@tsinghua.edu.cn ## Abstract Large-scale pre-trained models (PTMs) have been widely used in document-oriented NLP tasks, such as question answering. However, the encoding-task coupling requirement results in the repeated encoding of the same documents for different tasks and queries, which is highly computationally inefficient. To this end, we target to decouple document encoding from downstream tasks, and propose to represent each document as a plug-and-play document module, i.e., a document plugin, for PTMs (PlugD). By inserting document plugins into the backbone PTM for downstream tasks, we can encode a document one time to handle multiple tasks, which is more efficient than conventional encoding-task coupling methods that simultaneously encode documents and input queries using task-specific encoders. Extensive experiments on 8 datasets of 4 typical NLP tasks show that PlugD enables models to encode documents once and for all across different scenarios. Especially, PlugD can save 69% computational costs while achieving comparable performance to state-of-the-art encoding-task coupling methods. Additionally, we show that PlugD can serve as an effective post-processing way to inject knowledge into task-specific models, improving model performance without any additional model training. Our code and checkpoints can be found in https://github.com/ thunlp/Document-Plugin. ## 1 Introduction In recent years, large-scale pre-trained models (PTMs) (Devlin et al., 2019; Raffel et al., 2020) have been widely adopted and achieved breakthrough performance for document-oriented NLP tasks, such as question answering. However, due to the tight coupling of document encoding and concrete tasks, PTMs have to dynamically generate document representations according to specific tasks and queries, leading to the repeated encoding ∗Corresponding authors. ![0_image_0.png](0_image_0.png) of the same documents in different applications. For example, Wikipedia documents are commonly used in various knowledge-intensive tasks such as question answering (Chen et al., 2017), fact verification (Thorne et al., 2018), and dialogue generation (Dinan et al., 2019). In this case, existing methods separately encode one document for each task or even for each input query (e.g., a question for question answering, a claim for fact verification), making them highly computationally inefficient. To this end, it raises a natural question: can we decouple document encoding from concrete tasks, encoding documents only once and with guaranteed transferability across multiple tasks? For this question, we propose a novel framework based on PTMs to decouple document encoding from tasks, named PlugD. Specifically, PlugD incorporates plug-and-play modules to store document information and utilizes a PTM backbone to capture information from plugins for task reasoning. As shown in Figure 1, documents are encoded into pluggable plugins once and for all before task adaptation. The semantics and knowledge of documents can be injected into task-specific models by plugging document plugins. During task reasoning, the task-specific models can activate the information encoded in the document plugins to handle the input queries. In this way, PlugD can decouple the document encoding from downstream task reasoning and reduce the computation costs. For representing documents as pluggable modules, there are two main challenges: (1) Plugin learning: The document plugins must be effective for various downstream tasks, requiring them to contain sufficient semantics and knowledge. (2) Plugin utilization: Once the document plugins are ready, it is important for task-specific models to capture relevant information from them effectively for task reasoning. As for plugin learning, we adopt a selfsupervised method, which requires document plugins to provide sufficient knowledge for the PTM to make predictions. Specifically, for each document, we first randomly select parts of sentences as a query and use the remaining sentences as context to learn plugins. Then, after encoding the context into plugins, the model is required to recover the masked recurring spans or generate the next sentences for the query based on the plugin knowledge. As for plugin utilization, we propose two strategies to utilize document plugins for downstream tasks: *plugging during tuning* and plugging after tuning1. For plugging during tuning, document plugins are utilized in both tuning and inference stages. Task data and document plugins are combined together to train task-specific models. For plugging after tuning, document plugins are only utilized in the inference stage to provide external knowledge. Document plugins are adopted as a post-processing way to inject knowledge into task-specific models without additional training. To verify the effectiveness of our plug-and-play framework, we adopt Wikipedia as our document collection and conduct experiments on 8 datasets of 4 typical knowledge-intensive NLP tasks. The results show that we can generate document plugins once and successfully adapt plugins to various downstream tasks. Compared to competitive baselines that encode documents and task-specific inputs simultaneously, our plugin-based method can save 69% computational costs with comparable 1Here tuning refers to tuning PTMs for downstream tasks, including full-parameter fine-tuning and parameter-efficient tuning. performance. Besides, utilizing document plugins works as an effective post-processing approach to introducing the knowledge of documents into task-specific models and achieving performance improvements without model training. We argue that with the current trend of increasing the model size of PTMs, decoupling document encoding from concrete tasks like PlugD can be a promising direction that enables large PTMs to effectively and efficiently serve diverse downstream tasks. ## 2 Related Work 2.1 Plug-And-Play Modules For Ptms Recent PTMs have shown to be effective in various downstream tasks (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Radford et al., 2018; Brown et al., 2020; Han et al., 2021; Chowdhery et al., 2022). However, training and tuning largescale PTMs for ever-increasing tasks is expensive in computation and storage. To address this issue, building plug-and-play modules with various capabilities for PTMs has received increasing attention recently. For instance, parameter-efficient tuning, which is also known as delta tuning, is proposed to perform task adaptation by fine-tuning only small amounts of parameters and keeping other parameters fixed (Zaken et al., 2022; Houlsby et al., 2019; Lester et al., 2021; Liu et al., 2021; Hu et al., 2021; Ding et al., 2022). The task-specific modules possess play-and-play characteristics and can effectively inject task ability into PTMs. Besides, some researchers explore combining pluggable modules with large-scale PTMs for efficient controllable text generation (Dathathri et al., 2020; Madotto et al., 2020; Pascual et al., 2021), domain adaptation (Chronopoulou et al., 2022; Pfeiffer et al., 2020), information retrieval (Shi et al., 2023; Yu et al., 2023), knowledge injection (Zhang et al., 2023), model debias (Lauscher et al., 2021), and model integration (Xu et al., 2023; Alayrac et al., 2022). Owing to the powerful abilities of largescale PTMs, these modules can effectively activate the model's capabilities with limited parameters. Different from previous functional modules, we attempt to build document plugins to provide knowledge and context information for PTMs. ## 2.2 Language Representation Learning Language representation learning is a fundamental NLP task (Bengio et al., 2013; Devlin et al., 2019; Radford et al., 2018) that aims to effectively represent rich semantics distributed in text and benefit various downstream tasks. Previous efforts attempt to map the language inputs into intermediate distributed features, such as word embeddings (Mikolov et al., 2013; Kiros et al., 2015; Pennington et al., 2014; Peters et al., 2018), sentence embeddings (Conneau et al., 2017; Reimers and Gurevych, 2019; Gao et al., 2021), and document embeddings (Dai et al., 2015; Wu et al., 2018), which are further used as inputs of downstream task-specific models to generate the final task-specific document representations. Furthermore, some researchers make preliminary exploration to decouple document encoding from tasks by freezing the part of layers of document encoders (Du et al., 2020; Saad-Falcon et al., 2022). But these works only achieve semi-decoupling of document encoding from tasks, and can only be used for the plugging during tuning setting. Notably, many efforts have been devoted to exploring the effective architectures, such as sparse attention, of PTMs to encode long documents (Beltagy et al., 2020; Zaheer et al., 2020; Zhang et al., 2021; Mehta et al., 2022; Tay et al., 2022). These works are parallel to ours, and we can adopt sparseattention layers to further improve efficiency. ## 3 Methodology In this section, we will first present the paradigm description and the overall framework of PlugD Then we introduce the self-supervised plugin learning method to make document plugins contain sufficient semantics and two strategies about how to utilize document modules. ## 3.1 Plug-And-Play Document Modules In this paper, we focus on decoupling document encoding with specific tasks. Different from encoding-task coupling methods which simultaneously encode the documents and task-specific queries, PlugD aims to encode documents once and for all before task adaptation. Specifically, given a PTM backbone M and a document d, we first use the PTM to encode the document into a task-agnostic pluggable module, D, i.e., a document plugin. Equipped with the document plugin, the PTM is injected into the corresponding document knowledge. Then we adopt task data to tune the PTM to obtain task-specific models. During inference, we can quickly obtain predictions for an input query by inserting the relevant document plugin into the task-specific models, avoiding reencoding the document. ## 3.2 Overall Framework As shown in Figure 1, we design PlugD, which consists of three components: a PTM backbone, document plugins that provide document knowledge, and task-specific models derived from the PTM to handle specific tasks. We will present these components below. PTM Backbone. PTMs have been proven effective in a wide range of downstream tasks, and raise a paradigm shift to solve multiple tasks with one unified model (Bommasani et al., 2021; Brown et al., 2020; Chowdhery et al., 2022). In view of this, we further explore the decoupling of document encoding and tasks, unifying document representations across tasks. PlugD relies on a large-scale PTM, which can serve as a fundamental infrastructure to learn plugins from documents and as an initialization for task-specific models. Note that, for our framework, any PTM with large-scale parameters can be used as the backbone. Specifically, we adopt a widely-used sequence-to-sequence PTM, T5 (Raffel et al., 2020), in this paper. As the pretraining objectives of the PTM do not involve the document plugins, we further conduct plugin learning for the PTM so that it can generate and utilize document plugins. The training tasks are introduced in the following sections. Document Plugin. Document plugins store document knowledge and are obtained before utilizing these documents for specific tasks. Inspired by recent progress in model interpretation (Petroni et al., 2019; Jiang et al., 2020; Roberts et al., 2020; Dai et al., 2022; Mitchell et al., 2022), which claims that the parameters of PTMs store vast amounts of knowledge, we propose to encode the semantics and knowledge of documents into pluggable parameters. In this way, when the document plugin is inserted into the PTM, the PTM is empowered with the corresponding document knowledge. Inspired by prefix-tuning (Li and Liang, 2021), we represent documents as prefix tokens for attention layers. When the document plugin is inserted into the backbone, we concatenate the corresponding prefix tokens with the hidden vectors of task-specific queries in attention layers to provide document knowledge. Specifically, given a document d with Ld tokens, we first encode the document with the PTM to get the raw document ![3_image_0.png](3_image_0.png) representations Hd = {h1*, ...,* hLd}. Then, we adopt a mapping network to project the representation vectors into prefix tokens: Pd = {p1*, ...,* pLd}, where pi = hi + MLP(hi). The prefix tokens are further inserted into the attention layers. Let Hq = {h q 1 , ..., h q Lq} denote the hidden vectors of query in the attention layer. We calculate the attention output as follows: $\{\bf x,\,224y\}$ if $\bf x,\,224(2^{2}\,a,\,224y)$ Ho q = Attn(HqWq, cat(Pd, Hq)Wk, cat(Pd, Hq)Wv), (1) where Wq, Wk, and Wv are trainable parameters for the self-attention layer. Then Ho q is fed into the feed-forward layer as the original Transformer (Vaswani et al., 2017) layer. Different from encoding documents during task adaptation or inference, prefix tokens do not involve the computation of feed-forward layers. Moreover, to better integrate the semantics of documents and queries for handling tasks, document plugins are only inserted in the near-top layers of the PTM backbone. Therefore, these document plugins in the form of prefix tokens only increase limited computational requirements, whereas PlugD can achieve a significant computational speedup as a result. Due to the high storage requirement of adding different prefix tokens to different attention layers, we share Pd for all attention layers. Note that, we can also utilize other model structures, such as bias parameter (Zaken et al., 2022) and LoRA (Hu et al., 2021), to represent documents in PlugD, which we leave for future work. Task-specific Models. Task-specific models are derived from the PTM backbone and tuned on the supervised task data to obtain task reasoning ability. During downstream tuning, we freeze the document plugins, and only the task-specific models and the mapping network of the document plugins are trainable so that the document plugins can be reused across different tasks. We adopt two training methods for task-specific models, including vanilla full-parameter fine-tuning and parameterefficient tuning (PET). Note that, deploying largescale PTMs with full-parameter fine-tuning will lead to exacerbated computational and storage burdens for multi-task scenarios. Thus, it is worth exploring PlugD with PET for efficient task adaption in real-world applications. Both two training methods adopt task-specific objectives to optimize the parameters. Fine-tuning optimizes all parameters of the PTM backbone, while parameter-efficient tuning only optimizes parts of the parameters and keeps other parameters frozen. Specifically, we adopt adapters for parameter-efficient tuning (Pfeiffer et al., 2021). Given the hidden vector h ∈ R d, where d is the hidden size, the output of the adapter layer is calculated as: $$h_{\mathrm{out}}=h+\phi(h W_{\mathrm{down}})W_{\mathrm{up}},$$ $\downarrow$ . where Wdown ∈ R d×r, Wup ∈ R r×d, and r ≪ d refer to the bottleneck dimension. Computational Complexity. PlugD encodes the documents before task adaptation and thus can reduce the computation costs. In this paragraph, we discuss the computational complexity of PlugD in detail. Assume the lengths of the query and document are Lq and Ld, respectively. For the traditional encoding-task coupling models, which simultaneously encode documents and queries, the computational complexity of the attention layer is O((Lq + Ld) 2), and the computational complexity of the feed-forward layer is O(Lq + Ld). For PlugD, the document plugins are inserted into the attention layer, whose computational complexity is O(Lq(Lq + Ld)). And the document plugins do not involve the computation of the feed-forward layer, and thus its computational complexity is O(Lq). In real-world applications, the documents usually are much longer than the queries. Therefore, PlugD can achieve significant computational speedup compared with conventional encodingtask coupling models. ## 3.3 Plugin Learning To enable the document plugins to contain sufficient document knowledge, we futher explore self-supervised plugin learning in this section. As shown in Figure 2, we adopt two self-supervised tasks, recurring span prediction, and next sentence generation to augument the comphrension and generation ability of PlugD. Both two tasks require document plugins to provide context information for the model to make the predictions. Let d = {s1*, ..., s*n} denote the input document with n sentences. We perform plugin learning as: Recurring span prediction (RSP). Inspired by Ram et al. (2021), we utilize recurring spans to construct self-supervision signals. Recurring spans occur multiple times in the documents, and usually contain important semantics for document understanding. Masking the recurring spans and requiring the PTM to recover them can help the PTM to capture document semantics. Specifically, we concatenate sentences randomly sampled from the document d as query q, and treat the remaining sentences as context c. Then we generate the document plugin Pc based on c, and replace the recurring spans in q as special mask tokens. The PTM is required to predict the masked spans in q conditioned on Pc. Different from the traditional masked language model task (Devlin et al., 2019; Raffel et al., 2020), which mainly focuses on local information around the masked spans, RSP usually requires the PTM to integrate global information from document plugins. Next sentence generation (NSG). To enable the document plugins to benefit generation tasks, we adopt NSG as a training task. We first randomly sample three consecutive sentences {si, si+1, si+2} from the document d. The remaining sentences are treated as the context c = {s1, ..., si−1, si+3*, ..., s*n} to generate the document plugin Pc. Then we regard si as the query, and require the PTM to generate the following two sentences {si+1, si+2} conditioned on Pc. These two tasks require the PTM to capture both local information from the queries and global information from the document plugins. Therefore, after plugin learning, the PTM is supposed to be able to build informative document plugins and serve as a good initialization for task-specific models to capture knowledge from document plugins. Both two tasks are sequence-to-sequence tasks, and we adopt the negative likelihood as the training objectives for two tasks. The model is trained in a multi-task fashion, and the final training loss is the sum of the loss of two tasks. During plugin learning, the document plugins are calculated in real time for different documents. All parameters of the PTM are tuned for plugin learning. After that, the document plugins can be calculated and stored for further downstream task tuning and inference. ## 3.4 Plugging Strategies To flexibly utilize the document plugins, we propose two plugging strategies: Plugging during tuning. In this setting, the document plugins are adopted in both the training and inference of task-specific models. Given an instance with the query and document as inputs, we first insert the corresponding document plugin, which is computed before fine-tuning, into the models. Then task-specific models are trained with taskspecific objectives to capture relevant information from the document plugins. Plugging after tuning. In this setting, the document plugins are adopted only in the inference of task-specific models. Document plugins can provide external knowledge, and serve as a postprocessing method to inject knowledge into taskspecific models. During inference, given an instance, we directly insert related document plugins into the task-specific models to achieve knowledge injection. This setting does not require additional training for existing task-specific models and can be used to flexibly inject textual knowledge. ## 4 Experiments 4.1 Evaluation Settings Datasets. We adopt widely-used Wikipedia articles as our document collection and select typical knowledge-intensive tasks for evaluation. We adopt a typical multi-task benchmark, KILT (Petroni et al., 2021), to evaluate our models. The tasks in KILT are grounded in the same snapshot of Wikipedia pages. In particular, we evaluate PlugD on a fact verification dataset, FEVER (Thorne et al., 2018), four question answering datasets, including Natural Questions (NQ) (Kwiatkowski et al., 2019), HotpotQA (Yang et al., 2018), TriviaQA (Joshi et al., 2017), ELI5 (Fan et al., 2019), a dialogue generation dataset, Wizard of Wikipedia (WoW) (Dinan et al., 2019), and two slot filling dataset, Zero Shot RE (zsRE) (Levy et al., 2017), T-REx (ElSahar et al., 2018). These tasks are diverse and | Models | FEVER | NQ | TriviaQA | HotpotQA | ELI5 | WoW | zsRE | T-Rex | Avg. | | | | |----------------------------|---------|-------|------------|------------|--------|-------|--------|---------|--------|-------|-------|-------| | Acc. | EM | F1 | EM | F1 | EM | F1 | RL | F1 | Acc. | Acc. | | | | Parameter-efficient Tuning | | | | | | | | | | | | | | ED2LM | 83.13 | 38.34 | 46.04 | 53.84 | 62.05 | 19.84 | 28.63 | 11.24 | 15.24 | 31.15 | 46.34 | 37.39 | | EmbRecy | 84.59 | 37.42 | 45.43 | 53.02 | 61.05 | 18.98 | 27.70 | 11.57 | 16.91 | 27.20 | 44.16 | 36.73 | | ED2LMf ♣ | 81.81 | 35.62 | 44.18 | 52.01 | 59.82 | 19.07 | 27.81 | 11.01 | 15.20 | 27.09 | 44.78 | 35.82 | | EmbRecyf ♣ | 84.59 | 32.13 | 40.17 | 47.59 | 55.37 | 18.18 | 26.79 | 11.92 | 16.65 | 20.76 | 41.22 | 34.13 | | PlugD♣ | 86.56 | 41.54 | 49.76 | 57.29 | 65.43 | 23.04 | 32.51 | 11.37 | 17.15 | 32.12 | 48.38 | 39.68 | | w/o PT♣ | 86.33 | 40.24 | 47.72 | 57.67 | 64.91 | 22.04 | 31.44 | 11.67 | 17.07 | 30.64 | 48.26 | 39.24 | | UpperBound | 88.20 | 42.60 | 50.86 | 61.77 | 69.14 | 23.84 | 33.71 | 11.80 | 17.92 | 33.65 | 49.96 | 41.22 | | Full-parameter Fine-tuning | | | | | | | | | | | | | | ED2LM | 80.59 | 42.07 | 49.79 | 58.94 | 66.68 | 22.80 | 32.32 | 11.66 | 16.10 | 31.77 | 50.84 | 39.35 | | EmbRecy | 84.34 | 42.71 | 50.55 | 59.31 | 66.67 | 23.57 | 33.46 | 12.01 | 17.30 | 30.10 | 50.12 | 39.93 | | ED2LMf ♣ | 84.17 | 40.84 | 48.57 | 57.05 | 64.92 | 21.61 | 30.70 | 11.94 | 15.83 | 24.19 | 48.04 | 37.96 | | EmbRecyf ♣ | 85.04 | 39.89 | 47.58 | 57.91 | 65.37 | 21.59 | 30.92 | 11.92 | 16.69 | 27.82 | 50.28 | 38.89 | | PlugD♣ | 86.34 | 42.53 | 50.42 | 59.46 | 67.07 | 23.46 | 33.07 | 12.30 | 17.61 | 30.99 | 52.22 | 40.61 | | w/o PT♣ | 85.97 | 42.25 | 49.80 | 58.88 | 66.60 | 23.05 | 32.20 | 12.16 | 17.40 | 29.94 | 52.40 | 40.26 | | UpperBound | 86.42 | 45.03 | 52.92 | 62.50 | 69.82 | 24.54 | 34.66 | 12.33 | 18.39 | 32.60 | 52.50 | 41.79 | require the model to exploit document knowledge fully. As shown in the paper of KILT, external document knowledge can not benefit the entity linking task. Thus, we do not use them for evaluation in this paper. Following Petroni et al. (2021), we use dense passage retrieval (Karpukhin et al., 2020) to retrieve relevant documents from Wikipedia articles for each query. Please refer to Appendix for evaluation results of document retrieval. Metrics. Following previous work, we adopt accuracy for the fact verification task (FEVER) and slot filling tasks (zsRE, T-REx); exact match (EM) and F1 score for the extractive question answering tasks (NQ, HotpotQA, TriviaQA); ROUGE-L (RL) for the long abstractive question answering tasks (ELI5); F1 score for the dialogue generation task (WoW). Besides, to evaluate the overall performance, we calculate average scores for these tasks as an evaluation metric, in which EM scores are used for extractive question answering tasks. ## 4.2 Training Details We utilize the widely used T5-large (Raffel et al., 2020), as our PTM backbone. For the PET training method, we set the bottleneck dimension of adapters as 16. We insert document plugins in the last 12 layers. We conduct plugin learning on a large-scale unsupervised corpus, C4 (Raffel et al., 2020) for 36k steps. We use Adam to optimize our models. Due to the high computational costs of full-parameter fine-tuning, in the following experiments, we adopt the PET method to train the models unless otherwise specified. We train models with a half-precision floating point on 8 NVIDIA A100 GPUs, and the plugin learning process takes 18 hours. Please refer to Appendix for more details. ## 4.3 Baselines Plugging during tuning. Here we compare PlugD with several representative baselines, which encode the documents and queries with two different encoders. In this way, these models can reuse document representations across different queries, but they still need to generate different document representations for different tasks. (1) **ED2LM** (Hui et al., 2022) utilizes the encoder-decoder architecture to encode the queries and documents separately, and then the document can be pre-encoded before inference. In particular, the documents are inputted into the encoder, and queries are inputted into the decoder. (2) **EmbRecy** (Saad-Falcon et al., 2022) proposes to reuse the intermediate activations of the documents to achieve speedup for finetuning and inference. EmbRecy caches an intermediate layer's output as the document representation and the remaining near-top layers are tuned to fuse the information of documents and queries. (3) Besides, to meet the setting of decoupling document encoding from tasks, we freeze the document encoders of ED2LM and EmbRecy to make | Models | Avg. | FLOPs | Time | |------------|--------|---------|--------| | G | ms | | | | ED2LM | 37.39 | 114.9 | 60 | | EmbRecy | 35.54 | 197.5 | 142 | | PlugD | 39.68 | 139.3 | 98 | | UpperBound | 41.22 | 453.1 | 226 | the document representations unified across different tasks. We denote the two task-agnostic methods as **ED2LM**f and **EmbRecy**f . (4) As PlugD conducts further self-supervised training for plugand-play representation learning, we also present the results of PlugD without plugin learning (w/o PT) to show the effectiveness of the architecture of PlugD. (5) **UpperBound** follows the traditional settings, in which the queries and documents are concatenated together and fed into the model. The document representations generated by this baseline are query-specific. The model needs to encode a single document multiple times for different tasks and different queries, which is the upper bound of task-agnostic methods. Plugging after tuning. We attempt to inject unstructured textual knowledge into PTMs after downstream tuning. Existing methods mainly focus on enhancing PTMs with structural knowledge during pre-training or fine-tuning (Zhang et al., 2019; Wang et al., 2021; Bosselut et al., 2019). These methods require retraining the task-specific models to achieve knowledge injection, which thus cannot be adopted in this setting. Therefore, we present the results of the following models: (1) We adopt T5 (Raffel et al., 2020) and **PlugD** as the backbones, which are trained with only the queries as inputs and do not utilize external document knowledge in evaluation. (2) Based on the trained T5 and PlugD, we adopt different post-processing methods to incorporate document knowledge. For T5, we directly concatenate the documents and queries as inputs for evaluation (**+Concat**). For PlugD, we insert the document knowledge with document plugins (**+DPlug**). The setting is challenging as there is a gap between the training and evaluation. ## 4.4 Plugging During Tuning We present the comparison results between baseline models and PlugD in Table 1. From this table, we can observe that: (1) The baseline models which generate task-agnostic document representa- Models FEVER NQ WoW zsRE Acc. EM F1 F1 Acc. T5 79.10 11.35 17.11 16.59 2.52 +Concat 76.84 14.45 22.16 14.26 19.17 ∆ -2.26 +3.1 +5.05 -2.33 +16.65 PlugD 79.56 11.17 16.39 16.58 2.23 +DPlug 82.54 23.01 32.68 15.28 21.13 ∆ **+2.98 +11.84 +16.29** -1.03 **+18.90** tions perform worse than the corresponding models which generate task-specific representations. It indicates that decoupling document representation from concrete tasks is challenging and existing methods cannot achieve satisfactory performance. (2) Compared with the task-agnostic baseline models (ED2LMf and EmbRecyf ), PlugD can achieve significant performance improvements across different tasks. Besides, compared with ED2LM and EmbRecy, PlugD can also achieve superior results on many datasets, especially for parameter-efficient tuning. In addition, ED2LM and EmbRecy need to generate document representations for different tasks separately. Thus they require more storage than PlugD. In contrast, PlugD can generate informative unified representations with fewer storage requirements and achieve superior results across different tasks. (3) Compared with the traditional encoding-task coupling model (UpperBound), sharing document representation across different tasks in PlugD only leads to a limited performance drop (39.68 vs. 41.22, and 40.61 vs. 41.79 on average). And as PlugD does not need to encode documents during downstream tuning and inference, PlugD enables significant computational acceleration. The results suggest that PlugD can effectively capture document semantics and inject them into the PTM to provide knowledge. (4) Even PlugD without further plugin learning can outperform the baselines on several datasets. It proves that PlugD benefits from both the self-supervised tasks and the model architecture. Besides, it also indicates that the contextualized document representations generated by the original PTM (PlugD w/o PT) are powerful if we utilize them correctly. Computational Cost. We compare the computational cost of PlugD and baseline models. Here, we present the floating point operations (FLOPs) and calculation time required to process one data in inference for each method. We assume that the document, query, and answer contain 512, 48, and 32 to- | Datasets | FEVER | NQ | WoW | zsRE | | |------------|---------|-------|-------|--------|-------| | Acc. | EM | F1 | F1 | Acc. | | | PlugD | 86.56 | 41.54 | 49.76 | 17.15 | 32.12 | | w/ RSP | 86.17 | 41.23 | 49.21 | 16.98 | 31.66 | | w/ NSG | 86.03 | 40.80 | 49.06 | 17.62 | 28.92 | | w/o PT | 86.33 | 40.24 | 47.72 | 17.07 | 30.64 | kens, respectively. The results are shown in Table 2. From the results, we can observe that: (1) The methods for task-agnostic representation require much less computational cost than encoding-task coupling methods. Especially, our method PlugD can achieve 3.25× speed up (139.3 GFLOPs vs. 453.1 GFLOPs). (2) The methods for task-agnostic representation generally are inferior to encodingtask coupling methods. PlugD can achieve better average scores than other baselines and preserve low computational costs. (3) Both task-agnostic and query-agnostic models need to pre-encode and store document representations before downstream tuning for inference speed up. However, models generating query-agnostic and task-specific representations require separate document representations for each task. In contrast, our PlugD generates task-agnostic representations for all tasks, resulting in better results and lower storage requirements. ## 4.5 Plugging After Tuning The comparison results are shown in Table 3. From the results, we can observe that: (1) Both T5 and PlugD cannot achieve consistent improvement from post-processing knowledge injection on these tasks. It proves that plugging after tuning is a challenging setting as there is a gap between training and evaluation. (2) PlugD can achieve significant improvement on FEVER, NQ, and zsRE, which further indicates the effectiveness of PlugD. However, PlugD cannot achieve improvement on WoW. As the ability to acquire knowledge from the document plugins is obtained from plugin learning, further downstream task tuning may lead the models to forget the ability. Thus, even PlugD can not achieve consistent improvement. (3) Without document knowledge, PlugD and T5 achieve comparable results. It indicates that the plugin learning process does not improve the fundamental ability of PTMs. The improvement achieved by PlugD in both plugging during/after tuning settings comes from the effective plug-and-play framework. ![7_image_0.png](7_image_0.png) ## 4.6 Ablation Study In this section, we conduct an ablation study to verify the effectiveness of our proposed plugin learning tasks. We show the results of the models, which are trained with only recurring span prediction task (**w/ RSP**), with only next sentence generation task (**w/ NSG**), or without plugin learning (**w/o PT**). We evaluate the models on four datasets for the plugging during tuning setting. The results are shown in Table 4. We can find that (1) PlugD without plugin learning leads to a significant performance drop, which further indicates that the proposed training tasks can help the PTM to effectively encode the document knowledge into plugins. (2) Two tasks can cooperate with each other to improve the model performance. Though training PlugD with only one task will lead to performance deterioration on some tasks, training with two tasks can achieve consistent improvement over the model without plugin learning. (3) When PlugD is trained with only NSG, the model can achieve superior results for WoW. But the task harms the performance for FEVER and zsRE. This is because NSG requires the model to generate long sentences, which is similar to WoW, while FEVER and zsRE only require short outputs. In contrast, training with only RSP will also lead to a performance drop for WoW. It indicates that diverse plugin learning tasks are important for expressive document plugins. ## 4.7 Transferability Analysis In this section, we want to explore the effectiveness of supervised tasks on document representation transferability. Here we present the results of ED2LM, which can outperform other baselines. Specifically, we train the task-specific document encoder on a source task, and then reuse the encoder on other target tasks to continually train the rest of the model. The results are shown in Figure 3. From the results, we can observe that 1) The non-diagonal values of the matrix are consistently smaller than the diagonal values. It suggests that training the document encoder with existing supervised tasks can hardly benefit other target tasks. PlugD trained with two self-supervised objectives can provide transferable document representation and achieve superior results. 2) The encoders trained on the NQ dataset can outperform encoders trained on other tasks. It indicates that training with challenging tasks may lead to better performance. ## 5 Conclusion In this paper, we explore a new paradigm, which aims to represent documents as pluggable modules for PTMs. In this setting, we can get rid of encoding the same document multiple times for different tasks. The extensive experiments prove that our proposed PlugD can significantly reduce the computational cost and effectively inject document knowledge into PTMs to improve performance. In the future, we will explore more effective plugin learning tasks and further attempt to represent knowledge graphs, and figures as plugins to provide knowledge for PTMs. ## Limitations We discuss the limitations of PlugD in this section: (1) We explore decoupling document encoding from concrete tasks in this paper, and propose to represent documents as pluggable modules before task adaptation. Therefore, PlugD has a higher storage requirement than conventional encodingcoupling methods. We encourage (2) In the experiments, we adopt T5 as our PTM backbone. Actually, the proposed framework can also be applied to more pre-trained models with various model architectures. Besides, recent trends show that larger models tend to build more expressive text representations. It is worth exploring PlugD with larger PTMs with billions of parameters to learn informative document plugins. (3) In this paper, we adopt an external retriever to retrieve relevant documents for each input query. Recent progress in retrievalaugmented language models shows that training the PTMs with an end-to-end textual knowledge retriever can promote downstream performance. We believe document plugins can also serve as the external knowledge base and enhancing PlugD with end-to-end retrieval is a promising direction. ## Acknowledgement This work is supported by the National Key R&D Program of China (No. 2020AAA0106502), National Natural Science Foundation of China (No. 62236004). Author Contributions Chaojun Xiao and ChiMin Chan wrote the code and conducted the experiments. Chaojun Xiao, Zhengyan Zhang, and Xu Han wrote the initial draft. Yankai Lin, Zhiyuan Liu, and Xiangyang Li significantly edited and improved the paper. Zhonghua Li, Zhao Cao, and Maosong Sun provided valuable advice to the research. ## References Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L. Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karén Simonyan. 2022. Flamingo: a visual language model for few-shot learning. In *NeurIPS*. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. *CoRR*, abs/2004.05150. Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. *IEEE Trans. Pattern Anal. Mach. Intell.*, 35(8):1798–1828. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, and et al. 2021. On the opportunities and risks of foundation models. *CoRR*, abs/2108.07258. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: commonsense transformers for automatic knowledge graph construction. In *Proceedings* of the ACL, pages 4762–4779. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proceedings of NeurIPS. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In *Proceedings of ACL*, pages 1870–1879. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *CoRR*, abs/2204.02311. Alexandra Chronopoulou, Matthew E. Peters, and Jesse Dodge. 2022. Efficient hierarchical domain adaptation for pretrained language models. In Proceedings of NAACL-HLT, pages 1336–1351. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of EMNLP, pages 670–680. Andrew M. Dai, Christopher Olah, and Quoc V. Le. 2015. Document embedding with paragraph vectors. CoRR, abs/1507.07998. Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In *Proceedings of ACL*, pages 8493–8502. Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In Proceedings of ICLR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*, pages 4171–4186. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In *Proceedings of ICLR*. Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Xiaozhi Wang, Zhiyuan Liu, Hai-Tao Zheng, Jianfei Chen, Yang Liu, Jie Tang, Juanzi Li, and Maosong Sun. 2022. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. *CoRR*, abs/2203.06904. Jingfei Du, Myle Ott, Haoran Li, Xing Zhou, and Veselin Stoyanov. 2020. General purpose text embeddings from pre-trained language models for scalable inference. In *Findings of EMNLP*, volume EMNLP 2020, pages 3018–3030. Hady ElSahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon S. Hare, Frédérique Laforest, and Elena Simperl. 2018. T-rex: A large scale alignment of natural language with knowledge base triples. In *Proceedings of LREC*. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: long form question answering. In Proceedings of ACL, pages 3558–3567. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In *Proceedings of EMNLP*, pages 6894– 6910. Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, and Jun Zhu. 2021. Pre-trained models: Past, present and future. AI Open, 2:225–250. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In *Proceedings of ICML*, volume 97, pages 2790–2799. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. *CoRR*, abs/2106.09685. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Kai Hui, Honglei Zhuang, Tao Chen, Zhen Qin, Jing Lu, Dara Bahri, Ji Ma, Jai Prakash Gupta, Cícero Nogueira dos Santos, Yi Tay, and Donald Metzler. 2022. ED2LM: encoder-decoder to language model for faster document re-ranking inference. In Findings of ACL, pages 3747–3758. Andrea Madotto, Etsuko Ishii, Zhaojiang Lin, Sumanth Dathathri, and Pascale Fung. 2020. Plug-and-play conversational models. In *Findings of EMNLP*, volume EMNLP 2020, pages 2422–2433. Harsh Mehta, Ankit Gupta, Ashok Cutkosky, and Behnam Neyshabur. 2022. Long range language modeling via gated state spaces. *CoRR*, abs/2206.13947. Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In *Proceedings of NeurIPS*, pages 3111– 3119. Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, and Roger Wattenhofer. 2021. A plugand-play method for controlled text generation. In Findings of EMNLP, pages 3973–3997. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In *Proceedings of EMNLP*, pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of NAACL-HLT*, pages 2227–2237. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR, abs/2107.13586. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know. *TACL*, 8:423–438. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of ACL*, pages 1601–1611. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of EMNLP, pages 6769–6781. Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D. Manning. 2022. Fast model editing at scale. In *Proceedings of ICLR*. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Proceedings of NeurIPS, pages 3294–3302. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. *TACL*, 7:452–466. Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick S. H. Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of NAACL-HLT, pages 2523–2544. Anne Lauscher, Tobias Lüken, and Goran Glavas. 2021. Sustainable modular debiasing of language models. In *Findings of ACL: EMNLP*, pages 4782–4797. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of EMNLP*, pages 3045– 3059. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In *Proceedings of EMNLPIJCNLP*, pages 2463–2473. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In *Proceedings of CoNLL*, pages 333–342. Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. Adapterfusion: Non-destructive task composition for transfer learning. In *Proceedings of EACL*, pages 487–503. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of ACL-IJCNLP, pages 4582–4597. Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: an adapter-based framework for multi-task cross-lingual transfer. In *Proceedings of EMNLP*, pages 7654–7673. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *JMLR*, 21:140:1–140:67. Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, and Omer Levy. 2021. Few-shot question answering by pretraining span selection. In *Proceedings of ACL/IJCNLP*, pages 3066–3079. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In *Proceedings of EMNLP-IJCNLP*, pages 3980– 3990. Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In *Proceedings of EMNLP*, pages 5418–5426. Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how BERT works. *Trans. Assoc. Comput. Linguistics*, 8:842–866. Jon Saad-Falcon, Amanpreet Singh, Luca Soldaini, Mike D'Arcy, Arman Cohan, and Doug Downey. 2022. Embedding recycling for language models. CoRR, abs/2207.04993. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In *The Tenth International Conference on* Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. REPLUG: retrieval-augmented black-box language models. *CoRR*, abs/2301.12652. Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2022. Efficient transformers: A survey. ACM Computing Surveys, 55(6):1–28. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and verification. In *Proceedings of NAACL-HLT*, pages 809–819. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proceedings of NeurIPS*, pages 5998– 6008. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021. K-adapter: Infusing knowledge into pre-trained models with adapters. In Findings of ACL, volume ACL/IJCNLP 2021, pages 1405–1418. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. In Proceedings of ICLR. OpenReview.net. Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, and Michael J. Witbrock. 2018. Word mover's embedding: From word2vec to document embedding. In *Proceedings of EMNLP*, pages 4524–4534. Canwen Xu, Yichong Xu, Shuohang Wang, Yang Liu, Chenguang Zhu, and Julian McAuley. 2023. Small models are valuable plug-ins for large language models. *CoRR*, abs/2305.08848. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In *Proceedings of EMNLP*, pages 2369–2380. Zichun Yu, Chenyan Xiong, Shi Yu, and Zhiyuan Liu. 2023. Augmentation-adapted retriever improves generalization of language models as a zero-shot plug-in. In *Proceedings of ACL*. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. In *Proceedings of* NeurIPS. Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of ACL, pages 1–9. Hang Zhang, Yeyun Gong, Yelong Shen, Weisheng Li, Jiancheng Lv, Nan Duan, and Weizhu Chen. 2021. Poolingformer: Long document modeling with pooling attention. In *Proceedings of ICML*, volume 139, pages 12437–12446. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: enhanced language representation with informative entities. In Proceedings of ACL, pages 1441–1451. Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Huadong Wang, Deming Ye, Chaojun Xiao, Xu Han, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2023. Plug-and-play knowledge injection for pre-trained language models. In *Proceedings of ACL*. Junru Zhou, Zhuosheng Zhang, Hai Zhao, and Shuailiang Zhang. 2020. LIMIT-BERT : Linguistics informed multi-task BERT. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 4450–4461. ## A Appendix A.1 Discussion In this paper, we propose to decouple document encoding from concrete tasks, achieving encoding documents once and for all across different tasks. In this section, we further discuss the potential of PlugD. Unified Model across Multiple Tasks. Recently, with the rapid progress of large-scale PTMs, handling multiple tasks with a unified PTM has received rapidly increasing attention. For example, many researchers explore instruction tuning (Sanh et al., 2022; Wei et al., 2022) to enable a unified PTM to perform multiple tasks with natural language description. We attempt to extend the paradigm to document representation, enhancing the unified PTM with unified document representation across multiple tasks. In this way, we can provide the PTM with various external knowledge flexibly and efficiently, avoiding encoding documents multiple times for different tasks and user input queries. Heterogeneous Knowledge Base. Enhancing large-scale PTMs with various knowledge is an important topic for natural language processing. Many researchers attempt to incorporate knowledge graphs (Zhang et al., 2019; Wang et al., 2021), linguistic knowledge (Zhou et al., 2020) into PTMs. We argue that PlugD provides a new way for knowledge injection. We can encode various knowledge, such as images, and knowledge graphs, into the plugins of PTMs. In this way, we can build a heterogeneous plugin knowledge base for PTMs to improve downstream performance. Continual Learning. Previous researches show that PTMs can implicitly encode knowledge in the model parameters (Petroni et al., 2019; Jiang et al., 2020; Roberts et al., 2020; Dai et al., 2022; Mitchell et al., 2022), which is not editable for continual updates. PlugD provides a new way for the continual learning of PTMs. We can insert and update new knowledge for PTMs by continually learning and updating new document plugins, which will be further utilized to provide knowledge to PTMs. ## A.2 Document Retrieval In this paper, we adopt dense passage retrieval, DPR (Karpukhin et al., 2020), to retrieve relevant documents for each input query. Following Petroni et al. (2021), we adopt R-Precision, Precision@k and Recall@k, as the evaluation metrics. We adopt | Datasets | FEVER | NQ | TriviaQA | HotpotQA | ELI5 | WoW | zsRE | TRex | |-------------|---------|-------|------------|------------|--------|-------|--------|--------| | R-Precision | 56.29 | 55.97 | 46.76 | 25.58 | 16.46 | 27.37 | 14.72 | 43.74 | | Precision@3 | 24.01 | 27.71 | 24.82 | 2.82 | 10.62 | 15.04 | 6.87 | 18.40 | | Recall@3 | 70.25 | 59.25 | 51.64 | 8.46 | 23.90 | 45.12 | 19.20 | 55.21 | the evaluation scripts provided by the KILT paper. Please refer to the original KILT paper for the details of the metrics. From the results, we can see that the retrieval performance is not satisfactory for some datasets, which may bring the noise in the downstream tasks. And we encourage the community to develop retrievers, which can achieve satisfactory performance across different tasks. ## A.3 Impacts Of Insertion Layers | Datasets | FEVER | NQ | WoW | zsRE | | |------------|---------|-------|-------|--------|-------| | Acc. | EM | F1 | F1 | Acc. | | | PlugD (6) | 85.22 | 39.78 | 48.12 | 17.15 | 28.44 | | PlugD (12) | 86.33 | 40.24 | 47.72 | 17.07 | 30.64 | | PlugD (24) | 86.64 | 40.52 | 48.77 | 16.86 | 29.14 | Table 6: The results of PlugD with different number of insertion layers. Here PlugD (n) indicates that the document plugins are inserted into the top-n layers. PlugD inserts the document plugins into the selfattention layers to provide document knowledge. As the pre-trained models tend to capture linguistic features in the bottom layers and capture the taskspecific information in the top layers (Rogers et al., 2020). Therefore, to reduce computational costs, we only insert the document plugins in the top layers. In this section, we explore the impact of insertion layers of document plugins. We present the results of PlugD with document plugins inserted in the last 6 layers, 12 layers, and all 24 layers. Here, we do not conduct plugin learning for PlugD to speed up experiments. The results are shown in Table 6. From the results, we can see that: (1) With the increasing of insertion layers, the performance on FEVER and NQ improves. But PlugD with document plugins in all layers can not outperform the PlugD with document plugins in the top layers on WoW and zsRE. That is because the fact verification and question answering tasks require the models to select useful information via both lexical matching and semantic matching. In contrast, the dialogue generation and slot filling tasks rely on document semantics to provide knowledge, and inserting the document plugins in the bottom layers can not benefit the performance. (2) The three models can obtain similar performance on these tasks. Therefore, in order to reduce the computational costs and maintain the performance, we only insert document plugins in the top 12 layers for other experiments. ## A.4 Impacts Of Plugin Sharing Across Layers As mentioned in previous sections, PlugD inserts the same prefix tokens for different attention layers to save the storage. In this section, we study the impacts of sharing plugins across different layers. To this end, we attempt to insert different prefix tokens for different layers. Specifically, we encode the document d to obtain the raw hidden state Hld from the l-th layer, and then adopt the mapping network tailored to the l-th layer to map the hidden state into the prefix tokens. The prefix tokens are then inserted into the l-th layer for query encoding. Similar to PlugD, we insert the representations into the top 12 layers for this model. We term the model as All-Hidden. The comparison results are shown in Table 7. From the results, we can observe that All-Hidden can achieve superior results on three datasets, including FEVER, WoW, and zsRE. But All-Hidden requires 12× storage than PlugD, which is impractical for large-scale document collections. And PlugD can achieve comparable performance to AllHidden. Therefore, to reduce the storage requirement, we choose to share the document plugins across different attention layers. ## A.5 Experimental Details Model Implementation. The mapping network of document plugins is used to map the raw document representations into the document plugins for different tasks. Given a hidden vector, hi, we calculate the corresponding prefix token as pi = hi + W2mReLU(W1mhi), where hi ∈ R d, W1m ∈ R d×2d, W2m ∈ R 2d×d, and d ∈ R is the hidden size. As for the parameter-efficient tuning method, we adopt adapter layers to tune the model. We add the adapters after the layernorm operation of feed- | Datasets | FEVER | NQ | WoW | zsRE | | |------------|---------|-------|-------|--------|-------| | Acc. | EM | F1 | F1 | Acc. | | | PlugD | 85.22 | 39.78 | 48.12 | 17.15 | 28.44 | | All-Hidden | 86.73 | 39.46 | 47.71 | 17.28 | 32.20 | forward layers. The parameters of adapters are randomly initialized following a zero-mean Gaussian distribution with standard deviation as 10−2. Plugin Learning. For the recurring span prediction task, we first identify spans that occur multiple times from the documents. Then we filter out the stopwords and personal pronouns, and keep the longest 15 spans as the recurring spans for further masking. Then we randomly sample 5 sentences, which contain the recurring spans from the document as the query. For the next sentence generation task, we randomly sample three consecutive sentences from the documents, where the first sentence is treated as the query, and the last two sentences are treated as the answers. The model is trained in a multi-task fashion, and 70% documents are used for recurring span prediction, and 30% documents are used for next sentence generation. The maximal length for queries and answers are set as 196 and 128, respectively. We set the learning rate as 2 × 10−5and batch size as 256. Downstream Task Tuning. For downstream tasks, we set the training batch size as 64. The learning rate is selected from {10−4, 5 × 10−4, 10−3} for PET. And as full-parameter finetuning require amounts of computation, we do not conduct grid search for this setting. We set the learning rate for full-parameter fine-tuning as 2 × 10−5. For fact verification, we take the claims as the input queries and take the logits of "yes" and "no" for classification. For other tasks, we treat them as text-to-text generation problems, and during the inference, we adopt the greedy strategy for decoding. The evaluation scripts are written by our own, and will be released with the paper. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations ✗ A2. Did you discuss any potential risks of your work? The model is designed and evaluated on established tasks and datasets, which should not cause severe risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✓ A4. Have you used AI writing assistants when working on this paper? Grammarly ## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank. ✓ B1. Did you cite the creators of artifacts you used? In Section 4.2 Training Details and Appendix A.5 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? These tools and datasets are publicly available and free of use for research purposes. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The use is consistent with their intended use. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We use established public datasets, which should not cause privacy issues. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1 ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. I use the publicly available datasets, and we directly adopt the original split datasets. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 and Appendix A.5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 and Appendix A.5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A.5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
xie-lukasiewicz-2023-empirical
An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models
https://aclanthology.org/2023.acl-long.876
The increasingly large size of modern pre-trained language models not only makes them inherit more human-like biases from the training corpora, but also makes it computationally expensive to mitigate such biases. In this paper, we investigate recent parameter-efficient methods in combination with counterfactual data augmentation (CDA) for bias mitigation. We conduct extensive experiments with prefix tuning, prompt tuning, and adapter tuning on different language models and bias types to evaluate their debiasing performance and abilities to preserve the internal knowledge of a pre-trained model. We find that the parameter-efficient methods (i) are effective in mitigating gender bias, where adapter tuning is consistently the most effective one and prompt tuning is more suitable for GPT-2 than BERT, (ii) areless effective when it comes to racial and religious bias, which may be attributed to the limitations of CDA, and (iii) can perform similarly to or sometimes better than full fine-tuning with improved time and memory efficiency, as well as maintain the internal knowledge in BERT and GPT-2, evaluated via fact retrieval and downstream fine-tuning.
# An Empirical Analysis Of Parameter-Efficient Methods For Debiasing Pre-Trained Language Models Zhongbin Xie1**, Thomas Lukasiewicz**2,1 1 University of Oxford, UK 2 Vienna University of Technology, Austria zhongbin.xie@cs.ox.ac.uk, thomas.lukasiewicz@tuwien.ac.at ## Abstract The increasingly large size of modern pretrained language models not only makes them inherit more human-like biases from the training corpora, but also makes it computationally expensive to mitigate such biases. In this paper, we investigate recent parameter-efficient methods in combination with counterfactual data augmentation (CDA) for bias mitigation. We conduct extensive experiments with prefix tuning, prompt tuning, and adapter tuning on different language models and bias types to evaluate their debiasing performance and abilities to preserve the internal knowledge of a pre-trained model. We find that the parameter-efficient methods (i) are effective in mitigating gender bias, where adapter tuning is consistently the most effective one and prompt tuning is more suitable for GPT-2 than BERT, (ii) are less effective when it comes to racial and religious bias, which may be attributed to the limitations of CDA, and (iii) can perform similarly to or sometimes better than full fine-tuning with improved time and memory efficiency, as well as maintain the internal knowledge in BERT and GPT-2, evaluated via fact retrieval and downstream fine-tuning. ## 1 Introduction Pre-trained language models are able to encode rich linguistic and factual knowledge by learning the co-occurrence information of words in large realworld corpora (Devlin et al., 2019; Petroni et al., 2019; Raffel et al., 2020; Brown et al., 2020). Since most of these corpora are internet-based and not carefully curated, they are likely to contain unbalanced or stereotyped information for certain demographic groups. As a result, pre-trained language models are often demonstrated to inherit bias from human society and exhibit potential harms (Blodgett et al., 2020; Bender et al., 2021; May et al., 2019; Zhao et al., 2019; Sheng et al., 2019; Nangia et al., 2020; Nadeem et al., 2021). Hence, much research effort has been devoted to debias pre-trained language models (Meade et al., 2022). With the size of language models becoming incredibly large (Brown et al., 2020; Hoffmann et al., 2022; Smith et al., 2022), they are not only at higher risk of exhibiting biased behaviors (Bender et al., 2021), but also hard to debias because of prohibitive computational cost. Therefore, recent parameter-efficient methods (He et al., 2022; Ding et al., 2022) have been applied to bias mitigation, where only a small portion of the parameters are updated (Lauscher et al., 2021; Gira et al., 2022). However, these works are limited in terms of evaluation dimensions, making it unclear how different parameter-efficient methods' performance compare to each other, whether one parameter-efficient method is effective across different types of language models, and whether they are also effective for mitigating religious and racial bias in addition to gender bias. Moreover, direct comparisons with strong post-hoc debiasing methods (Liang et al., 2020; Schick et al., 2021), as well as evaluations of bias mitigation's impact on the language model's internal knowledge, are often insufficient. Given these observations, we investigate three popular parameter-efficient methods, i.e., prefix tuning (Li and Liang, 2021), prompt tuning (Lester et al., 2021), and adapter tuning (Houlsby et al., 2019), in combination with counterfactual data augmentation (CDA, Zhao et al., 2018; Zmigrod et al., 2019; Webster et al., 2020) to debias pretrained language models. We conduct extensive experiments to study the parameter-efficient methods' performance on two types of language models (BERT (Devlin et al., 2019) for masked language models and GPT-2 (Radford et al., 2019) for autoregressive language models), three types of social biases (gender, race, and religion), and four types of performance measures (debiasing performance on CrowS-Pairs (Nangia et al., 2020) and StereoSet (Nadeem et al., 2021), language modeling per15730 formance on WikiText-2 (Merity et al., 2017) and StereoSet (Nadeem et al., 2021), fact retrieval performance on LAMA (Petroni et al., 2019), as well as downstream fine-tuning performance on WinoBias (Zhao et al., 2018)). We empirically compare to the performance of full fine-tuning and two posthoc debiasing methods (SentenceDebias (Liang et al., 2020) and SelfDebias (Schick et al., 2021)), aiming to comprehensively study the effectiveness of parameter-efficient methods for bias mitigation.1 Our main findings are as follows: - The parameter-efficient methods are effective in mitigating gender bias. Within the three parameter-efficient methods, adapter tuning is consistently the most effective one for mitigating bias across different types of language models, while prompt tuning is more suitable for GPT-2 than BERT. Comparing to strong post-hoc debiasing methods, parameterefficient methods are better at preserving the language modeling ability, while still achieving a competitive and sometimes superior debiasing performance. - The parameter-efficient methods are less effective when it comes to mitigating racial and religious bias, where the post-hoc debiasing methods could achieve a more favorable overall performance. - The parameter-efficient methods can perform similarly to or sometimes better than full finetuning, with improved time and memory efficiency. - The parameter-efficient methods can largely maintain the internal knowledge in both BERT and GPT-2, with the reduction in Precision@10 ranging from 0 to 6.8% across all the LAMA datasets when compared to the original pre-trained model, and with the reduction in average F1 scores less than 3.3% on the hard type-1 examples of WinoBias when compared to full fine-tuning. ## 2 Parameter-Efficient Methods In this section, we briefly review three popular parameter-efficient methods investigated in our study: prefix tuning (Li and Liang, 2021), prompt 1The code of this paper is available at https://github. com/x-zb/pedb. tuning (Lester et al., 2021), and adapter tuning (Pfeiffer et al., 2021). In contrast to traditional full fine-tuning where all the model parameters are updated during training, these parameter-efficient methods introduce a small number of *extra* tunable parameters φ on top of a frozen pre-trained language model. Pre-trained language models usually adopt the transformer architecture (Vaswani et al., 2017) consisting of multiple stacked layers. Assume that there are N*layer* layers, and H (i) 0 ∈ R T ×dis the input to the i-th layer, where T is the sequence length, and d is the model dimension. Then, H (i) 0 is transformed by the following equations to obtain the output of the i-th layer H (i) 5, which is in turn adopted as the input for the (i + 1)-th layer: $$H_{1,h}^{(i)}=\mathrm{Attn}(H_{0}^{(i)}W_{Q,h}^{(i)},H_{0}^{(i)}W_{K,h}^{(i)},H_{0}^{(i)}W_{V,h}^{(i)}),$$ $$h=1,2,\ldots,N_{head},\tag{1}$$ $$H_{2}^{(i)}=[H_{1,1}^{(i)};\ldots;H_{1,N_{head}}^{(i)}]W_{O}^{(i)},$$ (2) $$H_{3}^{(i)}=\mathrm{LayerNorm}(H_{0}^{(i)}+H_{2}^{(i)}),$$ (3) $$H_{4}^{(i)}=\mathrm{ReLU}(H_{3}^{(i)}W_{1}^{(i)}+b_{1}^{(i)})W_{2}^{(i)}+b_{2}^{(i)},$$ (4) $$H_{5}^{(i)}=\mathrm{LayerNorm}(H_{3}^{(i)}+H_{4}^{(i)}).\tag{5}$$ Here, Eqs. (1) and (2) constitute the multi-head attention sublayer, where W (i) Q,h, W (i) K,h, and W (i) V,h denote the projection matrix for the query, key, and value of the h-th attention head, respectively; N*head* is the number of attention heads, and H (i) 1,h ∈ R T ×(d/N*head*). Eq. (4) denotes the feed-forward sublayer. [; ] denotes the concatenation operation. H (i) j ∈ R T ×dfor j = 0, 2, 3, 4, 5. The input to the 1st layer is the embeddings of the input tokens H (1) 0 = X ∈ R T ×d. Prefix tuning. Li and Liang (2021) prepend l tunable prefix vectors to the key vectors (H (i) 0 W (i) K,h) and value vectors (H (i) 0 W (i) V,h) of the attention function in Eq. (1) for each layer: $$H_{1,h}^{(i)}=\mbox{Attn}(H_{0}^{(i)}W_{Q,h}^{(i)},[P_{K,h}^{(i)};H_{0}^{(i)}W_{K,h}^{(i)}],$$ $$[P_{V,h}^{(i)};H_{0}^{(i)}W_{V,h}^{(i)}]),\ h=1,2,\ldots,N_{head}.\tag{6}$$ Here, P (i) K,h, P(i) V,h ∈R l×(d/N*head*) denote the tunable prefix vectors, and the total tunable parameters are φ={P (i) K,h, P(i) K,h | h= 1, 2, . . . , Nhead, i= 1, 2, . . . , N*layer*}. 15731 Prompt tuning. Lester et al. (2021) prepend l tunable prompt vectors (continuous tokens) only to the input embeddings (X), and compute the activations of these prompt vectors in the subsequent layers using the pre-trained transformer's parameters. So, the only modification is: $$H_{0}^{(1)}=[P;X]\in\mathbb{R}^{(l+T)\times d},\qquad\quad(7)$$ where P ∈ R l×d denotes the tunable prompt vectors, and φ = {P}. Adapter tuning. Houlsby et al. (2019) insert the following adapter module between the transformer's sublayers: $$H_{j}^{(i)}\gets H_{j}^{(i)}+f(H_{j}^{(i)}W_{down}^{(i)})W_{up}^{(i)},\tag{8}$$ where the intermediate activations H ons $H_{i}^{(i)}$ are ## Jare First Down-Projected By W (I) Down ∈ R D×(D/R)To A Lower Dimension D/R, And Then Up-Projected Back By W (I) Up ∈ R (D/R)×Dto The Model Dimension D. The Adapter Also Contains A Non-Linear Function F And A Residual Connection. The Hyperparameter R Is Called The *Reduction Factor*, Which Determines The Bottleneck Dimension D/R And Controls The Trade-Off Between Parameter Efficiency And Model Capacity. In Our Implementation, We Adopt Pfeiffer Et Al. (2021)'S Setting Where Only A Single Adapter Is Inserted After The Feed-Forward Sublayer, Since It Is Found To Be The Optimal Setting Among Other Alternatives (Pfeiffer Et Al., 2021). Thus, All The Tunable Parameters Are Φ = {W (I) Down, W(I) Up | I = 1, 2, . . . , N*Layer*}. 2 3 Parameter-Efficient Debiasing Through Counterfactual Data Augmentation We adopt counterfactual data augmentation (CDA, Zhao et al., 2018; Zmigrod et al., 2019; Webster et al., 2020) as our debiasing method to work together with parameter-efficient tuning methods. Since the encoded biases in pre-trained language models originate from the unbalanced training corpora, it is natural to mitigate these biases by rebalancing the training corpora. For example, when we want to mitigate gender bias between the male and female demographic group and encounter the training sentence "*He is a doctor.*", CDA would substitute the bias attribute word "He" with its 2Pfeiffer et al. (2021) also insert an additional "add & layer norm" sublayer before the adapter module, so the actual number of tunable parameters is a bit larger. Algorithm 1 Counterfactual Data Augmentation Input: original corpus D0, # demographic groups N, # samples S(≤ N − 1), bias attribute word list {(w (i) 1 , . . . , w (i) N )}M i=1 Output: augmented corpus D1 1: D1 ← ∅ 2: for text sequence x ∈ D0 do 3: Identify the number of demographic groups n(≤ N) contained in x 4: if n > 0 **then** 5: Generate all the permutations of N demographic groups considered n demographic groups at a time: Π = {πj} P n N j=1, where πj = (g1, . . . , gn), {g1, . . . , gn} ⊂ $\{1,\ldots,N\}$ if $p=N$ 21. 6: 7: 8: **end if** 9: Sample w/o replacement S permutations ΠS = {πs} S s=1 from Π 10: for πs ∈ ΠS do 11: xs←Substitute all bias attribute words 13: **end for** 14: D1 ← D1 ∪ {x} 15: **end if** 16: **end for** $$\begin{array}{c}{{\bar{w_{k}^{(i)}}\mathrm{~contained~in~}x\mathrm{~with~}w_{\pi_{s}[k]}^{(i)}}}\\ {{\mathcal{D}_{1}\leftarrow\mathcal{D}_{1}\cup\{x_{s}\}}}\\ {{\mathbf{end~for~}}}\\ {{\mathcal{D}_{1}\leftarrow\mathcal{D}_{1}\cup\{x\}}}\\ {{\mathbf{end~for~}}}\\ {{\mathbf{end~for~}}}\\ {{\mathbf{?}}}\end{array}$$ $\{1,...,N\}$ **if $n=N$ and $(1,2,...,N)\in\Pi$ then $\Pi\leftarrow\Pi\setminus\{(1,2,...,N)\}$** **if $n=N$ and $(1,2,...,N)\in\Pi$ then $\Pi\leftarrow\Pi\setminus\{(1,2,...,N)\}$** **if $n=N$ and $(1,2,...,N)\in\Pi$ then $\Pi\leftarrow\Pi\setminus\{(1,2,...,N)\}$** **if $n=N$ and $(1,2,...,N)\in\Pi$ then $\Pi\leftarrow\Pi\setminus\{(1,2,...,N)\}$** **if $n=N$ and $(1,2,...,N)\in\Pi$ then $\Pi\leftarrow\Pi\setminus\{(1,2,...,N)\}$** **if $n=N$ and $(1,2,...,N)\in\Pi$ then \(\ counterpart "She" to obtain an additional training sentence "*She is a doctor.*", so that both gender groups would have equal association with the gender-neutral word "*doctor*". Once we have a list of bias attribute words like {(he, she), (man, woman), (husband, *wife*), . . . }, we could retrieve all the occurrences of these bias attribute words in the training corpus, and substitute all of them with their counterparts. For religious and racial bias where more than two demographic groups are considered, we need to maintain two key properties: (i) we should guarantee *consistency*, i.e., we should avoid the case where some occurrences of the bias attribute words in group A are substituted with those in group B, while the other occurrences of (possibly different) bias attribute words in group A are substituted with those in group C, and (ii) we should avoid *collisions*, i.e., we should avoid the case where both groups A and B are substituted with group C. To this end, we should not consider each group independently and adopt random substitution. Rather, we should substitute according to permutations of all the occurred demographic groups in a sentence. Our complete CDA method is formally summarized in Algorithm 1. Note that in Algorithm 1, for convenience, we propose to sample a fixed number (S) of substitutions for each sentence. This is because the number of possible substitutions (P n N − 1) for each sentence may vary when the number of occurred demographic groups (n) in the sentence varies. In practice, we adopt N = 3 and S = 2 for religious and racial bias. Finally, the parameter-efficient debiasing framework works as follows: we first use Algorithm 1 to augment an original corpus D0 and obtain the debiasing corpus D1; next, we use the parameterefficient tuning methods from Section 2 to solve the following optimization problem: $$\operatorname*{min}_{\varphi}{\mathcal{L}}(\theta_{0},\varphi;{\mathcal{D}}_{1}),$$ φL(θ0, φ; D1), (9) where L is either the masked language modeling loss (Devlin et al., 2019) or causal language modeling loss (Radford et al., 2019), θ0 denotes the frozen parameters in the pre-trained language model, and φ denotes the tunable parameters defined in Section 2. ## 4 Conceptual Comparisons With Existing Debiasing Methods Most existing debiasing methods are *trainingbased*, where they introduce a specific debiasing loss to fine-tune a pre-trained model on certain balanced debiasing corpora (Kaneko and Bollegala, 2021; Garimella et al., 2021; Ravfogel et al., 2020; Cheng et al., 2021; Guo et al., 2022). These methods are, in general, orthogonal to our parameterefficient debiasing framework in that we could substitute the (masked) language modeling loss in Eq. (9) with their specific debiasing loss. In this paper, we only focus on the simple language modeling loss, and leave other kinds of debiasing loss for future work. Another important line of debiasing methods applies *post-hoc* mathematical operations on the frozen representations of a language model, such as SentenceDebias (Liang et al., 2020) and SelfDebias (Schick et al., 2021). We briefly review these methods below and make empirical comparisons to parameter-efficient debiasing methods in Section 5. SentenceDebias. Liang et al. (2020) assume that there is a linear subspace that can capture demographic information in the embedding space, thus trying to identify and remove the demographic information via linear algebra operations. Specifically, they first leverage a procedure similar to CDA to extract and augment sentences containing bias attribute words from a source corpus. Then, they encode the sentences to embeddings with a pre-trained language model, and obtain a set of difference vectors between the embeddings of sentences in different demographic groups. Next, they perform principle component analysis on the set of difference vectors, and use the first K components to expand a bias subspace. Once the bias subspace is identified, we could debias a new sentence embedding by subtracting its projection on the bias subspace. SelfDebias. Schick et al. (2021) assume that a pre-trained language model has a self-diagnosis ability, which can be used to adjust the output probabilities over the vocabulary during language generation. Specifically, SelfDebias relies on handcrafted descriptions for each type of bias. It first puts the bias description and the currently generated sentence into a self-diagnosis template, which encourages the language model to generate biased words for the next time step. Then, the probabilities of these detected biased words are scaled down in the actual generation process. Although no training is needed for these posthoc debiasing methods, their strong assumptions about bias may harm the language modeling ability of a language model. On the contrary, CDA-based parameter-efficient methods adhere to the original language modeling loss without additional assumptions, which may largely reserve the language modeling ability. Another advantage of CDA-based parameter-efficient methods is that nearly no additional computation is required during inference. ## 5 Experiments On Bias Mitigation 5.1 Experimental Setup Datasets. To measure gender, religious, and racial bias in pre-trained language models, we adopt two crowd-sourced datasets: CrowSPairs (Nangia et al., 2020) and StereoSet (Nadeem et al., 2021). CrowS-Pairs consists of pairs of contrasting sentences, where one is more stereotyping than the other. Its gender, religious, and racial subsets contain 262, 105, and 516 examples, respectively. For StereoSet, we adopt its intrasentence test, where each example consists of a context sentence and three candidate completions corresponding to stereotypical, anti-stereotypical, and unrelated associations, respectively. We again only adopt the gender, religious, and racial subsets, whose sizes are 1026, 623, and 3996, respectively. Evaluation Metrics. Our evaluation protocol follows Meade et al. (2022). We adopt the "stereotype score", defined as the percentage of examples for which the language model favors the stereotypical association (or the stereotyping sentence) to the anti-stereotypical association (or the less stereotyping sentence), as the measure of bias. An ideal model that is free of the considered bias should achieve a stereotype score of 50%. To measure the language modeling ability, we adopt the first 10% of WikiText-2 (Merity et al., 2017) to compute the perplexity (for autoregressive language models) or pseudo-perplexity (Salazar et al., 2020, for masked language models). We also compute the "language modeling (LM) score" (Nadeem et al., 2021) on all the bias subsets of StereoSet as our second measure of language modeling ability. Training Details. We choose to debias BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019), which represent masked language models and autoregressive language models, respectively. Our implementation is based on the Hugging Face Transformers (Wolf et al., 2020) and Adapter Hub (Pfeiffer et al., 2020), and the adopted checkpoints are bert-base-uncased (109'514'298 parameters) and gpt2 (124'439'808 parameters). We adopt the English Wikipedia as our original debiasing corpus3, and counterfactually augment it using Algorithm 1. The adopted bias attribute words for each type of bias are listed in Appendix A. Next, we randomly down-sample 20% of the augmented Wikipedia as our debiasing corpus. All the CDAbased debiasing methods are trained for two epochs on one TITAN RTX GPU with 24 GB memory. We select optimal training hyperparameters according to the language modeling loss on a validation set (we use 5% of the augmented debiasing corpus for validation), since the language modeling loss on a balanced dataset is a reasonable proxy for both debiasing performance and language modeling ability. We select hyperparameters using the default seed of 42, and re-train the models for four additional times with different random seeds, to account for CrowS-Pairs and StereoSet's sensitivity to pre-training seeds (Aribandi et al., 2021). More details are in Appendix B. Baselines. We compare the parameter-efficient methods to full fine-tuning, where all the parameters of a language model are tuned, For post-hoc debiasing methods, we compare to SentenceDebias (Liang et al., 2020) and Self-Debias (Schick et al., 2021), as described in Section 4. ## 5.2 Mitigating Gender Bias For experiments on mitigating gender bias, we adopt a default reduction factor of r = 48 in adapter tuning, leading to 304'320 tunable parameters, which are less than 0.3% of all the parameters in BERT (109'514'298) or GPT-2 (124'439'808). For prefix tuning, we adopt a prefix length of l = 16 to obtain a similar amount of tunable parameters (294'912) to adapter tuning. Obtaining a similar amount of tunable parameters for prompt tuning would require an exceptionally large prompt length, even approaching the maximum acceptable sequence length of the pre-trained language models. Therefore, we only set the prompt length l = 16 (which corresponds to 12'288 tunable parameters) to compare with prefix tuning under the same number of prepending tokens. Evaluation results are shown in Table 1. 4 In general, the parameter-efficient methods are effective in reducing stereotype scores, and the reductions are statistically significant (p < 0.05) under a permutation test (Ernst, 2004). ## Among The Three Parameter-Efficient Methods, adapter tuning achieves the best debiasing performance on both CrowS-Pairs and StereoSet, for both BERT and GPT-2. This demonstrates adapter tuning to be a reliable parameter-efficient method for bias mitigation across different types of language models. Note that our results are also consistent with He et al. (2022)'s finding that modifying transformer representations at the feed-forward sublayers (adapter tuning) is more effective than modifying those at the multi-head attention sublayers (prefix tuning). | Gender Bias | CrowS-Pairs | StereoSet | WikiText2 | StereoSet LM | |-----------------------|------------------|----------------|--------------|----------------| | Stereotype Score | Stereotype Score | Perplexity (↓) | Score (↑) | | | BERT | 57.25 | 60.28 | 5.167 | 84.17 | | +Full Fine-Tune | 56.11±2.15 | 56.43±0.72∗ | 5.517±0.080 | 84.22±0.19 | | +Prefix Tune (l= 16) | 53.59±0.19∗ | 57.82±0.46∗ | 4.425±0.015 | 84.75±0.15 | | +Prompt Tune (l= 16) | 57.56±1.41 | 58.07±0.60∗ | 4.641±0.033 | 84.71±0.16 | | +Adapter Tune (r= 48) | 51.68±0.52∗∗ | 56.04±0.43∗∗ | 4.931±0.043 | 84.97±0.14 | | +SentenceDebias | 52.29 | 59.37 | 5.181 | 84.20 | | +SelfDebias | 52.29 | 59.34 | 7.070 | 84.09 | | GPT-2 | 56.87 | 62.65 | 29.669 | 91.01 | | +Full Fine-Tune | 55.88±1.27 | 61.88±0.55∗ | 81.778±0.655 | 90.24±0.14 | | +Prefix Tune (l= 16) | 54.73±0.66∗ | 61.35±0.60∗ | 31.400±0.108 | 91.24±0.07 | | +Prompt Tune (l= 16) | 54.12±1.14∗ | 61.30±0.43∗ | 30.630±0.099 | 91.37±0.08 | | +Adapter Tune (r= 48) | 52.29±1.13∗∗ | 60.33±0.46∗∗ | 35.255±0.345 | 90.87±0.11 | | +SentenceDebias | 56.11 | 56.05 | 56.891 | 87.43 | | +SelfDebias | 56.11 | 60.84 | 31.482 | 89.07 | Prompt tuning is more effective on GPT-2 than BERT. Prompt tuning is ineffective in reducing the CrowS-Pairs stereotype score on BERT, but can successfully reduce it on GPT-2, where it even achieves a similar debiasing performance to prefix tuning. This is remarkable given that prompt tuning has much less tunable parameters than prefix tuning. This is also consistent with prompt tuning being more effective when T5 (Raffel et al., 2020) is continuously pre-trained with an autoregressive language modeling loss (Lester et al., 2021). Comparing to post-hoc debiasing methods, parameter-efficient methods are better at maintaining the language modeling ability while achieving a similar debiasing performance. Note that post-hoc debiasing methods sometimes significantly worsen the language modeling ability, e.g., a perplexity of 7.070 for SelfDebias on BERT, a perplexity of 56.891, and a LM score of 87.43 for SentenceDebias on GPT-2. Since a completely random language model would achieve the perfect stereotype score (50), but is useless as a language model (Nadeem et al., 2021), the degraded language modeling ability of the post-hoc debiasing methods undermines their true effectiveness for bias mitigation. On the contrary, parameterefficient methods keep the language modeling loss during CDA training, which helps to preserve or even enhance the language modeling ability. Comparing to full fine-tuning, parameter-efficient methods can achieve a better or similar performance with improved time and memory efficiency. Since full fine-tuning updates all the parameters of the language model, it is computationally expensive and prone to be overfitting. When debiasing BERT, full fine-tuning consumes around 19 GB memory, while the parameter-efficient methods consume 12~17 GB memory. Training on the debiasing corpus for full fine-tuning lasts around 6 hours, while that for the parameter-efficient methods lasts 4~5 hours. For GPT-2, full fine-tuning consumes around 18 GB memory with the training time being around 7 hours, while the parameterefficient methods consume 15~16 GB memory and 5 hours of training time. ## 5.3 Mitigating Racial And Religious Bias When mitigating racial and religious bias, we find that a prefix length of l = 16 (or, equivalently, a reduction factor of r = 48 for adapter tuning) is no longer sufficient for successful debiasing. Therefore, we search l in a broader range of {48, 96, 192, 384} (and, correspondingly, r in {16, 8, 4, 2}). The results are shown in Table 2. In general, the parameter-efficient methods are less effective when it comes to racial and religious bias. Even the previously strongest method, adapter tuning, is ineffective in many cases such as debiasing BERT on the religion subsets of CrowSPairs and StereoSet, and GPT-2 on the race subset of CrowS-Pairs. For GPT-2, prompt tuning is consistently effective on the race subsets of both | Racial Bias | CrowS-Pairs | StereoSet | WikiText2 | StereoSet LM | |-----------------------|------------------|----------------|---------------|----------------| | Stereotype Score | Stereotype Score | Perplexity (↓) | Score (↑) | | | BERT | 62.33 | 57.03 | 4.899 | 84.17 | | +Full Fine-Tune | 57.65±3.61∗ | 57.67±0.70 | 5.291±0.064 | 83.44±0.29 | | +Prefix Tune (l= 192) | 57.44±1.90∗ | 56.95±0.39 | 4.448±0.008 | 84.35±0.12 | | +Prompt Tune (l= 192) | 58.25±3.90∗ | 58.17±0.55 | 4.572±0.019 | 83.41±0.80 | | +Adapter Tune (r= 4) | 57.20±4.16∗ | 59.10±0.45 | 4.903±0.071 | 84.34±0.20 | | +SentenceDebias | 62.72 | 57.78 | 4.949 | 83.95 | | +SelfDebias | 56.70 | 54.30 | 6.187 | 84.24 | | GPT-2 | 59.69 | 58.90 | 32.712 | 91.01 | | +Full Fine-Tune | 60.04±0.48 | 56.68±0.37∗ | 41.781±0.240 | 89.44±0.05 | | +Prefix Tune (l= 384) | 59.61±0.51 | 57.53±0.23∗ | 35.346±0.073 | 89.48±0.08 | | +Prompt Tune (l= 384) | 58.76±0.92∗ | 57.72±0.33∗ | 33.983±0.266 | 89.18±0.10 | | +Adapter Tune (r= 2) | 61.28±1.27 | 57.77±0.44∗ | 35.818±0.304 | 89.01±0.68 | | +SentenceDebias | 55.43 | 56.43 | 37.826 | 91.38 | | +SelfDebias | 53.29 | 57.33 | 34.851 | 89.53 | | Religious Bias | CrowS-Pairs | StereoSet | WikiText2 | StereoSet LM | | Stereotype Score | Stereotype Score | Perplexity (↓) | Score (↑) | | | BERT | 62.86 | 59.70 | 6.172 | 84.17 | | +Full Fine-Tune | 65.33±2.73 | 60.76±1.38 | 6.762±0.059 | 83.67±0.18 | | +Prefix Tune (l= 384) | 72.76±1.55 | 60.61±0.98 | 5.372±0.010 | 85.42±0.09 | | +Prompt Tune (l= 384) | 83.05±1.85 | 60.07±1.12 | 5.483±0.048 | 83.80±0.58 | | +Adapter Tune (r= 2) | 68.00±4.33 | 58.93±1.19 | 6.135±0.019 | 84.45±0.19 | | +SentenceDebias | 63.81 | 58.73 | 6.185 | 84.26 | | +SelfDebias | 56.19 | 57.26 | 7.624 | 84.23 | | GPT-2 | 62.86 | 63.26 | 32.712 | 91.01 | | +Full Fine-Tune | 54.86±1.29∗ | 64.36±0.81 | 45.525±0.065 | 90.20±0.06 | | +Prefix Tune (l= 384) | 60.95±0.60∗ | 65.16±0.56 | 35.226±0.073 | 90.95±0.03 | | +Prompt Tune (l= 384) | 58.29±1.52∗ | 64.89±1.52 | 43.177±17.750 | 90.68±0.12 | | +Adapter Tune (r= 2) | 62.10±2.72 | 62.05±0.66∗ | 39.732±0.695 | 90.31±0.10 | | +SentenceDebias | 35.24 | 59.62 | 60.204 | 90.53 | | +SelfDebias | 58.10 | 60.45 | 35.174 | 89.36 | CrowS-Pairs and StereoSet, but cannot obtain a similar performance on StereoSet's religion subset. In three out of the eight debiasing cases, none of the parameter-efficient methods could reduce the stereotype score in a statistically significant way. Moreover, SelfDebias exhibits a superior debiasing performance over the parameter-efficient methods, and its language modeling ability does not severely degenerate as in mitigating gender bias. Indeed, when we calculate the *icat* score (Nadeem et al., 2021), defined as lms ∗ min(ss, 100 − ss)/50 (lms stands for the LM score, and ss stands for the stereotype score on StereoSet), to integrate the debiasing performance and language modeling ability, we can clearly see a better overall performance of SelfDebias over adapter tuning (e.g., on StereoSet's religion subset, the *icat* score of SelfDebias and adapter tuning is 72.00 vs. 69.37 for BERT, and 70.68 vs. 68.55 for GPT-2). The less successful performance of parameterefficient methods may be attributed to some limitations of the CDA debiasing method. The bias attribute word lists for race and religion are shorter and contain more noise (i.e., words with multiple or ambiguous meanings) than that for gender, which may undermine the diversity and quality of the augmented training corpus. On the contrary, SelfDebias relies on bias descriptions that contain less noise and could generalize with the help of the language model's own knowledge. Given this analysis, future work could explore how to adopt parameterefficient methods to debiasing techniques other than CDA to overcome these limitations. ## 6 Impact On Internal Knowledge 6.1 Fact Retrieval To investigate the impact of bias mitigation on the factual knowledge encoded in pre-trained language models, we take the gender-debiased models from Section 5.2 and evaluate them on the four LAMA datasets (Petroni et al., 2019).6 The results are shown in Table 3. We report the results from a single run (with the default seed 42) to save computation in Table 3 and 4. The parameter-efficient methods can largely maintain the factual knowledge of a language model, with the reduction in Precision@10 ranging from 0 to 6.8% across all the datasets and pre-trained models. Surprisingly, for BERT on SQuAD and GPT-2 on all the four datasets, quite a number of the results are actually improved. We attribute these improvements to the fact that Wikipedia contains a lot of factual knowledge, and continuously training on it can enhance the internal knowledge of a language model. Comparing the performance between full finetuning and parameter-efficient tuning, we find that the former performs best on SQuAD with BERT and Google-RE with GPT-2, while the latter performs better in the rest of the settings. In general, the performance gaps are marginal. ## 6.2 Downstream Fine-Tuning We further investigate the impact of bias mitigation on knowledge transfer to downstream tasks via fine-tuning. Since neural network models suffer from catastrophic forgetting (French, 1999), a debiased model may forget the encoded knowledge in the original language model, and conversely a finetuned model may forget the debiasing knowledge in the debiased model. Therefore, it is important to adopt an evaluation dataset that can simultaneously evaluate downstream task performance and debiasing performance. We choose the coreference resolution dataset WinoBias (Zhao et al., 2018) to fulfill the above requirements. We append each example from WinoBias (e.g., The physician hired the secretary because he was overwhelmed with clients.) with the suffix "{Pronoun} *refers to the* {Candidate}." ({Pronoun} is "He" in this example), and then measure the probability of the model completing the sentence with different candidates ("*physician*" and "*secretary*" in this example) to determine the coreference result. We adopt both the type-1 and type-2 test sets of WinoBias, where type-1 examples are harder to resolve as they contain no syntactic cues. We adopt WinoBias' dev set to fine-tune an original pre-trained language model using either full finetuning or parameter-efficient tuning.7 The results are shown in Table 4. On type-1 examples, adapter tuning achieves a comparable performance to full fine-tuning for both BERT and GPT-2, with the reduction in average F1 **scores less than 3.3%.** On BERT, adapter tuning achieves a much better debiasing performance (Diff= 0.51) than full fine-tuning, while on GPT-2 it is slightly more biased. Nevertheless, both of them can be considered effective simultaneously on the coreference resolution task and debiasing task. The performance gap between full fine-tuning and prefix/prompt tuning is more significant, but the latter can still achieve a nearly perfect performance on the easier type-2 examples. ## 7 Conclusion In this study, we investigated the performance of prefix tuning, prompt tuning, and adapter tuning on mitigating social bias and preserving the linguistic and factual knowledge for two types of pre-trained language models. Our results demonstrated the effectiveness and efficacy of parameter-efficient methods in combination with CDA, and also revealed their performance limitations by comparing to post-hoc debiasing methods. We hope that our study can make it more accessible for others to debias pre-trained language models with reduced computational requirements, and contribute to fair and inclusive NLP. ## 8 Limitations Due to the restrictions of the adopted benchmarks and resources, our evaluation bears the following limitations: (i) We only focus on social biases in the English language and North American cultures. This is due to the fact that both CrowSPairs and StereoSet are generated by crowd workers from North America. Future work can extend our analysis to other languages and cultures with 7See Appendix B for more details. | Google-RE | T-REx | ConceptNet | SQuAD | | | | | | | | | | |-----------------------|---------|--------------|---------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | P@1 | P@10 | MRR | P@1 | P@10 | MRR | P@1 | P@10 | MRR | P@1 | P@10 | MRR | | | BERT | 9.25 | 28.69 | 15.96 | 29.48 | 56.87 | 38.63 | 15.11 | 38.77 | 23.10 | 13.11 | 44.59 | 23.30 | | +Full Fine-Tune | 7.47 | 21.43 | 12.43 | 26.93 | 53.72 | 35.85 | 14.89 | 37.59 | 22.57 | 14.43 | 47.21 | 24.78 | | +Prefix Tune (l= 16) | 8.23 | 22.54 | 13.43 | 27.68 | 53.64 | 36.38 | 15.05 | 37.42 | 22.73 | 12.79 | 46.56 | 23.91 | | +Prompt Tune (l= 16) | 8.68 | 23.19 | 14.04 | 28.28 | 53.59 | 36.88 | 14.58 | 36.58 | 22.11 | 12.79 | 46.89 | 23.54 | | +Adapter Tune (r= 48) | 8.51 | 21.97 | 13.39 | 26.92 | 51.65 | 35.27 | 14.75 | 36.47 | 22.13 | 11.80 | 44.26 | 22.59 | | GPT-2 | 1.51 | 10.88 | 5.04 | 9.36 | 31.10 | 16.78 | 5.91 | 19.01 | 10.42 | 3.15 | 17.48 | 7.53 | | +Full Fine-Tune | 3.40 | 15.10 | 7.44 | 7.76 | 33.04 | 15.90 | 4.87 | 16.47 | 8.86 | 1.75 | 18.18 | 6.78 | | +Prefix Tune (l= 16) | 2.33 | 12.56 | 6.14 | 10.13 | 33.38 | 17.98 | 5.99 | 19.42 | 10.53 | 2.10 | 17.83 | 7.53 | | +Prompt Tune (l= 16) | 1.14 | 9.79 | 4.39 | 8.00 | 30.29 | 15.70 | 5.95 | 19.03 | 10.53 | 2.45 | 16.78 | 7.16 | | +Adapter Tune (r= 48) | 2.49 | 14.11 | 6.59 | 9.35 | 32.61 | 17.20 | 5.79 | 19.09 | 10.26 | 2.10 | 18.18 | 7.03 | Type-1 Type-2 F1−pro F1−*anti* Avg Diff F1−pro F1−*anti* Avg Diff BERT +Full Fine-Tune 70.95 68.04 69.50 2.91 99.49 99.49 99.49 0 +Prefix Tune (l= 16) 65.08 64.57 64.83 0.51 99.49 99.49 99.49 0 +Prompt Tune (l= 16) 56.56 53.33 54.95 3.23 99.24 99.24 99.24 0 +Adapter Tune (r= 48) 66.50 65.99 66.25 0.51 99.49 99.49 99.49 0 GPT-2 +Full Fine-Tune 63.33 63.47 63.40 -0.14 99.49 99.49 99.49 0 +Prefix Tune (l= 16) 51.66 52.79 52.23 -1.13 99.49 99.49 99.49 0 +Prompt Tune (l= 16) 53.46 52.36 52.91 1.10 99.24 99.24 99.24 0 +Adapter Tune (r= 48) 60.70 59.96 60.33 0.74 99.49 99.49 99.49 0 the corresponding resources such as the French CrowS-Pairs (Névéol et al., 2022) and multilingual WEAT (Lauscher and Glavaš, 2019). (ii) Our evaluation has a limited coverage over different kinds of harms according to Blodgett et al. (2020). CrowS-Pairs, StereoSet, and WinoBias all focus on stereotyping, a kind of representational harm, while others like allocational harms are untouched. Developing methods to measure these harms generally requires in-depth interactions between technologists and customers. Blodgett et al. (2021) also point out several conceputalization and operationalization pitfalls in the above three bias benchmarks, which limits the validity of the results evaluated on them. (iii) Due to the incomplete bias attribute word lists, our CDA-based debiasing method is by no means fair enough to cover all the minority groups (e.g., groups with non-binary genders). Therefore the current debiasing method in this paper can only be used to mitigate bias among the demographic groups mentioned in Appendix A. We recommend more complete resources such as the gender-inclusive word list in (Cao and Daumé III, 2021) for real-world scenarios. ## Acknowledgements This work was supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1, by the AXA Research Fund, and by the EU TAILOR grant 952215. We also acknowledge the use of Oxford's ARC facility, of the EPSRC-funded Tier 2 facilities JADE (EP/P020275/1), and of GPU computing support by Scan Computers International Ltd. ## References Vamsi Aribandi, Yi Tay, and Donald Metzler. 2021. How reliable are model diagnostics? In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 1778–1785, Online. Association for Computational Linguistics. Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, page 610–623, New York, NY, USA. Association for Computing Machinery. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5454– 5476, Online. Association for Computational Linguistics. Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004–1015, Online. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Yang Trista Cao and Hal Daumé III. 2021. Toward gender-inclusive coreference resolution: An analysis of gender and bias throughout the machine learning lifecycle*. *Computational Linguistics*, 47(3):615– 661. Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, and Lawrence Carin. 2021. FairFil: Contrastive neural debiasing method for pretrained text encoders. In International Conference on Learning Representations. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Zhiyuan Liu, Hai-Tao Zheng, Jianfei Chen, Yang Liu, Jie Tang, Juanzi Li, and Maosong Sun. 2022. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. CoRR, arXiv:2203.06904. Michael D. Ernst. 2004. Permutation Methods: A Basis for Exact Inference. *Statistical Science*, 19(4):676 – 685. Robert M. French. 1999. Catastrophic forgetting in connectionist networks. *Trends in Cognitive Sciences*, 3(4):128–135. Aparna Garimella, Akhash Amarnath, Kiran Kumar, Akash Pramod Yalla, Anandhavelu N, Niyati Chhaya, and Balaji Vasan Srinivasan. 2021. He is very intelligent, she is very beautiful? On Mitigating Social Biases in Language Modelling and Generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4534–4545, Online. Association for Computational Linguistics. Michael Gira, Ruisu Zhang, and Kangwook Lee. 2022. Debiasing pre-trained language models via efficient fine-tuning. In *Proceedings of the Second Workshop* on Language Technology for Equality, Diversity and Inclusion, pages 59–69, Dublin, Ireland. Association for Computational Linguistics. Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Autodebias: Debiasing masked language models with automated biased prompts. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012–1023, Dublin, Ireland. Association for Computational Linguistics. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning. In *International Conference on Learning Representations*. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. *CoRR*, arXiv:2203.15556. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings* of Machine Learning Research, pages 2790–2799. PMLR. Masahiro Kaneko and Danushka Bollegala. 2021. Debiasing pre-trained contextualised embeddings. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1256–1266, Online. Association for Computational Linguistics. Anne Lauscher and Goran Glavaš. 2019. Are we consistently biased? multidimensional analysis of biases in distributional word vectors. In *Proceedings of the* Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 85–91, Minneapolis, Minnesota. Association for Computational Linguistics. Anne Lauscher, Tobias Lueken, and Goran Glavaš. 2021. Sustainable modular debiasing of language models. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 4782–4797, Punta Cana, Dominican Republic. Association for Computational Linguistics. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582– 4597, Online. Association for Computational Linguistics. Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and LouisPhilippe Morency. 2020. Towards debiasing sentence representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5502–5515, Online. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, Minneapolis, Minnesota. Association for Computational Linguistics. Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy. 2022. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1878–1898, Dublin, Ireland. Association for Computational Linguistics. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In *International Conference on Learning Representations*. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics. Aurélie Névéol, Yoann Dupont, Julien Bezançon, and Karën Fort. 2022. French CrowS-pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8521–8531, Dublin, Ireland. Association for Computational Linguistics. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In *Proceedings of the 16th Conference of the European Chapter of the Association* for Computational Linguistics: Main Volume, pages 487–503, Online. Association for Computational Linguistics. Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun ´ Cho, and Iryna Gurevych. 2020. AdapterHub: A framework for adapting transformers. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46–54, Online. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical Report. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237–7256, Online. Association for Computational Linguistics. Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 2699–2712, Online. Association for Computational Linguistics. Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in NLP. *Transactions of the* Association for Computational Linguistics, 9:1408– 1424. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407– 3412, Hong Kong, China. Association for Computational Linguistics. Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zheng, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro. 2022. Using deepspeed and megatron to train megatron-turing NLG 530b, A large-scale generative language model. *CoRR*, arXiv:2201.11990. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models. *CoRR*, arXiv:2010.06032. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634, Minneapolis, Minnesota. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics. Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1651–1661, Florence, Italy. Association for Computational Linguistics. ## A Bias Attribute Words We adopt the same bias attribute words as Meade et al. (2022), where the list for gender is from Zhao et al. (2018) and that for religion is from Liang et al. (2020). Gender: (actor, *actress*), (actors, *actresses*), (airman, *airwoman*), (uncle, *aunt*), (uncles, *aunts*), (boy, *girl*), (boys, *girls*), (groom, *bride*), (*grooms*, brides), (brother, *sister*), (brothers, *sisters*), (businessman, *businesswoman*), (businessmen, *businesswomen*), (chairman, *chairwoman*), (*chairmen*, chairwomen), (dude, *chick*), (dudes, *chicks*), (dad, mom), (dads, *moms*), (daddy, *mommy*), (daddies, mommies), (son, *daughter*), (sons, *daughters*), (father, *mother*), (fathers, *mothers*), (male, *female*), (males, *females*), (guy, gal), (guys, *gals*), (gentleman, *lady*), (gentlemen, *ladies*), (grandson, *granddaughter*), (grandsons, *granddaughters*), (guy, girl), (guys, *girls*), (he, she), (himself, *herself*), (him, her), (his, her), (husband, *wife*), (*husbands*, wives), (king, *queen*), (kings, *queens*), (lord, *lady*), (lords, *ladies*), (sir, *maam*), (man, *woman*), (men, women), (sir, *miss*), (mr., *mrs.*), (mr., ms.), (policeman, *policewoman*), (prince, *princess*), (princes, princesses), (spokesman, *spokeswoman*), (spokesmen, *spokeswomen*) Religion: (jewish, christian, *muslim*), (*jews*, christians, *muslims*), (torah, bible, *quran*), (synagogue, church, *mosque*), (rabbi, priest, *imam*), (judaism, christianity, *islam*) Race: (black, caucasian, *asian*), (african, caucasian, *asian*), (black, white, *asian*), (africa, america, *asia*), (africa, america, *china*), (africa, europe, asia) ## B Additional Training Details For all the experiments on parameter-efficient tuning methods and full fine-tuning, we use the default settings of the AdamW optimizer (Loshchilov and Hutter, 2019) and a linear learning rate scheduler from the Hugging Face library. For the debiasing experiments trained on Wikipedia, we fix the number of training epochs to 2 and greedily search initial learning rate from {5e-1, 5e-2, 5e-3, 5e-4, 5e-5, 5e-6, 5e-7} according to the language modeling loss on the validation set (we use 5% of the augmented debiasing corpus for validation). For experiments trained on WinoBias, we greedily search training epochs from {10, 20, 30, 50, 100, 200} and initial learning rate from {5e-1, 5e-2, 5e-3, 5e-4, 5e-5, 5e-6, 5e-7} according to the Avg F1 score on type-1 examples in the validation set (we use 5% of the training set for validation). The hyperparameter values to reproduce our results in Sections 5 and 6 are in Table 5. Implementations of SentenceDebias and SelfDebias are based on Meade et al. (2022)'s, where we also follow their default parameter settings. | lr | epoch | bsz | | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|-------|----| | For results in Table 1 (gender bias) BERT +Full Fine-Tune 5e-5 | 2 | 16 | | | +Prefix Tune (l= 16) | 5e-3 | 2 | 16 | | +Prompt Tune (l= 16) | 5e-1 | 2 | 16 | | +Adapter Tune (r= 48) | 5e-4 | 2 | 16 | | GPT-2 +Full Fine-Tune | 5e-5 | 2 | 8 | | +Prefix Tune (l= 16) | 5e-3 | 2 | 8 | | +Prompt Tune (l= 16) | 5e-2 | 2 | 8 | | +Adapter Tune (r= 48) | 5e-4 | 2 | 8 | | For results in Table 2's upper sub-table (racial bias) BERT +Full Fine-Tune 5e-5 2 16 +Prefix Tune (l= 192) 5e-3 2 16 +Prompt Tune (l= 192) 5e-3 2 16 +Adapter Tune (r= 4) 5e-4 2 16 GPT-2 +Full Fine-Tune 5e-6 2 8 +Prefix Tune (l= 384) 5e-3 2 8 +Prompt Tune (l= 384) 5e-1 2 8 +Adapter Tune (r= 2) 5e-3 2 8 For results in Table 2's lower sub-table (religious bias) BERT +Full Fine-Tune 5e-5 2 16 +Prefix Tune (l= 384) 5e-3 2 16 +Prompt Tune (l= 384) 5e-3 2 16 +Adapter Tune (r= 2) 5e-4 2 16 GPT-2 +Full Fine-Tune 5e-6 2 8 +Prefix Tune (l= 384) 5e-3 2 8 +Prompt Tune (l= 384) 5e-1 2 8 +Adapter Tune (r= 2) 5e-5 2 8 For results in Table 4 (WinoBias) BERT +Full Fine-Tune 5e-6 30 16 +Prefix Tune (l= 16) 5e-2 20 16 +Prompt Tune (l= 16) 5e-1 20 16 +Adapter Tune (r= 48) 5e-4 20 16 GPT-2 +Full Fine-Tune 5e-5 20 16 +Prefix Tune (l= 16) 5e-3 200 16 +Prompt Tune (l= 16) 5e-4 100 16 +Adapter Tune (r= 48) 5e-4 50 16 | | | | Table 5: Hyperparameter values adopted during training. "lr" denotes initial learning rate; "epoch" denotes total training epochs; "bsz" denotes batch size. | Debiasing | CrowS-Pairs | StereoSet | WikiText2 | Stereoset LM | | |-----------------------|-----------------------|------------------|----------------|----------------|-------| | Corpus | Stereotype Score | Stereotype Score | Perplexity (↓) | Score (↑) | | | GPT-2 | 56.87 | 62.65 | 29.669 | 91.01 | | | Wikipedia | +Full Fine-Tune | 56.87 | 61.30 | 80.499 | 90.23 | | (single | +Prefix Tune (l= 16) | 55.34 | 62.02 | 31.567 | 91.14 | | sentence) | +Prompt Tune (l= 16) | 52.29 | 60.95 | 30.534 | 91.29 | | +Adapter Tune (r= 48) | 51.15 | 60.50 | 34.910 | 90.80 | | | Wikipedia | +Full Fine-Tune | 56.49 | 61.74 | 56.527 | 90.19 | | (example | +Prefix Tune (l= 16) | 58.40 | 62.67 | 31.935 | 91.22 | | length=1024 | +Prompt Tune (l= 16) | 56.87 | 63.37 | 32.461 | 91.03 | | tokens) | +Adapter Tune (r= 48) | 59.92 | 62.31 | 34.527 | 90.75 | | OpenWebText | +Full Fine-Tune | 55.73 | 62.43 | 38.252 | 90.60 | | (example | +Prefix Tune (l= 16) | 53.44 | 60.94 | 31.592 | 90.31 | | length=1024 | +Prompt Tune (l= 16) | 53.05 | 62.68 | 30.464 | 91.41 | | tokens) | +Adapter Tune (r= 48) | 56.87 | 61.94 | 33.130 | 90.87 | Table 6: Results on gender debiasing and language modeling for GPT-2 using different debiasing corpora. ## C Effect Of The Debiasing Corpus For Gpt-2 For consistency, we adopt the same debiasing corpus (the English Wikipedia) for both BERT and GPT-2 in Section 5.2, where each training example consists of a single sentence (the average sentence length in our corpus is around 107 tokens). However, this setting is different from the original pre-training settings of GPT-2 (Radford et al., 2019) in terms of example length and data source. Therefore, we further investigate debiasing GPT-2 on two other debiasing corpora: for one corpus, we still use Wikipedia but concatenate all the sentences into a long sequence and truncate it into examples of 1024 tokens; for the other corpus, we use 1% of OpenWebText8, which is a public replicate of GPT-2's private pre-training corpus, and truncate it into examples of 1024 tokens. The results are shown in Table 6. 9 Comparing the results on Wikipedia, with single sentence and example length 1024 tokens, in Table 6, we can see that the former is consistently better. This indicates that these methods favor shorter example lengths. We conjecture that this is due to GPT-2's language modeling objective being an average over all the tokens in an example. Therefore, the counterfactual token's signal will be less significant if it is close to the end of a long example. Comparing the last two blocks, we can see that the results from the debiasing methods trained on OpenWebText are superior to those trained on Wikipedia under the same example length of 1024. This indicates that using a similar data source to the original pre-training corpus is beneficial. For full fine-tuning, this can improve perplexity to 38.252. For the parameter-efficient methods, the improvements are more significant on stereotype scores. Given that parameter-efficient methods' model capacity is limited, if we allocate some capacity for adapting to new data sources, it is reasonable for the debiasing performance to be negatively affected. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? section 8 "Limitations" ✓ A2. Did you discuss any potential risks of your work? section 8 "Limitations" ✓ A3. Do the abstract and introduction summarize the paper's main claims? in the abstract and section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5, 6 And Appendix A, B, C ✓ B1. Did you cite the creators of artifacts you used? section 5, 6 and Appendix A, B, C ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? we adhere to each artifact's original licnese and terms. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? section 8 "Limitations" B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. We follow previous work and the data creator's practices. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? section 8 "Limitations", Appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. section 5, 6 and Appendix B ## C ✓ **Did You Run Computational Experiments?** Section 5, 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? section 5, 6 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 5, 6 and Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? section 5, 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? section 5 and Appendix B ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
wang-etal-2023-two
Two-Stage Fine-Tuning for Improved Bias and Variance for Large Pretrained Language Models
https://aclanthology.org/2023.acl-long.877
The bias-variance tradeoff is the idea that learning methods need to balance model complexity with data size to minimize both under-fitting and over-fitting. Recent empirical work and theoretical analysis with over-parameterized neural networks challenges the classic bias-variance trade-off notion suggesting that no such trade-off holds: as the width of the network grows, bias monotonically decreases while variance initially increases followed by a decrease. In this work, we first provide a variance decomposition-based justification criteria to examine whether large pretrained neural models in a fine-tuning setting are generalizable enough to have low bias and variance. We then perform theoretical and empirical analysis using ensemble methods explicitly designed to decrease variance due to optimization. This results in essentially a two-stage fine-tuning algorithm that first ratchets down bias and variance iteratively, and then uses a selected fixed-bias model to further reduce variance due to optimization by ensembling. We also analyze the nature of variance change with the ensemble size in low- and high-resource classes. Empirical results show that this two-stage method obtains strong results on SuperGLUE tasks and clinical information extraction tasks. Code and settings are available: \url{https://github.com/christa60/bias-var-fine-tuning-plms.git}
# Two-Stage Fine-Tuning For Improved Bias And Variance For Large Pretrained Language Models Lijing Wang1∗ Yingya Li2∗ Timothy Miller2 Steven Bethard3 **Guergana Savova**2 1New Jersey Institute of Technology, lijing.wang@njit.edu 2Boston Children's Hospital and Harvard Medical School, {firstname.lastname}@childrens.harvard.edu 3 University of Arizona, bethard@email.arizona.edu ## Abstract The bias-variance tradeoff is the idea that learning methods need to balance model complexity with data size to minimize both under-fitting and over-fitting. Recent empirical work and theoretical analyses with over-parameterized neural networks challenge the classic bias-variance trade-off notion suggesting that no such trade-off holds: as the width of the network grows, bias monotonically decreases while variance initially increases followed by a decrease. In this work, we first provide a variance decomposition-based justification criteria to examine whether large pretrained neural models in a fine-tuning setting are generalizable enough to have low bias and variance. We then perform theoretical and empirical analysis using ensemble methods explicitly designed to decrease variance due to optimization. This results in essentially a two-stage fine-tuning algorithm that first ratchets down bias and variance iteratively, and then uses a selected fixed-bias model to further reduce variance due to optimization by ensembling. We also analyze the nature of variance change with the ensemble size in low- and high-resource classes. Empirical results show that this two-stage method obtains strong results on SuperGLUE tasks and clinical information extraction tasks. Code and settings are available: https://github.com/christa60/ bias-var-fine-tuning-plms.git ## 1 Introduction Transformer-based neural language models, such as Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019), have achieved state-of-the-art (SOTA) performance for a variety of natural language processing (NLP) tasks through the process of *fine-tuning*. Given an NLP task, the process often involves searching for optimal pretrained models and hyperparameters ∗co-first authors while continuing to train the pretrained model on a domain-specific dataset, with the aim of building generalizable and robust fine-tuned models. Based on the classic notion of the bias-variance tradeoff (Geman et al., 1992), where increasing model capacity decreases bias but increases variance (leading to a U-shaped test error curve), large pretrained models (LPMs) should have large variance and overfit domain-specific data which is often sparsely labeled and extremely imbalanced for classification. However, recent empirical work and theoretical analysis of neural networks challenge this classic bias-variance trade-off notion (Neal et al., 2018; Yang et al., 2020). It has been suggested that no such trade-off holds: as the width/depth of the network grows, bias monotonically decreases while variance initially increases followed by a decrease. This is why transformerbased LPMs often achieve better performance compared to less complex models like long short-term memory (LSTM)-based models or feature-rich methods (e.g., support vector machines). In the context of the new bias-variance paradigm, these LPMs are seemingly complex enough to have low bias and variance, however, so far there has been no method to justify whether those SOTA models are generalizable and robust in solving a variety of downstream tasks. In this paper, we (1) show that many SOTA models are very sensitive to data and training randomness, and (2) provide a variance decomposition-based justification method. We also aim to improve model performance, reducing the generalization error of LPMs by reducing their bias and variance. Recent findings in Yang et al. (2020) show that the generalization error mainly comes from bias. Bias can be reduced by modifying the model architecture, e.g., making the neural networks wider and deeper as in transformer-based LPMs. However, pretraining new or larger language models can be challenging due to the technical and computational resource 15746 requirements afforded by only a few institutions - a topic outside the scope of this paper. We focus on the problem of reducing variance of neural models to further boost model performance, given a fixed bias (i.e., a fixed pretrained model). Ensemble methods have been successful in boosting predictive performance of single learners (Ren et al. (2016) presents a comprehensive review) and thus are a promising venue to explore. We propose a two-stage fine-tuning framework that first justifies the generalization status of a selected pretrained model through a concrete metric, and then uses the fixed-bias model to further reduce variance due to optimization through ensembling techniques. To the best of our knowledge, we are the first to provide such a metric and perform theoretical and empirical analysis using ensembles for improved bias and variance for LPMs. We conduct experiments on the SuperGLUE tasks as well as on information extraction from clinical text as it is a domain of high significance and presents data challenges due to the limitations of sharing patientrelated data. We believe our proposal is of interest to any unstructured domain where neural models are used. Specifically we make the following contributions: - We propose a two-stage fine-tuning algorithm for improving bias and variance in the new bias-variance paradigm; - We provide a variance decomposition-based strategy to examine whether LPMs in finetuning settings are generalizable enough to have low bias and variance; - We perform theoretical and empirical analyses using ensembles explicitly designed to decrease variance due to optimization while keeping bias unchanged; - We analyze the nature of variance change due to ensembling in low- and high-resource classes in classification tasks; - We conduct comprehensive experiments and show that the proposed two-stage method obtains strong results on SuperGLUE tasks and two clinical NLP tasks. ## 2 Preliminaries In this section we present the bias-variance decomposition for squared loss in the new paradigm studied in Neal et al. (2018) and Yang et al. (2020). We also present a further decomposition of variance. We denote f as a supervised learning task such that f : *X → Y*, based on a training dataset S of m i.i.d. samples drawn from a joint distribution D of (X , Y). The learning target is to minimize the mean squared error E(f) = E(x,y) -∥ y − f(x) ∥ 2, where (x, y) ∼ D. We consider the predictor fθ as a random variable depending on the random variable S for training dataset and the random variable O for optimization randomness, where θ = A(*S, O*) ∈ R p represents the weights of neural networks produced by the learning algorithm A. p denotes the dimension of θ. The notations and their descriptions are shown in Table 3 in the Appendix. ## 2.1 Bias-Variance Decomposition In the context of classification of C classes, where y ∈ R C is represented as a one-hot vector and fθ(x) ∈ R C denotes an output probability vector by the predictor, the risk R of the learning algorithm can be decomposed into three sources of errors (Geman et al., 1992): $${\mathcal{R}}={\mathcal{E}}_{n o i s e}+{\mathcal{E}}_{b i a s}+{\mathcal{E}}_{v a r}$$ $\left(1\right)$. The first term is the irreducible noise and comes from the intrinsic error of data independent of the predictor. The second is a bias term: $${\mathcal{E}}_{b i a s}=\mathbb{E}_{(x,y)}\left[\parallel\mathbb{E}_{\theta}[f_{\theta}(x)]-\mathbb{E}[y|x]\parallel^{2}\right]$$ 2(2) The third is a variance term $\mathcal{E}_{var}=\mathbb{E}_{x}\text{Var}(f_{\theta}(x))$ $=\mathbb{E}_{x}\left[\mathbb{E}_{\theta}\left[\|\ f_{\theta}(x)-\mathbb{E}_{\theta}[f_{\theta}(x)]\ \|^{2}\right]\right]$ 2 (3) and can be further decomposed into the *variance* due to optimization Varopt and the *variance due to* sampling Var*samp* (Neal et al., 2018): $$\begin{array}{l}\mbox{Var}(f_{\theta}(x))=\mbox{Var}_{opt}+\mbox{Var}_{samp}\\ =\mathbb{E}_{S}[\mbox{Var}_{O}(f_{\theta}(x)|S)]\\ \mbox{}+\mbox{Var}_{S}(\mathbb{E}_{O}[f_{\theta}(x)|S])\end{array}\tag{4}$$ $$\left(2\right)$$ $$(3)$$ where we denote the expectation of the decomposed variance as EvarO = Ex[Varopt] and EvarS = Ex[Var*samp*]. ## 2.2 Theoretical Findings From Variance Decomposition Assuming the learning task is to learn a linear mapping y = θ T x+ϵ where ϵ denotes the noise random ![2_image_0.png](2_image_0.png) variable with E[ϵ] = 0 and Var(ϵ) = σ 2 ϵ , and the input vector is x ∈ R p where p is the input or parameter dimensionality. In this context, the *over-parameterized* setting is when *p > m* and the *under-parameterized* setting is when p ≤ m. The theoretical findings in Neal et al. (2018) prove that Evar grows with p in the under-parameterized case, while in the overparameterized case, the variance does not grow with p but scales with the dimension of the data: $$\mathbb{E}_{x}\mathrm{Var}(f_{\theta}(x))={\left\{\begin{array}{l l}{{\frac{p}{m}}\sigma_{\epsilon}^{2}}&{{\mathrm{for}}}\quad p\leq m}\\ {{\frac{r}{m}}\sigma_{\epsilon}^{2}}&{{\mathrm{for}}}\quad p>m}\end{array}\right.}\quad(5)$$ where r = *rank*(X) and X ∈ R m×p denotes the data matrix whose ith row is the training point x T i . Furthermore, EvarO vanishes as p increases under the linear squared regression assumption and EvarS depends on critical parameter dimensionality d(p). ## 2.3 Empirical Findings From Variance Decomposition Finding-I: as shown in the left panel of Figure 1 1, E*bias* decreases quickly and levels off once sufficiently over-parameterized, while Evar is unimodal contrary to the classic theory. *Finding-II*: in the right panel of Figure 1, EvarO is significant and higher than EvarS in the under-parameterized regime. The two variances cross at a certain p once sufficiently over-parameterized. However, empirical p and m of the over-parameterized setting is not strictly following the theoretical findings in section 2.2. *Finding-III*: in multi-layer models where p is the width and q is the depth, given a fixed p, E*bias* decreases while Evar increases as q increases (Yang et al., 2020). These empirical findings hold for multiple datasets in the original papers. ## 3 Two-Stage Fine-Tuning For Improved Bias And Variance 3.1 Overview The prevailing fine-tuning methods first build or select an LPM and then fine-tune its parameters on the downstream datasets. The SOTA setting for the LPM and its best hyperparameters for fine-tuning are chosen based on evaluation results, such as precision (P), recall (R), F1 and accuracy scores, using grid-search or random-search. Given a fine-tuning task with a fixed training dataset, there is an upper limit to the learning ability of an LPM which is hard to measure by traditional evaluation methods. For a selected LPM, it is usually hard to decide when to stop searching for hyperparameters. Different from the prevailing fine-tuning setting, we propose a two-stage fine-tuning method. We first provide a variance decomposition-based justification method to roughly measure the generalization ability of a pretrained model w.r.t. the upper limit of its learning ability. In Stage-I, the SOTA setting is built by ratcheting down bias and variance in an iterative way. The searching loop stops until an acceptable performance appears or no more improvement is observed. In Stage-II, given the SOTA setting built in Stage-I, the variance due to optimization is reduced by ensembling techniques. Algorithm 1 outlines the procedure of the proposed two-stage fine-tuning method. The details of each stage are presented below. ## 3.2 Stage-I: Justification Of Generalization Ability Based on the preliminaries in Section 2.3, assuming an algorithm A(*S, O*) is fixed, the Ebias, Evar, EvarO , and EvarS changes as p, q, and m change. Taking the crossing point (EvarO = EvarS ) as a dividing point, we define the generalization ability Gp as: $$\mathbb{G}_{p}=\left\{\begin{array}{l l l}{{\mathrm{Phase-I}}}&{{\mathrm{for}}}&{{{\mathcal{E}}_{v a r o}>{\mathcal{E}}_{v a r s}}}\\ {{\mathrm{Phase-II}}}&{{\mathrm{for}}}&{{{\mathcal{E}}_{v a r o}\leq{\mathcal{E}}_{v a r s}}}\end{array}\right.\quad(6)$$ where Phase-I implies large bias and variance leading to large generalization error. Phase-II implies small bias and variance leading to small generalization error which may be good enough w.r.t. the upper limit of the learning ability of A. Justification criteria: After each evaluation, if Gp is in Phase-I, it is necessary to explore more hyperparameter settings or new pretrained models until Gp is in Phase-II or an acceptable performance (e.g., P, R, F1) is achieved given the limited computing resources available in practice. Then fine-tuning can move to Stage-II. If Gp is in Phase-II, the current setting may be generalizable enough for the given learning task so that the searching can be stopped. Stage-II can be applied but is not necessary. We note that similar to *Finding-II* in Section 2.3, Gp cannot be determined directly based on p, q, and m. This breakdown provides a high-level guideline for evaluating the generalization of LPMs in an empirical way. ## 3.3 Stage-Ii: Ensembling To Reduce Variance Ensembles have been proven to be able to improve prediction accuracy and consistency of any learning models in Bonab and Can (2019); Wang et al. (2020). Bagging-based ensembles which are commonly used in various learning tasks have been proven to be able to reduce Evar while keeping E*bias* unchanged. However, no theoretical analysis has been discussed in the context of the variance decomposition paradigm. In Stage-II, we focus on bagging-based ensembles to further improve the model performance by reducing EvarO while keeping EvarS unchanged. Applying Stage-II can either move a model from Phase-I to Phase-II though ensembling, i.e., reducing EvarO until EvarO ≤ EvarS ; or further improve a model's generalization ability from Phase-II by reducing EvarO . We perform empirical analysis in Section 4 and theoretical analysis in Section 5 to investigate why and how bagging-based ensembles can guarantee such improvements in this context. We also analyze the nature of variance change with the ensemble size in low- and high-resource classes in classification tasks. Boosting ensembles have a more complex behaviour thus are out of scope for this paper. ## 4 Experiments 4.1 Data And Models We conduct experiments on the SuperGLUE tasks and two major clinical information extraction datasets. The data processing and statistics and hyperparameter settings are shown in Appendix Table 4 and Table 5. Their brief descriptions are: - **SuperGLUE** (Wang et al., 2019) is a benchmark dataset designed for a more rigorous test of general-purpose language understanding Algorithm 1: Pseudocode of the two-stage fine-tuning method. Input: S: training dataset; A: optimization algorithm; N: number of ensemble single learners; F: majority voting for ensemble; Output: ζ: ensemble learner /* Stage-I */ 1 Gp ← Phase-I 2 E ∗ ← 0 3 **while** (Gp = *Phase-I) or (*E ∗*is not acceptable)* do 4 Choose a pretrained model f and an initialization seed O. 5 θ ← A(*S, O, f*) 6 Compute EvarO and EvarS by Equation 4. 7 Gp ← Phase-I if EvarO > EvarS otherwise Phase-II 8 E ∗ ← score(fθ) // P,R,or F1. 9 ζ := f /* Stage-II */ 10 if (Gp ̸= *Phase-II) or (*E ∗*is not satisfied)* **then** 11 ξ ← ∅ // The set of ensemble components. 12 N is set to be ≥ EvarO /EvarS 13 **while** len(ξ) < N do 14 Resample training and validation sets Si from S. ![3_image_0.png](3_image_0.png) ∗on Si using snapshot learning and ∗ θ1 , . . . , f ∗ θN 16 Select l trained learners from the saved ones by applying pruning algorithm (Wang systems after GLUE (Wang et al., 2018). We replicate the scores on dev set2, and select six tasks (BoolQ, CB, RTE, MultiRC, WiC, and COPA). The selected tasks cover question answering (QA), natural language inference, and word sense disambiguation. The SOTA setting is based on the setting in Liu et al. (2019) using roberta-large as the pretrained model. - **THYME** (Temporal Histories of Your Medical Events) corpus (Styler IV et al., 2014) for temporal relation extraction, consisting of 594 de-identified clinical and pathology notes on colon cancer patients. We use the THYME+ version of the corpus (Wright-Bettner et al., 2020). There are 10 classes of extremely imbalanced class distribution. The SOTA setting is based on the setting in Lin et al. (2021) using PubmedBERTbase-MimicBig-EntityBERT as the pretrained model. 2https://github.com/pytorch/fairseq/ tree/master/examples/roberta - **2012 i2b2 Temporal Relations** (Sun et al., 2013) consists of 394 training de-identified reports, 477 test de-identified reports, and 877 unannotated reports. There are 3 classes of slightly imbalanced class distribution. The SOTA setting is based on the setting in Haq et al. (2021) using BioBERT-base-cased-v1.1 as the pretrained model. ## 4.2 Metrics And Settings We use an NVIDIA Titan RTX GPU cluster of 7 nodes for fine-tuning experiments through HuggingFace's Transformer API (Wolf et al., 2020) version 4.13.0. We leverage the run_glue.py pytorch version as our fine-tuning script. Unless specified, default settings are used in our experiments. Due to differences in the fine-tuning script and some missing settings not provided by the original authors, we were unable to reproduce the exact SOTA scores but we achieved scores close to the SOTA ones. Our implementation are denoted as replicated-SOTA (RSOTA). We compare our implementation and reference SOTA scores in Appendix Table 7. We use the RSOTA settings as the starting point to conduct the experiments. We use E**bias**, Evar, EvarO, and EvarS for Stage-I, and adopt classic evaluation metrics (P, R, and F1) for Stage-I and Stage-II. For the purposes of consistency, we report P, R, and F1 of SuperGLUE tasks for consistency with the reported results for the THYME and i2b2 tasks in the experiment results. Accuracy scores are reported in Appendix Table 7. ## 4.3 Experimental Design There are two stages in the proposed method. For Stage-I, we use 5 random seeds for the randomness over data samples S and 5 for the randomness over initialization O, resulting in a total of 25 fine-tuned models. The averages over data samples are performed by taking the training set S and creating 5 bootstrap replicate training/validation splits with the same class distribution. The bias expectation in Equation 2 is estimated as the averages over both S and O. The variance decomposition is estimated based on Equation 4. More specifically, EvarO is estimated as the averages over S of the variance over O, and EvarS is estimated as the variance over S of the averages over O. Furthermore, we also apply RoBERTa-base-uncased (RBU) as the pretrained model for each fine-tuning task using the RSOTA setting except for pretrained models. Their | BoolQ-RBU | 162 | 5.4 | 3.9 | 1.6 | Phase-I 77.8 | | |---------------|-------|-------|-------|-------|----------------|------| | BoolQ-RSOTA | 142 | 9.9 | 6.2 | 3.7 | Phase-I 84.3 | | | CB-RBU | 175 | 0.2 | 0.1 | 0.1 | - | 49.2 | | CB-RSOTA | 149 | 1.7 | 1.5 | 0.2 | Phase-I 62.0 | | | RTE-RBU | 176 | 11.4 | 8.0 | 3.4 | Phase-I 74.0 | | | RTE-RSOTA | 153 | 13.2 | 11.2 | 2.1 | Phase-I 83.5 | | | MultiRC-RBU | 164 | 5.9 | 4.6 | 1.3 | Phase-I 78.5 | | | MultiRC-RSOTA | 178 | 13.3 | 10.5 | 2.8 | Phase-I 74.7 | | | WiC-RBU | 212 | 5.5 | 4.2 | 1.3 | Phase-I 63.6 | | | WiC-RSOTA | 199 | 12.7 | 10.1 | 2.5 | Phase-I 70.3 | | | COPA-RBU | 250 | 0.0 | 0.0 | 0.0 | - | 38.0 | | COPA-RSOTA | 185 | 4.3 | 3.9 | 0.5 | Phase-I 81.2 | | | THYME-RBU | 81 | 0.17 | 0.14 | 0.02 | Phase-I 57.0 | | | THYME-RSOTA | 80 | 0.09 | 0.07 | 0.02 | Phase-I 61.8 | | | i2b2-RBU | 150 | 0.76 | 0.62 | 0.14 | Phase-I 76.8 | | | i2b2-RSOTA | 152 | 0.73 | 0.58 | 0.14 | Phase-I 78.1 | | descriptions are shown in Appendix Table 6. To replicate SOTA scores and obtain RSOTA settings for each task, we conduct hyperparameter searching in an iterative way. This process is considered as the experiment of Stage-I. For Stage-II, any bagging-based ensemble algorithms are feasible. In our preliminary experiments (Wang et al., 2022), we have shown that the dynamic snapshot ensemble algorithm (Wang et al., 2020), which we call ENS in this paper, works better than vanilla bagging ensembles. ENS is a bagging-based ensemble explicitly designed to reduce variance over optimization-related hyperparameters in one framework, with the aim of building computationally efficient strategies to boost model performance on top of any given setting with a guarantee (i.e., simple bagging ensemble cannot guarantee an improvement). In our implementation, we employ ENS. The ensemble size is 5 and majority voting is used to generate ensemble predictions. To explore the ensemble impact on low- and high-resource classes, we compute and compare performance improvements of each class from the extremely imbalanced THYME dataset. To investigate the impact of ensemble size on improving the model performance of imbalanced classes, we also evaluate performance of individual classes using ENS of size 1 to 10. We compute 95% confidence intervals for these estimates using bootstrapping over 5 samples. More details are in Appendix B. | BoolQ | CB | RTE | MultiRC | | | | | | | | | | |-------------------------|---------|---------|-------------------------------------------------------------------|-----------------------------------------|--------|--------|-------|-------|-------|--------|-------|--------| | Method | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | | RSOTA | 84.58 | 84.10 | 84.34 | 60.76 | 63.80 | 62.06 | 83.94 | 83.40 | 83.54 | 73.02 | 77.54 | 74.68 | | (±0.34) (±0.32) (±0.30) | (±19.0) | (±15.0) | (±16.9) (±0.89) (±0.94) (±0.94) (±21.8) (±13.5) (±18.76) | | | | | | | | | | | ENS | 84.96 | 84.38 | 84.74 | 92.68 | 93.14 | 92.66 | 86.00 | 85.38 | 85.48 | 82.14 | 82.30 | 82.16 | | (±0.34) (±0.58) (±0.47) | (±1.3) | (±2.8) | (±1.2) | (±0.79) (±1.16) (±1.10) (±1.41) (±1.41) | (±1.3) | | | | | | | | | IPV | 0.45% | 0.33% | 0.47% | 52.53% | 45.99% | 49.31% | 2.45% | 2.37% | 2.32% | 12.49% | 6.14% | 10.02% | | WiC | COPA | THYME | i2b2 | | | | | | | | | | | Method | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | | RSOTA | 72.06 | 70.74 | 70.30 | 82.54 | 81.72 | 81.24 | 66.6 | 58.2 | 61.8 | 78.3 | 76.9 | 78.1 | | (±0.12) | (±1.7) | (±1.9) | (±11.40) (±12.10) (±12.4) (±1.02) (±1.47) (±1.25) (±1.56) (±0.98) | (±1.24) | | | | | | | | | | ENS | 72.18 | 71.30 | 70.98 | 93.84 | 93.80 | 93.60 | 72.9 | 60.1 | 65.9 | 80.5 | 78.3 | 79.3 | | (±0.17) (±0.36) (±0.42) | (±0.43) | (±0.48) | (±0.48) (±0.86) (±1.16) (±0.95) (±1.23) (±0.79) | (±0.97) | | | | | | | | | | IPV | 0.17% | 0.79% | 0.97% | 13.69% | 14.78% | 15.21% | 9.46% | 3.26% | 6.63% | 2.81% | 1.82% | 1.54% | ## 4.4 Justification Results Table 1 shows Ebias, Evar, EvarO , and EvarS computed on the datasets with different pretrained models. It is noted that in our experiment, we are not applying the algorithm for Stage-I to ratchet down bias and variance in an iterative manner. The goal of this table is to analyze both RBU and RSOTA models for the bias and variance trends discussed in Section 2. Interestingly, we observe that EvarO > EvarS for all datasets and models except for CB-RBU and COPA-RBU where models are not well trained given that F1 score is around 0.5 indicating random guess. This implies that the vast majority of the SOTA models we experimented with are in Phase-I (i.e., not generalizable enough for their tasks), which is contrary to our intuition that these transformer-based models are complex enough given the moderate sized labeled datasets. It is also observable that E*bias* is much larger than Evar indicating that the model performance is dominated more by bias than by variance. For SuperGLUE tasks, with the same hyperparameter setting (i.e., A(*S, O*)), the RSOTA models (i.e., larger p and q than RBU models) achieve smaller E*bias* but larger Evar than RBU models except for MultiRC. The change of Evar mainly comes from the change of EvarO . As *Finding-I* in Section 2.3 that E*bias* decreases while Evar is unimodal, our observation implies that RBU models are before peak and long way toward Phase-II while RSOTA models get closer to Phase-II than RBU models. The exception is the result on the MultiRC dataset which is QA corpus listing a passage which consists of text, a question, and multiple answers to that question each with a true/false label. Although MultiRC represents a variety of question types (e.g. yes/no, factual, etc.), the passages are not annotated for question type. As explained in Appendix section B.1., we represented the QA pairs within each passage as text, question, answer, label instances and sampled from these instances. Using this instance-based sampling likely leads to samples not stratified by question types, therefore not necessarily representative. This probably explains the better mean F1 for MultiRC-RBU as compared to the mean F1 for MultiRC-RSOTA in Table 1 (different samples are created for each run). However, when we drill down to the best model F1 for MultiRC-RBU and MultiRC-RSOTA, the results are 78.9 and 85.4 F1-scores respectively, which supports the trend in Table 1. For the SuperGLUE tasks, the bias and variance of RBU models and RSOTA models are shown together to illustrate a trend (like Fig. 1) as p and q are the only variables. However for THYME and i2b2 tasks, similar trends could not be interpreted since the RSOTA models are pretrained with domain specific corpora while the RBU models are pretrained with general corpora. This implies that for fine-tuning tasks such as temporal relation extraction, other factors (e.g., domain corpora used to pretrain models) may have larger impact than the model complexity. Our observations are consistent with *Finding-II* that empirical p and m setting is not strictly following theoretical findings which are under linear squared regression assumption. This also indicates that p, q, and m cannot be used to measure Gp empirically. On the other hand, our variance decomposition-based method does not rely on p, q, and m, therefore it provides the basis for a more generalized justification method. ## 4.5 Ensemble Results Table 2 presents P, R, and F1 scores of RSOTA and ENS methods on all datasets. Similar to prior studies (e.g. Zhang et al., 2020; Du et al., 2022), results for the SuperGLUE tasks are reported on the dev set. The accuracy scores for each task are presented in the Appendix Table 7. Compared to the RSOTA setting, the ENS method boosts performance on all datasets, with the largest gains of 49.3% and 15.2% relative F1 improvements on the low-resource CB and COPA datasets respectively. In Section 5, we analyze why ensembles work from the variance decomposition perspective, which provides insights into how ensembles help reduce EvarO and lead to better prediction accuracy. ## 4.6 Ensemble Impact On Low- And High-Resource Classes We further investigate the improvements on lowresource datasets (e.g. CB and COPA). To eliminate all interference from p, q, A(*S, O*), pretrained models and only keep m as the variable, we tease apart the results of the extremely unbalanced THYME dataset and analyze the performance on each class. Its most frequent classes (i.e., *high-resource* classes) are CONTAINS (2895), OVERLAP (2359), and BEFORE (2241); and the least frequent classes (i.e., *lowresource* classes) are NOTED-ON (140), BEGINSON (160), and ENDS-ON (244). The initial F1 scores are: CONTAINS-0.776, OVERLAP0.539, and BEFORE-0.469; NOTED-ON-0.618, BEGINS-ON-0.608, and ENDS-ON-0.695. In Figure 2, we show absolute improvement and improvement percentage of F1 with various ensemble size N (compared with single learners i.e., N = 1). These values are computed based on the mean with 95% confidence interval over 5 random samples for each class and each ensemble size. It is observable that given a fixed N, the performance improvements by F1 scores on the low-resource classes - NOTED-ON (brown), BEGINS-ON (red), and ENDS-ON (orange) - are larger than the ones of high-resource classes. The difference becomes larger as N increases. The scales of improvement are not affected by the initial results; i.e., the larger improvements on low-resource classes are not due to lower initial F1 scores. This is an interesting observation and may introduce a new solution for improving performance of imbalanced datasets. More similar results on P and R are shown in in Appendix Fig. 3. We explore theoretical insights into these observations in Section 5. ## 5 Discussion And Theoretical Analysis 5.1 Basic Statistics Let X1, X2, · · · , XN be a random sample from a population with mean µ and variance σ 2and X¯ =1N PN i=1 Xi. Then the following two items hold. $$\begin{array}{l}{{\mathrm{{\bf~a:}\mathbb{E}[\bar{X}]=\mathbb{E}[X_{i}]=\mu}}}\\ {{\mathrm{{\bf~b:}Var}(\bar{X})=\frac{1}{N}\mathrm{{Var}}(X_{i})=\frac{1}{N}\sigma^{2}}}\end{array}$$ ## 5.2 Ensemble In Bias-Variance Decomposition We work in the context of bagging-based ensembles, assuming the ensemble predictor ¯f(x) is ¯f(x) = 1N PN i=1 fi(x) is the averaging of N single learners trained with different samples of S and O. Based on the basic statistics, the E*bias* of ¯f(x) in Equation 2 is unchanged while the Evar of ¯f(x) in Equation 3 decreases by 1N . Furthermore, we have: $$\mathrm{d}{\boldsymbol{\cdot}}$$ and: $$\begin{array}{c}{{{\mathcal{E}}_{v a r o}=\mathbb{E}_{x}\left[\mathbb{E}_{S}[\mathrm{Var}_{O}({\bar{f}}_{\theta}(x)|S)]\right]}}\\ {{=\frac{1}{N}\mathbb{E}_{x}\left[\mathbb{E}_{S}[\mathrm{Var}_{O}(f_{\theta}(x)|S)]\right]}}\end{array}\tag{7}$$ and: $$\begin{split}\mathcal{E}_{vars}&=\mathbb{E}_{x}\left[\text{Var}_{S}(\mathbb{E}_{O}[\tilde{f}_{\theta}(x)|S])\right]\\ &=\mathbb{E}_{x}\left[\text{Var}_{S}(\mathbb{E}_{O}[f_{\theta}(x)|S])\right]\end{split}\tag{8}$$ which indicates that $\mathcal{E}_{var_{O}}$ reduces while $\mathcal{E}_{vars}$ which indicates that EvarO reduces while EvarS keeps unchanged as the ensemble size N increases. EvarO vanishes when N is sufficiently large. The improvement of the variance by ensembling comes from the reduction of the variance due to optimization. As mentioned in Section 2.2 that under linear squared regression assumption, EvarO vanishes as p increases and EvarS depends on critical parameter dimensionality d(p). In this paper, we also proved that EvarO vanishes as N increases. Given that pretraining LPMs with larger p and/or q is extremely difficult, increasing N is a much better way for improving performance of LPMs. This also proves the effectiveness of Stage-II in our proposed two-stage fine-tuning method. To ensure that a fine-tuned LPM can move from Phase-I to PhaseII, the ensemble size N in Stage-II should be set to a value that is larger or equal to E*varO* E*varS* . (a) F1 absolute improvement (b) F1 percentage improvement ![7_image_0.png](7_image_0.png) ## 5.3 Ensemble In Low- And High-Resource Classes One interesting experimental observation is that the improvement on low-resource classes is larger than that on high-resource classes. To further investigate the impact of the ensemble learners on the imbalanced datasets, we make the following analysis to Equation 5. $$\mathbb{E}_{x}\text{Var}(\bar{f}_{\theta}(x))=\left\{\begin{array}{l l}{{\frac{1}{N}\cdot\frac{p}{m}\sigma_{\epsilon}^{2}}}&{{\text{for}}}\quad p\leq m}\\ {{\frac{1}{N}\cdot\frac{r}{m}\sigma_{\epsilon}^{2}}}&{{\text{for}}}\quad p>m}\end{array}\right.\tag{9}$$ where the impact of ensembles is represented as a function of ensemble size N, denoted as π(N) = 1 N∈ [0, 1] that a smaller π(N) means a larger performance improvement. Given a fixed p, in overparameterized cases (*p > m*) where m is small, since the samples in S are i.i.d, thus X ∈ R m×p is full rank so that r = rank(X) = m. The variance becomes 1N· σ 2 ϵ which does not change with m. The impact of ensemble solely depends on N thus is significant. On the other hand, in underparameterized cases (p ≤ m) where m is large, the variance is negligible as m becomes much larger than p, i.e., lim m→∞ 1 N· p m σ 2 ϵ = 0, so that the variance becomes 0 regardless of π(N). This implies that the impact of ensemble can be ignored as m increases. In general, given a fixed p, for both cases the impact of ensemble is significant when m is small and insignificant as m becomes very large. These theoretical findings explain why we observe larger performance improvement on lowresource than on high-resource classes using ensembles. Similar to the discussion in Section 4.4, empirically, it is hard to define low-resource and high-resource classes using m and p because our analysis is based on least squared linear regression assumption which is simplified compared to conditions in real scenarios. Besides p and m, there are ![7_image_1.png](7_image_1.png) other factors that may have implicit but significant impact on model performance. This also explains why the improvement does not strictly follow the sorting of classes by their sample size. However, our findings show another advantage of using ensembles. The empirical impact of ensemble size on imbalanced classes has been examined and shown in Section 4.6 and Appendix C, which is consistent with the theoretical findings discussed in this section. ## 6 Related Works In a fine-tuning setting, searching for an optimal setting of pretrained models and hyperparameters is challenging due to the high dimensionality of the search space, as well as the infinite values for each dimension. In previous works of fine-tuning tasks (Lee et al., 2020; Alsentzer et al., 2019; Beltagy et al., 2019; Lin et al., 2021), the SOTA models are single learners carefully selected and fine-tuned based on evaluation results, such as P, R, and F1 scores, using grid-search or random-search. To improve the stability of the pre-trained transformerbased language models, Mosbach et al. (2021) suggests using small learning rates with bias correction and increasing the number of iterations to avoid vanishing gradients. Prior efforts also highlight the comparable effect of weight initialization and training data order to the variance of model performance (Dodge et al., 2020). Ensemble methods have been successful in boosting the predictive performance of single learners (Ren et al., 2016 present a comprehensive review; also see Wang et al., 2003; Cire¸sAn et al., 2012; Xie et al., 2013; Huang et al., 2017) as well as in estimating predictive uncertainty (Gal and Ghahramani, 2016; Lakshminarayanan et al., 2017; Snoek et al., 2019). Among these studies, Bonab and Can (2019) and Wang et al. (2020) theoretically prove that ensembles can perform better than the average performance of component learners for prediction accuracy and consistency of learning models. Wang et al. (2022) empirically evaluates the application of ensemble methods to fine-tuned transformer-based models for clinical NLP tasks. The findings demonstrate that ensemble methods improve model performance, particularly when employing dynamic snapshot ensembling. Although it is common knowledge that ensembles can reduce variance thus reducing the generalization error, no previous work has discussed or measured this in the context of variance decomposition. Furthermore, no previous work has investigated the impact of ensembles on imbalanced datasets. ## 7 Conclusion Different from the prevailing fine-tuning settings, we propose a two-stage fine-tuning method to improve the generalization ability of a fine-tuned LPM. We provide a variance decomposition-based justification method to empirically measure the generalization ability of the LPM w.r.t. the upper limit of its learning ability. In Stage-I, the RSOTA setting is built by ratcheting down bias and variance in an iterative way. In Stage-II, given the RSOTA setting, the fine-tuned LPM is guaranteed to be further generalized through ensembling techniques by reducing the variance due to optimization. The proposed justification method provides a concrete metric to track this process. We provide empirical evidence by conducting experiments on the SuperGLUE tasks and two clinical datasets. Furthermore, we perform theoretical analysis on how ensembles improve variance due to optimization. We investigate the nature of variance change for the ensemble size in lowand high-resource classes in classification tasks. Different from previous theoretical analyses using only model complexity and data size which depends on least squared regression, our variance decomposition-based justification method in StageI does not rely on specific factors thus leading to a more generalizable measurement. The ENS further boosts performance without risk of computational cost and overfitting. Our analysis on imbalanced data reveals another advantage of ensemble algorithms in improving model performance on lowresource classes. As future work, we are interested in (1) rigorously proving variance decomposition-based justification criteria, (2) quantifying low- and highresource classes with specific features that interplay with ensemble size. If properly used, we believe the theoretical and empirical findings discussed in this paper can guide practitioners to fine-tune more generalizable models. ## Limitations As we stated under future work, one of the limitations is the variance decomposition-based proof. Our work is based on simplified settings, i.e., linear squared regression assumption. Post-ensemble variance is not evaluated due to the nature of the ENS ensemble algorithm. Extended experiments using vanilla bagging ensemble would enable analysis of post-ensemble variance. Further investigation into refining the two stages would help understand the performance of LPMs, e.g. those that are in Phase-I but before the peak in Figure 1. Our results for MultiRC are based on the instance sampling, however a better sampling technique should be based on stratified sampling based on the ratio of the question types in the MultiRC set. However, to achieve this, the MultiRC set needs to be annotated for question types, which is currently missing. Sampling techniques by themselves can become a research topic so that a further decrease of variance due to sampling can be achieved. Although we list these items as limitations, they are also topics for future research within the greater theme of understanding the new bias-prevalence paradigm for LPMs. ## Acknowledgements The authors would like to thank the anonymous reviewers for feedback that improved the paper, the US National Institutes of Health (NIH) and the New Jersey Institute of Technology (NJIT) for providing funding. This research is supported by NJIT FY24 Faculty Seed Grant, NIH Grants U24CA248010, R01LM013486 and R01GM114355. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NJIT or NIH. ## References Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In *Proceedings of the 2nd* Clinical Natural Language Processing Workshop, pages 72–78, Minneapolis, Minnesota, USA. Association for Computational Linguistics. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computational Linguistics. Hamed Bonab and Fazli Can. 2019. Less is more: a comprehensive framework for the number of components of ensemble classifiers. IEEE Transactions on neural networks and learning systems. Dan Cire¸sAn, Ueli Meier, Jonathan Masci, and Jürgen Schmidhuber. 2012. Multi-column deep neural network for traffic sign classification. *Neural networks*, 32:333–338. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. *arXiv* preprint arXiv:2002.06305. Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335. Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *international conference* on machine learning, pages 1050–1059. Stuart Geman, Elie Bienenstock, and René Doursat. 1992. Neural networks and the bias/variance dilemma. *Neural computation*, 4(1):1–58. Hasham Ul Haq, Veysel Kocaman, and David Talby. 2021. Deeper clinical document understanding using relation extraction. arXiv preprint arXiv:2112.13259. Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. 2017. Snapshot ensembles: Train 1, get m for free. arXiv preprint arXiv:1704.00109. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in neural information processing systems, pages 6402–6413. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240. Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, and Guergana Savova. 2021. EntityBERT: Entity-centric masking strategy for model pretraining for the clinical domain. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 191–201, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. In *9th International Conference on Learning* Representations, CONF. Brady Neal, Sarthak Mittal, Aristide Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, and Ioannis Mitliagkas. 2018. A modern take on the biasvariance tradeoff in neural networks. arXiv preprint arXiv:1810.08591. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. 2011. Reading digits in natural images with unsupervised feature learning. Ye Ren, Le Zhang, and Ponnuthurai N Suganthan. 2016. Ensemble classification and regression-recent developments, applications and future directions. *IEEE* Computational Intelligence Magazine, 11(1):41–53. Jasper Snoek, Yaniv Ovadia, Emily Fertig, Balaji Lakshminarayanan, Sebastian Nowozin, D Sculley, Joshua Dillon, Jie Ren, and Zachary Nado. 2019. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In *Advances in Neural Information Processing Systems*, pages 13969– 13980. William F. Styler IV, Steven Bethard, Sean Finan, Martha Palmer, Sameer Pradhan, Piet C de Groen, Brad Erickson, Timothy Miller, Chen Lin, Guergana Savova, and James Pustejovsky. 2014. Temporal annotation in the clinical domain. *Transactions of the* Association for Computational Linguistics, 2:143– 154. Weiyi Sun, Anna Rumshisky, and Ozlem Uzuner. 2013. Evaluating temporal relations in clinical text: 2012 i2b2 challenge. Journal of the American Medical Informatics Association, 20(5):806–813. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of* the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355. Haixun Wang, Wei Fan, Philip S Yu, and Jiawei Han. 2003. Mining concept-drifting data streams using ensemble classifiers. In *Proceedings of the ninth ACM* SIGKDD international conference on Knowledge discovery and data mining, pages 226–235. AcM. Lijing Wang, Dipanjan Ghosh, Maria Gonzalez Diaz, Ahmed Farahat, Mahbubul Alam, Chetan Gupta, Jiangzhuo Chen, and Madhav Marathe. 2020. Wisdom of the ensemble: Improving consistency of deep learning models. Advances in Neural Information Processing Systems, 33:19750–19761. Lijing Wang, Timothy Miller, Steven Bethard, and Guergana Savova. 2022. Ensemble-based fine-tuning strategy for temporal relation extraction from the clinical narrative. In *Proceedings of the 4th Clinical Natural Language Processing Workshop*, pages 103–108, Seattle, WA. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Kristin Wright-Bettner, Chen Lin, Timothy Miller, Steven Bethard, Dmitriy Dligach, Martha Palmer, James H. Martin, and Guergana Savova. 2020. Defining and learning refined temporal relations in the clinical narrative. In *Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis*, pages 104–114, Online. Association for Computational Linguistics. Notation Description X , Y The input and output sets of a learning task. D The unknown joint distribution of (X , Y). (*x, y*) The pair drawn from D; x ∈ X , y ∈ Y. S A finite training dataset of m i.i.d. samples from D. m The sample size of S. C The number of classes in a training dataset. p The number of hidden units in a neural network layer. q The number of hidden layers in a neural network. fθ The predictors that are parameterized by the weights θ ∈ R pof neural networks. fθ(x) The output prediction given x. O The random variable for optimization randomness. A The learning algorithm that produces θ = A(*S, O*). E[y|x] The expectation of y given x. Rm The performance of a learning algorithm using training sets of size m. E*noise* The expected noise of the output predictions. E*bias* The expected bias of the output predictions. Evar The expected variance of the output predictions. Varopt The variance due to optimization. Var*samp* The variance due to sampling. EvarO The expected variance due to optimization. EvarS The expected variance due to sampling. N The ensemble size. Gp Generalization ability of a learning algorithm. Table 3: Notations and their descriptions. Jingjing Xie, Bing Xu, and Zhang Chuang. 2013. Horizontal and vertical ensemble with deep representation for classification. *arXiv preprint arXiv:1306.2759*. Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, and Yi Ma. 2020. Rethinking bias-variance trade-off for generalization of neural networks. In *International Conference on Machine Learning*, pages 10767–10777. PMLR. Yian Zhang, Alex Warstadt, Haau-Sing Li, and Samuel R Bowman. 2020. When do you need billions of words of pretraining data? arXiv preprint arXiv:2011.04946. ## A Notations Table 3 shows major notations and their descriptions. ## B Experiments B.1 Data Description Table 4 shows the statistics of all datasets used in our experiments. Each downloaded SuperGLUE dataset includes train, val, and test sets in json format.2 The downloaded test set does not have goldstandard labels thus is not used in our experiment. 2https://super.gluebenchmark.com/ tasks | BoolQ CB RTE MultiRC WiC COPA THYME i2b2 | | | | | | | | | |--------------------------------------------|------|----------|------|------|------|--------|--------|------| | Classes | 2 | 3 | 2 | 2 | 2 | 2 | 10 | 3 | | Train samples | 7541 | 200 2000 | 4080 | 4800 | 320 | 423612 | 9909 | | | Dev samples | 1886 | 50 | 500 | 1020 | 1200 | 80 | 235117 | 2478 | | Val samples | 3270 | 57 | 278 | 953 | 638 | 100 | 208892 | 9630 | Table 4: Data statistics. | BoolQ | CB | RTE | MultiRC | WiC | COPA THYME | i2b2 | | | |-----------------------------|----------------------|--------|-----------|--------|--------------|--------|------|------| | Random seed | 62 | 52 | 72 | 72 | 42 | 72 | 42 | 42 | | Batch size | 10 | 10 | 10 | 10 | 10 | 10 | 32 | 32 | | Epoch | 8 | 7 | 10 | 6 | 8 | 8 | 3 | 3 | | Learning rate | 1e-5 | 2e-5 | 2e-5 | 2e-5 | 1e-5 | 1e-5 | 4e-5 | 4e-5 | | Learning rate schedule type | linear linear linear | linear | linear | linear | linear | linear | | | | Max sequence length | 512 | 512 | 512 | 512 | 512 | 512 | 100 | 128 | | Gradient accumulation steps | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | We split the train set into train (80%) and dev (20%) sets, and evaluate the model performance on val set. The i2b2 does not have a development (dev) set in the released data and we split the train set into train (80%) and dev (20%) sets. Random seed 42 is used to replicate the sampling process. For MultiRC, because each question can have more than one correct answer, we sampled the instances based on individual question-answer options in the train set for training and validation in our experiment. ## B.2 Hyperparameter Settings Table 5 shows the details of hyperparameter settings. Unless otherwise specified, we use default values of the hyperparameters in Huggingface. We also summarize pretrained models used in our experiments in Table 6. ## B.3 Replicated Sota Scores To ensure that our experiments on the SuperGLUE tasks are reproducible, we followed the settings and replicated the SOTA accuracy scores reported in: https://github.com/ facebookresearch/fairseq/tree/ main/examples/roberta. We could not replicate the representation (special token extractions) and the model settings (unpublished pretrained model) for WSC task, thus it is omitted in our paper. In our experiments, we report the classic metrics of precision/recall/F1 for consistency with the reported results for the THYME and i2b2 tasks. Our accuracy scores for the SuperGlue tasks (shown in Table 7) are directly comparable and are consistent with those in Table 2 in the main paper. ## B.4 Implementation Details Of Ens ENS allows a pretrained model to be fine-tuned multiple times (i.e., multiple training runs) sequentially with different random seeds and data shuffling of train/validation splits. It uses a cyclic annealing schedule and cyclic snapshot strategy to periodically save the best model during each training run. Different from the simple bagging ensemble, after each training run, a dynamic pruning algorithm is applied to select a few single learners from the saved ones which can lead to better performance of the ensemble learner with theoretical guarantees. The sequential training runs stop when the accumulated number of selected single learners reaches a preset ensemble size. The total amount of training runs is a dynamic value rather than a preset value, which is determined by the snapshot strategy and pruning factor during the sequential training. In our experiments, we implemented ENS on the top of RSOTA setting. The ensemble size is set as 5 and majority voting is used to generate ensemble predictions. We reuse RSOTA settings except that we set *cosine with restarts* as the learning rate scheduler and set the learning rate to restart every k epochs which, based on the RSOTA setting, allows the model to converge to a reasonable state before each restart. The total number of epochs for each training run is 5 × k and we save the top 4 models for pruning based on validation accuracy. The random seeds for initialization and data shuffling are [42, 52, 62, 72, 82]. The logic behind the above settings is to retain the benefits from RSOTA fine-tuning settings as much as possible. Code and settings to reproduce the results are avail- | Model name | Model Details | |--------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------| | RoBERTa-base | 12-layer, 768-hidden, 12-heads, 125M parameters. | | RoBERTa-large | 24-layer, 1024-hidden, 16-heads, 355M parameters | | PubmedBERTbase-MimicBig-EntityBERT 12-layer, 768-hidden, 12-heads, 110M parameters. BioBERT-base-cased-v1.1 12-layer, 768-hidden, 12-heads, 110M parameters. | | | BoolQ | CB | RTE | MultiRC | WiC | COPA | | |------------|-----------------------------------------|----------|---------------------|-------|--------|------| | Reference | 86.9 | 98.2 | 89.5 | 85.7 | 75.6 | 94.0 | | Replicated | 86.3 | 98.2 | 87.4 | 84.7 | 72.1 | 93.4 | | RSOTA | 85.4±0.31 81.1±9.9 83.7±0.91 79.1±10.72 | 70.7±1.7 | 81.6±11.9 | | | | | ENS | 85.7±0.41 92.2±1.4 85.6±1.05 | 82.5±1.3 | 71.3±0.36 93.6±0.48 | | | | Table 6: Details of pretrained models. able at https://github.com/christa60/ bias-var-fine-tuning.git. ## B.5 **Experimental Design Of Bagging Ensemble** For Investigating Various Ensemble Sizes To analyze the nature of the variance change with the ensemble size in low-resource classes (NOTED-ON, BEGINS-ON, END-ON relations in the THYME corpus) and high-resource (CONTAINS, OVERLAP, BEFORE relations in the THYME corpus) classes, we vary the ensemble size from 1 to 10 and then compute the P, R, and F1 scores for each class on THYME data. We create 10 bootstrap replicate training sets by resampling training and dev datasets with the same size and class distribution. The random seeds for resampling are randomly chosen and then fixed. The various splits are denoted as ['split_r42', 'split_r52', 'split_r62', 'split_r72', 'split_r82', 'split_r92', 'split_r102', 'split_r112', 'split_r122', 'split_r132']. Given a random seed of initialization, we train N fine-tuned single learners. To compute 95% confidence intervals for these estimates, we use 5 random seeds of initialization, resulting in 5 ensemble models for each ensemble size. We vary the ensemble size N from 1 to 10 and have 100 ensemble models in total. ## C Section **4.6: Additional Results** We show the absolute and percentage improvement (compared with single learners i.e., N = 1) change over the ensemble size N using P and R in Figure 3. Together with Figure 2, the *major* observations are: (a) The absolute and percentage improvements of P, R, and F1 increase as N increases. (b) The precision improvements are more pronounced than those of recall thus contributing the major part of the F1 improvements. This phenomenon is more pronounced for high-resource classes. (c) Given a fixed N, the improvements on low-resource classes are larger than those on high-resource classes across the three metrics. The difference becomes larger as N increases. Discussion: Our experimental results are consistent with our theoretical findings in Section 5 that model performance keeps improving because variance due to optimization decreases as ensemble size increases. Furthermore, the impact of ensemble is more pronounced on low-resource classes than on high-resource classes. ![13_image_2.png](13_image_2.png) ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 and Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 and Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.4 and 4.5 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
ramesh-etal-2023-comparative
A Comparative Study on the Impact of Model Compression Techniques on Fairness in Language Models
https://aclanthology.org/2023.acl-long.878
Compression techniques for deep learning have become increasingly popular, particularly in settings where latency and memory constraints are imposed. Several methods, such as pruning, distillation, and quantization, have been adopted for compressing models, each providing distinct advantages. However, existing literature demonstrates that compressing deep learning models could affect their fairness. Our analysis involves a comprehensive evaluation of pruned, distilled, and quantized language models, which we benchmark across a range of intrinsic and extrinsic metrics for measuring bias in text classification. We also investigate the impact of using multilingual models and evaluation measures. Our findings highlight the significance of considering both the pre-trained model and the chosen compression strategy in developing equitable language technologies. The results also indicate that compression strategies can have an adverse effect on fairness measures.
# A Comparative Study On The Impact Of Model Compression Techniques On Fairness In Language Models Krithika Ramesh Microsoft Research kramesh.tlw@gmail.com Shrey Pandit∗ BITS, Pilani pandit.shrey.01@gmail.com ## Abstract Compression techniques for deep learning have become increasingly popular, particularly in settings where latency and memory constraints are imposed. Several methods, such as pruning, distillation, and quantization, have been adopted for compressing models, each providing distinct advantages. However, existing literature demonstrates that compressing deep learning models could affect their fairness. Our analysis involves a comprehensive evaluation of pruned, distilled, and quantized language models, which we benchmark across a range of intrinsic and extrinsic metrics for measuring bias in text classification. We also investigate the impact of using multilingual models and evaluation measures. Our findings highlight the significance of considering both the pre-trained model and the chosen compression strategy in developing equitable language technologies. The results also indicate that compression strategies can have an adverse effect on fairness measures. ## 1 Introduction Despite their increasing popularity, machine learning models have been known to exhibit biases in their outputs, present privacy risks, and have potentially negative environmental consequences from their training and deployment. (Bender et al., 2021; Talat et al., 2022). Language models suffer from biases that result in unequal resource distributions (**allocational harms**), in addition to the undesired tendency to reproduce biases and stereotypes in content that is reflective of hegemonic worldviews (**representational harms**). Although measures have been proposed in tasks such as text classification (Czarnowska et al., 2021) to investigate the disparate allocational treatment of different classes, much of the research on fairness in language models centers on addressing representational harms Arnav Chavan∗ Indian Institute of Technology, Dhanbad arnavchavan04@gmail.com ## Sunayana Sitaram Microsoft Research Sunayana.Sitaram@Microsoft.Com (Blodgett et al., 2020). The potential of these models to further stigmatize marginalized communities is demonstrated in (Dressel and Farid, 2018), which illustrates how recidivism prediction systems are biased against black defendants, who have a higher baseline risk for repeat offences. Biases are also prevalent in computer vision applications such as facial recognition technologies. Within NLP, (Bolukbasi et al., 2016), one of the first forays that studied this phenomenon in language, noted that word embeddings contained stereotypical associations with respect to gender. Language models can exhibit biases toward different dialects for tasks like toxicity and hate speech detection (Garg et al., 2022; Sap et al., 2019), generate stereotypical representations and narratives (Lucy and Bamman, 2021), and are capable of the outright erasure of underrepresented identities (Dev et al., 2021). Compressed models that are biased may have detrimental consequences in the real world, as they are typically deployed on edge devices, which can further disadvantage communities without access to other forms of technology. Consequently, these issues have compelled a shift towards developing more inclusive systems. Hooker et al. (2020) demonstrates how compression techniques, when applied to models that deal with tabular data, lead to the disparate treatment of less-represented classes. However, equivalent studies in NLP (Tal et al., 2022; Ahn et al., 2022; Silva et al., 2021) do not provide a conclusive observation as to whether compression methods are effective for reducing bias in NLP, and are centered mainly solely around model distillation being the compression technique of choice. This paper aims to resolve the following questions by benchmarking a wide range of metrics and datasets to study bias in text classification systems. - How does model compression using pruning, quantization, or distillation impact bias in language models, and to what extent? - To what extent are these observations influenced by variables such as the utilization of different techniques within a specific compression method or a change in model architecture or size? - How does multilinguality affect these observations in compressed models? ## 2 Related Work Compression techniques such as pruning, distillation, and quantization have proven effective at reducing the size of models while maintaining their performance. **Pruning** can be done in two ways, via structured and unstructured pruning. While structured pruning involves removing groups of neurons, unstructured pruning removes individual neurons by zeroing out their values. Structured pruning methods generally achieve faster inference speeds, along with a reduction in parameter size. **Knowledge distillation** techniques are another alternative that have been demonstrated to effectively transfer knowledge from a teacher model to a smaller student model, using a loss function designed to minimize the distance between the features or the outputs of the student and teacher models. We also incorporate a third form of model compression - **quantization**, where model weights and/or activations are represented using lower-bit precisions. There are two main approaches to quantization: post-training quantization, which is applied to a pre-trained model, and quantization-aware training (Zafrir et al., 2019a), which incorporates quantization into the training process in order to mitigate the loss of accuracy that can occur with post-training quantization. Although several techniques for pruning and quantization have been developed, we acknowledge that our work consists only of models compressed using post-training dynamic quantization and the pruning method proposed in Zafrir et al. (2021). Whilst there has been research at the confluence of fairness and efficiency in natural language processing (NLP), the results from these studies can be inconclusive, limited in their research design, and at times, contradict the results from previous analyses. Talat et al. (2022); Orgad and Belinkov (2022); Field et al. (2021); Blodgett et al. (2020) provide critical insights into the current state of fairness in NLP and delve into the details of what research studies must consider when conducting work in this area. The discussion thus far concerning fairness, in general, has mainly been Anglo-centric, but recent forays (Kaneko et al., 2022; Huang et al., 2020b; Gonen et al., 2019; Zhao et al., 2020) have explored bias in multilingual spaces and languages beyond English. In the context of model compression, Tal et al. (2022) show that while larger models produce fewer gendered errors, they produce a *greater proportion* of gendered errors in coreference resolution whilst Xu and Hu (2022) suggest that distillation and pruning have a regularizing effect that mitigates bias in text classification. On the other hand Silva et al. (2021); Ahn et al. (2022); Hessenthaler et al. (2022) all demonstrate how distillation can have an adverse impact on model fairness. Hessenthaler et al. (2022) strongly casts doubt on the results from Xu and Hu (2022) by showing that knowledge distillation decreases model fairness. Additionally, the findings from Mohammadshahi et al. (2022) point toward the fact that pruning can amplify bias in multilingual machine translation models. It must also be noted that with the exception of Hessenthaler et al. (2022); Tal et al. (2022); Xu and Hu (2022); Mohammadshahi et al. (2022), many of these studies do not validate the fairness of these models over downstream tasks. This is essential as bias measurements over a model's pretrained representations cannot be used as a proxy to assess bias in its downstream outputs (GoldfarbTarrant et al., 2021). Lauscher et al. (2021); Gupta et al. (2022) explore the efficient debiasing of models via the use of adapters and an adapted form of distillation, respectively. To our knowledge, our work is the first comprehensive study on fairness in NLP with respect to pruning, distillation and quantization, in addition to which it addresses both monolingual and multilingual models. ## 3 Methodology And Setup 3.1 Pruning, Quantization, Distillation Our pruning approach uses the Prune Once For All (Prune OFA) (Zafrir et al., 2021) method on the base models. The Prune OFA method is a state-ofthe-art pruning strategy that prunes models during the pre-training phase, eliminating the need for additional pruning on downstream tasks. We employ dynamic quantization (Zafrir et al., 2019b) as a post-training quantization method for fairness evaluation. This approach converts model weights to INT8 format post-training and dynamically quantizes activations during runtime based on the range of data. This method has the advantage of minimal hyperparameter tuning and additional flexibility in the model, which minimizes any potential performance loss. For knowledge distillation, we consider models compressed using the techniques employed in (Sanh et al., 2019; Wang et al., 2020a), with the primary difference in these methods being the type of feature representations that the student is encouraged to mimic. We utilize pre-trained distilled models that are publicly available12for all of our experiments. The complete list of models we considered for these experiments is in the appendix (Table 9). ## 3.2 Fairness Evaluation In Language Models To examine bias in LMs, we rely on a combination of **intrinsic** and **extrinsic** measures. Intrinsic measures primarily evaluate bias in the pre-trained representations of language models, such as in the static and contextualized embedding spaces. On the other hand, extrinsic measures estimate bias in the outputs produced by the LLM in the downstream task it is fine-tuned for. Extrinsic evaluation measures are capable of identifying both allocational and representational harms, while intrinsic measures only address the latter. The inconsistencies and lack of correlation between these two kinds of metrics (Goldfarb-Tarrant et al., 2021; Cao et al., 2022) has led to calls for better evaluation practices that prioritize extrinsic evaluation. We have included detailed explanations of the metrics and datasets in the next section and provided a broad overview and additional details in the appendix in Table 11. ## 4 Intrinsic Measures StereoSet (Nadeem et al., 2021) is an English dataset used for analyzing's a model's proclivity for stereotypical and anti-stereotypical data across the axes of gender, race, religion, and profession. We consider only the intrasentence samples from StereoSet and evaluate the test set split. The ICAT (Idealized Context Association Test) score combines both the language model score (LMS) and the stereotype score (SS) such that it is maximized when the model is unbiased and simultaneously proficient at language modeling as shown in Equation 1. $$I C A T=L M S*{\frac{\operatorname*{min}(S S,100-S S)}{50}}\quad\quad(1)$$ Similar to StereoSet, **CrowS-Pairs** (Nangia et al., 2020) is a crowdsourced dataset that allows us to observe bias along the dimensions of gender, race, and religion. The distance between the stereotype and anti-stereotype pairs is kept to a minimum, and the metric involves the pseudo-log likelihood scoring mechanism from Salazar et al. (2020). However, both StereoSet and CrowS-Pair have been subject to critique for the inconsistencies in their datasets (Blodgett et al., 2021). ## 5 Extrinsic Measures For extrinsic measurement over downstream tasks, we have used multiple datasets with different fairness definitions (details in Table 11 in the appendix). The Jigsaw dataset is used to evaluate bias in toxicity detection systems across multiple demographic identities. We do this by assessing the difference in False Positive Rates (FPR) across subgroups to ensure that text from one group is not unfairly flagged as toxic. We report ROC-AUC as a metric on three specific subsets: - **Subgroup AUC** : The test set is restricted to samples that mention the specific identity subgroup. A low value suggests that the model is ineffective at differentiating between toxic and non-toxic remarks that mention the identity. - **BPSN AUC** (Background Positive, Subgroup Negative) : The test set is restricted to the non-toxic examples that mention the identity and the toxic examples that do not mention the identity. A low value suggests that the model predicts a higher toxicity score than it should for a non-toxic example mentioning the identity. - **BNSP AUC** (Background Negative, Subgroup Positive) : The test set is restricted to the toxic examples that mention the identity and the non-toxic examples that do not mention the identity. A low value here indicates that the model predicts lower toxicity scores than it should for toxic examples mentioning the identity. The other monolingual extrinsic measure includes the **African American Vernacular** English-Standard American English (AAVESAE) dataset (Groenwold et al., 2020a), which consists of intent-equivalent SAE and AAVE sentence pairs. Sap et al. (2019) has shown that AAVE language is more likely to be identified as hate speech compared to the standardized form of American English. A fair, unbiased model on this data would produce similar sentiment scores for both AAVE and SAE. We have also included the results for the Equity Evaluation Corpus (EEC), a templatebased dataset that evaluates the emotional intensity of sentiment classification systems over four categories of data- anger, fear, sadness, and joy, in the appendix (Section C.1). ## 5.1 Multilingual Datasets To test if these observations are consistent with results across multilingual models, we use a binarized **hate speech detection** dataset, originally sourced from Huang et al. (2020a). It consists of online data accumulated from Twitter along with labels containing information pertinent to the user's age, gender, country, and race/ethnicity, and the details regarding the distribution of data and labels across languages are provided in the Appendix in Table 17. The fairness evaluation objective for the hate speech detection task involves measuring the equality differences (ED) metric across each of the groups corresponding to the aforementioned demographic factors. The ED is defined as the difference between the true positive/negative and false positive/negative rates for each demographic factor. For instance, the ED for false positive rates (FPED) is defined below, where d is representative of each demographic group within a demographic factor D (for example, gender is a demographic factor, and male is a corresponding representative demographic group). $$F P E D=\sum_{d\epsilon D}\|F P R_{d}-F P R\|\qquad(2)$$ We also make use of **reviews datasets** sourced from Trustpilot, Yelp, and Amazon, with a rating (1-5) for each review (Hovy et al., 2015; Huang and Paul, 2019). The data includes user information, such as age, gender, and country (our analysis is constrained to gender). For this specific task, the dataset has been transformed into a binary sentiment analysis classification task, where reviews with a rating above 3 are classified as positive, and those with a rating below 3 are classified as negative. Reviews with a rating of 3 are discarded. As with the hate speech dataset, the **equality difference** metric is used to evaluate group fairness over this task along a given dimension. ## 6 Analysis Of Results 6.1 Stereoset The findings of the StereoSet evaluation are presented in Table 1, wherein a higher ICAT score implies a lesser biased model 3. According to the results, the monolingual models' distilled and pruned versions exhibit more bias than their original counterparts. However, this trend does not necessarily apply to the multilingual or quantized versions of these models (Table 13). There is also an indication that the extent of pruning is potentially proportional to the negative impact on fairness in these models for this metric. Additionally, the MiniLM models, which employ a different distillation technique than the one used for DistilBERT, show a significant decrease in the ICAT score. However, it is worth noting that they are relatively smaller (MiniLMv2 being approximately one-third the size of DistilBERT). Among the three techniques, quantization appears to be the rank the lowest in terms of bias according to the intrinsic StereoSet measure. That said, these results may not accurately predict the model's performance in downstream tasks (Goldfarb-Tarrant et al., 2021). Based on the ICAT score measurement, the models distilled using MiniLMv2 exhibit the highest level of bias, while the quantized models demonstrate the best performance in this metric. DistilBERT emerges as the least biased among the distilled models, while the quantized version of BERT-base shows the least bias among the quantized model sets. We highlight that while quantization results in a higher ICAT score for BERT, this is not the case for RoBERTa. Furthermore, although we have aggregated the scores for the dimensions of gender, race, and religion, these trends do not persist uniformly across individual dimensions. This observation is also reflected in our evaluation of the CrowS-Pair dataset. | Model | Overall ICAT Score | | | | | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|--------------|--------------|------|----------| | bert-base-uncased | 70.30 | | | | | | distilbert-base-uncased | 69.52 ↓-0.78 | | | | | | miniLMv2-L6-H384-uncased | 53.94 ↓-16.36 | | | | | | bert-base-uncased-90%-pruned | 69.44 ↓-0.86 | | | | | | bert-base-uncased-85%-pruned | 68.50 ↓-1.8 | | | | | | bert-base-uncased-quantized | 72.06 ↑1.76 | | | | | | bert-base-multilingual-cased | 64.94 | | | | | | distilbert-base-multilingual-cased | 67.99 ↑3.05 | | | | | | xlm-roberta-large | 71.29 | | | | | | multilingual-MiniLM-L12-H384 | 52.47 ↓-18.82 | | | | | | roberta-base | 67.18 | | | | | | distilroberta | 66.68 ↓-0.5 | | | | | | roberta-base-quantized | 65.81 ↓-1.37 | | | | | | bert-large-uncased | 69.50 | | | | | | miniLMv2-L6-H384-uncased | 49.74 ↓-19.76 | | | | | | bert-large-uncased-90%-pruned | 68.91 ↓-0.59 | | | | | | bert-large-uncased-quantized | 70.20 ↑0.7 | Model | Gender | Race | Religion | | bert-base-uncased | 57.25 +7.25 | 62.33 +12.33 | 62.86 +12.86 | | | | distilbert-base-uncased | 56.87 +6.87 | 60.97 +10.97 | 66.67 +16.67 | | | | miniLMv2-L6-H384-uncased | 50.76 +0.76 | 50.68 +0.68 | 72.38 +22.38 | | | | bert-base-uncased-90%-pruned | 51.91 +1.91 | 59.61 +9.61 | 60.95 +10.95 | | | | bert-base-uncased-85%-pruned | 51.91 +1.91 | 53.01 +3.01 | 58.10 +8.10 | | | | bert-base-uncased-quantized | 57.25 +7.25 | 62.14 +12.14 | 46.67 -3.33 | | | | bert-base-multilingual-cased | 47.71 -2.29 | 44.66 -5.34 | 53.33 +3.33 | | | | distilbert-base-multilingual-cased | 50.38 +0.38 | 41.94 -8.06 | 53.33 +3.33 | | | | xlm-roberta-large | 54.41 +4.41 | 51.65 +1.65 | 69.52 +19.52 | | | | multilingual-MiniLM-L12-H384 | 39.85 -10.15 | 60.39 +10.39 | 47.62 -2.38 | | | | roberta-base | 60.15 +10.15 | 63.57 +13.57 | 60.00 +10.00 | | | | distilroberta | 52.87 +2.87 | 60.08 +10.08 | 63.81 +13.81 | | | | roberta-base-quantized | 53.64 +3.64 | 58.53 +8.53 | 49.52 -0.48 | | | | bert-large-uncased | 55.73 +5.73 | 60.39 +10.39 | 67.62 +17.62 | | | | miniLMv2-L6-H384-uncased | 43.13 -6.87 | 50.1 +0.1 | 57.14 +7.14 | | | | bert-large-uncased-90%-pruned | 54.20 +4.20 | 60.19 +10.19 | 69.52 +19.52 | | | | bert-large-uncased-quantized | 50.38 +0.38 | 63.11 +13.11 | 55.24 +5.24 | | | | Table 2: The results for the CrowS-Pairs metric for different model families have been reported, with values closer to 50 indicating less biased models according to this metric. | | | | | | Table 1: We report the overall ICAT score for the model evaluations over the StereoSet dataset. The higher the ICAT score, the less biased the model. ## 6.2 Crows-Pair In Table 2, the results for CrowS-Pair have been presented for gender, race, and religion, along with the deviation from the ideal baseline score of 50. According to this metric, a higher magnitude of deviation indicates more bias in the model. Our findings reveal inconsistent disparities in the scores across different compression methods and their base and large counterparts. For example, while the results suggest that DistilBERT is less biased than BERT-base in terms of gender and race, this does not hold true for religion. While this may also be in due part to the relatively smaller sample size of the data for each dimension (Meade et al., 2022), it would be essential to understand if a model demonstrating lower bias in one dimension generalizes to other dimensions or data that incorporates intersectional identities. However, it is important to acknowledge that intrinsic and extrinsic measures do not necessarily correlate with each other. Additionally, Aribandi et al. (2021) highlights the substantial variance in likelihoodbased and representation-based diagnostics during empirical evaluations, emphasizing the need for caution when interpreting findings from intrinsic measures. ## 6.3 Jigsaw To evaluate the potential harm caused by these models, it is essential to assess bias in the context of downstream tasks. We fine-tuned the models on the Jigsaw dataset and examined how well they performed on various forms of protected identity mentions. Table 3 presents the aggregated scores for all subgroups across the metrics discussed in Section 5. 4 The overall trend suggests that compression methods can have a negative impact on fairness. Distilled models generally appear to demonstrate a higher level of bias compared to their pruned and quantized counterparts. In contrast to the findings from intrinsic measurements, quantization does lead to a decrease in performance in these models, and this drop is also observed in the multilingual models. However, the pruned and quantized models generally exhibit a lower magnitude of bias compared to the distilled models. Among all the compressed models evaluated, the base form of DistilBERT exhibits the highest degree of bias. These findings may vary at different training stages, and they warrant further probing to see if training the models further to improve the performance of these compressed models could also significantly contribute to reducing bias. ## 6.4 Aave-Sae Given the proclivity of hate speech detection systems to flag AAVE language as hate speech (Sap et al., 2019; Groenwold et al., 2020b), we aimed to assess whether SST-2 fine-tuned models also tend to classify AAVE language as negative. The underlying fairness objective in this context is to evaluate the robustness of sentiment analysis models to data from diverse dialects. We make use 4Results for the pruned version of BERT-large excluded due to low performance on Jigsaw and AAVE-SAE. | Model | Subgroup | BPSN | BNSP | |------------------------------------|---------------|-----------------------------|---------------| | AUC | AUC | AUC | | | bert-base-uncased | 0.918 | 0.934 | 0.975 | | distilbert-base-uncased | 0.878 ↓-0.04 | 0.892 ↓-0.042 | 0.972 ↓-0.003 | | miniLM-L12-H384-uncased | 0.917 ↓-0.001 | 0.943 ↑0.009 | 0.970 ↓-0.005 | | bert-base-uncased-90%-pruned | 0.915 ↓-0.003 | 0.932 ↓-0.002 | 0.973 ↓-0.002 | | bert-base-uncased-85%-pruned | 0.917 ↓-0.001 | 0.933 ↓-0.001 | 0.974 ↓-0.001 | | bert-base-uncased-quantized | 0.917 ↓-0.001 | 0.933 ↓-0.001 | 0.974 ↓-0.001 | | bert-base-multilingual-cased | 0.914 | 0.936 | 0.971 | | distilbert-base-multilingual-cased | 0.895 ↓-0.019 | 0.913 ↓-0.023 | 0.969 ↓-0.002 | | xlm-roberta-base | 0.914 | 0.942 | 0.969 | | multilingual-MiniLM-L12-H384 | 0.904 ↓-0.01 | 0.926 ↓-0.016 0.968 ↓-0.001 | | | roberta-base | 0.920 | 0.947 | 0.971 | | distilroberta | 0.901 ↓-0.019 | 0.921 ↓-0.026 | 0.971 0 | | roberta-base-quantized | 0.918 ↓-0.002 | 0.943 ↓-0.004 | 0.971 0 | | bert-large-uncased | 0.913 | 0.922 | 0.975 | | bert-large-uncased-quantized | 0.909 ↓-0.004 | 0.922 0 | 0.971 ↓-0.004 | of well-optimized, pre-trained models that were fine-tuned on the Stanford Sentiment Bank (SST2) dataset (Socher et al., 2013), and we fine-tuned the pruned pre-trained models over SST-2. Additionally, we applied quantization techniques to the existing models and compared the outcomes of dynamically quantized models with other compressed variations. We examined the change in predictions when considering the AAVE intentequivalent counterpart of the SAE language. We term the contradictory predictions of the classifier on AAVE-SAE sentence pairs as non-concurrent predictions, and our results are presented in Table 4. A consistent pattern is observed where distilled models demonstrate a significantly higher degree of bias in this particular task than their base models. While the BERT-base pruned models also show a decline in performance, the 90% pruned version appears to be more robust than the 85% pruned version. Across all cases, except for the dynamically quantized form of RoBERTA-base, the quantized models show an increase in these non-concurrent predictions. Another interesting point of note is that several of these models seem to record positive to negative non-concurrent predictions when considering AAVE language instead of its SAE intent-equivalent counterpart. ## 7 Multilingual Datasets To investigate whether the observed trends in a monolingual setting extend to a multilingual scenario, we conducted experiments using a separate set of models, with information about their size | Model | Negative | Positive | Total | |------------------------------|-------------|------------|----------| | to Positive | to Negative | Changes | | | bert-base-uncased | 238 | 89 | 327 | | distilbert-base-uncased | 326 ↑88 | 76 ↓-13 | 402 ↑75 | | bert-base-uncased-90%-pruned | 205 ↓-33 | 128 ↑39 | 333 ↑6 | | bert-base-uncased-85%-pruned | 340 ↑102 | 147 ↑58 | 487 ↑160 | | bert-base-uncased-quantized | 281 ↑43 | 93 ↑4 | 374 ↑47 | | xlm-roberta-base | 247 | 56 | 303 | | multilingual-MiniLM-L12-H384 | 294 ↑47 | 73 ↑17 | 367 ↑64 | | roberta-base | 241 | 102 | 343 | | distilroberta | 238 ↓-3 | 108 ↑6 | 346 ↑3 | | roberta-base-quantized | 207 ↓-34 | 115 ↑13 | 322 ↓-21 | | roberta-large | 178 | 110 | 288 | | miniLM-L12-H384-uncased | 265 ↑87 | 64 ↑-46 | 329 ↑41 | | bert-large-uncased | 230 | 72 | 302 | | bert-large-uncased-quantized | 175 ↓-55 | 156 ↑84 | 331 ↑29 | provided in Table 10 in the appendix. For these experiments, we employed the same techniques of pruning, distillation, and quantization as used in the monolingual experiments. ## 7.1 Hate Speech Detection The hate speech dataset evaluation results are presented in Table 5 and Table 7. In contrast to the trends observed in the monolingual evaluations conducted for English, the impact on fairness, as measured by the equality differences (ED) metric, is not as consistently evident among the compressed models in the multilingual setup. In the quantized and distilled models, the trends with respect to English remain consistently negative. The training for all these models was constrained to 5 epochs, and the F1 and AUC scores for the base models are lower than their compressed counterparts. The compressed models demonstrate greater performance gains within the same training duration as compared to their base forms, and this observed improvement in performance could contribute to enhanced fairness outcomes as well. Furthermore, it is worth considering that in previous monolingual tasks and even in the multilingual evaluation of Trustpilot reviews (Table 8), the compressed models were more likely to experience a drop in the ED metric. However, it is essential to highlight that the magnitude of this drop observed in the current results is considerably less pronounced. Additionally, the F1 and AUC performance of these models over these datasets is significantly higher. Across nearly all the experiments conducted and languages documented in Tables 5, 7, and 8, Model Language AUC F1-macro Age Gender bert-basemultilingual-cased English 0.743 0.645 0.110 0.043 Italian 0.662 0.509 0.064 0.070 Polish 0.735 0.648 0.302 0.266 Portuguese 0.616 0.539 0.194 0.181 Spanish 0.676 0.618 0.177 0.179 distilbert-basemultilingual-cased English 0.790 0.702 0.199 ↑+0.089 0.084 ↑+0.041 Italian 0.673 0.551 0.123 ↑+0.059 0.102 ↑+0.032 Polish 0.706 0.638 0.264 ↓-0.038 0.249 ↓-0.017 Portugese 0.651 0.513 0.031 ↓-0.163 0.173 ↓-0.008 Spanish 0.695 0.617 0.134 ↓-0.043 0.135 ↓-0.044 bert-base-multilingual -cased-quantized English 0.750 0.641 0.141 ↑+0.031 0.080 ↑+0.037 Italian 0.675 0.509 0.089 ↑+0.025 0.078 ↑+0.008 Polish 0.735 0.628 0.314 ↑+0.012 0.242 ↓-0.024 Portuguese 0.602 0.493 0.191 ↓-0.003 0.026 ↓-0.155 Spanish 0.670 0.613 0.217 ↑+0.040 0.173 ↓-0.006 bert-base-multilingualcased-90%-pruned English 0.813 0.708 0.135 ↑+0.025 0.075 ↑+0.032 Italian 0.666 0.537 0.150 ↑+0.086 0.238 ↑+0.168 Polish 0.698 0.580 0.221 ↓-0.081 0.230 ↓-0.036 Portuguese 0.697 0.540 0.209 ↑+0.015 0.054 ↓-0.127 Spanish 0.659 0.616 0.185 ↑+0.008 0.150 ↓-0.029 bert-base-multilingualcased-50%-pruned English 0.764 0.657 0.078 ↓-0.032 0.048 ↑+0.005 Italian 0.648 0.553 0.168 ↑+0.104 0.178 ↑+0.108 Polish 0.711 0.622 0.245 ↓-0.057 0.233 ↓-0.033 Portuguese 0.644 0.505 0.115 ↓-0.079 0.108 ↓-0.073 Spanish 0.684 0.625 0.246 ↑+0.069 0.085 ↓-0.094 bert-base-multilingualcased-10%-pruned English 0.745 0.644 0.089 ↓-0.021 0.051 ↑+0.008 Italian 0.670 0.565 0.210 ↑+0.146 0.260 ↑+0.190 Polish 0.670 0.597 0.160 ↓-0.142 0.167 ↓-0.099 Portuguese 0.590 0.480 0.142 ↓-0.052 0.048 ↓-0.133 Spanish 0.681 0.620 0.347 ↑+0.170 0.188 ↑+0.009 xlm-roberta-large English 0.529 0.218 0.005 0.004 Italian 0.629 0.549 0.246 0.119 Polish 0.580 0.520 0.080 0.067 Portuguese 0.447 0.398 0.126 0.045 Spanish 0.590 0.556 0.251 0.088 multilingual-MiniLM-L12-H384 English 0.701 0.605 0.060 ↑+0.055 0.032 ↑+0.028 Italian 0.622 0.571 0.337 ↑+0.091 0.191 ↑+0.072 Polish 0.643 0.587 0.138 ↑+0.058 0.098 ↑+0.031 Portuguese 0.606 0.559 0.336 ↑+0.210 0.237 ↑+0.192 Spanish 0.624 0.570 0.270 ↑+0.019 0.096 ↑+0.008 | Language | Race | Country | Age | Gender | |------------|------------------------------------|-----------------------------------------|-----------------------------------------|-----------------------------------------| | English | distilbert-base-multilingual-cased | distilbert-base-multilingual-cased | distilbert-base-multilingual-cased | distilbert-base-multilingual-cased | | Italian | - | - | bert-base-multilingual-cased-10%-pruned | bert-base-multilingual-cased-10%-pruned | | Spanish | multilingual-MiniLM-L12-H384 | bert-base-multilingual-cased-90%-pruned | bert-base-multilingual-cased-10%-pruned | bert-base-multilingual-cased-10%-pruned | | Portuguese | multilingual-MiniLM-L12-H384 | multilingual-MiniLM-L12-H384 | multilingual-MiniLM-L12-H384 | multilingual-MiniLM-L12-H384 | | Polish | - | - | multilingual-MiniLM-L12-H384 | multilingual-MiniLM-L12-H384 | | Model | Language | AUC | F1-macro | Race | Country | |-----------------------------------------|------------|-------|---------------|---------------|---------------| | bert-basemultilingual-cased | English | 0.743 | 0.645 | 0.059 | 0.031 | | Portuguese | 0.616 | 0.539 | 0.200 | 0.109 | | | Spanish | 0.676 | 0.618 | 0.087 | 0.130 | | | distilbert-basemultilingual-cased | English | 0.790 | 0.702 | 0.086 ↑+0.027 | 0.077 ↑+0.046 | | Portuguese | 0.651 | 0.513 | 0.105 ↓-0.095 | 0.089 ↓-0.020 | | | Spanish | 0.695 | 0.617 | 0.089 ↑+0.002 | 0.127 ↓-0.003 | | | English | 0.750 | 0.641 | 0.066 ↑+0.007 | 0.043 ↑+0.012 | | | bert-base-multilingual -cased-quantized | Portuguese | 0.602 | 0.493 | 0.069 ↓-0.131 | 0.037 ↓-0.072 | | Spanish | 0.670 | 0.613 | 0.039 ↓-0.048 | 0.149 ↑+0.019 | | | bert-base-multilingualcased-90%-pruned | English | 0.813 | 0.708 | 0.041 ↓-0.018 | 0.026 ↓-0.005 | | Portuguese | 0.697 | 0.540 | 0.151 ↓-0.049 | 0.106 ↓-0.003 | | | Spanish | 0.659 | 0.616 | 0.033 ↓-0.054 | 0.289 ↑+0.159 | | | bert-base-multilingualcased-50%-pruned | English | 0.764 | 0.657 | 0.038 ↓-0.019 | 0.020 ↓-0.011 | | Portugese | 0.644 | 0.505 | 0.086 ↓-0.114 | 0.118 ↑+0.009 | | | Spanish | 0.684 | 0.625 | 0.092 ↑+0.005 | 0.217 ↑+0.087 | | | bert-base-multilingualcased-10%-pruned | English | 0.745 | 0.644 | 0.024 ↓-0.025 | 0.009 ↓-0.022 | | Portuguese | 0.590 | 0.480 | 0.193 ↓-0.007 | 0.024 ↓-0.085 | | | Spanish | 0.681 | 0.620 | 0.130 ↑+0.043 | 0.249 ↑+0.119 | | | xlm-roberta-large | English | 0.529 | 0.218 | 0.005 | 0.003 | | Portuguese | 0.447 | 0.398 | 0.121 | 0.175 | | | Spanish | 0.590 | 0.556 | 0.030 | 0.376 | | | multilingual-MiniLM-L12-H384 | English | 0.701 | 0.605 | 0.011 ↑+0.006 | 0.027 ↑+0.024 | | Portuguese | 0.606 | 0.559 | 0.263 ↑+0.142 | 0.232 ↑+0.057 | | | Spanish | 0.624 | 0.570 | 0.097 ↑+0.067 | 0.383 ↑+0.007 | | the MiniLM model distilled from XLM-R Large demonstrates higher levels of bias compared to the base model. These results also exhibit variations across languages and dimensions under consideration. A model may produce fairer outcomes for data in one language but not necessarily generalize to another language or dimension. Additionally, the trends observed in the ED values for pruning the multilingual BERT-base model are not consistently monotonic. We have included the results for the most significant decrease in magnitude across each dimension and language for these experiments in Table 6. Our benchmarking of these compressed models indicates that various elements in the experimental setup, such as the selection of techniques within a given compression method or the choice of pre-trained model architecture, are likely to have consequences in the measurements we observe. ## 7.2 Trustpilot Reviews Dataset We also fine-tuned these models using a dataset comprising Trustpilot reviews from four different languages. The results for the equality difference (ED) for gender are presented in Table 8. Although the compressed models generally exhibit poorer performance in terms of their overall equality difference, the magnitude of the difference in ED between the compressed models and their base forms is considerably smaller compared to the values observed in the previous task. However, it is worth noting that the results for the English reviews dataset (Table 12 in the appendix) contradict this pattern. In that case, the compressed versions of BERT demonstrate less bias, whereas the opposite is true for XLM-R Large. ## 8 How Does Model Compression Affect Fairness? 8.1 Distillation, Pruning And Quantization The claim that distillation tends to amplify biases in models aligns with our findings in monolingual evaluation experiments. However, the impact on fairness metrics can vary, and this pattern does not necessarily hold true in multilingual settings, as evidenced by our evaluation of multilingual fairness datasets. Similar observations can be made regarding pruned models, although further investigation is warranted to understand how different pruning strategies and levels of pruning may influence these effects. In contrast, our approach of post-training quantization has yielded more diverse outcomes. While its impact on fairness may be relatively less pronounced, it can sometimes lead to impractical models for downstream tasks due to their low perfor- Model Language F1-W Avg AUC-W Avg Total ED ![8_image_1.png](8_image_1.png) ![8_image_0.png](8_image_0.png) ![8_image_2.png](8_image_2.png) English 0.981 0.987 0.026 French 0.976 0.990 0.022 German 0.979 0.985 0.014 Danish 0.971 0.992 0.015 English 0.975 0.987 0.026 0.000 French 0.971 0.984 0.037 ↑+0.015 German 0.976 0.977 0.043 ↑+0.029 Danish 0.964 0.992 0.020 ↑+0.005 English 0.978 0.984 0.047 ↑+0.021 French 0.969 0.984 0.048 ↑+0.026 German 0.976 0.980 0.005 ↓-0.009 Danish 0.970 0.991 0.021 ↑+0.006 English 0.976 0.988 0.029 ↑+0.003 French 0.973 0.986 0.036 ↑+0.014 German 0.975 0.982 0.025 ↑+0.011 Danish 0.963 0.991 0.024 ↑+0.009 English 0.980 0.989 0.020 ↓-0.006 French 0.975 0.989 0.038 ↑+0.016 German 0.977 0.988 0.025 ↑+0.011 Danish 0.970 0.991 0.019 ↑+0.004 English 0.979 0.988 0.026 0.000 French 0.976 0.988 0.028 ↑+0.006 German 0.976 0.981 0.017 ↑+0.003 Danish 0.969 0.993 0.017 ↑+0.002 English 0.987 0.993 0.018 French 0.984 0.991 0.024 German 0.985 0.992 0.031 Danish 0.985 0.994 0.008 English 0.976 0.991 0.042 ↑+0.024 French 0.972 0.989 0.041 ↑+0.017 German 0.975 0.986 0.023 ↓-0.008 Danish 0.970 0.993 0.017 ↑+0.009 mance. Therefore, careful consideration is required when employing post-training quantization to strike a balance between fairness and task effectiveness. ## 8.2 Multilingual Vs Monolingual Models While monolingual evaluation generally negatively impacts fairness, the same cannot be said for multilingual evaluation, which varies across languages and dimensions. It would be valuable to investigate the underlying causes for the decrease in fairness during compression and explore its relationship with the multilingual and monolingual aspects of the model. It also remains to be seen whether welloptimized models for a specific task are more prone to demonstrating increased bias in their compressed versions, thereby possibly relying on unfair associations to make predictions. ## 8.3 Additional Considerations There are still lingering questions regarding the influence of various elements, such as model size, architecture choices, different variants of compression techniques, and their impact on our evaluations. While our results seem to indicate otherwise for some of these parameters (such as size), it is essential to explore whether these observations translate across different tasks. As evinced by Tal et al. (2022), the size of a model does not necessarily correlate with reduced biases, a notion that is further supported by our own findings. It would be worthwhile to extensively examine how these models are affected when different compression methods are combined or constrained to the same parameter count. ## 9 Conclusion In this work, we conduct a comprehensive evaluation of fairness in compressed language models, covering multiple base models, compression techniques, and various fairness metrics. While prior studies have evaluated the fairness of compressed models, the results have not always been conclusive. In contrast, our extensive benchmarking provides evidence that challenges recent research suggesting that model compression can effectively reduce bias through regularization, and we demonstrate that this is the case for both multilingual and monolingual models across different datasets. The compression of language models through distillation, quantization, and pruning is crucial for the practical use of language technologies in various real-world applications. While it is essential to preserve performance during compression, it is equally imperative that the compressed versions of language models maintain or even enhance fairness measures to avoid potential harm. ## 10 Ethics Statement Our results indicate that compression does harm fairness, particularly in the monolingual setting. The potential harm that the system may cause and the application it will be used for should be considered when selecting a model compression technique, in addition to factors like accuracy, latency, and size. Although we have not observed absolute trends across models, datasets, and compression techniques, it is especially crucial to evaluate compressed models for fairness and accuracy before deployment and, on a broader note, to understand why compressed models might exhibit issues with respect to fairness. In our paper, we conducted evaluations of multilingual language models using fairness metrics for various languages, including English. We observed varying trends regarding their performance on fairness metrics across different languages. However, it is vital to consider the potential influence of the lack of well-optimized models for these specific tasks, which may mitigate some of these issues. Additionally, evaluation datasets are scarce for assessing bias in languages other than English and for different fairness definitions. We also acknowledge that fairness trends identified in English evaluations may not necessarily be true for all languages. While our benchmarking encompassed multiple intrinsic and extrinsic metrics, it is important to acknowledge their limitations in capturing all dimensions of fairness. Further research is needed to develop comprehensive extrinsic metrics across diverse tasks. Although our work has been centered around fairness in allocation-based (classification) applications, addressing fairness concerns in other types of language models, such as natural language generation models, is necessary. In generative tasks, the measurement of unfair outcomes would be distinct from the methods we have used. Another area of potential future work could involve benchmarking debiasing methods for compressed models and developing new compression-aware methods. ## Limitations The primary motivation behind this paper was to provide a comprehensive benchmarking study that explores the impact of model compression techniques on bias in large language models. While our work is among the first efforts to address fairness in compressed language models across multiple compression methods, including exploring multilingual settings, we are aware of the inherent limitations associated with our benchmarking study. Some of the limitations and potential directions for future work that builds on our study include the following: - Our study primarily focused on benchmarking pre-trained models and evaluating their performance in the downstream text classification task. Expanding our investigation to encompass other tasks, particularly those involving generative models or large language models (LLMs), would be a valuable contribution to the research community. Examining the impact of model compression techniques on fairness in these domains would provide further insights and contribute to a more comprehensive understanding of bias in different types of language models. - While our work includes a multilingual evaluation component, we acknowledge that there is room for further improvement and comprehensiveness in our benchmarking study, particularly with regard to quantization and pruning techniques. Apart from this, we did not provide a comparative analysis of monolingual and multilingual models using the same extrinsic data, which could provide valuable insights into the disparate impact of compression on the bias across languages. These are potential areas for future research that could contribute to a more thorough understanding of bias in compressed language models. - Despite showing results for state-of-the-art pruning methods, further benchmarking is necessary to observe how bias varies across different pruning techniques. Similarly, whilst our method serves as a proxy to estimate bias trends in quantized models, a thorough quantization-specific study is needed. - Different compression strategies yield varied benefits in terms of latency, memory, and so forth. Investigating the tradeoffs between these elements and fairness and accuracy would yield valuable insights for obtaining realistic estimations in real-world scenarios. Additionally, conducting case-study analyses would give practitioners in the field a deeper understanding of the potential harm these methods may introduce. ## References Jaimeen Ahn, Hwaran Lee, Jinhwa Kim, and Alice Oh. 2022. Why knowledge distillation amplifies gender bias and how to mitigate from the perspective of DistilBERT. In *Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)*, pages 266–272, Seattle, Washington. Association for Computational Linguistics. Sarah Alnegheimish, Alicia Guo, and Yi Sun. 2022. Using natural sentence prompts for understanding biases in language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2824–2830, Seattle, United States. Association for Computational Linguistics. Vamsi Aribandi, Yi Tay, and Donald Metzler. 2021. How reliable are model diagnostics? In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 1778–1785, Online. Association for Computational Linguistics. Marion Bartl, Malvina Nissim, and Albert Gatt. 2020. Unmasking contextual stereotypes: Measuring and mitigating BERT's gender bias. In *Proceedings of* the Second Workshop on Gender Bias in Natural Language Processing, pages 1–16, Barcelona, Spain (Online). Association for Computational Linguistics. Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, page 610–623, New York, NY, USA. Association for Computing Machinery. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5454– 5476, Online. Association for Computational Linguistics. Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004–1015, Online. Association for Computational Linguistics. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. António Câmara, Nina Taneja, Tamjeed Azad, Emily Allaway, and Richard Zemel. 2022. Mapping the multilingual margins: Intersectional biases of sentiment analysis systems in English, Spanish, and Arabic. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, pages 90–106, Dublin, Ireland. Association for Computational Linguistics. Yang Cao, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta, Varun Kumar, Jwala Dhamala, and Aram Galstyan. 2022. On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 561–570, Dublin, Ireland. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. *CoRR*, abs/1911.02116. Paula Czarnowska, Yogarshi Vyas, and Kashif Shah. 2021. Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics. *Transactions of the Association for* Computational Linguistics, 9:1249–1267. Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, Jeff Phillips, and Kai-Wei Chang. 2021. Harms of gender exclusivity and challenges in non-binary representation in language technologies. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 1968–1994, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805. Julia Dressel and Hany Farid. 2018. The accuracy, fairness, and limits of predicting recidivism. *Science* Advances, 4:eaao5580. Anjalie Field, Su Lin Blodgett, Zeerak Waseem, and Yulia Tsvetkov. 2021. A survey of race, racism, and anti-racism in NLP. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1905–1925, Online. Association for Computational Linguistics. Tanmay Garg, Sarah Masud, Tharun Suresh, and Tanmoy Chakraborty. 2022. Handling bias in toxic speech detection: A survey. Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sánchez, Mugdha Pandya, and Adam Lopez. 2021. Intrinsic bias metrics do not correlate with application bias. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1926–1940, Online. Association for Computational Linguistics. Hila Gonen, Yova Kementchedjhieva, and Yoav Goldberg. 2019. How does grammatical gender affect noun representations in gender-marking languages? In *Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)*, pages 463–471, Hong Kong, China. Association for Computational Linguistics. Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, and William Yang Wang. 2020a. Investigating africanamerican vernacular english in transformer-based text generation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 5877–5883. Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, and William Yang Wang. 2020b. Investigating AfricanAmerican Vernacular English in transformer-based text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5877–5883, Online. Association for Computational Linguistics. Umang Gupta, Jwala Dhamala, Varun Kumar, Apurv Verma, Yada Pruksachatkun, Satyapriya Krishna, Rahul Gupta, Kai-Wei Chang, Greg Ver Steeg, and Aram Galstyan. 2022. Mitigating gender bias in distilled language models via counterfactual role reversal. In *Findings of the Association for Computational* Linguistics: ACL 2022, pages 658–678, Dublin, Ireland. Association for Computational Linguistics. Marius Hessenthaler, Emma Strubell, Dirk Hovy, and Anne Lauscher. 2022. Bridging fairness and environmental sustainability in natural language processing. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. 2020. Characterising bias in compressed models. Dirk Hovy, Anders Johannsen, and Anders Søgaard. 2015. User review sites as a resource for large-scale sociolinguistic studies. In Proceedings of the 24th International Conference on World Wide Web, WWW '15, page 452–461, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee. Xiaolei Huang. 2022. Easy adaptation to mitigate gender bias in multilingual text classification. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 717–723, Seattle, United States. Association for Computational Linguistics. Xiaolei Huang, Xing Linzi, Franck Dernoncourt, and Michael J. Paul. 2020a. Multilingual twitter corpus and baselines for evaluating demographic bias in hate speech recognition. In *Proceedings of the Twelveth* International Conference on Language Resources and Evaluation (LREC 2020), Marseille, France. European Language Resources Association (ELRA). Xiaolei Huang and Michael J. Paul. 2019. Neural user factor adaptation for text classification: Learning to generalize across author demographics. In *Proceedings of the Eighth Joint Conference on Lexical* and Computational Semantics (*SEM 2019), pages 136–146, Minneapolis, Minnesota. Association for Computational Linguistics. Xiaolei Huang, Linzi Xing, Franck Dernoncourt, and Michael J. Paul. 2020b. Multilingual Twitter corpus and baselines for evaluating demographic bias in hate speech recognition. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1440–1448, Marseille, France. European Language Resources Association. ## Jigsaw. Masahiro Kaneko, Aizhan Imankulova, Danushka Bollegala, and Naoaki Okazaki. 2022. Gender bias in masked language models for multiple languages. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2740–2750, Seattle, United States. Association for Computational Linguistics. Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. 2021. I-bert: Integeronly bert quantization. In International conference on machine learning, pages 5506–5518. PMLR. Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In *Proceedings of the Seventh Joint Conference on Lexical and Computational* Semantics, pages 43–53, New Orleans, Louisiana. Association for Computational Linguistics. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In *Proceedings of* the First Workshop on Gender Bias in Natural Language Processing, pages 166–172, Florence, Italy. Association for Computational Linguistics. François Lagunas, Ella Charlaix, Victor Sanh, and Alexander M Rush. 2021. Block pruning for faster transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10619–10629. Anne Lauscher, Tobias Lueken, and Goran Glavaš. 2021. Sustainable modular debiasing of language models. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 4782–4797, Punta Cana, Dominican Republic. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Li Lucy and David Bamman. 2021. Gender and representation bias in GPT-3 generated stories. In *Proceedings of the Third Workshop on Narrative Understanding*, pages 48–55, Virtual. Association for Computational Linguistics. Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy. 2022. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1878–1898, Dublin, Ireland. Association for Computational Linguistics. Alireza Mohammadshahi, Vassilina Nikoulina, Alexandre Berard, Caroline Brun, James Henderson, and Laurent Besacier. 2022. What do compressed multilingual machine translation models forget? Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics. Hadas Orgad and Yonatan Belinkov. 2022. Choose your lenses: Flaws in gender bias evaluation. In *Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)*, pages 151–167, Seattle, Washington. Association for Computational Linguistics. Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699–2712, Online. Association for Computational Linguistics. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *ArXiv*, abs/1910.01108. Victor Sanh, Thomas Wolf, and Alexander Rush. 2020. Movement pruning: Adaptive sparsity by fine-tuning. Advances in Neural Information Processing Systems, 33:20378–20389. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 1668–1678, Florence, Italy. Association for Computational Linguistics. Andrew Silva, Pradyumna Tambwekar, and Matthew Gombolay. 2021. Towards a comprehensive understanding and accurate evaluation of societal biases in pre-trained transformers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2383–2389, Online. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. Mobilebert: a compact task-agnostic bert for resource-limited devices. Yarden Tal, Inbal Magar, and Roy Schwartz. 2022. Fewer errors, but more stereotypes? the effect of model size on gender bias. In *Proceedings of the 4th* Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 112–120, Seattle, Washington. Association for Computational Linguistics. Zeerak Talat, Aurélie Névéol, Stella Biderman, Miruna Clinciu, Manan Dey, Shayne Longpre, Sasha Luccioni, Maraim Masoud, Margaret Mitchell, Dragomir Radev, Shanya Sharma, Arjun Subramonian, Jaesung Tae, Samson Tan, Deepak Tunuguntla, and Oskar Van Der Wal. 2022. You reap what you sow: On the challenges of bias evaluation under multilingual settings. In Proceedings of BigScience Episode \#5 - Workshop on Challenges & Perspectives in Creating Large Language Models, pages 26–41, virtual+Dublin. Association for Computational Linguistics. Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020a. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. Ziheng Wang, Jeremy Wohlwend, and Tao Lei. 2020b. Structured pruning of large language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6151–6162. Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Guangxuan Xu and Qingyuan Hu. 2022. Can model compression improve nlp fairness. *ArXiv*, abs/2201.08542. Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019a. Q8bert: Quantized 8bit bert. In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition (EMC2-NIPS), pages 36–39. IEEE. Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019b. Q8BERT: quantized 8bit BERT. CoRR, abs/1910.06188. Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, and Moshe Wasserblat. 2021. Prune once for all: Sparse pre-trained language models. *arXiv preprint* arXiv:2111.05754. Jieyu Zhao, Subhabrata Mukherjee, Saghar Hosseini, Kai-Wei Chang, and Ahmed Hassan Awadallah. 2020. Gender bias in multilingual embeddings and crosslingual transfer. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 2896–2907, Online. Association for Computational Linguistics. ## A Appendix A.1 Methodology And Setup 2019), DistilRoBERTa, and multilingual MiniLM (Wang et al., 2020a), which are publicly available through the HuggingFace API (Wolf et al., 2020) for our experiments. DistilBERT selects one layer from each pair of alternate layers in the teacher architecture (BERT-base), lowering the number of layers in the distilled model by half. MiniLM is distilled from the final attention layer of the teacher model, thus making this knowledge distillation method task-independent. In addition to evaluating bias in these pre-trained models using intrinsic metrics, we fine-tuned some distilled models on the SAE-AAVE, Jigsaw, and Equity Evaluation Corpus (EEC) datasets for evaluation using extrinsic metrics. ## A.1.3 Quantization Dynamic quantization is particularly effective when the time required to execute a model is dominated by loading weights from memory rather than computing matrix multiplications, as with transformer models. Therefore, we adopt dynamic quantization in all of our experiments. With this approach, model parameters are converted to INT-8 format post-training, and the scale factor for activations is dynamically determined based on the range of the data observed at runtime, which helps to maintain flexibility in the model and minimize any loss in performance. Additionally, dynamic quantization requires minimal hyperparameter tuning and is easy to deploy in production. ## A.2 **Further Details On Pruning, Quantization** And Distillation A.2.1 Pruning Neural architecture pruning aims at eliminating redundant parts of neural networks while maintaining model performance. Unstructured pruning removes individual neurons by setting the value of these parameters to zero, whereas structured pruning removes groups of neurons such as layers, attention heads, and so forth. (Sanh et al., 2020) presents a form of unstructured weight pruning in which individual weights can be eliminated to create a sparse network. Although massive reductions in the parameter count are observed, the inference speeds show no such improvement. On the other hand, structured pruning methods (Wang et al., 2020b) achieve faster inference speeds along with a reduction in parameter size. (Lagunas et al., 2021) extend the work of movement pruning to the structured and semi-structured domains. Re- ## A.1.1 Pruning We Adopt The Prune Once For All Or **Prune Ofa** method (Zafrir et al., 2021) as our central pruning strategy. Prune OFA has demonstrated state-ofthe-art performance in terms of compression-toaccuracy ratio for BERT-based models, and it also eliminates the need to conduct task-specific pruning, as the sparse pre-trained language model can be directly fine-tuned on the target task. This simplifies our comparisons, as the same pruned model can be fine-tuned on different datasets. ## A.1.2 Distillation We use the pre-trained distilled variants of base models such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and XLM-R (Conneau et al., 2019), namely DistilBERT (Sanh et al., cently, (Zafrir et al., 2021) showed that integrating pruning during the pre-training of language models gives high-performing sparse pre-trained models, thus removing the burden of pruning for a specific downstream task. ## A.2.2 Distillation Knowledge distillation (KD) (Hinton et al., 2015) has been shown to effectively transfer knowledge from a teacher model to a smaller student model, with a loss function designed to minimize the distance between the features or the outputs of the student and teacher models. Numerous alterations can be made to the KD setup, such as choosing intermediate layers of the teacher model for initializing the student architecture (Sanh et al., 2019), distilling the final attention layer of the teacher transformer architecture (Wang et al., 2020a), introducing bottlenecks for distillation (Sun et al., 2020). However, biases in the teacher model could potentially propagate into the distilled models making it more biased compared to the original teacher model (Silva et al., 2021). ## A.2.3 Quantization Quantization compresses models by representing model weights and/or activations with lower bit precisions. It can also make it possible to carry out inference using integer-only operations, as demonstrated by Kim et al. (2021). There are two main approaches to quantization: post-training quantization, which is applied to a pre-trained model, and quantization-aware training (Zafrir et al., 2019a), which incorporates quantization into the training process in order to mitigate the loss of accuracy that can occur with post-training quantization. ## B Additional Results We have included the results and a brief description for certain monolingual and multilingual measures below. Our decision to include the Equity Evaluation Corpus (EEC) and Log Probability Bias Score (LPBS) metric measures in the appendix is motivated by the fact that both these metrics consist of template-based data lacking concrete fairness objectives, and are therefore not a reflection of harms that can be caused in real-world applications. Recent research (Alnegheimish et al., 2022) has effectively highlighted the sensitivity of templatebased evaluations to the selection and design of templates, which can bias the results. Furthermore, the LPBS is an intrinsic measure, and Aribandi et al. (2021) addresses the instability of likelihood and representation-based model diagnostic measures. Therefore, we advise readers to exercise caution when drawing conclusions from these results. ## B.1 Multilingual Datasets The findings of our multilingual evaluation on the English reviews dataset, comprising reviews obtained from platforms such as Amazon and Yelp, have been presented in Table 12. Additionally, we have included the results of the evaluation of multilingual models on the English version of StereoSet in Table 13, as well as the evaluation of these models for the Crows-Pair dataset for English in Table 14. Ideally, we intended to study the performance of models on datasets with comparable fairness notions or objectives in both monolingual and multilingual contexts. Unfortunately, we encountered limitations in sourcing such datasets, and therefore, we leave this as an avenue for future research. ## C Fine-Tuning Setup For the extrinsic measures, the models are finetuned over a specific training dataset before the fairness evaluation is carried out over the test set. Most of our fine-tuning setups have been derived from previous work (Huang et al., 2020a; Huang, 2022; Câmara et al., 2022). The intrinsic measures do not require a hyperparameter search, as they are evaluated over pre-trained model representations. For the extrinsic measures, we relied on pre-trained SST-2 fine-tuned models available on HuggingFace. We performed fine-tuning solely for all the models trained on the Jigsaw Toxicity Classification dataset, the final details of which are as follows: - Batch Size : 16 - Learning Rate : 1e-4 - Weight decay: 0.01 - Warmup Ratio: 0.06 - Epochs: 5 - Optimizer: AdamW ## C.1 Equity Evaluation Corpus The Equity Evaluation Corpus (Kiritchenko and Mohammad, 2018) is a template-based corpus for evaluating sentiment analysis systems for emotional intensity across four categories (joy, sadness, anger and joy). In this particular task, we measure the Pearson Correlation Coefficient (PCC) of the predictions of these models against the gold label. the It must be noted that previous research (Alnegheimish et al., 2022) indicates that bias evaluation is sensitive to design choices in templatebased data, and that evaluating our models over natural sentence-based datasets would be a better alternative to gauge the impact these models can have. The fairness objective here looks to address the disparity in terms of the PCC across all the models across the different categories of template data. The results have been reported in Table 15. ## C.2 Log Probability Bias Score The Log Probability Bias Score (LPBS) (Kurita et al., 2019) was proposed as a modification to the DisCo metric (Webster et al., 2020). LPBS operates similarly to WEAT, using template sentences (e.g., '[TGT] likes to [ATT]') in which TGT represents a list of target words and ATT represents a list of attributes for which we aim to measure biased associations. The test also accounts for the prior probability of the target attribute, allowing us to evaluate bias solely based on the attributes without being influenced by the prior probability of the target token. The attribute categories that we have taken into consideration are a list of professions, positive words, and negative words (Bartl et al., 2020) (Kurita et al., 2019). The results have been reported in Table 16. Model Name Parameter # Jigsaw EEC AAVE-SAE StereoSet CrowS-Pair LPBS bert-base-uncased 110 M ✓ ✓ ✓ ✓ ✓ ✓ distilbert-base-uncased 66 M ✓ ✓ ✓ ✓ ✓ ✓ miniLM-L12-H384-uncased 33 M ✓ ✓ ✓ ✗ ✗ ✗ bert-base-uncased-85%-pruned 16.5 M ✓ ✓ ✓ ✓ ✓ ✓ bert-base-uncased-90%-pruned 11 M ✓ ✓ ✓ ✓ ✓ ✓ bert-base-uncased-quantized 110 M ✓ ✓ ✓ ✓ ✓ ✓ bert-large-uncased 340 M ✓ ✓ ✓ ✓ ✓ ✓ bert-large-uncased-90%-pruned 34 M ✗ ✓ ✗ ✓ ✓ ✓ bert-large-uncased-quantized 340 M ✓ ✓ ✓ ✓ ✓ ✓ bert-base-multilingual-cased 178 M ✓ ✓ ✗ ✓ ✓ ✓ distilbert-base-multilingual-cased 135 M ✓ ✓ ✗ ✓ ✓ ✓ xlm-roberta-large 560 M ✗ ✗ ✗ ✓ ✓ ✓ multilingual-MiniLM-L12-H384 [xlm-roberta-large] 117 M ✗ ✗ ✗ ✓ ✓ ✓ xlm-roberta-base 278 M ✓ ✓ ✓ ✗ ✗ ✗ multilingual-MiniLM-L12-H384 [xlm-roberta-base] 117 M ✓ ✓ ✓ ✗ ✗ ✗ roberta-base 125 M ✓ ✓ ✓ ✓ ✓ ✓ distilroberta 82 M ✓ ✓ ✓ ✓ ✓ ✓ roberta-base-quantized 125 M ✓ ✓ ✓ ✓ ✓ ✓ Table 9: Details about the models and which metrics they were evaluated for in the monolingual fairness experiments. The parameter counts for the pruned models indicates the total number of non-sparse parameters. Some of the models could not be evaluated for the intrinsic measures due to their architectural setup. Table 10: Parameter count for all the models used for the multilingual fairness evaluation experiments. The parameter counts for the pruned models indicates the total number of non-sparse parameters. These models have been used uniformly for all the multilingual datasets. Table 11: List of all the details pertaining to the fairness metrics used. | Model Name | Parameter # | |-----------------------------------------|---------------| | bert-base-multilingual-cased | 178 M | | distilbert-base-multilingual-cased | 135 M | | bert-base-multilingual-cased-10%-pruned | 160 M | | bert-base-multilingual-cased-50%-pruned | 89 M | | bert-base-multilingual-cased-90%-pruned | 17 M | | bert-base-multilingual-cased-quantized | 178 M | | xlm-roberta-large | 560 M | | multilingual-MiniLM-L12-H384 | 117 M | | Metric | Type of Metric | Downstream Task | Template-Based | Fairness Objective | Dimensions | |--------------------------------------------------------------------|-----------------------------|---------------------------------------------------|-------------------------------------------|-----------------------------------------------------------------------------------------|--------------------------------------| | Monolingual | Multiple [Gender, Religion, | | | | | | Jigsaw Toxicity Unintended Bias Extrinsic | Toxicity Detection | No | Increased likelihood of being classifying | Race/Ethnicity, Sexual Orientation, | | | comment as toxic based on identity group mentions Disability, etc] | | | | | | | Classification | No | Increased likelihood of being classifying comment | | | | | AAVE-SAE | Extrinsic | Sentiment | as negative based on dialect used | Dialect | | | Classification | Yes | Difference in emotion categories for emotional | | | | | EEC | Extrinsic | Sentiment | intensity prediction | Emotional Intensity | | | StereoSet | Intrinsic | N/A | No | Evaluation of model preference for stereotypical | Gender, Race/Ethnicity, Religion, | | sentences | Profession | | | | | | CrowS-Pair | Intrinsic | N/A | No | Evaluation of model preference for stereotypical sentences | Gender, Race/Ethnicity, Religion | | LPBS | Intrinsic | N/A | Yes | Evaluation of model preference for stereotypical associations | Gender | | Multilingual Hate Speech | Extrinsic | Hate Speech Detection | No | Measuring performance across data based on the demographic groups they are sourced from | Age, Gender, Country, Race/Ethnicity | | Classification | No | Measuring performance across data based on the | | | | | Reviews Dataset | Extrinsic | Sentiment | demographic groups they are sourced from | Gender | | | Model | F1-W Avg | AUC-W Avg | Total ED | |---------------------------------------------------|------------|-------------|---------------| | bert-base-multilingual-cased | 0.872 | 0.916 | 0.499 | | distilbert-base-multilingual-cased | 0.868 | 0.914 | 0.350 ↑-0.149 | | bert-base-multilingual-cased-quantized | 0.854 | 0.892 | 0.317 ↑-0.182 | | bert-base-multilingual-cased-10%-pruned | 0.869 | 0.921 | 0.258 ↑-0.241 | | bert-base-multilingual-cased-50%-pruned | 0.865 | 0.918 | 0.313 ↑-0.186 | | bert-base-multilingual-cased-90%-pruned | 0.862 | 0.910 | 0.442 ↑-0.057 | | xlm-roberta-large | 0.908 | 0.947 | 0.290 | | multilingual-MiniLM-L12-H384-distilled-XLMR-Large | 0.839 | 0.898 | 0.402 ↓+0.112 | | xlm-roberta-large-quantized | 0.865 | 0.928 | 0.474 ↓+0.184 | | xlm-roberta-base | 0.787 | 0.900 | 0.349 ↓+0.059 | Table 12: We report the performance of multilingual models and the ED (equality differences) fairness estimate over a set of English reviews sourced from websites such as Amazon, Yelp, etc. The higher the ED, the less fair the model. | Model | Overall ICAT Score | |-----------------------------------------|----------------------| | bert-base-multilingual-cased | 64.94 | | distilbert-base-multilingual-cased | 67.99 ↑+3.05 | | bert-base-multilingual-cased-quantized | 64.78 ↓-0.16 | | bert-base-multilingual-cased-10%-pruned | 67.82 ↑+2.88 | | bert-base-multilingual-cased-50%-pruned | 66.67 ↑+1.73 | | bert-base-multilingual-cased-90%-pruned | 67.00 ↑+2.06 | | xlm-roberta-large | 71.29 | | multilingual-MiniLM-L12-H384 | 52.47 ↓-18.82 | | xlm-roberta-large-quantized | 69.63 ↓-1.66 | Table 13: The overall ICAT score for the multilingual models for the StereoSet (English) dataset. The higher the ICAT score, the less biased the model. | Model | Gender | Race | Religion | |-----------------------------------------|--------------|--------------|--------------| | bert-base-multilingual-cased | 47.71 -2.29 | 44.66 -5.34 | 53.33 +3.33 | | distilbert-base-multilingual-cased | 50.38 +0.38 | 41.94 -8.06 | 53.33 +3.33 | | bert-base-multilingual-cased-quantized | 52.29 +2.29 | 42.72 -7.28 | 52.38 +2.38 | | bert-base-multilingual-cased-10%-pruned | 47.71 -2.29 | 47.57 -2.43 | 58.1 +8.1 | | bert-base-multilingual-cased-50%-pruned | 49.24 -0.76 | 48.54 -1.46 | 56.19 +6.19 | | bert-base-multilingual-cased-90%-pruned | 50.0 0 | 57.48 +7.48 | 53.33 +3.33 | | xlm-roberta-large | 54.41 +4.41 | 51.65 +1.65 | 69.52 +19.52 | | multilingual-MiniLM-L12-H384 | 39.85 -10.15 | 60.39 +10.39 | 47.62 -2.38 | | xlm-roberta-large-quantized | 52.87 2.87 | 57.28 +7.28 | 71.43 +21.43 | Table 14: The results for the CrowS-Pairs metric for multilingual models have been reported, with values closer to 50 indicating less biased models according to this metric. Model Joy Sadness Anger Fear bert-base-uncased 0.600 0.533 0.557 0.552 distilbert-base-uncased 0.623 0.587 0.623 0.565 distilbert-base-uncased-60%-pruned 0.586 0.551 0.585 0.540 miniLM-L12-H384-uncased 0.352 0.195 0.230 0.245 miniLM-L12-H384-uncased-70%-pruned 0.600 0.539 0.573 0.547 bert-base-uncased-85%-pruned 0.550 0.432 0.464 0.478 bert-base-uncased-90%-pruned 0.523 0.418 0.450 0.472 bert-base-uncased-quantized 0.455 0.382 0.383 0.410 bert-base-multilingual-cased 0.506 0.386 0.364 0.408 distilbert-base-multilingual-cased 0.478 0.380 0.328 0.410 xlm-roberta-base 0.491 0.476 0.039 0.354 multilingual-miniLM-L12-H384 0.336 0.012 0.019 0.046 roberta-base 0.495 0.305 0.393 0.450 distilroberta-base 0.540 0.503 0.508 0.557 roberta-base-quantized 0.177 0.230 0.360 0.108 bert-large-uncased 0.545 0.450 0.549 0.503 bert-large-uncased-90%-pruned 0.614 0.476 0.519 0.547 bert-large-uncased-quantized 0.375 0.314 0.356 0.364 Profession **Positive Negative** Table 16: The results for the effect size from the LPBS metric. The higher the effect size (calculated using Cohen's d), the higher the magnitude of bias in the model. Table 17: List of the multilingual datasets and their corresponding languages. | bert-base-uncased | 0.694 | 0.040 | 0.111 | |------------------------------------|---------|---------|---------| | distilbert-base-uncased | 1.113 | 0.279 | 0.218 | | distilbert-base-uncased-60% | 0.206 | 0.422 | 0.361 | | bert-base-uncased-85%-pruned | 1.393 | 0.090 | 0.135 | | bert-base-uncased-90%-pruned | 1.943 | 0.070 | 0.048 | | bert-base-uncased-quantized | 1.116 | 0.102 | 0.006 | | bert-base-multilingual-cased | 1.326 | 0.322 | 0.052 | | distilbert-base-multilingual-cased | 0.660 | 0.005 | 0.053 | | xlm-roberta-large | 2.007 | 0.031 | 0.073 | | multilingual-MiniLM-L12-H384 | 0.667 | 1.739 | 0.028 | | roberta-base | 4.704 | 0.016 | 0.014 | | distilroberta | 6.218 | 0.287 | 0.271 | | roberta-base-quantized | 3.657 | 0.019 | 0.014 | | bert-large-uncased | 0.155 | 0.359 | 0.343 | | bert-large-uncased-90%-pruned | 0.899 | 0.293 | 0.269 | | bert-large-uncased-quantized | 1.861 | 0.115 | 0.082 | | Dataset | Languages | |-----------------------|--------------------| | StereoSet | en | | Crows-Pair | en | | Reviews Dataset | en, fr, de, dk | | Hate Speech Detection | en, pt, es, it, po | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Yes, in the final section of the paper. ✗ A2. Did you discuss any potential risks of your work? No, as our work deals with pointing out the potential risks of certain strategies used to optimize for efficiency that could inadvertently cause models to make more biased decision. We have extensively discussed the limitations and other potential implications of our findings, however. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Yes, Section 1. Section 8 does this as well. ✓ A4. Have you used AI writing assistants when working on this paper? Yes, Grammarly, for ensuring that the content was grammatically correct and coherent. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. ✓ B1. Did you cite the creators of artifacts you used? Yes, we cited the relevant papers/authors that developed these datasets in whichever sections they were brought up. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The datasets we used are open access and not proprietary. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We use the datasets for their intended use i.e to evaluate models. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? The details for the datasets are listed in tables in the Appendix. ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Section 6 Onwards C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Yes, Section 4 onward. We utilized hyperparameters that have been used in previous related research and did not specifically perform a hyperparameter search to find the most optimized model, due to the computational expense required given the extent of our benchmarking. The point of this paper was to evaluate and benchmark fairness across a series of compression methods to provide a conclusive answer as to how compression may affect fairness, and to what extent. ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No, our experiments are based on fairness evaluation and do not require multiple reruns. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We have referenced previous research that proposes evaluation metrics using which we have carried out these evaluations. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
seonwoo-etal-2023-ranking
Ranking-Enhanced Unsupervised Sentence Representation Learning
https://aclanthology.org/2023.acl-long.879
Unsupervised sentence representation learning has progressed through contrastive learning and data augmentation methods such as dropout masking. Despite this progress, sentence encoders are still limited to using only an input sentence when predicting its semantic vector. In this work, we show that the semantic meaning of a sentence is also determined by nearest-neighbor sentences that are similar to the input sentence. Based on this finding, we propose a novel unsupervised sentence encoder, RankEncoder. RankEncoder predicts the semantic vector of an input sentence by leveraging its relationship with other sentences in an external corpus, as well as the input sentence itself. We evaluate RankEncoder on semantic textual benchmark datasets. From the experimental results, we verify that 1) RankEncoder achieves 80.07{\%} Spearman{'}s correlation, a 1.1{\%} absolute improvement compared to the previous state-of-the-art performance, 2) RankEncoder is universally applicable to existing unsupervised sentence embedding methods, and 3) RankEncoder is specifically effective for predicting the similarity scores of similar sentence pairs.
# Ranking-Enhanced Unsupervised Sentence Representation Learning Yeon Seonwoo† *, Guoyin Wang‡, Changmin Seo‡, **Sajal Choudhary**‡, Jiwei Li §, Xiang Li ‡, Puyang Xu ‡, Sunghyun Park ‡, **Alice Oh** † †KAIST, ‡Amazon, §Zhejiang Univeristy yeon.seonwoo@kaist.ac.kr {guoyiwan, changmis, sajalc, lixxiang, puyax, sunghyu}@amazon.com jiwei_li@zju.edu.cn alice.oh@kaist.edu ## Abstract ![0_Image_0.Png](0_Image_0.Png) Unsupervised sentence representation learning has progressed through contrastive learning and data augmentation methods such as dropout masking. Despite this progress, sentence encoders are still limited to using only an input sentence when predicting its semantic vector. In this work, we show that the semantic meaning of a sentence is also determined by nearestneighbor sentences that are similar to the input sentence. Based on this finding, we propose a novel unsupervised sentence encoder, RankEncoder. RankEncoder predicts the semantic vector of an input sentence by leveraging its relationship with other sentences in an external corpus, as well as the input sentence itself. We evaluate RankEncoder on semantic textual benchmark datasets. From the experimental results, we verify that 1) RankEncoder achieves 80.07% Spearman's correlation, a 1.1% absolute improvement compared to the previous state-of-the-art performance, 2) RankEncoder is universally applicable to existing unsupervised sentence embedding methods, and 3) RankEncoder is specifically effective for predicting the similarity scores of similar sentence pairs.1 ## 1 Introduction Sentence representation learning aims to encode sentences into a semantic vector space. This task has been a fundamental task in natural language processing (NLP), as universal sentence vectors are widely applicable to many NLP tasks (Kiros et al., 2015; Hill et al., 2016; Conneau et al., 2017; Logeswaran and Lee, 2018; Cer et al., 2018; Reimers and Gurevych, 2019). Recently, unsupervised sentence embedding methods have arisen as they have shown a potential to overcome limited labeled data with simple data augmentation methods (Gao et al., *This work was done during an internship at Amazon. 1We provide the implementation of RankEncoder at https: //github.com/yeonsw/RankEncoder.git Figure 1: Vector representations of sentences and their neighbor sentences. The neighbor sentences reveal that (*a, c*) share more semantic meanings than (*a, b*). This captures more accurate semantic similarity scores than their vectors. 2021; Wang et al., 2022; Yan et al., 2021; Liu et al., 2021; Wu et al., 2021; Izacard et al., 2021; Kim et al., 2021). These approaches minimize the distance between the vector representations of similar sentences, called positive pairs, while maximizing the distance between those of dissimilar sentences, called negative pairs. Many studies have focused on developing better positive and negative pair sampling methods. Data augmentation methods such as dropout masking (Gao et al., 2021), token shuffling (Yan et al., 2021), and sentence negation (Wang et al., 2022) have been proposed and achieved comparable semantic textual similarity performance to sentence encoders trained on human-annotated datasets. The semantic meaning of a sentence is not only determined by the words within the sentence itself but also by other sentences with similar meanings. However, previous unsupervised sentence embed15783 ding methods use only the input sentence when predicting its semantic vector. Figure 1 shows example sentences and their semantic vector space. In this figure, the human-annotated similarity scores indicate that sentence pair (*a, c*) is more similar than (*a, b*). However, the similarity scores computed by their sentence vectors indicate the opposite result; the vector representations of a and b are closer than a and c as they have more overlapping words than a and c. This problem can be alleviated by leveraging the distance between the input sentence and other sentences in a large corpus. The vectors of their neighbor sentences approximate the overall semantic distribution, and the semantic distribution reveals that sentences a and c are likely to share more semantic meanings than sentences a and b. This facilitates the accurate prediction of semantic similarity between sentences. In this paper, we propose **RankEncoder**, a novel unsupervised sentence encoder that leverages a large number of sentences in an external corpus. For a given corpus with n sentences and an input sentence, RankEncoder computes a rank vector, an n-dimensional vector in which i'th element represents the distance between the input sentence and i'th sentence in the corpus; RankEncoder uses an existing unsupervised sentence encoder, E, to compute the distances. Then, two sentences that share the same neighbor sentences (e.g., sentence a and c in Fig 1) have similar rank vector representations. We verify that using the similarity between rank vectors captures better semantic similarity than their vector representation computed by the base encoder, E, (Fig 4) without further training. We further leverage the similarity scores predicted by the rank vectors to train another sentence encoder and achieve a better sentence encoder (Table 1). From experiments on seven STS benchmark datasets, we verify that 1) rank vectors are effective for capturing the semantic similarity of similar sentences, 2) RankEncoder is applicable to any unsupervised sentence encoders, resulting in performance improvement, and 3) this improvement is also valid for the previous state-of-the-art sentence encoder and leads to a new state-of-the-art semantic textual similarity performance. First, we measure the performance of RankEncoder and the baselines on three sentence pair groups divided by their similarity scores. The experimental results show that RankEncoder is effective on similar sentence pairs. Second, we apply RankEncoder to the three unsupervised sentence encoders, SimCSE (Gao et al., 2021), PromptBERT (Jiang et al., 2022), and SNCSE (Wang et al., 2022), then verify that our approach brings performance improvement to each encoder. Third, we apply RankEncoder to the stateof-the-art unsupervised sentence encoder (Wang et al., 2022) and achieve a 1.1% improvement; the previous state-of-the-art is 78.97 Spearman's correlation, and we achieve 80.07 Spearman's correlation. The contributions of this paper are three folds. First, we demonstrate that the semantic meaning of a sentence is also determined by its nearestneighbor sentences as well as the words within the sentence itself. Second, we propose RankEncoder, which leverages a large number of sentences to capture the semantic meanings of sentences. Third, we achieve state-of-the-art STS performance and reduce the gap between supervised and unsupervised sentence encoders; the performances of our method and the state-of-the-art supervised sentence encoder (Jiang et al., 2022) are 80.07 and 81.97, respectively. ## 2 Related Works Unsupervised sentence representation learning has progressed through contrastive learning with positive and negative sentence pair sampling methods (Gao et al., 2021; Jiang et al., 2022; Chuang et al., 2022; Wang et al., 2022). SimCSE (Gao et al., 2021) and ConSERT (Yan et al., 2021) apply data augmentation methods such as dropout masking, token shuffling, and adversarial attacks to an input sentence and sample a positive pair. However, these data augmentation methods often change the meaning of the input sentence and generate dissimilar positive pairs. A masked language modeling-based word replacement method has been proposed to alleviate this problem (Chuang et al., 2022). They train a sentence encoder to predict the replaced words and make the encoder aware of surface-level augmentations. Some studies adopt momentum contrastive learning to generate positive samples inspired by unsupervised visual representation learning (Zhang et al., 2021; Wu et al., 2021). Prompting (Jiang et al., 2022; Jiang and Wang, 2022) is another direction that is capable of generating positive pairs. Recently, a negative sampling method for data augmentation has been proposed (Wang et al., 2022). This approach takes the negation of an ![2_image_0.png](2_image_0.png) input sentence and uses this soft-negative sample in a contrastive learning framework. Compared to previous approaches focused on generating better positive and negative pairs, our work uses nearestneighbor sentences to predict better semantic vectors of sentences, a novel approach that previous approaches have yet to cover. Related but different from our work, Trans-Encoder proposes the selfdistillation method that gets supervision signals from itself (Liu et al., 2022). Trans-Encoder solves a slightly different problem from ours. They aim to solve an unsupervised sentence pair modeling problem, not unsupervised sentence embedding; although this work does not require any humanannotated similarity scores of sentence pairs for training, they need the sentence pairs of the STS datasets, which are not allowed to be used for training in unsupervised sentence representation learning. ## 3 Method Leveraging the k-nearest-neighbor sentences helps a sentence encoder to approximate a more accurate semantic meaning of a sentence. For instance, when two input sentences have more common neighbors than other sentences, it is likely that they are semantically similar; we have provided an example in Figure 1. We extend this idea to leverage the entire sentences in the corpus, not just the neighbor sentences. Our unsupervised sentence encoder, RankEncoder, computes a rank vector for a given sentence. The rank vector is a list of ranks of all sentences in the corpus computed by their similarity scores to the input; for a given corpus with n number of sentences, a rank vector is an n-dimensional vector, in which i'th element represents the rank of i'th sentence in the corpus. Thus, when two input sentences have common neighbor sentences, their rank vectors are similar. We found that rank vectors capture more accurate semantic similarity than previous unsupervised sentence encoders. Since rank vectors predict better semantic similarity scores between sentences, we use these scores for training another sentence encoder to further increase its STS performance. We provide the overall illustration of RankEncoder in Figure 2. ## 3.1 Contrastive Learning For Base Encoder The first step of our framework is to learn a base sentence encoder E1 via the standard contrastive learning approach (Chen et al., 2020). Given an input sentence xi, we first create a positive example x + i which is semantically similar to xi (Gao et al., 2021; Chuang et al., 2022); we apply each data augmentation method used by existing unsupervised sentence representation learning studies (Gao et al., 2021; Jiang et al., 2022; Wang et al., 2022) and verify that our approach works in all cases. Then, a text encoder, e.g., BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), predicts their sentence vectors, v⃗i and v⃗ + i . Given a batch of m sentences {xi} m i=1, the contrastive training objective for the sentence xi with in-batch negative examples is as follows: $$l_{i}=-\log\frac{e^{\cos(\vec{v}_{i},\vec{v}_{i}^{+})/\tau}}{\sum_{j=1}^{m}e^{\cos(\vec{v}_{i},\vec{v}_{j}^{+})/\tau}},\qquad\qquad(1)$$ ![3_image_0.png](3_image_0.png) where cos(·) is the cosine similarity function and τ is the temperature hyperparameter. We then get the overall contrastive loss for the whole P batch by summing over all the sentences; lcl = m i=1 li. Note that the training objective lcl can be further enhanced by adding other relevant losses (Chuang et al., 2022), transforming the input sentences (Jiang et al., 2022; Gao et al., 2021), or modifying the standard contrastive loss (Zhou et al., 2022). For simplicity, we use lcl to represent all the variants of contrastive learning loss in this paper. By optimizing lcl, we obtain a coarse-grained sentence encoder E1 for the following steps. ## 3.2 Rankencoder RankEncoder computes the orders of sentences in the corpus with their similarity scores to the input sentence. For a given corpus with n sentences, C = [x1*, ..., x*n], and a given base encoder, E1, RankEncoder first computes the vector representation of each sentence in the corpus, V = [v⃗1*, ...,* v⃗n], with E1. Then computes the rank vector of an input sentence, x, by their orders as follows: RankEncoder${}_{E_{1}}(x,\mathcal{V})=g(<r_{1},r_{2},...,r_{n}>)$, where riis the order of sentence xi. We use cosine similarity scores between V and the vector representation of x, E1(x). The function g is a normalization function defined as follows: $$g({\vec{\mathbf{r}}})={\frac{{\vec{\mathbf{r}}}-{\frac{1}{n}}\sum_{i=1}^{n}r_{i}\cdot{\vec{\mathbf{1}}}}{{\sqrt{n}}\times\sigma([r_{i}]_{i=1}^{n})}},\qquad\qquad(3)$$ where σ is the standard deviation of the input values, ⃗1 is a n-dimensional vector of ones. By applying this function to rank vectors, the inner product of two normalized rank vectors becomes equivalent to Spearman's rank correlation, and the similarity is scaled between -1 and 1. We describe the connection between normalization function g and Spearman's rank correlation in Appendix A.1. ## 3.3 Semantic Vector Space Of Rank Vectors The similarity between rank vectors is affected mainly by their neighbor sentences, even though we use the entire sentences in a given corpus. Figure 3 shows a simplified example of RankEncoder's semantic space when the corpus has three sentences. Each solid line represents the boundary that two rank variables are converted. For instance, the yellow line is the boundary that reverses the orders of sentences b and c; all the vectors located in the left part of the yellow line are closer to sentence b than c. Since we have three sentences in this corpus, we get six rank vectors, and all vectors in each region have the same rank vector. In this figure, we see that the vectors in the red area are more affected by sentences a, b, and c than vectors in the grey area. For a given sentence, if its sentence representation lies in the central area, i.e., the red area, then its corresponding rank vector can be easily changed by a small modification of its sentence vector. For vectors having a larger distance from these sentences, e.g., the vectors in the gray area, the corresponding rank vectors are much less sensitive to modification of the input's sentence vector. This pattern also holds when we increase the size of the corpus as well; we demonstrate this in Section 5.5. ## 3.4 Model Training We use similarities predicted by rank vectors to train another sentence encoder, E2 2; rank vectors capture a better semantic similarity than their vector representation computed by base encoder E1. For a given unsupervised sentence encoder E1 and corpus C, we compute similarity scores of all sentence pairs in a batch with their rank vectors computed from E1. The similarity scores are computed by the inner product of these rank vectors. Then, we define the loss function as the mean square error of RankEncoder's similarity scores as follows: $$l_{\mathrm{r}}={\frac{1}{m^{2}}}\sum_{i=1}^{m}\sum_{j=1}^{m}\left({\vec{\mathbf{u}}}_{i}^{\mathsf{T}}{\vec{\mathbf{u}}}_{j}-\cos\bigl(E_{2}(x_{i}),E_{2}(x_{j})\bigr)\right)^{2},$$ where {xi} m i=1 are the sentences in the batch, E2 is the sentence encoder in training, ⃗uiis a rank vector of xi computed by RankEncoderE1 , and cos(·) is the cosine similarity function. Then, we combine the RankEncoder loss, lr, with the standard contrastive loss, lcl, in the form of the hinge loss as follows: $$l_{\mathrm{total}}=m a x(\lambda_{\mathrm{train}}\times l_{r},l_{\mathrm{cl}}),$$ where λtrain is a weight hyperparameter. ## 3.5 Sentence Pair Filtering Previous unsupervised sentence encoders randomly sample sentences to construct a batch, and randomly sampled sentence pairs are mostly dissimilar pairs. This causes sentence encoders to learn mostly on dissimilar pairs, which is less important than similar sentence pairs. To alleviate this problem, we filter dissimilar sentence pairs with a similarity under a certain threshold3. Also, it is unlikely that randomly sampled sentence pairs have the same semantic meaning. We regard sentence pairs with high similarity as noisy samples and filter these pairs with a certain threshold. The final RankEncoder loss function with sentence pair filtering is as follows: $$l_{r}=\sum_{i=1}^{m}\sum_{j=1}^{m}\Big\{\frac{1[\eta_{l}\leq\vec{\mathbf{u}}_{i}^{\mathsf{T}}\vec{\mathbf{u}}_{j}\leq\tau_{u}]}{\sum_{p=1}^{m}\sum_{q=1}^{m}1[\eta_{l}\leq\vec{\mathbf{u}}_{p}^{\mathsf{T}}\vec{\mathbf{u}}_{q}\leq\tau_{u}]}$$ $$\times\Big(\vec{\mathbf{u}}_{i}^{\mathsf{T}}\vec{\mathbf{u}}_{j}-\cos\big{(}E_{2}(x_{i}),E_{2}(x_{j})\big{)}\Big{)}^{2}\Big\},\tag{6}$$ where τl and τu are the thresholding parameters, and 1 is the indicator function that returns 1 when the condition is true and returns 0 otherwise. ## 3.6 Inference We can further utilize RankEncoder in inference stage. Given a sentence pair (xi, xj ), we compute the similarity between the two sentences as follows: $$\begin{array}{c}{{\sin(x_{i},x_{j})=\lambda_{\mathrm{inf}}\cdot\vec{\mathbf{z}}_{i}^{\mathsf{T}}\vec{\mathbf{z}}_{j}}}\\ {{\qquad+(1-\lambda_{\mathrm{inf}})\cdot\cos\bigl(E_{2}(x_{i}),E_{2}(x_{j})\bigr),}}\end{array}\quad(7)$$ where E2 is a sentence encoder trained by Eq 5, λinf is a weight parameter, and ⃗zi and ⃗zj are the rank vectors of xi and xj computed by RankEncoderE2 . $$({\boldsymbol{5}})$$ ## 4 Experimental Setup 4.1 Base Encoder E1 **& Corpus** C RankEncoder computes rank vectors using corpus C and base encoder E1. We use 100,000 sentences sampled from Wikipedia4as the corpus C 5, and we use the following unsupervised sentence encoders for E1, SimCSE (Gao et al., 2021), PromptBERT (Jiang et al., 2022), and SNCSE (Wang et al., 2022). SimCSE is a standard unsupervised sentence encoder that uses a standard contrastive learning loss with the simple data augmentation method. We use SimCSE as it is effective to show the efficacy of RankEncoder. We use PromptBERT and SNCSE, the state-of-the-art unsupervised sentence encoders, to verify whether RankEncoder is effective on more complex models. | Model | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | AVG | |--------------------------------|---------|---------|---------|---------|---------|---------|----------|-------| | ConSERT(Yan et al., 2021) | 64.64 | 78.49 | 69.07 | 79.72 | 75.95 | 73.97 | 67.31 | 72.74 | | SimCSE(Gao et al., 2021) | 68.40 | 82.41 | 74.38 | 80.91 | 78.56 | 76.85 | 72.23 | 76.25 | | DCLR(Zhou et al., 2022) | 70.81 | 83.73 | 75.11 | 82.56 | 78.44 | 78.31 | 71.59 | 77.22 | | ESimCSE(Wu et al., 2021) | 73.40 | 83.27 | 77.25 | 82.66 | 78.81 | 80.17 | 72.30 | 78.27 | | DiffCSE(Chuang et al., 2022) | 72.28 | 84.43 | 76.47 | 83.90 | 80.54 | 80.59 | 71.23 | 78.49 | | PromptBERT(Jiang et al., 2022) | 71.56 | 84.58 | 76.98 | 84.47 | 80.60 | 81.60 | 69.87 | 78.54 | | SNCSE(Wang et al., 2022) | 70.67 | 84.79 | 76.99 | 83.69 | 80.51 | 81.35 | 74.77 | 78.97 | | RankEncoder | 74.88 | 85.59 | 78.61 | 83.50 | 80.56 | 81.55 | 75.78 | 80.07 | Table 1: Semantic textual similarity performance of RankEncoder and baselines in an unsupervised setting. Following previous sentence embedding studies, we measure the Spearman's rank correlation between the human annotated scores and the model's predictions. The results of the baselines are from the original paper. RankEncoder uses SNCSE as base encoder E1. ## 4.2 Datasets & Evaluation Metric We evaluate RankEncoder on seven semantic textual similarity benchmark datasets: STS20122016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS-B (Cer et al., 2017), and SICKRelatedness (Marelli et al., 2014). Each dataset consists of sentence pairs with human-annotated similarity scores. For each sentence pair, sentence encoders predict the similarity, and we measure the Spearman's rank correlation between the predicted similarities and the human-annotated similarities. ## 4.3 Training Details & Hyper-Parameter Settings We train RankEncoder on 106sentences from Wikipedia, following existing unsupervised sentence embedding studies. We use two NVIDIA V100 GPUs for training. The running time for training RankEncoder is approximately 1.5 hours, which takes an hour more than the training time of SimCSE, and its inference takes slightly more time, about 1.8% more than SimCSE based on BERTbase. We provide the details in Appendix A.7. We find the best hyperparameter setting on the development sets of the STS-B and SICKRelatedness datasets. We set λtrain = 0.05, λinf = 0.1, τl = 0.5, and τu = 0.8. We provide more analysis on the hyper-parameter, λtrain, in Appendix A.2. For other hyperparameters, we follow the base encoder's setting provided by the authors of each base encoder, E1. ## 5 Results And Discussions In this section, we demonstrate that 1) RankEncoder is effective for capturing the semantic similarity scores of similar sentences, 2) RankEncoder is universally applicable to existing unsupervised sentence encoders, and 3) RankEncoder achieves state-of-the-art semantic textual similarity (STS) performance. We describe the detailed experimental results in the following sections. ## 5.1 Semantic Textual Similarity Performance We apply RankEncoder to an existing unsupervised sentence encoder and achieve state-of-the-art STS performance. We use SNCSE (Wang et al., 2022) fine-tuned on BERT-base (Devlin et al., 2019) as the base encoder, E1. Table 1 shows the STS performance of RankEncoder and unsupervised sentence encoders on seven STS datasets and their average performance (AVG). RankEncoder increases the AVG performance of SNCSE by 1.1 and achieves state-of-the-art STS performance. RankEncoder brings a significant performance gain on STS12, STS13, STS14, and SICK-R, but a comparably small improvement on STS16 and STSB. We conjecture that this is because RankEncoder is specifically effective on similar sentence pairs. The STS12, 13, 14, and SICK-R datasets contain similar sentence pairs more than dissimilar pairs; we show the similarity distribution of each dataset in Appendix A.3. This pattern is aligned with the performance gain on each STS dataset in Table 1. ## 5.2 Universality Of Rankencoder RankEncoder applies to any unsupervised sentence encoders. We apply RankEncoder to SimCSE (Gao et al., 2021), PromptBERT (Jiang et al., 2022), and SNCSE (Wang et al., 2022). SimCSE represents the vanilla contrastive learning-based sentence encoder, and PromptBERT and SNCSE represent the state-of-the-art unsupervised sentence encoders. We evaluate each encoder's average per- ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) formance (AVG) on seven STS datasets. We train each encoder in three separate trials and report the mean and the standard deviation of the AVG performances in Figure 4; the error bar shows the standard deviation. This figure shows that RankEncoder increases the average STS performance on each unsupervised sentence encoder; the improvements on SimCSE, PromptBERT, and SNCSE are 2.1, 0.9, and 0.9, respectively. We report detailed experimental results in Appendix A.5. This result implies that RankEncoder is a universal method that applies to any unsupervised sentence encoder. ## 5.3 Overlapping Neighbor Sentences In Section 3.3, we conjecture that the RankEncoder is specifically effective for similar sentence pairs as they have more overlapping neighbor sentences, which are used to approximate their semantic similarity. To support this supposition, we show the relation between the performance gain caused by RankEncoder and the number of overlapping neighbor sentences of the input sentences. We group sentence pairs in the STS-B dataset by cosine similarity scores of their sentence vectors, then compare the STS performance of SimCSE and RankEncoder (Eq. 2 without re-training) on each group; we use SimCSE as the base encoder, E1. We also report the average number of overlapping neighbor sentences of each group; for each sentence pair, we count the number of sentences in the intersection of their top 100 nearest neighbor sentences and take the average. Figure 6 shows one expected result of our supposition; the performance gain correlates with the number of overlapping neighbor sentences. ## 5.4 Performance On Similar Sentence Pairs It is more important to accurately predict the similarities between similar texts than those between dissimilar ones. This is because many NLP down- ![7_image_1.png](7_image_1.png) stream tasks, e.g., retrieval and reranking, aim to find the most relevant/similar text (or texts) from candidate texts, and we can easily filter out dissimilar texts with simple approaches such as a lexical retriever; we only need rough similarity scores to identify the dissimilar texts. In this section, we demonstrate the efficacy of our approach on similar sentence pairs. We divide sentence pairs in the STS-B dataset into three groups by their human-annotated similarity scores and use the group with the highest similarity. The similarity range of each group is 0.0-0.33 for the dissimilar groups, 0.33-0.67 for the intermediate group, and 0.67-1.0 for the similar group; we normalize the scores to a 0.0-1.0 scale. Figure 5 shows the performance of three unsupervised sentence encoders and the performance gain brought by each component of RankEncoder. RankEncoderE is the model with Eq. 2 that uses E as the base encoder. RankEncoderE-retrain is the model with re-training (Eq. 5). RankEncoderE-retrain-inf is the model with re-training and weighted inference (Eq. 7). From the comparison between E and RankEncoderE, we verify that rank vectors effectively increase the base encoder's performance on similar sentence pairs. This improvement is even more significant when using rank vectors for retraining and inference. We report the detailed re- ![7_image_0.png](7_image_0.png) ## Sults In Appendix A.6. 5.5 The Vector Space Of Rankencoder In Section 3.3, we show that rank vectors become more distinguishable as the number of sharing neighbor sentences increases. In this section, we demonstrate that this pattern holds for a larger corpus as well. Figure 7 shows the vector representations of randomly sampled 1,000 sentences in the STS-B dataset; Figure 7a is the vector space of PromptBERT, and Figure 7b is the rank vector space. We see that the dense sub-spaces in Figure 7a expand as shown in 7b, and their representations become more distinguishable. Rank vectors improve the uniformity of the semantic vector space with negligible degradation in alignment. Uniformity and alignment are metrics for measuring the quality of embedding vectors (Gao et al., 2021; Wang and Isola, 2020). Uniformity is a measure of the degree of evenness of the embedding vectors. Alignment is a measure of the degree of closeness of the embedding vectors of positive pairs (e.g., sentence pairs with a similarity score higher than 4.0 in the STS-B dataset). We show the uniformity and alignment of each base encoder, E1, and their corresponding rank vectors, RankEncoderE1 , in Table 2. For each base encoder, their rank vectors largely improve uniformity, which is aligned with the results shown in Figure 7. These results also show that rank vectors bring degradation in alignment. However, this degradation is relatively negligible compared to the improvement in uniformity. From these results, we conjecture that the performance improvement on the STS benchmark datasets shown in Figure 4 is mostly related to the improvement in uniformity | Uniformity | Alignment | | |--------------|-------------|------| | SimCSE | -2.42 | 0.21 | | +RankEncoder | -3.23 | 0.23 | | PromptBERT | -1.49 | 0.11 | | +RankEncoder | -3.31 | 0.22 | | SNCSE | -2.21 | 0.16 | | +RankEncoder | -3.20 | 0.21 | Table 2: Uniformity and alignment of base encoders and RankEncoder. Lower is better. ## Rather Than Alignment. 6 Conclusion In this study, we showed that the semantics of a sentence is also determined by its similar sentence, not just the words within the sentence itself. We proposed RankEncoder to overcome the limitation of the previous sentence representation learning approaches, which are limited to using only the input sentence. RankEncoder leverages the distance between the input sentence and the sentences in a corpus to predict its semantic vector. RankEncoder is universally applicable to any unsupervised sentence encoder, resulting in performance improvement, and we demonstrated this with three unsupervised sentence encoders. We achieved state-of-the-art semantic textual similarity performance by applying our approach to the previous best sentence encoder. We also showed that our approach is specifically effective for capturing the semantic similarities of similar sentences. ## 7 Limitations This work has been studied on the Wikipedia corpus, following the standard experimental setting used in previous unsupervised sentence representation learning studies. We expect to see many important findings by investigating sentence representation learning on various corpora in different domains such as Bookcorpus (Zhu et al., 2015) and the C4 corpus (Raffel et al., 2019). ## Acknowledgements This research was supported by the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF2018R1A5A1059921) ## References Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, et al. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In *SemEval*. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In *SemEval*. Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and crosslingual evaluation. In *SemEval*. Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In *SemEval*. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. * sem 2013 shared task: Semantic textual similarity. In *SemEval*. Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In *SemEval*. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. *arXiv*. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *ICML*. Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljaciˇ c, Shang- ´ Wen Li, Wen-tau Yih, Yoon Kim, and James Glass. 2022. Diffcse: Difference-based contrastive learning for sentence embeddings. *arXiv*. Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. *arXiv*. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In *EMNLP*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In *Proceedings of the Third International Workshop* on Paraphrasing. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In *EMNLP*. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In *NAACL*. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD. Gautier Izacard, Mathild Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning. *arXiv*. Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, and Qi Zhang. 2022. Promptbert: Improving bert sentence embeddings with prompts. arXiv. Yuxin Jiang and Wei Wang. 2022. Deep continuous prompt for contrastive learning of sentence embeddings. *arXiv*. Taeuk Kim, Kang Min Yoo, and Sang-goo Lee. 2021. Self-guided contrastive learning for bert sentence representations. In ACL. Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. *NeurIPS*. Fangyu Liu, Yunlong Jiao, Jordan Massiah, Emine Yilmaz, and Serhii Havrylov. 2022. Trans-encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations. In *ICLR*. Fangyu Liu, Ivan Vulic, Anna Korhonen, and Nigel ´ Collier. 2021. Fast, effective, and self-supervised: Transforming masked language models into universal lexical and sentence encoders. In *EMNLP*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv*. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In *ICLR*. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In LREC. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In ACL. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *arXiv*. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In *EMNLP-IJCNLP*. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *EMNLP*. Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In *SIGIR*. Hao Wang, Yangguang Li, Zhen Huang, Yong Dou, Lingpeng Kong, and Jing Shao. 2022. Sncse: Contrastive learning for unsupervised sentence embedding with soft negative samples. *arXiv*. Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *ICML*. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. *Language resources and evaluation*. Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2021. Esimcse: Enhanced sample building method for contrastive learning of unsupervised sentence embedding. *arXiv*. Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. Consert: A contrastive framework for self-supervised sentence representation transfer. In ACL. Yan Zhang, Ruidan He, Zuozhu Liu, Lidong Bing, and Haizhou Li. 2021. Bootstrapped unsupervised sentence representation learning. In ACL. Kun Zhou, Beichen Zhang, Xin Zhao, and Ji-Rong Wen. 2022. Debiased contrastive learning of unsupervised sentence representations. In ACL. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *ICCV*. ## A Appendix A.1 The Connection Between The Normalization Function G **And Spearman'S** Rank Correlation The Spearman'S Rank Correlation Of Two Lists Of Variables, U =< U1, ..., Un > And V =< V1, ..., Vn >, is the Pearson correlation coefficient, ρ, of their ranks, r uand r v, as follows: $$\rho(r^{u},r^{v})=\frac{\sum_{i=1}^{n}\left\{\frac{1}{n}(r_{i}^{u}-\overline{r}^{u})\times(r_{i}^{v}-\overline{r}^{v})\right\}}{\sigma(r^{u})\times\sigma(r^{v})},\tag{8}$$ where r uand r vare the mean of the rank variables, σ(r) uand σ(r) vare the standard deviations of ranks. Then, this can be re-written as follows: $$\rho(r^{u},r^{v})=$$ $$(\frac{1}{\sqrt{n}}(r^{u}-\overline{r}^{u})/\sigma(r^{u}))^{\mathsf{T}}(\frac{1}{\sqrt{n}}(r^{v}-\overline{r}^{v})/\sigma(r^{v})).\tag{9}$$ Thus, the inner product of the two rank vectors after normalization with g is equivalent to the Spearman's rank correlation of the rank variables. ## A.2 Λtrain **Analysis** The RankEncoder loss, lr, brings a large effect to RankEncoder's re-training process even when the weight parameter, λtrain, is set to a small value. In this section, we show that the two losses, lcl and lr, similarly affect to the total loss, ltotal in Eq. 5, when λtrain = 0.05, which is the default setting we use for all experiments in this paper. Figure 8 shows the training loss curves of RankEncoder and SimCSE-unsup with the same random seed. We show the two losses, lcl and lr, of RankEncoder separately. SimCSE-unsup's loss rapidly decreases at the beginning, and converges to a value less than 0.001. We see a similar pattern in the contrastive loss of RankEncoder, which is the same loss function as SimCSE-unsup. In contrast, λtrain ×lr starts from a much lower value than lcl; even without the weight parameter, lr is still much lower than lcl. After few training steps, λtrain × lr converges close to the value of lcl. Given that λtrain determines the scale of two losses of our hinge loss function (Eq. 5), we expect that increasing λtrain brings RankEncoder's loss curve converged to higher than SimCSE's loss. This result shows that λtrain = 0.05 is optimal value that maintaining the RankEncoder's loss curve similar to the base encoder's loss curve, while balancing the weights of the two losses, lcl and lr. The loss curve of a supervised sentence encoder provides a reference point for comparison between the loss curves of unsupervised sentence ![10_image_0.png](10_image_0.png) encoders. In Figure 8, all unsupervised sentence encoders' loss curves show a rapidly decreasing pattern, which implies overfitting in training. To verify whether this pattern comes from unsupervised training, we show the loss curve of the supervised sentence encoder, SimCSE-sup, in Figure 8. In this experiment, we measure the same contrastive loss used in unsupervised sentence encoders but in the SimCSE-sup's fully supervised training process. We see the same pattern also holds for SimCSEsup and verify that the rapidly decreasing pattern is not the problem that only occurs in unsupervised training. ## A.3 Similarity Distribution Of Sts Benchmark Datasets Semantic textual similarity datasets have different similarity distributions. Since RankEncoder is specifically effective for similar sentence pairs, we expect that RankEncoder brings a more performance increase on datasets with more similar sentence pairs. We show the similarity distribution of each STS dataset in Figure 9. In this figure, we normalize the similarity scores between 0 and 1. The result shows that the similarity distributions of STS12, STS14, and SICK-R are skewed to a high similarity score and STS13's similarity distribution has a distinct peak at a high similarity score. From the results in Table 1, we see that RankEncoder is more effective on STS12, STS13, STS14, SICK-R, and show the relation between the performance increase and the similarity distribution of Model MR CR SUBJ MPQA SST TREC MRPC AVG SimCSE 81.18 86.46 94.45 88.88 85.50 89.80 74.43 85.81 SimCSE w/ MLM **82.92** 87.23 **95.71** 88.73 **86.81** 87.01 **78.07** 86.64 RankEncoder-SimCSE w/ MLM 82.14 **87.31** 95.35 **89.05** 86.66 **91.00** 76.06 **86.80** Table 3: Transfer task results of baselines and RankEncoder. We use RankEncoder with base encoder SimCSE. MLM represents that the model is trained by both loss functions: its loss function and the masked language modeling loss used in pre-trained language models such as BERT. We set the weight parameter of the MLM loss function to 0.1. Model STS12 STS13 STS14 STS15 STS16 STS-B SICK-R AVG SimCSE 68.1±3.3 81.4±1.6 73.8±2.4 81.8±1.4 78.3±0.6 77.3±2.3 71.0±0.4 76.0±1.5 + RankEncoder 75.0±0.6 82.0±0.7 75.2±0.2 83.0±0.1 79.8±0.1 80.4±0.6 71.1±1.2 78.1±0.1 PromptBERT 72.1±0.2 84.6±0.3 76.8±0.1 84.2±0.3 80.4±0.3 81.8±0.3 69.5±0.2 78.5±0.0 + RankEncoder 74.2±0.3 85.2±0.2 77.7±0.2 84.4±0.3 80.7±0.5 82.1±0.4 71.2±0.2 79.4±0.2 SNCSE 70.2±0.5 84.1±0.5 77.1±0.4 83.2±0.5 80.7±0.1 80.7±0.6 75.0±0.1 78.7±0.3 + RankEncoder 73.9±0.6 84.5±0.5 78.0±0.3 83.0±0.5 81.0±0.2 81.2±0.2 75.3±0.1 79.6±0.2 ## Each Dataset. A.4 Transfer Tasks We verify that applying our approach to an existing unsupervised sentence encoder increases the performance on transfer tasks. We use the following seven transfer tasks to evaluate sentence embeddings: MR (Pang and Lee, 2005), CR (Hu and Liu, 2004), SUBJ (Pang and Lee, 2004), MPQA (Wiebe et al., 2005), SST (Socher et al., 2013), TREC (Voorhees and Tice, 2000), and MRPC (Dolan and Brockett, 2005). These transfer tasks employ an additional single-layer neural network to transform sentence embeddings into the appropriate output format for a given task. The singlelayer neural network is trained with the training set of each task. We use the SentEval toolkit (Conneau and Kiela, 2018) for evaluation. Table 3 shows the performance of SimCSE and RankEncoder on these transfer tasks. We use SimCSE as the base encoder of RankEncoder. Recently, SimCSE (Gao et al., 2021) has shown that training sentence encoders with auxiliary masked language modeling (MLM) loss enhances their performance on transfer tasks. Inspired by this finding, we use MLM loss when training RankEncoder. The experimental results show that our approach increases the average performance on transfer tasks by 0.16%p. This performance gain is relatively small when we compare it with the performance gain on STS benchmark datasets shown in Table 1. This is because the sentence embedding quality is not directly connected to the objective of transfer tasks (Gao et al., 2021). ## A.5 Universality Of Rankencoder In this section, we report the detailed experimental results of Figure 4. Table 4 shows the results. ## A.6 The Performance Of Rankencoder On Similar Sentence Pairs We Report The Detailed Results Of Figure 5 In Table 5. A.7 Computational Cost In this section, we describe the details of the computational efficiency of RankEncoder. Pre-Computation: We pre-compute sentence vectors of corpus C for training and inference. This takes a few seconds on a single V100 GPU. Training: Most of the additional training time comes from calculating a rank vector similarity matrix (the matrix in Figure 2). First, we calculate a rank vector for every sentence in a given batch. The time complexity of calculating a rank vector is | Base Encoder E1 | | | | |-------------------------------|------------|-------|-------| | SimCSE | PromptBERT | SNCSE | | | E1 | 44.59 | 49.56 | 48.19 | | RankEncoderE1 | 46.73 | 50.06 | 49.44 | | RankEncoderE1 − retrain | 48.41 | 50.75 | 49.80 | | RankEncoderE1 − retrain − inf | 48.73 | 50.93 | 49.92 | O(N × D), where N is the number of pre-indexed vectors, and D is the dimension of the vectors. Assuming a batch size of B, the time complexity of this step is O(B × N × D). Second, we calculate a B × B rank vector similarity matrix. This is O(B × B × N) since the dimension size of the rank vector is N. The total time complexity is O(B×D×N +B×B×N), which is O(B×D× N), assuming the batch size is much smaller than the dimension size. As a result, the total training time is 1.5 hours, 0.5 hours (the base encoder's training time) + 1.0 hours (the additional training time brought by our approach). Inference: RankEncoder's inference process comprises two steps: 1) predicting the sentence vector of an input sentence and 2) computing similarity scores between the input sentence vector and the pre-indexed vectors; we exclude the indexing time since indexing is completed before inference. The first step takes the same inference time as BERT-base (0.07 seconds for a given sentence on a single V100 GPU) as RankEncoder uses BERTbase. The second step entails matrix multiplication of an N × D matrix and a D × 1 matrix (this takes 0.0013 seconds), which takes 1.8% of the whole inference time. Thus, our method increases the inference time by 1.8%. ![13_image_0.png](13_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3, Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 3, Section 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 3, Section 4 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3, Section 4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4, Section 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
skitalinskaya-wachsmuth-2023-revise
To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support
https://aclanthology.org/2023.acl-long.880
Optimizing the phrasing of argumentative text is crucial in higher education and professional development. However, assessing whether and how the different claims in a text should be revised is a hard task, especially for novice writers. In this work, we explore the main challenges to identifying argumentative claims in need of specific revisions. By learning from collaborative editing behaviors in online debates, we seek to capture implicit revision patterns in order to develop approaches aimed at guiding writers in how to further improve their arguments. We systematically compare the ability of common word embedding models to capture the differences between different versions of the same text, and we analyze their impact on various types of writing issues. To deal with the noisy nature of revision-based corpora, we propose a new sampling strategy based on revision distance. Opposed to approaches from prior work, such sampling can be done without employing additional annotations and judgments. Moreover, we provide evidence that using contextual information and domain knowledge can further improve prediction results. How useful a certain type of context is, depends on the issue the claim is suffering from, though.
# To Revise Or Not To Revise: Learning To Detect Improvable Claims For Argumentative Writing Support Gabriella Skitalinskaya1,2 and **Henning Wachsmuth**1 1Institute of Artificial Intelligence, Leibniz University Hannover 2 Department of Computer Science, University of Bremen {g.skitalinska,h.wachsmuth}@ai.uni-hannover.de ## Abstract Optimizing the phrasing of argumentative text is crucial in higher education and professional development. However, assessing whether and how the different claims in a text should be revised is a hard task, especially for novice writers. In this work, we explore the main challenges to identifying argumentative claims in need of specific revisions. By learning from collaborative editing behaviors in online debates, we seek to capture implicit revision patterns in order to develop approaches aimed at guiding writers in how to further improve their arguments. We systematically compare the ability of common word embedding models to capture the differences between different versions of the same text, and we analyze their impact on various types of writing issues. To deal with the noisy nature of revision-based corpora, we propose a new sampling strategy based on revision distance. Opposed to approaches from prior work, such sampling can be done without employing additional annotations and judgments. Moreover, we provide evidence that using contextual information and domain knowledge can further improve prediction results. How useful a certain type of context is, depends on the issue the claim is suffering from, though. ## 1 Introduction Text revision is an essential part of professional writing and is typically a recursive process until a somehow *optimal* phrasing is achieved from the author's point of view. Aside from proofreading and copyediting, text revision subsumes substantive and rhetorical changes not only at the lexical, syntactic, and semantic levels, but also some that may require knowledge about the topic of discussion or about conventions of the domain or genre. An optimal phrasing is especially important in argumentative writing, where it is considered a key component in academic and professional success: An argument's style directly affects its persuasive effect on the audience (El Baff et al., 2020). ![0_image_0.png](0_image_0.png) But how to know whether an argument is phrased well enough and no more revisions are needed? Most existing approaches to argument quality assessment score arguments on different aspects of a topic or compare one to another, rather than detecting issues within arguments to highlight potential improvements (see Section 2 for details). Beyond those, Zhang and Litman (2015) analyze the nature of revisions in argumentative writing. They annotate revisions at various levels to learn to classify changes that occur. Others compare revisions in terms of quality on essay level (Afrin and Litman, 2018) or claim level (Skitalinskaya et al., 2021). Still, the question of whether a given argumentative text should be revised remains unexplored. Figure 1 illustrates the underlying learning problem. What makes research on detecting the need for revision challenging is the noisy and biased nature of revision-based corpora in general and respective argument corpora specifically. Not only is it uncertain whether a text will be revised in the future and how, but also the inherent subjectivity and context dependency of certain argument quality dimensions (Wachsmuth et al., 2017) pose challenges. In this work, we investigate how to best develop approaches that identify argumentative claims in need of further revision, and that decide what type 15799 of revision is most pressing. We delineate the main challenges originating from the nature of revisionbased corpora and from the notion of argument quality. To tackle these challenges, we exploit different types of knowledge specific to text and argument revisions: (a) the number of revisions performed, in the past and the available future, (b) the types of revision performed, such as typo correction vs. clarification, (c) contextual information, such as the main thesis or parent claim in the given debate, (d) topic knowledge, such as debates belonging to the same topical categories, and (e) the nature of revisions and their concordance with training processes of embedding representations. In systematic experiments on a claim revision corpus, we provide evidence that some explored approaches can detect claims in need of revision well even in low-resource scenarios, if appropriate sampling strategies are used. While employing contextual information leads to further improvements in cases where linguistic cues may be too subtle, we find that it may also be harmful when detecting certain types of issues within the claim. We argue that technologies that identify texts in need of revision can highly benefit writing assistance systems, supporting users in formulating arguments in a better way in order to optimize their impact. The main contributions of this paper are: 1. An overview of the main challenges in assessing whether a claim needs revision; 2. a detailed analysis of the strengths and weaknesses of strategies to tackle said challenges, guiding future research on revisions in argumentation and other domains; 3. a systematic comparison of approaches based on different contextualized representations for the tasks of suboptimal-claim detection and claim improvement suggestion.1 ## 2 Related Work Foundational studies of writing specify two main revision sub-processes: evaluating (reading the produced text and detecting any problems) and editing (finding an optimal solution and executing the changes) (Flower et al., 1986). In this work, we focus on the former in the domain of *argumentative texts*. Although considerable attention has been given to the computational assessment of the 1Data, code, and models from our experiments are found at https://github.com/webis-de/ACL-23. quality of such texts, very few works consider the effects of revision behaviors on quality. Existing research largely focuses on the absolute and relative assessment of single quality dimensions, such as cogency and reasonableness (Marro et al., 2022). Wachsmuth et al. (2017) propose a unifying taxonomy that combines 15 quality dimensions. They clarify that some dimensions, such as acceptability and rhetorical effectiveness, depend on the social and cultural context of the writer and/or audience. A number of approaches exist that control for topic and stance (Habernal and Gurevych, 2016), model the audience (Al Khatib et al., 2020), or their beliefs (El Baff et al., 2020), and similar. However, they all compare texts with different content and meaning in terms of the aspects of topics being discussed. While such comparisons help characterize good arguments, they are not geared towards identifying issues within them, let alone towards guiding writers on how to improve the quality of their arguments. The only works we are aware of that study revisions of argumentative texts are those of Afrin and Litman (2018), Skitalinskaya et al. (2021), and Kashefi et al. (2022). The first two suggest approaches that enable automatic assessments of whether a revision can be considered successful, that is, it improves the quality of the argumentative essay or claim. The third extends the corpus of Zhang and Litman (2015) to complement 86 essays with more fine-grained annotations, enabling the distinction of content-level from surface-level revision changes at different levels of granularity. All these approaches to characterizing the type of revision and its quality require two versions as input. In contrast, we seek to identify whether an argumentative text needs to be revised at all and, if so, what type of improvement should be undertaken. In such framing, the solutions to the tasks can also be used to support argument generation approaches, for example, by helping identify weak arguments for counter-argument generation (Alshomary et al., 2021), as well as automated revision approaches, for example, by providing required revision types or weak points as prompts (Hua and Wang, 2020; Skitalinskaya et al., 2022). Due to the lack of corpora where revisions are performed by the authors of texts themselves, researchers utilize collaborative online platforms. Such platforms encourage users to revise and improve existing content, such as encyclopedias (Faruqui et al., 2018), how-to instructions (Anthonio et al., 2020), Q&A sites (Li et al., 2015), and debate portals (Skitalinskaya et al., 2021). Studies have explored ways to automate content regulation, namely text simplification (Botha et al., 2018), detection of grammar errors (Lichtarge et al., 2019), lack of citations (Redi et al., 2019), biased language (De Kock and Vlachos, 2022), and vagueness (Debnath and Roth, 2021). While Bhat et al. (2020) consider a task similar to ours - detecting sentences in need of revision in the domain of instructional texts - their findings do not fully transfer to argumentative texts, as different domains have different goals, different notions of quality, and, subsequently, different revision types performed. Revision histories of peer-reviewed content help alleviate the shortcomings typical for self-revisions, where a writer may fail to improve a text for lack of expertise or skills (Fitzgerald, 1987). Yet, they also introduce new challenges stemming from sociocultural aspects, such as opinion bias (Garimella et al., 2018; Pryzant et al., 2020) and exposure bias (Westerwick et al., 2017). Approaches to filtering out true positive and negative samples have been suggested to tackle such issues. These include community quality assessments, where high quality content is determined based on editor or user ratings and upvotes (Redi et al., 2019; Chen et al., 2018), timestamp-based heuristics, where high-quality labels are assigned to content that has not been revised for a certain time period (Anthonio et al., 2020), and complementary crowdsourced annotation (Asthana et al., 2021). However, all this requires domain-specific information which may not be available in general. In our experiments, we analyze the potential of sampling data solely based on revision characteristics, namely revision distance (the number of revisions between a certain claim version and its final version). Moreover, studies have shown that writing expertise is domain-dependent, revealing commonalities within various professional and academic writing domains (Bazerman, 2016). Although certain quality aspects can be defined and evaluated using explicit rules, norms, and guidelines typical for a domain, not all quality aspects can be encoded in such rules. This raises the need to develop approaches capable of capturing implicit revision behaviors and incorporating additional context relevant to the decision-making process (Flower et al., 1986; Boltužic and Šnajder ´ , 2016). Below, we outline the main challenges stemming from the noisy and biased nature of revision-based corpora as well as from the context dependence of certain argument quality aspects. We then establish potential data filtering and sampling strategies targeting said issues. ## 3 Tasks And Challenges Revision-based data provides many opportunities; yet, it also comes with several challenges that arise at different stages of the experiment design process. In the following, we define the tasks we deal with in this paper, summarize the main challenges, and outline our approaches to these challenges. ## 3.1 Tasks Previous work has studied how to identify the better of two revisions. However, this does not suffice to support humans in optimizing their arguments, as it remains unclear when a claim is phrased optimally. We close this gap by studying two new tasks: Suboptimal-Claim Detection Given an argumentative claim, decide whether it is in need of further revision or can be considered to be phrased more or less optimally (binary classification). Claim Improvement Suggestion Given an argumentative claim, select all types of quality issues from a defined set that should be improved when revising the claim (multi-class classification). Reasons for revision can vary strongly, from the correction of grammatical errors to the clarification of ambiguity or the addition of evidence supporting the claim. In our experiments, we select quality issues sufficiently represented in the given data. ## 3.2 Challenges To tackle the given tasks on revision-based data, the following main challenges need to be faced: - *Data.* Compiling a dataset that is (a) representative and reliable and (b) free of topical bias. - *Model.* Selecting a model for the task whose complexity and architecture fit the data. - *Context.* Incorporating complementary contextual knowledge useful for the tasks at hand. We detail each challenge below and discuss how we approach it in our experiments. Representativity and Reliability Compiling a reliable dataset from claim revision histories that represents argumentative claim quality well is not straightforward. While examples of suboptimal quality are rather easy to find, since each revision signals that something is wrong with the previous version, identifying examples of high (ideally, optimal) quality text remains a challenge. The main reason is that such texts remain unchanged throughout time in collaborative systems, yet the same holds for low-quality texts that may have been overlooked and never revised. Prior work largely employs external information and additional quality assessments to sample representative examples (see Section 2), limiting scalability. In this paper, we complement existing efforts by suggesting a scalable sampling strategy solely based on revision history characteristics, namely revision distance, which denotes the number of revisions that occurred until an optimal (final) state of the claim was reached. The proposed strategy as illustrated in Figure 2 only considers claim histories with 4 or more revisions (chosen empirically). At each revision distance i from 1 to 4, a dataset Di is compiled, where all final versions of claims are considered as positive examples not needing a revision, and claim versions at revision distance i are considered as negative ones. Another problem is identifying flaws that need to be tackled in a revision. Although a claim may suffer from multiple flaws at the same time, not all of them may be eliminated in the same revision. In the dataset introduced in Section 4, revisions may be accompanied by a label describing the type of improvement performed. Still, such labels are skewed towards improvements addressed by the community and do not account for other flaws in the text. To address these issues, we explore three ways of extracting quality labels from revision histories: - We consider the revision distance between positive and negative examples when identifying claims in need of revision (Section 6.2). - We extend the given dataset with examples of claims that were never revised (Section 4). - We frame the improvement suggestion task as a multi-class classification task, where only the next most probable improvement type is predicted. This better reflects the iterative nature of revision processes and accounts for the lack of explicit quality labels (Section 6.5). | v1 | v2 | v3 | v4 | | | |------|------|------|------|---------|----| | v1 | v2 | v3 | | | | | v1 | v2 | v3 | v4 | v5 | v6 | | v1 | v2 | v3 | v4 | v5 | | | D4 | D3 | D2 | D1 | Optimal | | Topical Bias Despite the best efforts, histories of collaborative online revisions may contain noise, be it due to accidental mistakes or biases of users or moderators. Different users may have conflicting opinions on what should or should not be edited, and certain debate topics may be seen as controversial, making it even more difficult to assess the quality of claims and suggest further improvements. Accounting for such bias is inherently difficult and also depends on the prominence of such behaviors in the given community. We do not solve this issue here, but we explore it: - We determine the extent to which bias differs across topical debate categories by assessing performance differences when including claims on specific topics or not (Section 6.3). Model Complexity and Architecture Learning quality differences across several versions of the same argumentative claim likely requires a model whose architecture aligns with the idea of revisions. To determine the best model, we carry out two complementary steps: - We train several types of models of varying complexity, including statistical and neural approaches to both obtaining claim representations and classification (Section 5). - We disentangle how pre-training, fine-tuning, and the final classification affect the performance of claim assessment (Section 6.1). Contextuality As mentioned in Section 2, some quality aspects require domain knowledge. However, determining what kind of information should be included when deciding whether a text needs a revision remains an open question. Some revisions may be typical for debate as a whole, for example, related to a desired structure, layout and style of citations, or choice of words for main concepts in the debate. In such cases, conditioning models on the debate thesis may be beneficial. Others may depend on the parent claim, which is supported or opposed by the claim in question, and affects whether clarifications or edits improving the relevance of the claim are needed, and potentially general domain knowledge as well (Gretz et al., 2020). Therefore, we explore contextuality as follows: - We compare benefits of using contextual debate information of varying specificity when detecting suboptimal claims and recommending revision types (Sections 6.4–6.5). ## 4 Data In our experiments, we use ClaimRev (Skitalinskaya et al., 2021): a corpus of over 124,312 claim revision histories, collected from the online debating platform Kialo,2 which uses the structure of dialectical reasoning. Each debate has a main thesis, which is elaborated through pro and con claims, which allows to consider each comment as a self-contained, relevant argument. Each revision history is a set of claims in chronological order, where each claim represents a revised version of the preceding claim meant to improve a claim's quality, which holds in 93% of all cases according to an annotation study (Skitalinskaya et al., 2021). We extend the corpus by extracting 86,482 unrevised claims from the same set of debates as in ClaimRev, which have been introduced *before* the reported date of data collection (June 26, 2020). Since claims that have been introduced shortly before this date are still likely to receive revisions, we additionally filter out claims that have undergone a revision within six months after the initial data collection (December 22, 2020). We remove all revision histories, where claim versions have been reverted to exclude potential cases of edit wars. Our final corpus is formed by 410,204 claims with 207,986 instances representing optimally phrased claims (positive class) and 202,218 instances requiring revisions (negative class). All claims in need of further refinement are also provided with labels indicating the specific type of improvement the claim could benefit from. In this work, we limit ourselves to the three most common types, covering 95% of all labels revisions in the ClaimRev dataset: clarification, typo/grammar correction, and adding/correcting links. Specifically, 2Kialo, https://www.kialo.com | Subset | Type | # Instances | |-----------------|------------------|---------------| | Positive | Final in history | 121 504 | | Unrevised | 86 482 | | | Negative | Clarification | 61 142 | | Typo/Grammar | 57 219 | | | Links | 17 467 | | | Other/Unlabeled | 66 390 | | | Overall | 410 204 | | clarification means to adjust/rephrase a claim to make it more clear, *typo/grammar correction* simply indicates linguistic edits, and *adding/correcting* links refers to the insertion or editing of evidence in the form of links that provide supporting information or external resources to the claim. Statistics of the final dataset are shown in Table 1. Ensuring that all versions of the same claim appear in the same split, we assign 70% of the histories to the training set and the remaining 30% are evenly split between the dev and test sets. ## 5 Methods To study the two proposed tasks, we consider two experimental settings: (i) extracting claim representations by using embeddings as input to an SVM (Joachims, 2006), (ii) adding a classifier layer on top of pre-trained transformer models with further fine-tuning (FT). In our experiments, we consider the following approaches to generating claim representations: - *Glove* (Pennington et al., 2014). A static word embedding method - *Flair* (Akbik et al., 2018). A contextual character-level embedding method - *BERT* (Devlin et al., 2019). A standard baseline pre-trained transformer - *ELECTRA* (Clark et al., 2020). A transformer with adversarial pre-training fitting our tasks - *DeBERTa* (He et al., 2021). A transformer that achieved state-of-the-art performance on the SuperGLUE benchmark (Wang et al., 2019). ## 6 Experiments Based on the data from Section 4 and the methods from Section 5, we now present a series of experiments aimed at understanding the effects and | Approach | Model | Accuracy Ma. F1 | P | R | F1 | | |-----------------|-----------------|-------------------|------|-----------|-----------|------| | Embed. | Glove | 54.9 | 54.9 | 54.9 | 50.0 | 52.1 | | + SVM | Flair | 60.1 | 60.1 | 60.2 | 56.9 58.5 | | | BERT | 62.1 | 61.8 | 63.5 | 54.7 | 58.8 | | | ELECTRA | 63.2 | 62.9 | 65.1 | 55.0 | 59.6 | | | DeBERTa | 61.5 | 61.2 | 63.2 | 52.9 | 57.6 | | | Fine- | FT-BERT | 63.1 | 61.7 | 70.1 | 44.2 | 54.2 | | tuned | FT-ELECTRA 63.8 | 62.9 | 68.8 | 49.0 | 57.2 | | | FT-DeBERTa | 67.1 | 66.6 | 71.3 | 55.9 62.6 | | | | Random baseline | 50.0 | 50.0 | 50.0 | 50.0 50.0 | | | possible solutions to the four challenges from Section 3: (1) the right model complexity and architecture to capture differences between claim revisions; (2) representative and reliable examples of high and low quality; (3) the impact of topical bias in the data; (4) contextuality, where the quality of a claim may depend on its surrounding claims. ## 6.1 Model Complexity And Architecture First, we explore the ability of the methods to predict whether a claim can be considered as optimal or needs to be revised. We treat all unrevised claims and final versions of claims as not needing revisions and all preceding versions as requiring revisions. Table 2 presents the results of integrating several embeddings with a linear SVM classifier and fine-tuned transformer-based language models. Although we see gradual substantial improvements as we increase the complexity of the models used to generate the word embeddings, the best results (accuracy 67.1, macro F1 66.6) indicate the difficult nature of the task itself. Low results of Glove (both 54.9) indicate that general word cooccurrence statistics are insufficient for the task at hand. And while switching to contextualized word embeddings, such as *Flair*, leads to significant improvements, pre-trained transformers perform best. The difference between the transformer-based models suggests that the pre-training task and attention mechanism of models impact the results notably. Unlike BERT, *ELECTRA* is pretrained on the replaced-token detection task, mimicking certain revision behaviors of human editors (e.g., replacing some input tokens with alternatives). Using ELECTRA boosts accuracy from 62.1 to 63.2 for non-finetuned models and from 63.1 to 63.8 for | Training Subset | D1 | D2 | D3 | D4 | Average | |-------------------|------|------|------|------|-----------| | D1 | 53.8 | 56.3 | 58.1 | 60.5 | 57.2 | | D2 | 55.1 | 58.4 | 60.1 | 63.6 | 59.4 | | D3 | 55.2 | 58.4 | 61.1 | 64.2 | 59.7 | | D4 | 55.5 | 59.0 | 61.8 | 65.6 | 60.5 | | Full training set | 56.8 | 61.2 | 64.3 | 65.8 | 62.0 | fine-tuned ones. *FT-DeBERTa* further improves the accuracy to 67.2, suggesting that also separately encoding content and position information, along with relative positional encodings, make the model more accurate on the given tasks. We point out that, apart from considering alternative pre-training strategies, other sentence embeddings and/or pooling strategies may further improve results. Error Analysis Inspecting false predictions revealed that detecting claims in need of revisions concerning corroboration (i.e., *links*) is the most challenging (52% of such cases have been misclassified). This may be due to the fact that corroboration examples are underrepresented in the data (only 13% of the negative labeled samples). Accordingly, increasing the number of training samples could lead to improvement. In the appendix, we provide examples of false negative and false positive predictions. They demonstrate different cases where claims are missing necessary punctuation, clarification, and links to supporting evidence. ## 6.2 Representativeness And Reliability Next, we explore the relationship between revision distance and data reliability by using the sampling strategy proposed in Section 3. We limit our experiments to revision histories with more than four revisions and compile four datasets, each representing a certain revision distance. We use the same data split as for the full corpus, resulting in 11,462 examples for training, 2554 for validation, and 2700 for testing for each of the sampled datasets. Table 3 shows the accuracy scores obtained by FT-ELECTRA, when trained and tested on each sampled subset Di. Not only does the accuracy increase when training on a subset with a higher revision distance (results per column), but also the same model achieves higher accuracy when classifying more distant examples (results per row). On one hand, this indicates that, the closer the claim is to its optimal version, the more difficult it is to identify flaws. On the other hand, when considering claims of higher revision distance, the model seems capable of distinguishing optimal claims from other improved but suboptimal versions. Comparing the results of training on D4 with those for the full training set, we see that the D4 classifier is almost competitive, despite the much smaller amount of data. For example, the accuracy values on the D4 test set are 65.6 and 65.8, respectively. We conclude that the task at hand can be tackled even in lower-resource scenarios, if a representative sample of high quality can be obtained. This may be particularly important when considering languages other than English, where argument corpora of large scale may not be available. ## 6.3 Topical Bias To measure the effects of topical bias, in a first experiment we compare the accuracy per topic category of *FT-DeBERTa* and *FT-ELECTRA* to detect whether identifying suboptimal claims is more difficult for certain topics. In Table 4, we report the accuracy for the 20 topic categories from Skitalinskaya et al. (2021). We find that the task is somewhat more challenging for debates related to topics, such as justice, *science*, and *democracy* (best accuracy 63.6–65.2) than for *europe* (69.1) or *education* (68.9). We analyzed the relationship between the size of the categories and the models' accuracy, but found no general correlation between the variables indicating that the performance difference does not stem from how represented each topic is in terms of sample size.3 In a second experiment, we evaluate how well the models generalize to unseen topics. To do so, we use a leave-one-category-out paradigm, ensuring that all claims from the same category are confined to a single split. We observe performance drops for both *FT-DeBERTa* and *FT-ELECTRA* in Table 4. The differences in the scores indicate that the models seem to learn topic-independent features that are applicable across the diverse set of categories, however, depending on the approach certain topics may pose more challenges than others, such as *religion* and *europe* for *FT-DeBERTa*. Overall, the experiments indicate that the considered approaches generalize fairly well to unseen topics, however, further evaluations are necessary to assess whether the identified topical bias is due | FT-ELECTRA | FT-DeBERTa | | | | |---------------|--------------|--------|------|--------| | Category | Full | Across | Full | Across | | Education | 67.0 | 66.2 | 68.9 | 68.6 | | Technology | 65.7 | 64.3 | 66.9 | 67.0 | | Philosophy | 65.0 | 65.2 | 67.3 | 67.1 | | Europe | 65.3 | 64.6 | 69.1 | 66.5 | | Economics | 64.8 | 65.1 | 68.0 | 66.2 | | Government | 65.2 | 64.7 | 67.7 | 66.8 | | Law | 64.5 | 62.0 | 67.7 | 65.9 | | Ethics | 64.7 | 64.4 | 67.4 | 66.1 | | Children | 64.2 | 62.0 | 67.2 | 66.0 | | Society | 64.5 | 63.6 | 67.1 | 66.2 | | Health | 65.0 | 64.7 | 68.7 | 66.5 | | Religion | 64.2 | 63.9 | 67.5 | 63.4 | | Gender | 63.4 | 62.9 | 66.8 | 65.0 | | ClimateChange | 63.2 | 62.8 | 66.0 | 63.8 | | Politics | 62.6 | 62.2 | 66.5 | 64.7 | | USA | 62.0 | 62.2 | 65.4 | 64.0 | | Science | 61.9 | 61.0 | 65.2 | 62.8 | | Justice | 60.2 | 58.6 | 63.6 | 61.2 | | Equality | 62.9 | 61.2 | 67.5 | 65.5 | | Democracy | 61.3 | 60.3 | 65.2 | 63.4 | to the inherent difficulty of certain debate topics, or the lack of expertise of participants on the subject resulting in low quality revisions, requiring the collection of additional data annotations. ## 6.4 Contextuality In our fourth experiment, we explore the benefits of incorporating contextual information. We restrict our view to the consideration of the main thesis and the parent claim, each representing context of different levels of broadness. We do so by concatenating the context and claim vector representations in SVM-based models, and by prepending the context separated by a delimiter token when fine-tuning transformer-based methods. Table 5 reveals that, overall, adding context leads to improvements regardless of the method used. Across all approaches, including the narrow context of the parent claim seems more important for identifying suboptimal claims, with the best result obtained by *FT-DeBERTa* (accuracy of 68.6). The results also suggest that classification approaches employing non-finetuned transformer embeddings and contextual information can achieve results comparable to fine-tuned models, specifically models with a high similarity of the pretraining and target tasks (Peters et al., 2019). However, some quality aspects may require more general world knowledge and reasoning capabilities, which | Model | Accuracy | Ma. F1 | P | R | F1 | |-----------------|------------|----------|------|------|------| | Glove+SVM | 54.9 | 54.9 | 54.9 | 50.0 | 52.1 | | + thesis | 55.9 | 55.8 | 55.6 | 53.1 | 54.3 | | + parent | 56.9 | 56.9 | 56.3 | 57.3 | 56.8 | | Flair+SVM | 60.1 | 60.1 | 60.2 | 56.9 | 58.5 | | + thesis | 62.4 | 62.4 | 62.0 | 61.4 | 61.7 | | + parent | 62.8 | 62.8 | 61.9 | 64.4 | 63.1 | | BERT+SVM | 62.1 | 61.8 | 63.5 | 54.7 | 58.8 | | + thesis | 63.5 | 63.4 | 64.2 | 59.0 | 61.5 | | + parent | 63.8 | 63.8 | 64.0 | 61.0 | 62.5 | | ELECTRA+SVM | 63.2 | 62.9 | 65.1 | 55.0 | 59.6 | | + thesis | 65.0 | 64.9 | 66.1 | 60.0 | 62.9 | | + parent | 65.2 | 65.1 | 65.4 | 62.6 | 64.0 | | DeBERTa+SVM | 61.5 | 61.2 | 63.2 | 52.9 | 57.6 | | + thesis | 62.5 | 62.2 | 63.9 | 55.1 | 59.2 | | + parent | 63.3 | 63.2 | 64.0 | 59.0 | 61.4 | | FT-BERT | 63.1 | 61.7 | 70.1 | 44.2 | 54.2 | | + thesis | 64.1 | 63.0 | 70.1 | 47.6 | 56.7 | | + parent | 65.7 | 65.4 | 67.5 | 58.8 | 62.8 | | FT-ELECTRA | 63.8 | 62.9 | 68.8 | 49.0 | 57.2 | | + thesis | 64.4 | 63.5 | 69.2 | 50.4 | 58.2 | | + parent | 64.8 | 64.6 | 66.0 | 59.3 | 62.4 | | FT-DeBERTa | 67.1 | 66.6 | 71.3 | 55.9 | 62.6 | | + thesis | 67.3 | 67.0 | 70.1 | 59.5 | 64.2 | | + parent | 68.6 | 68.4 | 71.4 | 60.8 | 65.7 | | Random baseline | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | could be incorporated by using external knowledge bases. We leave this for future work. ## 6.5 Claim Improvement Suggestion While previous experiments have shown the difficulty of distinguishing between claims in need of improvements and acceptable ones, the aim of this experiment is to provide benchmark models for predicting the type of improvement that a certain claim could benefit from. Here, we limit ourselves to the three most common types of revision: clarification, typo and grammar correction (includes style and formatting edits), and *adding/correcting links* to evidence in the form of external sources. Additional experiments covering an end-to-end setup by extending the classes to include claims that do not need revisions can be found in the appendix. We compare two best performing models from previous experiments, FT-ELECTRA and FT-DeBERTa, on a subset of 135,828 claims, where editors reported any of the three types. Table 6 emphasizes the general benefit of utiliz- | F1-Score | | | | | | |-----------------|----------|--------|---------|------|-------| | Setup | Accuracy | Ma. F1 | Clarif. | Typo | Links | | FT-ELECTRA | 56.0 | 49.0 | 62.4 | 52.4 | 34.5 | | + parent | 56.2 | 50.3 | 62.0 | 53.6 | 35.3 | | + thesis | 57.5 | 52.0 | 63.4 | 54.4 | 38.3 | | FT-DeBERTa | 59.9 | 55.4 | 63.7 | 60.2 | 42.5 | | + parent | 60.3 | 56.0 | 63.6 | 61.2 | 43.0 | | + thesis | 62.0 | 57.3 | 65.2 | 63.1 | 43.4 | | Random baseline | 33.4 | 31.4 | 38.5 | 33.4 | 45.3 | ing contextual information for claim improvement suggestion. Though, depending on the specific revision type, the addition of contextual information can both raise and decrease performance. For example, despite the slightly improved overall accuracy of considering the parent claim as context, the F1score for *clarification* edits drops for all considered approaches (from 63.7 to 63.6 for FT-DeBERTa and from 62.4 to 62.0 for FT-ELECTRA). On the other hand, in the case of *links*, both types of contextual information lead to increased F1-scores. Generally, we notice that opposed to the task of suboptimal-claim detection, providing the main thesis of the debate leads to higher score improvements overall. When comparing the approaches directly, we observe that *FT-DeBERTa* consistently outperforms *ELECTRA+SVM* in accuracy, achieving 62.0 when considering the main thesis. Overall, our experiments indicate that to identify whether certain approaches to generating text representations are more suitable than others, one needs to consider the relationships between improvement type and context as well. In future work, we plan to focus on the problem of further defining and disentangling revision types to enable a deeper analysis of their relationships with contextual information. Error Analysis Inspecting false predictions of the best performing model (FT-DeBERTa) revealed that the typo/grammar correction class seems to be confused frequently with both the clarification class and the links class (see the appendix for a confusion matrix). Our manual analysis suggests that editors frequently tackle more than one quality aspect of a claim in the same revision, for example, specifying a claim and fixing grammatical errors, or, removing typos from a link snippet. In the collected dataset, however, the revision type label in such cases would reflect only one class, such as clarification or *adding/correcting links*, respectively. These not fully accurate labels reduce the models ability to properly distinguish said classes. We provide examples of misclassifications obtained by FT-DeBERTa in the appendix, illustrating cases where both the true label and the predicted label represent plausible revision type suggestions. ## 7 Conclusion Most approaches to argument quality assessment rate or compare argumentative texts that cover different aspects of a topic. While a few works studied which of two revisions of the same argumentative text is better, this does not suffice to decide whether a text actually needs revisions. In this paper, we have presented two tasks to learn *when* and how to improve a given argumentative claim. We have delineated the main challenges of revision-based data, covering issues related to the representativeness and reliability of data, topical bias in revision behaviors, appropriate model complexities and architectures, and the need for context when judging claims. In experiments, we have compared several methods based on their ability to capture quality differences between different versions of the same text. Despite a number of limitations (discussed below), our results indicate that, in general, revision-based data can be employed effectively for the given tasks, contributing towards solutions for each of the considered challenges. Specifically, our suggested sampling strategy revealed that training on claim versions with a higher revision distance between them improves the performance when detecting claims in need of improvement. Moreover, we found that the impact of the available types of contextual information is not only task-dependent but also depends on the quality issue that a claim suffers from. We argue that the developed approaches can help assist automated argument analysis and guide writers in improving their argumentative texts. With our work, we seek to encourage further research on improving writing support not only in debate communities but in educational settings as well. ## Acknowledgments We thank Andreas Breiter for his valuable feedback on early drafts, and the anonymous reviewers for their helpful comments. This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project number 374666841, SFB 1342. ## Limitations A limitation of our work is that we cannot directly apply our methods to the few existing revisionbased corpora from other domains (Yang et al., 2017; Afrin and Litman, 2018; Anthonio et al., 2020) for multiple reasons: On the one hand, those corpora do not contain histories with more than one revision but only before-after sentence pairs). Some also consist of less than 1000 sentence pairs, rendering the quantitative experiments considered in this paper pointless. On the other hand, additional metadata useful for our analysis (e.g., revision types and contextual information) is either not available at all or only for a limited number of instances that is insufficient for training models. Furthermore, the methods we evaluated utilize distantly supervised labels based on the assumption that each revision improves the quality of the claim and additional annotations provided by human editors. These annotations suffer from being coarse-grained, consisting of mainly three classes. However, each of the improvement types can be represented by several more fine-grained revision intentions. A point that we did not consider as part of this work is whether certain revisions can affect or inform future revisions within the same debate, for example, rephrasing of arguments to avoid repetition or ensuring that all claims use the same wording for the main concepts. Often, such relationships are implicit and cannot be derived without additional information provided by the user performing the revision. We believe that collecting datasets and developing approaches, which enable distinguishing more fine-grained types of edits and implicit relationships, could not only enable deeper analysis and training more fine-grained improvement suggestion models, but also allow for better explanations to end users. However, it should be noted that some of the considered methods rely on deep learning and have certain limitations when it comes to underrepresented classes, where the number of available training instances is very low. This is especially important when considering the task of claim improvement suggestion. We also point out in this regard that we only use the base versions of the BERT, ELECTRA, and DeBERTa models due to resource constraints. The results may vary, if larger models are used. While common types of improvements likely differ across other domains and communities, we stress that our approaches are entirely data-driven, and are not tied to any specific quality definition. Therefore, we expect our data processing and filtering methods as well as the considered approaches to be applicable to other domains, where historical collaborative editing data similar to ours is available. When it comes to practice, several issues require further investigation, such as how to integrate recommendations in collaborative editing and educational environments, whether the recommended improvements will be accepted by users, and how they may impact the users' behavior. We leave these questions for future work. ## Ethical Considerations Online collaborative platforms face challenging ethical problems in maintaining content quality. On the one hand, they need to preserve a certain level of free speech to stimulate high quality discussions, while implementing regulations to identify editing behaviors defined as inappropriate. On the other hand, distinguishing such legitimate forms of regulation from illegitimate censorship, where particular opinions and individuals are suppressed, is a challenge of its own. Our work is intended to support humans in different scenarios, including the creation or moderation of content on online debate platforms or in educational settings. In particular, the presented approaches are meant to help users by identifying argumentative claims in need of further improvements and suggesting potential types of improvements, so they can deliver their messages effectively and honing their writing skills. However, the presented technology might also be subject to intentional misuse, such as the above-mentioned illegitimate censorship. While it is hard to prevent such misuse, we think that the described scenarios are fairly unlikely, as such changes tend to be noticed by the online community quickly. Moreover, the source of the used data (online debate platform Kialo) employs thorough content policies and user guidelines aimed at dealing with toxic behaviors, censorship, and discrimination. However, we suggest that follow-up studies stay alert for such behaviors and carefully choose training data. ## References Tazin Afrin and Diane Litman. 2018. Annotation and classification of sentence-level revision improvement. In *Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications*, pages 240–246, New Orleans, Louisiana. Association for Computational Linguistics. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In *COLING 2018, 27th International Conference on* Computational Linguistics, pages 1638–1649. Khalid Al Khatib, Michael Völske, Shahbaz Syed, Nikolay Kolyada, and Benno Stein. 2020. Exploiting personal characteristics of debaters for predicting persuasiveness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7067–7072, Online. Association for Computational Linguistics. Milad Alshomary, Shahbaz Syed, Arkajit Dhar, Martin Potthast, and Henning Wachsmuth. 2021. Counterargument generation by attacking weak premises. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1816–1827, Online. Association for Computational Linguistics. Talita Anthonio, Irshad Bhat, and Michael Roth. 2020. wikiHowToImprove: A resource and analyses on edits in instructional texts. In *Proceedings of the* 12th Language Resources and Evaluation Conference, pages 5721–5729, Marseille, France. European Language Resources Association. Sumit Asthana, Sabrina Tobar Thommel, Aaron Lee Halfaker, and Nikola Banovic. 2021. Automatically labeling low quality content on wikipedia by leveraging patterns in editing behaviors. 5(CSCW2). Charles Bazerman. 2016. What do sociocultural studies of writing tell us about learning to write. Irshad Bhat, Talita Anthonio, and Michael Roth. 2020. Towards modeling revision requirements in wikiHow instructions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8407–8414, Online. Association for Computational Linguistics. Filip Boltužic and Jan Šnajder. 2016. ´ Fill the gap! analyzing implicit premises between claims from online debates. In Proceedings of the Third Workshop on Argument Mining (ArgMining2016), pages 124–133, Berlin, Germany. Association for Computational Linguistics. Jan A. Botha, Manaal Faruqui, John Alex, Jason Baldridge, and Dipanjan Das. 2018. Learning to split and rephrase from Wikipedia edit history. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 732–737, Brussels, Belgium. Association for Computational Linguistics. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. *arXiv* preprint arXiv:1312.3005. Chunyang Chen, Xi Chen, Jiamou Sun, Zhenchang Xing, and Guoqiang Li. 2018. Data-driven proactive policy assurance of post quality in community q&a sites. Proceedings of the ACM on human-computer interaction, 2(CSCW):1–22. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. In *International Conference on Learning Representations*. Christine De Kock and Andreas Vlachos. 2022. Leveraging Wikipedia article evolution for promotional tone detection. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5601–5613, Dublin, Ireland. Association for Computational Linguistics. Alok Debnath and Michael Roth. 2021. A computational analysis of vagueness in revisions of instructional texts. In *Proceedings of the 16th Conference* of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 30–35, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Roxanne El Baff, Henning Wachsmuth, Khalid Al Khatib, and Benno Stein. 2020. Analyzing the Persuasive Effect of Style in News Editorial Argumentation. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 3154–3160, Online. Association for Computational Linguistics. Manaal Faruqui, Ellie Pavlick, Ian Tenney, and Dipanjan Das. 2018. WikiAtomicEdits: A multilingual corpus of Wikipedia edits for modeling language and discourse. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 305–315, Brussels, Belgium. Association for Computational Linguistics. Jill Fitzgerald. 1987. Research on revision in writing. Review of educational research, 57(4):481–506. Linda Flower, John R Hayes, Linda Carey, Karen Schriver, and James Stratman. 1986. Detection, diagnosis, and the strategies of revision. *College composition and Communication*, 37(1):16–55. Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Quantifying controversy on social media. *Trans. Soc.* Comput., 1(1). Shai Gretz, Roni Friedman, Edo Cohen-Karlik, Assaf Toledo, Dan Lahav, Ranit Aharonov, and Noam Slonim. 2020. A large-scale dataset for argument quality ranking: Construction and analysis. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 7805–7813. Ivan Habernal and Iryna Gurevych. 2016. Which argument is more convincing? analyzing and predicting convincingness of web arguments using bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1589–1599, Berlin, Germany. Association for Computational Linguistics. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In *International* Conference on Learning Representations. Xinyu Hua and Lu Wang. 2020. PAIR: Planning and iterative refinement in pre-trained transformers for long text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 781–793, Online. Association for Computational Linguistics. Thorsten Joachims. 2006. Training linear svms in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '06, page 217–226, New York, NY, USA. Association for Computing Machinery. Omid Kashefi, Tazin Afrin, Meghan Dale, Christopher Olshefski, Amanda Godley, Diane Litman, and Rebecca Hwa. 2022. Argrewrite v. 2: an annotated argumentative revisions corpus. *Language Resources* and Evaluation, pages 1–35. Guo Li, Haiyi Zhu, Tun Lu, Xianghua Ding, and Ning Gu. 2015. Is it good to be like wikipedia? exploring the trade-offs of introducing collaborative editing model to q&a sites. In *Proceedings of the 18th* ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW '15, page 1080–1091, New York, NY, USA. Association for Computing Machinery. Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, and Simon Tong. 2019. Corpora generation for grammatical error correction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3291–3301, Minneapolis, Minnesota. Association for Computational Linguistics. Santiago Marro, Elena Cabrio, and Serena Villata. 2022. Graph embeddings for argumentation quality assessment. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 4154–4164, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In *Proceedings of* the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7–14, Florence, Italy. Association for Computational Linguistics. Reid Pryzant, Richard Diehl Martinez, Nathan Dass, Sadao Kurohashi, Dan Jurafsky, and Diyi Yang. 2020. Automatically neutralizing subjective bias in text. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01):480–489. Miriam Redi, Besnik Fetahu, Jonathan Morgan, and Dario Taraborelli. 2019. Citation needed: A taxonomy and algorithmic assessment of wikipedia's verifiability. In *The World Wide Web Conference*, WWW '19, page 1567–1578, New York, NY, USA. Association for Computing Machinery. Gabriella Skitalinskaya, Jonas Klaff, and Henning Wachsmuth. 2021. Learning from revisions: Quality assessment of claims in argumentation at scale. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1718–1729, Online. Association for Computational Linguistics. Gabriella Skitalinskaya, Maximilian Spliethöver, and Henning Wachsmuth. 2022. Claim optimization in computational argumentation. Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017. Computational argumentation quality assessment in natural language. In *Proceedings of the 15th Conference of the European Chapter of the Association* for Computational Linguistics: Volume 1, Long Papers, pages 176–187. Association for Computational Linguistics. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. In *Advances in Neural Information* Processing Systems, volume 32. Curran Associates, Inc. Axel Westerwick, Benjamin K. Johnson, and Silvia Knobloch-Westerwick. 2017. Confirmation biases in selective exposure to political online information: Source bias vs. content bias. *Communication Monographs*, 84(3):343–364. Diyi Yang, Aaron Halfaker, Robert Kraut, and Eduard Hovy. 2017. Identifying semantic edit intentions from revisions in Wikipedia. In *Proceedings of the* 2017 Conference on Empirical Methods in Natural Language Processing, pages 2000–2010, Copenhagen, Denmark. Association for Computational Linguistics. Fan Zhang and Diane Litman. 2015. Annotation and classification of argumentative writing revisions. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 133–143, Denver, Colorado. Association for Computational Linguistics. ## A Implementation And Training Details A.1 Generating Embeddings All claim embeddings were generated using the flair library,4 via DocumentPoolEmbeddings for non-transformer-based models, such as Glove and Flair, or TransformerDocumentEmbeddings for BERT and ELECTRA embeddings. Glove + SVM We derived claim representations by averaging the obtained word representations and feed them as input to a linear SVM (Joachims, 2006). We initialized the 100-dimensional word embeddings pretrained on Wikipedia data ("glovewiki-gigaword-100"). Flair + SVM We used the 2,048-dimension "news-forward" embeddings, produced by a forward bi-LSTM, trained on the One Billion Word Benchmark (Chelba et al., 2013) and feed the obtained embeddings to a linear SVM classifier. BERT We use the case-sensitive pre-trained version (bert-base-cased). ## A.2 Training Svm Models For faster convergence when dealing with a large number of samples, we use a SVM with a linear kernel, specifically, LinearSVC, as implemented in the sklearn library.5 We set maximum iterations to 1000 and choose the regularization parameter out of {0.001, 0.01, 0.1, 1, 10}. ## A.3 Fine-Tuning Transformer-Based Models We used the *bert-base-cased* pre-trained BERT version (110M parameters), the *electra-basediscriminator* pre-trained ELECTRA version (110M parameters), and the *deberta-base* pretrained DeBERTA version (140M parameters) as implemented in the huggingface library.6 We set the maximum sequence length to 128 and 256 tokens, depending whether contextual information was used or not. We trained for a maximum of five epochs using the Adam optimizer with a warmup of 10000 steps and linear learning rate scheduler. We chose the learning rate out of {5e-7, 1e-6, 5e6, 1e-5, 5e-4} and found that 1e-5 works best for BERT and DeBERTa, and 1e-6 - for ELECTRA. 4flair, https://github.com/flairNLP/flair 5sklearn SVM, https://scikit-learn.org/ stable/modules/generated/sklearn.svm. LinearSVC.html\#sklearn.svm.LinearSVC 6Huggingface transformers, https://huggingface. co/transformers/pretrained_models.html In all experiments, the batch size was set to 8. The training time on one RTX 2080Ti GPU was 80–160 minutes, depending on the chosen setup (with or without context information). ## A.4 Data And Models All dataset extensions and trained models are available under the CC-BY-NC license. ## B Prediction Outputs B.1 Suboptimal Claim Detection Table 9 provides examples of false negative and false positive predictions obtained by FT-DeBERTa (without considering context) illustrating common patterns found in the results. ## B.2 Claim Improvement Suggestion Table 7 presents the confusion matrix of predictions made by FT-DeBERTa (without considering context) illustrating misclassification patterns found in the results. Table 10 provides examples of misclassifications obtained by the best performing model (FTDeBERTa), illustrating cases where both the true class label and the predicted class label represent plausible revision type suggestions. ## B.3 End-To-End Setup Table 8 provides extended performance results obtained by approaches using ELECTRA and DeBERTA in an end-to-end setup, where both optimal claim detection and improvement suggestion tasks are combined into one multiclass classification task with four classes: *optimal* (claim does not need revisions), needs *clarification*, needs *typo* and/or grammar correction, needs editing of *links*. The results suggest that in such setup it is highly difficult to detect claims requiring clarification edits (F1-scores of 15.3 (*FT-DeBERTa* with parent) and 1.5 (*FT-ELECTRA* with parent). Such low scores can be partially explained by (a) the high diversity of changes included in the class compared to *typo* and *links* classes, (b) the high imbalance of the data (percentage of samples per class: clarification (18%), typo (17%), links (5%), and optimal (60%)). Table 6 emphasizes the general benefit of utilizing contextual information, however, similar to the results obtained in the task of claim improvement suggestion, depending on the specific revision type, the addition of contextual information can both | Predicted | | | | |---------------|------------|------------|------------| | Clarification | Typo | Links | | | Clarification | 5884 (.64) | 2593 (.28) | 709 (.08) | | Typo | 2788 (.33) | 5214 (.61) | 483 (.06) | | Links | 1020 (.39) | 544 (.21) | 1067 (.41) | ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) Table 7: Claim improvement suggestion: Confusion matrix obtained by FT-DeBERTa without using context. | F1-Score | | | | | | |-----------------|--------------------------------------------|------|-----------|------|------| | Setup | Accuracy Ma. F1 Clarif. Typo Links Optimal | | | | | | FT-ELECTRA | 62.7 | 32.4 | 0.0 33.6 | 19.1 | 76.8 | | + parent | 62.9 | 33.0 | 0.0 33.5 | 21.2 | 77.1 | | + thesis | 63.3 | 34.1 | 1.5 36.8 | 20.7 | 77.3 | | FT-DeBERTa | 64.2 | 39.8 | 9.4 43.0 | 28.9 | 78.0 | | + parent | 64.8 | 40.3 | 9.1 45.4 | 28.1 | 78.5 | | + thesis | 65.5 | 42.7 | 15.3 47.0 | 29.6 | 78.8 | | Random baseline | 25.0 | 21.1 | 20.8 19.8 | 8.4 | 35.5 | raise and decrease performance. Particularly, we observe decreased performance in *FT-DeBERTa* when detecting *clarifications* and *link* corrections while considering the parent claim as context. On the other hand, in the case of *typo/grammar* and *optimal* claims, both types of contextual information lead to increased F1-scores. Generally, we notice that similar to the task of claim improvement suggestion, providing the main thesis of the debate leads to higher score improvements overall. As indicated previously, further defining and disentangling revision types along with their relationships to contextual information could further benefit not only our understanding of revision processes in argumentative texts and their relationship to quality, but also help overcome modeling limitations identified in this paper. ## C Figures C.1 Topical Categories Figure 3 depicts the relationship between how represented the topical category is in the corpus and the achieved prediction accuracy by FT-ELECTRA in the cross-category setting using a leave-one-outstrategy. | False Positives | False Negatives | | | | | | | |------------------------------------------------------|----------------------------------------------------------------------------------------------|----|----------|------|-----|---------|----------------------------------------------------| | The HPV virus is harmful. (Clarif) | can be dangerous for bikers | | | | | | | | Vertically farming is healthier for people. (Clarif) | Women are healthier than men | | | | | | | | There | would | be | disputed | over | the | leaders | I can't support this. The math is way off. We have | | (Typo/Grammar) | 15X the population and 55X the homicide rate. | | | | | | | | The world is becoming too populated anyway. (Style) | People are likely to forget distressing memories. | | | | | | | | The Czech Republic is funding travel TV shows in | The police of every country have abused their authority | | | | | | | | Korea. (Links) | systemically at some point in history | | | | | | | | A number of recreational drugs may have health benefits. (Links) | Podcasts cannot include music due to copyright issues, so they cannot replace radio entirely | | | | | | | Table 9: Examples of False Positive and False Negative predictions obtained by FT-DeBERTa (without considering context). The true class for False Positives is reflected in the brackets at the end of each claim. | Claim | True Label | Predicted Label | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|-------------------| | Freedom of speech is exceptionally good in the US, despite a recent decline | clarif | links | | in its acceptance Muslim women must remove their burkas for their driver's license. | clarif | links | | Voluntary help is beneficial to Germany | clarif | gram | | indecent exposure violated the right of free expression, and is therefore an | clarif | gram | | illegal law. Public restrooms should be gender neutral. | clarif | gram | | Not all platforms aid terrorists' cause. Those who do not will not be censored | typo | clarif | | or shut down. The use of nuclear weapons was required in order to end the Pacific War | typo | clarif | | between the US and Japan. Nuclear weapons have spread to politically unstable states, for example Pakistan which experienced stagflation during the 1990s, a military coup in 1999 as well as a unsuccessful coup attempt in 1995. | typo | links | | Many of the animals are now extinct, such as mammoths, mastodons, aurochs, | typo | links | | cave bears ect. For example, the one who will have more than one wife, should equally treat | links | clarif | | all his wives.[Link](http://islamqa.info/en/14022) Before the nuclear bombs were dropped 70% of suitable targets had already | links | typo | | been completely destroyed by conventional bombing. For the Spanish bullfighting is a way to reconnect to old, traditional and great | links | gram | | Spain and therefore a major source of identity. DDOS attacks are the online equivalent of a sit-in. | links | clarif | | Table 10: Examples of misclassifications obtained by TF-DeBERTa (without considering context). | | | ![15_image_0.png](15_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6.1, 6.2, 6.3, 6.4, 6.5, Limitations, and Ethical considerations ✓ A2. Did you discuss any potential risks of your work? Ethical considerations ✓ A3. Do the abstract and introduction summarize the paper's main claims? abstract, Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 4,5 ✓ B1. Did you cite the creators of artifacts you used? Sections 4,5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5, Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5, Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
mendes-etal-2023-human
Human-in-the-loop Evaluation for Early Misinformation Detection: A Case Study of {COVID}-19 Treatments
https://aclanthology.org/2023.acl-long.881
We present a human-in-the-loop evaluation framework for fact-checking novel misinformation claims and identifying social media messages that support them. Our approach extracts check-worthy claims, which are aggregated and ranked for review. Stance classifiers are then used to identify tweets supporting novel misinformation claims, which are further reviewed to determine whether they violate relevant policies. To demonstrate the feasibility of our approach, we develop a baseline system based on modern NLP methods for human-in-the-loop fact-checking in the domain of COVID-19 treatments. We make our data and detailed annotation guidelines available to support the evaluation of human-in-the-loop systems that identify novel misinformation directly from raw user-generated content.
# Human-In-The-Loop Evaluation For Early Misinformation Detection: A Case Study Of Covid-19 Treatments ## Ethan Mendes, Yang Chen, Wei Xu, Alan Ritter Georgia Institute of Technology {emendes3, yangc}@gatech.edu {wei.xu, alan.ritter}@cc.gatech.edu ## Abstract We present a human-in-the-loop evaluation framework for fact-checking novel misinformation claims and identifying social media messages that support them. Our approach extracts check-worthy claims, which are aggregated and ranked for review. Stance classifiers are then used to identify tweets supporting novel misinformation claims, which are further reviewed to determine whether they violate relevant policies. To demonstrate the feasibility of our approach, we develop a baseline system based on modern NLP methods for human-in-the-loop fact-checking in the domain of COVID-19 treatments. We make our data1and detailed annotation guidelines available to support the evaluation of human-in-the-loop systems that identify novel misinformation directly from raw usergenerated content. ## 1 Introduction As many people now get information from social networking websites such as Facebook and Twitter, misinformation has become a serious societal problem. To address this, social media companies have spent billions on content moderation.2 Prior work on developing natural language processing systems to combat misinformation has mainly focused on various sub-tasks (Lee et al., 2021; Guo et al., 2022), including claim detection (Eger et al., 2017; Li et al., 2022), evidence retrieval (Jiang et al., 2020; Samarinas et al., 2021; Wan et al., 2021; Aly and Vlachos, 2022), fact verification (Aly et al., 2021; Wu et al., 2022; Chen et al., 2022; Gu et al., 2022), stance classification (Thorne et al., 2017; Conforti et al., 2018; Li et al., 2019), and fallacy recognition (Alhindi et al., 2022). Researchers have also attempted to perform early detection of novel misinformation claims (Yue et al., 2022), as it 1https://github.com/ethanm88/hitl-eva luation-early-misinformation-detection 2https://www.cnbc.com/2021/02/27/cont ent-moderation-on-social-media.html is crucial for supporting early interventions such as pre-bunking (Lewandowsky and Van Der Linden, 2021). However, evaluations are often set up automatically using datasets that were retrospectively constructed based on a predefined set of debunked claims. Recent work by Glockner et al. (2022) presented convincing evidence that existing NLP factchecking pipelines are unsuitable for detecting novel real-world misinformation. They show these systems rely on leaked counter-evidence from news sources that have already fact-checked the claim. In general, it is unrealistic to assume this type of evidence will be available for new claims that have not yet been widely spread. In this paper, we address this challenge by presenting a more realistic human-in-the-loop detection and evaluation framework that can measure a system's capabilities for detecting novel checkworthy claims *in the wild* (see Figure 1). We focus on discovering new, domain-specific claims from raw tweets which are then verified by humans, rather than relying on a pre-defined list of claims that have already been fact-checked for evaluation. More importantly, we consider not only the *accuracy* but also the volume, *relevance*, and *timeliness* of misinformation claims automatically identified by a system, given a collection of raw tweets. We argue this approach provides more realistic experimental conditions because (1) it does not rely on leaked counter-evidence from claims that have already been fact-checked, (2) human expertise is vital in verifying the truthfulness of claims (Nakov et al., 2021; Karduni et al., 2018) and (3) it is more effective for humans to check aggregated claims within a specific domain (e.g., claims about the efficacy of COVID-19 treatments), before proceeding to individual social media messages to determine if they violate specific misinformation policies. We validate our methodology for end-to-end misinformation detection in the domain of COVID-19 15817 ![1_image_0.png](1_image_0.png) treatments. COVID-19 treatments make an ideal testbed for human-in-the-loop misinformation extraction because Twitter has provided clearly defined policies in this area, which we use as guidelines in a realistic human evaluation of a system's output.3 We evaluate our baseline system with our four defined metrics and find that 18% of the top-50 trending claims were actually misleading (*relevance*), 50% of new misleading claims (unapproved COVID-19 treatments) are detected before they are debunked by journalists in a news article (*timeliness*), 65% of tweets flagged constitute policy-violations (*accuracy*), and an average of 124 policy violations can be confirmed by a human-annotator per hour (*volume*) when using our system. fl Our work fills an important gap in the literature, by showing that it is possible to construct a realistic end-to-end evaluation that supports the early detection of novel rumors directly from raw data. Instead of classifying individual tweets as rumorous or not, we extract phrase-level claims that can be aggregated and ranked across a large amount of data and thus can be reviewed more time-efficiently by fact-checkers for human evaluation and for realworld applications. Tweets that are automatically classified as supporting misinformation claims can then be reviewed to determine whether they violate relevant policies. ## 2 Related Work There is a large body of misinformation-related research. Due to space limitations, we only highlight the most relevant work. See also the excellent surveys by Nakov et al. (2021) and Guo et al. (2022). ## 2.1 Detecting Check-Worthy Claims One of the most related works to ours is the CLEF2022 CheckThat shared-task (Nakov et al., 2022), which evaluates three sub-tasks automatically and separately: (1) determine whether a tweet is worth fact-checking; (2) given a check-worthy claim in the form of a tweet, and a set of previously factchecked claims, rank the tweets in order of their usefulness to fact-check; and (3) given the text and the title of a news article, determine whether the main claim it makes is true, partially true, false, or other. In contrast, our experimental setup is more realistic as it operationalizes over a large amount (e.g., millions) of raw tweets and requires spanlevel extraction to identify the exact claims (e.g., claims about the efficacy of COVID-19 treatments) rather than just "claims in the form of a tweet" (e.g., tweets that talk about COVID-19 treatments). We also present an end-to-end human-in-the-loop evaluation of the entire misinformation detection pipeline based on the accuracy, volume, and timeliness of all extracted claims, other than just the automatic intrinsic evaluation of each component separately. Similar to CLEF CheckThat, there exist many other prior works that treat claim detection (or rumor detection) as a text classification problem by predicting check-worthiness (or rumourousness) given a tweet or sentence. One representative work is ClaimBuster (Hassan et al., 2017) which classifies 20,617 sentences from the U.S. general election debate transcripts as non-factual, unimportant factual, and check-worthy factual. Researchers have also developed other datasets (Diggelmann et al., 2020; Konstantinovskiy et al., 2021; Redi et al., 2019; Thorne et al., 2018) and automatic models (Hansen et al., 2019; Jaradat et al., 2018; Wright and Augenstein, 2020). Another relevant work is by Sundriyal et al. (2022) which identifies claims as text fragments, such as "our wine keeps you from getting \#COVID19" and "Better alternative to \#DisinfectantInjection". The evaluations are mostly done automatically over a small fixed set (normally at the scale of 1k∼50k) of annotated tweets or sentences. ## 2.2 Early Rumor Detection As briefly mentioned in §2.1, rumor detection is also commonly framed as a text classification task. The standard rumor detection setup (Zubiaga et al., 2016; Derczynski et al., 2017; Vosoughi et al., 2017; Gorrell et al., 2019; Shu et al., 2020) considers only accuracy without temporal information in the evaluation. More related to our work is a task called early rumor detection (Liu et al., 2015; Ma et al., 2017; Yu et al., 2017; Ruchansky et al., 2017; Zhou et al., 2019; Xia et al., 2020; Bian et al., 2020), which compares classification model's accuracy at different time points and has been extensively surveyed and discussed by Zeng and Gao (2022). However, as they pointed out, most existing methods were "designed with oversimplification" and evaluated automatically on datasets, such as TWITTER-WEIBO (Ma et al., 2016), PHEME (Zubiaga et al., 2016), and BEARD (Zeng and Gao, 2022), that were constructed retrospectively by collecting social media posts using manually curated search keywords (e.g., names of false treatment) based on a given set of debunked claims (e.g., from snopes.com). This setup does not measure systems' capability to discover unseen rumors in the wild as our human-in-the-loop evaluation does. In real-world scenarios, what exactly is needed from a misinformation detection system is to automatically figure out what keywords (e.g., names of potential false treatments) to search for - which we focus on and evaluate in this paper. ## 2.3 Covid-19 Misinformation Detection Given the severity and pervasiveness of the issue, there exists a lot of research (not limited to NLP) about COVID-19 misinformation (Hossain et al., 2020; Glandt et al., 2021; Dimitrov et al., 2020; Shahi et al., 2021; Agley and Xiao, 2021; Chen and Hasan, 2021; Biamby et al., 2022). The most related work to ours is the CONSTRAINT sharedtask (Patwa et al., 2021) at the AAAI-2021, which considers a binary text classification problem of 10,700 COVID-related tweets about real and fake news, and in particular, the work by Kou et al. (2022) that experimented on this dataset with a human-in-the-loop approach. For each input tweet (e.g., "Ethylene oxide used in COVID-19 testing swabs changes the structure of the DNA of human body"), Kou et al. (2022) asked crowd workers to write out the main message (e.g., "Ethylene oxide somehow damages human DNA"), which is then compared with information extracted from COVID-related fack-checking articles and medical papers to help automatic system to predict the tweet's truthfulness. While they prototyped the interesting idea of human-in-the-loop misinformation detection (Shabani et al., 2021), their design is unrealistic to require humans to manually write one sentence per tweet. ## 3 Human-In-The-Loop Evaluation Framework One of the most important functions of a misinformation detection system is to identify new misinformation claims *in the wild*, and in a timely manner. We thus design our evaluation framework to measure not only the *accuracy* but also the volume, *relevance*, and *timeliness* of misinformation claims identified by a system, given a large collection of raw tweets (not collected based on already debunked claims). See Figure 1 for an overview of our framework. ## 3.1 Early Detection Of Misleading Claims Problem Definition. Given a large set of tweets T, the goal is to automatically discover novel check-worthy claims, each denoted ci, and aggregate a ranked list of claims, C = [c1, c2*, ..., c*n]. In this task, we use *trendiness* (e.g. defined with Fisher's Exact Test in §4.1.3) as a factor in ranking claims, where a more popular or widely discussed claim will have a greater trendiness. More formally, a novel check-worthy claim ciis charac- ![3_image_0.png](3_image_0.png) terized by (ti, zi, Si), where tiis the first time the system identified the claim as trending i.e. when its trendiness first broke a set threshold, ziis the claim's trendiness score at time ti, and Si ⊂ T is the set of tweets supporting the claim. A filtering heuristic can also be applied to remove obvious non-misleading claims from consideration (e.g. filtering out claims supporting approved COVID-19 treatments in §4.1.3). Evaluation Metrics. A human-in-the-loop evaluation is performed over the top-K trending claims in which annotators verify whether a claim is misleading and, if so, find the earliest news article debunking the claim. The evaluation is based on two metrics: (1) the percentage of misleading claims in the top-K trending claims (*relevance*) and (2) the number of days (or hours), denoted as δ, between ti and the publication date of the earliest news article (*timeliness*). Figure 2 visually depicts the application of the δ metric for COVID-19 treatment misinformation. See §4.2 for details about this case study evaluation. By annotating at the claim level before the tweet level, we reduce human annotator workload by limiting the total number of tweets evaluated. This approach also makes our framework more realistic and efficient, allowing for a more thorough and accurate evaluation of misinformation detection systems. In order to facilitate future research in this area, we provide a collection of raw tweets to evaluate and a baseline system to compare against. As most existing systems are not available as opensource, the release of our evaluation platform will enable fair and comparable evaluations of these systems. ## 3.2 Policy Violation Verification Problem Definition. The objective of this task is to identify tweets within the set associated with a claim ci, Si = {s1, s2*, ..., s*|Si|}, that violate a misleading information policy. In general, a tweet sj , is likely to violate a policy if it expresses a strong supportive stance towards a claim ci, which was identified as misleading by the human-in-theloop evaluation process from the prior stage (§3.1). Evaluation Metrics. To evaluate a system's performance and effectiveness, a human-in-the-loop evaluation is performed on a random sample of N tweets that express a supportive stance toward misleading claims. The evaluation is based on two metrics: (1) the *accuracy* of the system in identifying policy-violation tweets and (2) the *volume* in terms of the number of policy violations found per hour by analysts using the system. To measure the *accuracy* of the system, human annotators assign a score to each tweet in the sample, based on a five-point Likert scale, with 5 corresponding to a clear violation of the policy and 1 representing a clear non-violation. We set a threshold of score ≥ 4 to make a binary policy violation determination. This scoring scheme allows us to measure the system's accuracy based on the distribution of annotator scores for all tweets in the sample. To quantify the *volume* of policy violations identified by analysts using the system, we define the metric *policy-violations per hour* as the number of tweets identified by the annotator containing policy violations, divided by the total number of hours spent by the annotator during the two-stage annotation process (§3.1 and 3.2): policy violations $/\,hr=\frac{V}{C\times r_{c}+T\times r_{t}}$. where V is the number of policy violations found, C and T are the numbers of claims and tweets checked respectively, and rc and rt are the average annotation rates for claims and tweets respectively. This metric allows us to assess the efficiency of the system in identifying policy-violation tweets and the potential benefits for content moderators using the system. ## 4 A Case Study: Covid-19 Treatment Misinformation To illustrate the usage of our human-in-the-loop evaluation framework outlined in §3, we present a case study for COVID-19 misinformation. Specifically, we target Twitter's COVID-19 policy on unapproved treatments, which states that: "False or misleading information suggesting that unapproved treatments can be curative of COVID-19" are grounds for labeling tweets with corrective information (Twitter, 2021). ## 4.1 Our System In this subsection, we describe the three components of our COVID-19 treatment misinformation detection system: claim extraction, stance classification, and claim ranking with their task-specific intrinsic performance. Later, in §4.2, we present an *extrinsic* human-in-the-loop evaluation of the entire system using our defined framework (§3). ## 4.1.1 Extracting Check-Worthy Claims Data. We train and evaluate our claim extraction models on the human-annotated Twitter COVID19 event extraction dataset created by Zong et al. (2022), which is collected between 2020/01/15 and 2020/04/26. In this work, we focus on claims of the form "*X is an effective COVID-19 treatment*", where X is an extracted span. We split the provided 1, 271 training tweets in the CURE & PREVENTION category into 60% for training and 15% for development, and report token-level F1 scores on the 500 tweets used for evaluation in the 2020 W-NUT shared task.4 Models. We develop three approaches, outlined below, to extract claims as a text span from a sentence with a sequence tagging model and a question-answering model (Rajpurkar et al., 2018; Du et al., 2021). Details of training hyperparameters can be found in Appendix D.1. (1) Sequence Tagging: a standard sequencelabeling task with a BIO tagging scheme, where 'B' and 'I' tags are used to identify treatment tokens. We follow a similar approach as the named entity recognition method used by Devlin et al. (2019) and experiment with two pre-trained models, including RoBERTa*large* (Liu et al., 2019) and a domain-specific COVID-Twitter-BERT*large*(CTBERT) (Müller et al., 2020). (2) Question-answering (QA): we treat the claim extraction as a question-answering task and apply the SQuADv2.0 (Rajpurkar et al., 2018) formulation as some tweets may not include relevant claims (similar to unanswerable questions). We experiment with two approaches: a span-prediction model that predicts start and end positions for answer spans in context using RoBERTa/CT-BERT as the encoder, in addition to a text-to-text model that generates answers using T5*large* (Raffel et al., 2020; Wang and Lillis, 2020). The question template for extracting treatments discussed in tweets is "What is the mentioned COVID-19 cure?". (3) QA-Pretraining: it has been shown that intermediate task pre-training can yield further gains for low-resource target tasks (Pruksachatkun et al., 2020; Poth et al., 2021). We thus experiment with pre-training QA models on the SQuADv2.0 dataset before fine-tuning on the claim extraction dataset. Intrinsic Evaluation. Table 1 shows the claim extraction results on the COVID-19 treatment dataset. We observe QA models outperform tagging models across encoders. RoBERTa outperforms the domain-specific encoder (CT-BERT) for QA extraction. However, after QA pre-training, CT-BERT improves from 53.1 to 63.8 F1 and outperforms RoBERTa by 2.3 points. Finally, as the T5 model achieves the best F1 regardless of QA pre-training, we use T5SQuADv2 Pre-train as our final QAPre-train RoBERTaSQuADv2 Pre-train 59.9 CT-BERTSQuADv2 Pre-train 63.8 T5SQuADv2 Pre-train **63.9** Table 1: Token-level F1 scores for claim extraction experiments on COVID-19 treatment dataset. | Approach | Model | F1 | |--------------------------|---------|------| | Tagging | RoBERTa | 50.3 | | CT-BERT | 51.2 | | | RoBERTa | 61.5 | | | CT-BERT | 53.1 | | | T5 | 63.7 | | | RoBERTaSQuADv2 Pre-train | 59.9 | | | CT-BERTSQuADv2 Pre-train | 63.8 | | | T5SQuADv2 Pre-train | 63.9 | | ## Claim Extraction Model. 4.1.2 Task-Specific Stance Classification Data. Due to the lack of datasets for COVID19 treatment stance, we annotate a new dataset for our evaluation. To collect relevant tweets we tracked the keywords "cure", "prevention", "virus", and "COVID-19" from November 2020 to December 2020 using the Twitter API. We collected 1, 055, 559 tweets and claims extracted using our model (§4.1.1). Out of 97, 016 tweets for which a treatment was able to be extracted, we randomly sample 2, 000 tweets to annotate the author's stance on the effectiveness of the treatment. We paid crowd workers on Amazon MTurk to annotate our data. Each task consists of a tweet with a highlighted treatment. We asked workers to determine the author's stance towards the treatment and select among three options (SUPPORTING, REFUTING, or NO STANCE). We decided not to include additional options for irrelevant and sarcastic tweets due to poor annotator agreement in pilot experiments. Each tweet is labeled by 5 independent crowd workers. Workers were paid $0.20/HIT, which roughly equates to $8.50/hour. A screenshot of the annotation interface is provided in Figure 6 and dataset statistics are summarized in Table 2. Quality Control. During the annotation process, we monitored the quality of workers' annotations using their agreement with each other and split the data into 10 batches (200 tweets each) to detect poor annotations in the early stages. We calculate the annotation agreement of each worker against the majority of 5 workers. If the worker's agreement is less than 0.75 for a SUPPORTING annotation based on a majority vote, we do not allow them to participate in the subsequent annotation batch. Across all annotations, we find a 0.65 value of Cohen's κ (Artstein and Poesio, 2008) for the | Majority Annotation | #Tweets | |--------------------------|-----------| | SUPPORTING | 743 | | REFUTING | 631 | | NO STANCE | 400 | | No Consensus (NO STANCE) | 226 | | Total | 2000 | inter-annotator agreement between workers. The distribution of the dataset based on the majority vote of workers is shown in Table 2. In the case that there is no majority annotation for a given tweet i.e. the 5 individual annotators are split 2/2/1 among the three annotation types, we assign the tweet a default annotation of NO STANCE. We randomly split our annotated dataset of 2,000 tweets into a training set of 1,200 tweets, a development set of 400, and a test set of 400. Models. Using the annotated corpus, we develop classifiers to detect the author's stance toward a treatment. Specifically, given a claim and a tweet (ci, sj ), our goal is to predict the author's stance mi ∈ {SUPPORTING, REFUTING, or NO STANCE}. We experiment with three models including a baseline neural bag-of-words (NBOW) model, RoBERTa*large* (Liu et al., 2019) and a COVID-Twitter-BERT*large* (CT-BERT) (Müller et al., 2020). To indicate the position of the claim in the input, we use relative position encoding (RPE) (Shaw et al., 2018) for the NBOW model. For the pre-trained language models, we add special markers around the claim following the bestperforming model from Soares et al. (2019), [EN-TITY MARKERS - ENTITY START]. Details of training hyperparameters are in Appendix D.2. Intrinsic Evaluation. Table 3 presents the results for NBOW, RoBERTa, and CT-BERT. We observe that CT-BERT outperforms all other models, with an F1 score of 66.7. Generally, we find that these models performed best in classifying tweets with SUPPORTING stance and worst on tweets with a NO STANCE label. This latter result is possibly due to annotators classifying those tweets that might be irrelevant to the task as NO STANCE. ## 4.1.3 Ranking Of Trending Claims Following Twitter's COVID-19 misinformation policy violation guidelines, we focus on tweets that advocate the efficacy of an unapproved treatment. | Stance | NBOW | RoBERTa | CT-BERT | |--------------|--------|-----------|-----------| | SUPPORTING | 59.8 | 70.0 | 74.9 | | REFUTING | 32.8 | 61.9 | 70.6 | | NO STANCE | 56.0 | 45.9 | 54.7 | | Aggregate F1 | 49.5 | 59.3 | 66.7 | SUPPORTING 59.8 70.0 **74.9** REFUTING 32.8 61.9 **70.6** NO STANCE **56.0** 45.9 54.7 Aggregate F1 49.5 59.3 **66.7** Table 3: F1 scores on stance classification dataset for classifying author's stance towards the extracted claim. Thus, we filter out tweets that mention common approved treatments listed in Table 8 in Appendix C, which are prepared according to authorities and news agencies including the CDC and NYT. Upon determining stance on the filtered set, we only consider tweets with a SUPPORTING stance towards the effectiveness of the extracted treatment. Finally, we remove near-duplicates and cluster the remaining extracted treatments based on word overlap (Jaccard similarity) to enable treatment-level decision-making similar to Basu et al. (2013). Ranking Claims. For the claims (treatments) in each cluster, we count the number of tweets mentioning the claim both daily and cumulatively. Based on these counts, we compute the claim's pvalue on a given date using the one-tailed Fisher's Exact Test (Fisher, 1922) which has been shown to be effective in rare event detection (Moore, 2004; Johnson et al., 2007; Ritter et al., 2011). A claim's p-value is a measure of its trendiness denoted zi (§3.1) by which it is ranked relative to other claims. Detecting Novel Claims. Based on the results of Fisher's Exact Test, our system automatically detects novel trending claims and flags them for manual inspection by content moderators. A claim is considered as newly trending if it's p-value is less than a preset significance threshold (α-level) and it has never broken this threshold previously (further details in Appendix E). Using the notation from §3.1, if the claim (ci) found to be newly trending on date tiis judged to be misleading by a human moderator, our system then provides a list of individual tweets (Si) that SUPPORT the misleading claim for manual inspection. ## 4.2 Human-In-The-Loop Evaluation For Detecting Covid-19 Misinformation In this section, we evaluate the system outlined in §4.1 using the human-in-the-loop evaluation methodology we define in §3. We follow the same procedure described in §4.1.2 to prepare a new dataset containing 14, 741, 171 tweets for largescale evaluation. We then extract treatments using our QA-based claim extractor and apply our stance detection model to classify authors' stance towards each treatment. After removing tweets without an extracted treatment, the resultant evaluation corpus consists of 1, 905, 424 tweets. ## 4.2.1 Early Detection Of Misleading Covid-19 Treatments We first evaluate the ability of our system to detect newly trending misleading COVID-19 treatment claims and report metrics measuring *relevance* and timeliness as defined in §3.1. Data Preparation and Human Evaluation. We set aside tweets collected from 2020/03/01 to 2020/03/31 (1-month time-frame) to serve as an initial base of historical data to compute cumulative counts for detecting novel trending claims using the Fisher's Exact Test. Newly trending treatments are then identified during the time period of 2020/04/01 to 2022/05/05 (2-year time-frame) using the methodology described in §4.1.3. The top 300 treatments are selected based on p-values from Fisher's Exact Test which equates to a significance level of α = 1.15e−6. We employ two in-house annotators, who act as mock content-moderators, to evaluate these 300 treatments and determine (1) if the extraction is a treatment (2) if the treatment is unapproved and (3) the earliest publication date of a news article debunking the treatment as effective for COVID-19 using the Google News engine. Appendix A contains further details about the annotation process. Out of the 300 treatments, 100 were annotated by both annotators to determine inter-rater agreement on task (2), which was 0.87 as given by Cohen's κ. On average, it took 89.7 seconds to complete each treatment annotation. ## Results On Early Detection Of Misleading Claims. In terms of *relevance*, Table 4 shows the percentage of the top 5/50/100 trending treatments ranked by pvalue that were determined to be unapproved, and Figure 3 shows the cumulative number of potential unapproved trends identified over time along with the total number of trending treatments. To evaluate the *timeliness*, we calculate δ, which measures the number of days our system detects misleading claims before a debunking news article is published (§3.1). We find that our system is able to detect 50% of rumors before the publication date of the relevant news article (δ ≥ 0), with the median δ 15823 being 21 days. Figure 2 shows three notable unapproved treatment examples from early in the pandemic with their relevant news article and δ values. ![7_image_1.png](7_image_1.png) ![7_image_2.png](7_image_2.png) ## 4.2.2 **Identifying Covid-19 Policy Violations.** In addition to detecting novel rumors online, we also evaluate the ability of our system to identify tweets that violate Twitter's misleading information policy and report metrics measuring *accuracy* and volume as defined in §3.2. ## Data Preparation And Human Evaluation. For each of the 40 treatments identified as unapproved in the previous experiment, we randomly sample 10 tweets that have SUPPORTING stance towards the claim. Near duplicate tweets were identified and removed leaving 361 unique tweets. The two in-house annotators then assign a score to each tweet based on a five-point Likert scale, with 5 corresponding to a clear violation of the policy and 1 representing a clear non-violation (See score descriptions in Table 7 in Appendix A). To investigate the quality of annotations, we compute agreement on 206 tweets using ordinal Krippendorff's α (0 ≤ α ≤ 1) (Krippendorff, 2011).5 We find that annotators agreed moderately with Krippendorff's α = 0.54, indicating ![7_image_0.png](7_image_0.png) fair agreement. On average, it took the annotators 16.1 seconds to annotate each tweet. Results on Identifying Policy Violations. In Figure 4, we find that 65% (*accuracy*) of tweets had scores indicating that it was either likely or clearly violating the policy with an average score of 3.54 out of 5. Figure 4 presents the estimated distribution of Likert scores over 10,246 tweets using the 10 annotated tweets sampled for each treatment and extrapolated to all tweets mentioning the treatment. In terms of *volume*, we estimate that an annotator can identify approximately 124.2 policy violations per hour with our system, based on the average annotation rate of the 300 treatments and the same extended set of tweets, where a Likert score of 4 or 5 constitutes a policy violation. See Appendix F for a full calculation of this statistic. ## 5 Conclusion In this work, we present a novel end-to-end humanin-the-loop evaluation framework for the early identification of novel misinformation on social media from raw tweets. Unlike previous evaluation frameworks, our methodology captures the interplay between the system and human content moderators while also providing realistic metrics for early misinformation detection. We validate our misinformation detection framework for claims in the domain of COVID-19 treatments. By aggregating and ranking structured representations of claims, and relying on human fact-checkers to review trending claims, our system is able to detect 50% of misleading claims earlier than the news. ## 6 Limitations While our approach does require domain-specific information extraction models to extract structured representations of novel misinformation claims for easy aggregation and review, there is significant prior work on event extraction that can be adapted to extract check-worthy claims (Ritter et al., 2012; Luan et al., 2019; Du and Cardie, 2020). Furthermore, we argue content moderators or factcheckers are likely to be more effective when focusing on one claim type at a time (e.g. COVID-19 treatments, election integrity, vaccine effectiveness, etc.), rather than reviewing a mixture of claims on multiple topics. Our COVID-19 case study also makes use of "mock" content moderators, rather than employees or contractors working for social media companies or fact-checking websites. However, we believe this methodology still provides valuable insight that would not be publicly available otherwise, as social media companies do not currently publish extensive details about their content moderation processes6and fact-checking websites vary widely in policy and have been shown to provide inconsistent claim classification (Marietta et al., 2015). Some prior user studies (Nguyen et al., 2018; Pennycook and Rand, 2019; Shabani et al., 2021) have also shown laypeople (e.g., Amazon Mechanical Turk workers) can be good at judging the veracity of claims or reliability of news articles. As of late November 2022, Twitter has suspended enforcement of its COVID-19 misleading information policies such as the one we target in this paper.7 However, per the Associated Press article, one of the possible reasons for the suspension was that Twitter has "struggled to respond to a torrent of misinformation about the virus" with many "bogus claims about home remedies" still on the site despite the previous enforcement of policies. While we do not have details about the internal automated systems Twitter has in place to assist with content moderation, an end-to-end early detection system might have helped stem the spread of misinformation on the platform. Additionally, despite the lack of official policy enforcement, our system can still be used by third-party fact-checking websites or researchers to measure and report misinformation 6https://www.nytimes.com/2022/05/19/b usiness/twitter-content-moderation.html 7https://apnews.com/article/twitter-e nds-covid-misinformation-policy-cc232c9 ce0f193c505bbc63bf57ecad6 on Twitter. Finally, the main goal of our work is not to create a system for COVID-19 misinformation detection but rather to propose a framework that allows for a fair and realistic evaluation of early misinformation detection systems in any domain. ## 7 Broader Impact And Ethical Considerations We release our corpus of tweets annotated with stance, and our dataset of trending misinformation claims under Twitter's Developer Agreement,8 which grants permissions for academic researchers to share Tweet IDs and User IDs (less than 1,500,000 Tweet IDs within 30 days) for noncommercial purposes, as of October 10th. 2022. Our system is designed for research purposes and may contain unknown biases towards demographic groups or individuals (Sap et al., 2019). Further investigation into systematic biases should be conducted before our models are deployed in a production environment. We believe this study helps shed light on how NLP tools developed to help combat online misinformation might be used in a real content moderation workflow. We hope this will encourage future research on human-in-the-loop systems and help shape the design of new tasks and datasets in this area. We believe it is beneficial for some research on combating misinformation to take place outside of social media companies to provide an unbiased view of the challenges involved in fighting online misinformation. ## Acknowledgments We thank anonymous reviewers for their helpful feedback on this work. We also thank Chase Perry and Andrew Duffy for their help with annotations and human evaluation. This research is supported in part by the NSF (IIS-2052498), ODNI and IARPA via the BETTER and HIATUS programs (2019-19051600004, 2022-22072200004). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. ## References Jon Agley and Yunyu Xiao. 2021. Misinformation about COVID-19: evidence for differential latent profiles and a strong association with trust in science. BMC Public Health. Tariq Alhindi, Tuhin Chakrabarty, Elena Musi, and Smaranda Muresan. 2022. Multitask instructionbased prompting for fallacy recognition. In Proceedings of Empirical Methods in Natural Language Processing. Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. The fact extraction and VERification over unstructured and structured information (FEVEROUS) shared task. In Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER). Rami Aly and Andreas Vlachos. 2022. Natural logicguided autoregressive multi-hop document retrieval for fact verification. In Proceedings of Empirical Methods in Natural Language Processing. Ron Artstein and Massimo Poesio. 2008. Survey article: Inter-coder agreement for computational linguistics. Computational Linguistics. Sumit Basu, Chuck Jacobs, and Lucy Vanderwende. 2013. Powergrading: a clustering approach to amplify human effort for short answer grading. Giscard Biamby, Grace Luo, Trevor Darrell, and Anna Rohrbach. 2022. Twitter-COMMs: Detecting climate, COVID, and military multimodal misinformation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Tian Bian, Xi Xiao, Tingyang Xu, Peilin Zhao, Wenbing Huang, Yu Rong, and Junzhou Huang. 2020. Rumor detection on social media with bi-directional graph convolutional networks. In AAAI Conference on Artificial Intelligence. CDC. 2021. How to protect yourself and others. Jifan Chen, Aniruddh Sriram, Eunsol Choi, and Greg Durrett. 2022. Generating literal and implied subquestions to fact-check complex claims. In *Proceedings of Empirical Methods in Natural Language Processing*. Yuanzhi Chen and Mohammad Hasan. 2021. Navigating the kaleidoscope of COVID-19 misinformation using deep learning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Costanza Conforti, Mohammad Taher Pilehvar, and Nigel Collier. 2018. Towards automatic fake news detection: Cross-level stance detection in news articles. In *Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)*, Brussels, Belgium. Association for Computational Linguistics. Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. SemEval-2017 task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Thomas Diggelmann, Jordan Boyd-Graber, Jannis Bulian, Massimiliano Ciaramita, and Markus Leippold. 2020. Climate-FEVER: A dataset for verification of real-world climate claims. In Proceedings of the NeurIPS 2020 Workshop: Tackling Climate Change with Machine Learning. Dimitar Dimitrov, Erdal Baran, Pavlos Fafalios, Ran Yu, Xiaofei Zhu, Matthäus Zloch, and Stefan Dietze. 2020. Tweetscov19 - a knowledge base of semantically annotated tweets about the covid-19 pandemic. In *Proceedings of the 29th ACM International Conference on Information & Knowledge Management*. Xinya Du and Claire Cardie. 2020. Event extraction by answering (almost) natural questions. In Proceedings of Empirical Methods in Natural Language Processing. Xinya Du, Luheng He, Qi Li, Dian Yu, Panupong Pasupat, and Yuan Zhang. 2021. QA-driven zero-shot slot filling with weak supervision pretraining. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing. Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In *Proceedings of* the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Ronald A. Fisher. 1922. On the interpretation of χ2 from contingency tables, and the calculation of P. Kyle Glandt, Sarthak Khanal, Yingjie Li, Doina Caragea, and Cornelia Caragea. 2021. Stance Detection in COVID-19 Tweets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Max Glockner, Yufang Hou, and Iryna Gurevych. 2022. Missing counter-evidence renders nlp fact-checking unrealistic for misinformation. *Proceedings of Empirical Methods in Natural Language Processing*. Genevieve Gorrell, Elena Kochkina, Maria Liakata, Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, and Leon Derczynski. 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation. Zihui Gu, Nan Fan, Ju andTang, Preslav Nakov, Xiaoman Zhao, and Xiaoyong Du. 2022. PASTA: Tableoperations aware fact verification via sentence-table cloze pre-training. In *Proceedings of Empirical Methods in Natural Language Processing*. Zhijiang Guo, Michael Schlichtkrull, and Andreas Vlachos. 2022. A Survey on Automated Fact-Checking. In *Transactions of the Association for Computational* Linguistics. Casper Hansen, Christian Hansen, Stephen Alstrup, Jakob Grue Simonsen, and Christina Lioma. 2019. Neural check-worthiness ranking with weak supervision: Finding sentences for fact-checking. Association for Computing Machinery. Naeemul Hassan, Fatma Arslan, Chengkai Li, and Mark Tremayne. 2017. Toward automated fact-checking: Detecting check-worthy factual claims by claimbuster. Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. COVIDLies: Detecting COVID-19 misinformation on social media. In *Proceedings of* the 1st Workshop on NLP for COVID-19 at EMNLP. Israa Jaradat, Pepa Gencheva, Alberto Barrón-Cedeño, Lluís Màrquez, and Preslav Nakov. 2018. ClaimRank: Detecting check-worthy claims in Arabic and English. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Yichen Jiang, Shikha Bordia, Zheng Zhong, Charles Dognin, Maneesh Singh, and Mohit Bansal. 2020. HoVer: A dataset for many-hop fact extraction and claim verification. In *Findings of the Association for* Computational Linguistics: EMNLP 2020, Online. Association for Computational Linguistics. Howard Johnson, Joel Martin, George Foster, and Roland Kuhn. 2007. Improving translation quality by discarding most of the phrasetable. In Proceedings of Empirical Methods in Natural Language Processing. Alireza Karduni, Ryan Wesslen, Sashank Santhanam, Isaac Cho, Svitlana Volkova, Dustin Arendt, Samira Shaikh, and Wenwen Dou. 2018. Can you verifi this? studying uncertainty and decision-making about misinformation using visual analytics. In Twelfth international AAAI conference on web and social media. Lev Konstantinovskiy, Oliver Price, Mevan Babakar, and Arkaitz Zubiaga. 2021. Toward automated factchecking: Developing an annotation schema and benchmark for consistent automated claim detection. Digital threats: research and practice. Ziyi Kou, Lanyu Shang, Yang Zhang, Zhenrui Yue, Huimin Zeng, and Dong Wang. 2022. Crowd, expert & ai: A human-ai interactive approach towards natural language explanation based covid-19 misinformation detection. In *International Joint Conferences on* Artificial Intelligence Organization. Klaus Krippendorff. 2011. Computing Krippendorff's alpha-reliability. Nayeon Lee, Belinda Z. Li, Sinong Wang, Pascale Fung, Hao Ma, Wen-tau Yih, and Madian Khabsa. 2021. On unifying misinformation detection. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stephan Lewandowsky and Sander Van Der Linden. 2021. Countering misinformation and fake news through inoculation and prebunking. *European Review of Social Psychology*. Manling Li, Revanth Gangi Reddy, Ziqi Wang, YiShyuan Chiang, Tuan Lai, Pengfei Yu, Zixuan Zhang, and Heng Ji. 2022. COVID-19 claim radar: A structured claim extraction and tracking system. In *Proceedings of the Association for Computational Linguistics*. Quanzhi Li, Qiong Zhang, and Luo Si. 2019. Rumor detection by exploiting user credibility information, attention and multi-task learning. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Association for Computational Linguistics. Xiaomo Liu, Armineh Nourbakhsh, Quanzhi Li, Rui Fang, and Sameena Shah. 2015. Real-time rumor debunking on twitter. In *Proceedings of the 24th* ACM International on Conference on Information and Knowledge Management, New York, NY, USA. Association for Computing Machinery. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In *Proceedings of the Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J. Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. Jing Ma, Wei Gao, and Kam-Fai Wong. 2017. Detect rumors in microblog posts using propagation structure via kernel learning. In *Proceedings of the Association for Computational Linguistics*. Association for Computational Linguistics. Morgan Marietta, David C. Barker, and Todd Bowser. 2015. Fact-checking polarized politics: Does the fact-check industry provide consistent guidance on disputed realities? *The Forum*, 13(4):577–596. Robert C. Moore. 2004. On log-likelihood-ratios and the significance of rare events. In Proceedings of Empirical Methods in Natural Language Processing. Martin Müller, Marcel Salathé, and Per E Kummervold. 2020. COVID-Twitter-BERT: A natural language processing model to analyse COVID-19 content on Twitter. In *arXiv preprint arXiv:2005.07503*. Preslav Nakov, Alberto Barrón-Cedeño, Giovanni da San Martino, Firoj Alam, Julia Maria Struß, Thomas Mandl, Rubén Míguez, Tommaso Caselli, Mucahid Kutlu, Wajdi Zaghouani, Chengkai Li, Shaden Shaar, Gautam Kishore Shahi, Hamdy Mubarak, Alex Nikolov, Nikolay Babulkov, Yavuz Selim Kartal, Michael Wiegand, Melanie Siegel, and Juliane Köhler. 2022. Overview of the clef–2022 checkthat! lab on fighting the covid-19 infodemic and fake news detection. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, Cham. Springer International Publishing. Preslav Nakov, David Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barr'on-Cedeno, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021. Automated fact-checking for assisting human fact-checkers. In *International Joint Conference on Artificial Intelligence*. An T. Nguyen, Aditya Kharosekar, Saumyaa Krishnan, Siddhesh Krishnan, Elizabeth Tate, Byron C. Wallace, and Matthew Lease. 2018. Believe it or not: Designing a human-ai partnership for mixed-initiative fact-checking. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery. Parth Patwa, Mohit Bhardwaj, Vineeth Guptha, Gitanjali Kumari, Shivam Sharma, Srinivas PYKL, Amitava Das, Asif Ekbal, Md Shad Akhtar, and Tanmoy Chakraborty. 2021. Overview of constraint 2021 shared tasks: Detecting english covid-19 fake news and hindi hostile posts. In Combating Online Hostile Posts in Regional Languages during Emergency Situation. Gordon Pennycook and David G. Rand. 2019. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences. Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, and Iryna Gurevych. 2021. What to pre-train on? Efficient intermediate task selection. In *Proceedings of Empirical Methods in Natural Language Processing*. Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In *Proceedings of the Association for Computational Linguistics*. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In *Proceedings of the Association for* Computational Linguistics. Miriam Redi, Besnik Fetahu, Jonathan Morgan, and Dario Taraborelli. 2019. Citation needed: A taxonomy and algorithmic assessment of wikipedia's verifiability. Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of Empirical Methods in Natural Language Processing. Alan Ritter, Oren Etzioni, and Sam Clark. 2012. Open domain event extraction from twitter. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. Natali Ruchansky, Sungyong Seo, and Yan Liu. 2017. CSI: A hybrid deep model for fake news detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. Association for Computing Machinery. Chris Samarinas, Wynne Hsu, and Mong Li Lee. 2021. Improving evidence retrieval for automated explainable fact-checking. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations*, Online. Association for Computational Linguistics. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In *Proceedings of the Association for Computational Linguistics*. Shaban Shabani, Zarina Charlesworth, Maria Sokhn, and Heiko Schuldt. 2021. SAMS: Human-in-theloop approach to combat the sharing of digital misinformation. In Proceedings of the AAAI 2021 Spring Symposium on Combining Machine Learning and Knowledge Engineering. Gautam Kishore Shahi, Anne Dirkson, and Tim A. Majchrzak. 2021. An exploratory study of covid-19 misinformation on twitter. Online Social Networks and Media. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In *Proceedings of the Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies. Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, and Huan Liu. 2020. Fakenewsnet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media. *Big Data*. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In *Proceedings of the Association for Computational Linguistics*. Megha Sundriyal, Atharva Kulkarni, Vaibhav Pulastya, Md Shad Akhtar, and Tanmoy Chakraborty. 2022. Empowering the fact-checkers! automatic identification of claim spans on twitter. In Proceedings of Empirical Methods in Natural Language Processing. James Thorne, Mingjie Chen, Giorgos Myrianthous, Jiashu Pu, Xiaoxuan Wang, and Andreas Vlachos. 2017. Fake news stance detection using stacked ensemble of classifiers. In *Proceedings of the 2017* EMNLP Workshop: Natural Language Processing meets Journalism. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics. Twitter. 2021. Covid-19 misleading information policy. Soroush Vosoughi, Mostafa 'Neo' Mohsenvand, and Deb Roy. 2017. Rumor gauge: Predicting the veracity of rumors on twitter. *ACM Transactions on* Knowledge Discovery from Data. Hai Wan, Haicheng Chen, Jianfeng Du, Weilin Luo, and Rongzhen Ye. 2021. A DQN-based approach to finding precise evidences for fact verification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online. Association for Computational Linguistics. Congcong Wang and David Lillis. 2020. UCD-CS at W-NUT 2020 shared task-3: A text to text approach for COVID-19 event extraction on social media. In Proceedings of the Sixth Workshop on Noisy Usergenerated Text. Dustin Wright and Isabelle Augenstein. 2020. Claim check-worthiness detection as positive unlabelled learning. In *Findings of the Association for Computational Linguistics: EMNLP*, Online. Association for Computational Linguistics. Xueqing Wu, Kung-Hsiang Huang, Yi Fung, and Heng Ji. 2022. Cross-document misinformation detection based on event graph reasoning. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Rui Xia, Kaizhou Xuan, and Jianfei Yu. 2020. A stateindependent and time-evolving network for early rumor detection in social media. In Proceedings of Empirical Methods in Natural Language Processing. Feng Yu, Qiang Liu, Shu Wu, Liang Wang, and Tieniu Tan. 2017. A convolutional approach for misinformation identification. In *Proceedings of the 26th* International Joint Conference on Artificial Intelligence. Zhenrui Yue, Huimin Zeng, Ziyi Kou, Lanyu Shang, and Dong Wang. 2022. Contrastive domain adaptation for early misinformation detection: A case study on covid-19. In *Proceedings of the 31st ACM International Conference on Information & Knowledge* Management. Fengzhu Zeng and Wei Gao. 2022. Early rumor detection using neural Hawkes process with a new benchmark dataset. In *Proceedings of the Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Kaimin Zhou, Chang Shu, Binyang Li, and Jey Han Lau. 2019. Early rumour detection. In *Proceedings of the* 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. Association for Computational Linguistics. Carl Zimmer, Katherine J. Wu, Jonathan Corum, and Matthew Kristoffersen. 2020. Coronavirus drug and treatment tracker. Shi Zong, Ashutosh Baheti, Wei Xu, and Alan Ritter. 2022. Extracting a knowledge base of COVID-19 events from social media. *Proceedings of the 29th* International Conference on Computational Linguistics. Arkaitz Zubiaga, Maria Liakata, and Rob Procter. 2016. Learning reporting dynamics during breaking news for rumour detection in social media. *ArXiv*, abs/1610.07363. ## A Annotation Guidelines For The Human-In-The-Loop Evaluation Early Detection of Misleading COVID-19 Treatments. Given a list of trending claims (e.g., COVID-19 treatments), annotators are required to determine (1) if the extraction is a treatment (2) if the treatment is unapproved and (3) the earliest publication date of a news article they can find that debunks the treatment as effective. Annotators query the Google News engine with a query in the form of *"[treatment] cures COVID-19"* and sort by date to find the earliest published news article starting from 2020/04/01 that debunks the treatment as effective against COVID-19. Treatments are only considered to be unapproved if the annotators can identify a reputable news source as a reference. Table 5 shows the annotation questions and guidelines as they appeared to annotators during the human-in-the-loop evaluation for this task. Note that March 1st, 2020 was used as the starting date for the article search because it was the earliest date for which we had tweet data. ## Identifying Covid-19 Policy Violation Tweets. Given a tweet with SUPPORTING stance towards the claim that "*treatment is effective in treating* COVID-19", annotators assign a score based on a five-point Likert scale, with 5 corresponding to a clear violation of the policy and 1 representing a clear non-violation. Table 7 shows the Likert score descriptions and Table 6 shows the annotation questions and guidelines as they appeared to annotators during the human-in-the-loop evaluation for this task. ## B Annotation Interface For Stance Classification Our stance data collection procedures (§4.1.2) on Amazon MTurk were approved by an ethics board. Before individuals were allowed to annotate data for our task, they were required to give consent by electronically signing off on the ethics statement found in Figure 5 (some portions have been redacted for anonymity purposes). Note that all annotators were MTurk workers in the United States who had previously annotated 1000 HITs with a pass-rate ≥ 95%. Figure 6 shows the interface used for collecting these stance annotations. ## C Approved Treatments Table 8 shows the approved treatments that were used for filtering in §4.1.3. ## D Implementation Details All experiments are performed with NVIDIA A40 GPUs. All hyperparameters are selected using a held-out development set. ## D.1 Claim Extraction Models Hyperparameters can be found in Table 9. Sequence Tagging Models. We apply sequencelabeling models with a standard BIO tagging scheme ('B' and 'I' tags are used to identify treatment tokens), similar to the named entity recognition method used by Devlin et al. (2019). We experiment with RoBERTa*large*(354M) (Liu et al., 2019) and a domain-specific COVID-TwitterBERT*large*(345M) (Müller et al., 2020). Question Answering Models. We experiment with QA-based slot filling models (Du et al., 2021), which model claim extraction as a SQuADv2.0 question-answering task (Rajpurkar et al., 2018). We experiment with two approaches: a spanprediction model that predicts start and end positions for answer spans in context using RoBERTa or CT-BERT as the encoder, in addition to a text-to-text model that generates answers using T5*large*(770M) (Raffel et al., 2020; Wang and Lillis, 2020). Pre-training QA Models. We pre-train QA models on the SQuADv2.0 dataset for 2 epochs with a learning rate of 2e-5 and batch size of 16, followed by fine-tuning for claim extraction. ## D.2 Stance Classification Models Hyperparameters can be found in Table 10. NBOW. We use an NBOW model and a relative position encoding (RPE) (Shaw et al., 2018) to indicate the position of the claim in the input sentence. Specifically, the 1D RPE encodes the relative distance of each token to the extracted treatment. We then concatenate NBOW with the RPE embedding and pass the concatenation through one layer of a feed-forward neural network. RoBERTa/CT-BERT. We finetune a RoBERTa*large* (Devlin et al., 2019) and COVIDTwitter-BERT (CT-BERT) (Müller et al., 2020) | Question | Annotation Guidance | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Should the extraction be considered for consideration of a new | - Answer "Repeat" if the same trend has been seen previously. - Answer"Approved" if treatment should have been marked as approved | | treatment? | based on the approved trending list or is otherwise an obvious valid treatment. | | - Answer "Unsure" if the treatment was in clinical trials at the time or you are otherwise unsure if the treatment is a valid cure. - Answer "Not a Treatment" for extraction errors, preventative measures, etc. - Answer "General health advice" for any general strategies for staying healthy such as "hand washing", "exercise", "hygiene", etc. - Answer "Unapproved" otherwise. | | | What is the date of publication of the earliest article stating that it is misinformation that this treatment cures COVID-19? | - Answer "NA" if you answered "Approved", "Unsure", "Repeat" in the previous question or if no such article can be found in the time frame of [03/01/2020 - Present]. Otherwise, provide the date of the earliest article found. - How to search for articles: 1. If X is the treatment, search: "X cures COVID-19" on the News tab in Google - correct any obvious misspellings - try some obvious variations if do not find appropriate results (e.g. "hcq" and "hydroxychloroquine") 2. Set date range to [03/01/2020 - Present] (or narrow the date range i.e. month by month range if too many search results) 3. Select the option to order results by date 4. Go to the last page in search results (earliest) 5. Find the earliest article that debunks the claim - answer "NA" if no appropriate article is found - Make sure to verify the correct date of the article publication from the article webpage as the date on google news is not always updated and reliable | | What is a link to the article? | - Article URL or "NA" is the answer to the previous question was "NA" | | Table 5: Human evaluation question and guidelines provided to annotators during the early detection of misleading | | Table 5: Human evaluation question and guidelines provided to annotators during the early detection of misleading COVID-19 claims task | Question | Annotation Guidance | | |------------------------------------------|--------------------------------------------------------------------|----| | Is the tweet a duplicate (already seen)? | - "Yes" or [BLANK] | | | Does this tweet violate Twitter's | - Answer "NA" if previous answer was [BLANK], otherwise answer "1" | | | COVID-19 | unapproved | treat | | ment policy? | - "5" based on the attached table (Table 7) | | Table 6: Human evaluation question and guidelines provided to annotators during the COVID-19 policy violation verification task using a linear classification layer. We use the best-performing model from Soares et al. (2019), [ENTITY MARKERS - ENTITY START], which uses special tokens [Cstart], [Cend] to mark treatment span in a sentence. The modified sentence is then fed into BERT/CT-BERT and the representation of the starting marker [Cstart] is sent into a linear classification layer. ## E Ranking Extracted Claims To generate a list of treatments mentioned on a specific day sorted by trendiness or significance we cannot simply use the daily frequency counts of treatments mentioned, as more popular treatments will consistently be mentioned at a higher volume. In our evaluation dataset used in §4.2, for example, chloroquine and its variants are mentioned in a tweet with SUPPORTING stance approximately "You are being asked to be a volunteer in a research study. The purpose of this study is to advance research on Computational Linguistics. The annotation form will take approximately 3 minutes to complete. You must be 18 years of age or older to participate. Your judgments will be used by researchers worldwide to help advance research on Computational Linguistics. They will enable machine learning techniques to be applied to problems in natural language understanding. We will keep your personal information (Mechanical Turk ID, etc.) confidential. The risks involved are no greater than those involved in daily activities. We will comply with any applicable laws and regulations regarding confidentiality. To make sure that this research is being carried out in the proper way, the [REDACTED] may review study records. The [REDACTED] may also look at study records. If you have any questions about the study, you may contact [REDACTED]. If you have any questions about your rights as a research subject, you may contact [REDACTED] at [REDACTED]. Thank you for participating in this study. By completing the online survey, you indicate your consent to be in the study. Subjects located in the EU are not allowed to join this study." Figure 5: Amazon MTurk ethics statement which was shown to annotators before stance labeling task | Source | Treatments | | | |--------------------|--------------|------------|----| | CDC(*) | Mask, Face Mask, Social Distancing, Stay Home, Wash Hands, Hand Washing, Cover Coughs, Cover Sneezes | | | | New York Times(**) | Remdesivir, | REGEN-COV, | Bam | | lanivimab, Etesevimab, Sotrovimab, Dexamethasone, Prone positioning, Ventilators, Evusheld, Paxlovid, Molnupiravir, Lagevrio, Baricitinib, Olumiant, Tocilizumab, Actemra | | | | Figure 6: Amazon MTurk interface for stance annotation towards extracted claims in tweets. | Score | Description | |---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | Clearly not in violation of Twitter's policy. | | 2 | Probably not violating the policy, but does seem to suggest a questionable treatment may be effective. For example, the treatment is in clinical trials at the time the tweet was written, or the tweet does not make a strong claim about effectiveness. | | 3 | Unclear whether or not this violates the policy. | | 4 | Most likely violating Twitter's policy. Seems like the treatment is not effective based on official sources or reputable news organizations, and the tweet is making a relatively strong claim that the treatment is effective. | | 5 | Clearly in violation of Twitter's policy. | Table 7: Likert score descriptions that are presented to annotators. These are used to evaluate whether tweets supporting a misinformation claim are in violation of Twitter's policies. 11.5 times per day on average while 56% of total treatments encountered were mentioned less than one time in the period studied. Therefore, we require a method that takes into account the historical frequency of treatments to calculate the strength of the association between the trendiness of treatment and the date. To do this, we use a one-tailed Fisher's Exact Test (Fisher, 1922), which has been shown to be effective in rare event detection applications in the Table 8: Approved COVID-19 treatments used in evaluation based on lists from the New York Times (Zimmer et al., 2020) and the Centers for Disease Control (CDC, 2021). | Tagging/QA (non-T5) | QA (T5) | | |-----------------------|----------------|----------------| | learning rate | 1e-5,2e-5,3e-5 | 1e-4,2e-4,3e-4 | | batch size | 8,16 | 8,16 | | epoch | 50 | 10 | Table 9: Hyperparameters of claim extraction models. | NBOW | RoBERTa/CT-BERT | | |---------------|---------------------|----------------| | learning rate | 1e-4,5e-5,1e-3,5e-3 | 8e-6,1e-5,3e-5 | | batch size | 4,16 | 8,16 | | epoch | 50 | 12 | Table 10: Hyperparameters of stance classification models. domain of statistical natural language processing (Moore, 2004; Johnson et al., 2007; Ritter et al., 2011). To apply this test, we first calculate the hypergeometric probability, the probability of a particular distribution of treatment frequencies assuming independence between the treatment and the date. We define T and D as the events when a tweet's extracted treatment is t and when a tweet is published on date d respectively. Also, we let C(X) be the observed frequency of event X and C(*X, Y* ) be the joint frequency of event X and event Y . Given these definitions, we can calculate the hypergeometric probability, pT,D, as follows: pT,D =C(T)!C(¬T)!C(D)!C(¬D)! N!C(T,D)!C(¬T,D)!C(T,¬D)!C(¬T,¬D)! where N is the sample size. Given this formula of the hypergeometric probability for a distribution with a treatment t and date d, we calculate the p-value of the test by summing the hypergeometric probabilities of this distribution and all more extreme distributions. In our case, more extreme distributions are hypothetical distributions where the joint frequency of a tweet with a specific treatment published on a specific date is greater than C(*T, D*). A treatment is flagged by our system if its pvalue is less than the threshold or α-value as set by the content moderator and it has not previously broken this threshold. ## F Policy Violations Per Hour Calculation Here we detail the calculation of the 124.2 policy violations per hour statistic reported in §4.2. First, we calculate the total amount of time required by each of the phases of the human annotation: 1. **Stage 1:** Detecting Misleading COVID-19 Treatments (§4.2.1) - \# of claims = 300 claims - time of verifying a claim = 89.7s/claim - time spent on claim annotation = 300 ∗ 89.7s = 7.5 hours 2. **Stage 2:** Identifying COVID-19 Policy Violations (§4.2.2) - \# of tweets mentioning an unapproved claim in the full batch = 10246 claims - time to annotate an individual tweet = 16.1s/tweet - estimated time spent on tweet annotation = 10246 ∗ 16.1s = 45.8 hours Next, we detail the calculation steps using these calculated times: 1. Total annotation time = 7.5 + 45.8 = 53.3 hours 2. Estimated \# of tweets (considering the full batch - see scores 4 and 5 bars in Figure 4) containing policy violation = 3151 + 3467 = 6618 tweets 3. \# policy violated identified per hour of annotation time = 6618/53.3 hours = 124.2 tweets/hours 4. total \# tweets judged per hour of annotation time = 10246/53.3 hours = 192.2 tweets*/hour* ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 ✓ A2. Did you discuss any potential risks of your work? Section 6, Section 7 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1.1, Section 4.1.2 ✓ B1. Did you cite the creators of artifacts you used? Section 4.1.1 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 7 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4.1.1 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 7 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1.2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.1.2 ## C ✓ **Did You Run Computational Experiments?** Section 4.1.1, Section 4.1.2 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix D1, Appendix D2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix D1, Appendix D2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 4.1.2 and Appendix B ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix B ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 4.1.2 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix B ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Appendix B ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Appendix B
chanchani-huang-2023-composition
Composition-contrastive Learning for Sentence Embeddings
https://aclanthology.org/2023.acl-long.882
Vector representations of natural language are ubiquitous in search applications. Recently, various methods based on contrastive learning have been proposed to learn textual representations from unlabelled data; by maximizing alignment between minimally-perturbed embeddings of the same text, and encouraging a uniform distribution of embeddings across a broader corpus. Differently, we propose maximizing alignment between texts and a composition of their phrasal constituents. We consider several realizations of this objective and elaborate the impact on representations in each case. Experimental results on semantic textual similarity tasks show improvements over baselines that are comparable with state-of-the-art approaches. Moreover, this work is the first to do so without incurring costs in auxiliary training objectives or additional network parameters.
# Composition-Contrastive Learning For Sentence Embeddings Sachin Chanchani and **Ruihong Huang** Texas A&M University {chanchanis, huangrh}@tamu.edu ## Abstract Vector representations of natural language are ubiquitous in search applications. Recently, various methods based on contrastive learning have been proposed to learn textual representations from unlabelled data; by maximizing alignment between minimally-perturbed embeddings of the same text, and encouraging a uniform distribution of embeddings across a broader corpus. Differently, we propose maximizing alignment between texts and a composition of their phrasal constituents. We consider several realizations of this objective and elaborate the impact on representations in each case. Experimental results on semantic textual similarity tasks show improvements over baselines that are comparable with state-of-the-art approaches. Moreover, this work is the first to do so without incurring costs in auxiliary training objectives or additional network parameters.1 ## 1 Introduction Significant progress has been made on the task of learning universal sentence representations that can be used for a variety of natural language processing tasks without task-specific fine-tuning (Conneau et al., 2017, Cer et al., 2018, Kiros et al., 2015, Logeswaran and Lee, 2018, Giorgi et al., 2021a, Yan et al., 2021, Gao et al., 2021, Chuang et al., 2022a). Recent works have shown the potential to learn good sentence embeddings without labeled data by fine-tuning pre-trained language models (PLMs) using the unsupervised framework introduced in SimCLR (Chen et al., 2020), adapted to the natural language processing (NLP) domain. In computer vision (CV), SimCLR exploits a series of transformations (blurs, crops, color distortions, etc.) to construct positive pairs from otherwise unique data points. A cross entropy objective (InfoNCE; Oord et al., 2018) is then applied to minimize distance 1Code, pre-trained models, and datasets will be available at github.com/perceptiveshawty/CompCSE. between representations originating from the same datum, while maximizing the distance to all other points in a mini-batch. The success of the framework in computer vision is due largely to the diversity of augmentations used for creating positive pairs, which leave the identity of the original example intact while reducing pairwise mutual information in the input space (Tian et al., 2020; Wu et al., 2020; Purushwalkam and Gupta, 2020). Constructing positive pairs via discrete augmentations have not been effective when applying the same objective to sentence embeddings. In fact, Gao et al. (2021) perform an ablation study of textual augmentations (e.g., cropping, synonym replacement) and find that training on these pairs hurts downstream performance on semantic textual similarity (STS) tasks. Instead, they observe that minimal (10%) dropout noise can be used to create positive pairs on-the-fly, and empirically results in stronger representations. This framework relying on nearly identical pairs is known as SimCSE. Since the dropout noise exists as a regularization component of the BERT architecture (Devlin et al., 2019a), explicit augmentations are unnecessary, making it a simple yet effective framework for unsupervised learning of sentence embeddings. Here, we make a case for composition as augmentation, by exploiting its presence in language as a signal for learning sentence encoders. We conduct a series of experiments to illustrate the impact of training on positive examples derived by averaging representations of textual constituents in the latent space. Following previous works, we benchmark the proposed strategy on 7 STS tasks. Our results show that it is feasible to significantly improve upon SimCSE without making expensive architectural modifications or changing the overall training objective. We hope our findings can inspire new avenues of inquiry in text representation learning that draw on long-standing notions in semantics and linguistics. 15836 ![1_image_0.png](1_image_0.png) ## 2 Background And Related Work 2.1 Unsupervised Contrastive Learning Contrastive learning (Hadsell et al., 2006) aims to learn vector-valued representations of data without relying on annotations. Meaning is derived from these representations based on their proximity to other points in the same space, e.g. two images of dogs will be closer in space than a dog and a chair. Several works have theoretically verified the utility of representations derived from contrastive learning (Arora et al., 2019; Lee et al., 2020; Tosh et al., 2020) under various assumptions; Chen et al. (2020) showed that SimCLR can even outperform supervised counterparts on CV transfer learning benchmarks. In SimCLR (and SimCSE), the learning objective for an example is: $$l_{i}=-log\frac{e^{sim(z_{i},z_{i}^{+})/\tau}}{\sum_{j=1}^{N}e^{sim(z_{i},z_{j}^{+})/\tau}},\tag{1}$$ where $z_{i}=f(x_{i}),z_{i}^{+}=f(x_{i}^{+})$ are vector repre i ) are vector representations of an input and its corresponding augmented positive, τ is a temperature hyperparameter, sim(*., .*) is cosine similarity, and N is batch size. Drawbacks of InfoNCE. In examination of eq. 1, it is evident that InfoNCE uniformly repels examples in the mini-batch besides the minimally augmented positive. Consequentially, the resulting embeddings show poor group-wise discrimination, especially in language, since it is likely that different examples in the batch can have different relative similarities to a given anchor. Another consequence of the unsupervised InfoNCE objective is dimensional collapse, wherein embedding vectors are mostly differentiated by a small proportion of the feature axes; thus under-utilizing the full expressive capacity of the encoder. This was theoretically posited in Jing et al. (2022). They prove that minimal augmentation, coupled with an over-parameterized network, results in low rank solutions to the unsupervised contrastive objective. We hypothesize that this is closely tied to short-cut learning (Robinson et al., 2021a) —- in the context of sentence embeddings, Wu et al. (2022c) observed that spurious features related to the lengths of sentences are relied on to solve the contrastive objective. Such solutions can yield nongeneralizable features that poorly represent data from new domains. Qualifying the representation space. Wang and Isola (2020) proposed two metrics to measure the quality of embeddings derived through contrastive learning. First, *alignment* measures on average the proximity of pairs of examples that *should* be close in space, i.e. for a set of positive pairs ppos and their normalized representations f(x), f(x +): $$\ell_{\rm align}\stackrel{{\Delta}}{{=}}\frac{\mathbb{E}}{(x,x^{+})\sim p_{\rm pos}}\|f(x)-f(x^{+})\|^{2}.\tag{2}$$ Conversely, *uniformity* measures how scattered the embeddings are upon the unit hypersphere: ℓuniform ≜ log E x,y i.i.d. ∼ pdata e−2∥f(x)−f(y)∥ 2, (3) where p*data* denotes the full data distribution. We use these metrics to explore the advantages and drawbacks of various augmentations in contrastive pre-training, similarly to Gao et al. (2021). ## 2.2 Learning Sentence Embeddings Early works. First approaches to learning sentence embeddings span unsupervised (Kiros et al., 2015; Hill et al., 2016; Logeswaran and Lee, 2018), and supervised (Conneau et al., 2017; Cer et al., 2018; Reimers and Gurevych, 2019) methods which have been studied extensively in the literature. More recent work has focused on unsupervised contrastive learning with the advent of SimCSE (Gao et al., 2021), which passes the same sentence to a language model twice; the independent dropout masks sampled in the two forward passes encode the sentence at slightly different positions in vector space. A cross-entropy objective is then used to maximize the probability of top-1 proximity between positives while uniformly repelling other examples. Successors to SimCSE. Works that follow SimCSE attempt to improve the framework with auxiliary training objectives (Chuang et al., 2022a; Nishikawa et al., 2022; Zhou et al., 2023; Zhang et al., 2022; Wu et al., 2022b; Wang et al., 2022), verbalized or continuous prompts (Wang et al., 2022; Yuxin Jiang and Wang, 2022), instance generation or weighting strategies (Zhou et al., 2022), momentum encoders with negative sample queues (He et al., 2020), or entirely new parameters with secondary networks (Wu et al., 2022a). Many works combine several of these components, making it difficult to discern their impact in isolation. As the design choices have become more intricate and less parameter-efficient, performance on STS benchmarks has too become saturated. ## 3 Composition-Based Contrastive Learning Our augmentation strategy retains the simplicity and efficiency of SimCSE, as illustrated in Figure 1. Specifically, it requires just one additional forward pass that is ultimately compensated by a non-trivial reduction in convergence time (§6). Beginning with a corpus of unlabelled sentences {xi} m i=1, we consider x + i only in the latent space, as a composition of the representations of (x ′+ i, x ′′+ i). A simple (and effective) way to curate (x ′+ i, x ′′+ i) is to split the tokens of xiin half, and encode the left and right phrases in independent forward passes through the encoder and linear projector. After obtaining their respective [CLS]token representations (zi, z ′+ i, z ′′+ i), (z ′+ i, z ′′+ i) is aggregrated and taken to be the corresponding positive example for zi. The training objective for a single pair is then the same as in eq. 1, where z + = *aggregate*(z ′+ i, z ′′+ i). We experiment with aggregation methods in §5, and find that the best approach varies according to the size and type of underlying PLM. In our final model based on BERTbase, we find that this manner of augmentation is especially suitable for the scheme proposed in DirectCLR (Jing et al., 2022), which aims to directly mitigate dimensional collapse by computing the loss from eq. 1 on a subset of the embedding vector axes before backpropagating to the entire representation. Decomposition as data augmentation. To explain the motivation for decomposing examples in the input space, we can consider an example from the development subset of STS-B labelled as having high semantic similarity: A man is lifting weights in a garage. A man is lifting weights. There are two semantic atoms at play in the first text: 1) a man is lifting weights, and 2) a man is in a garage. The similarity between the two texts can only be considered high based on the first atom; lifting weights. It cannot be said that there is a general relation between *being in a garage* and *lifting* weights - a garage is equally, if not more likely to be related to cars, parking, or storage, yet this does not preclude a connection between them. It is only through the composition of both atoms that we can relate the two. Thus, there is a need for sentence encoders to learn more generalized phrase representations; to at least implicitly abide by principles of semantic compositionality. The challenge in enforcing this kind of constraint through a contrastive objective is in the choice of data - it would require a corpus where lexical collocations are encountered across a diverse set of contexts. ![3_image_0.png](3_image_0.png) Subsampling from decomposed inputs. To further examine the effect of decomposition in the input space, we leverage a pre-trained discourse parser2to extract atomic semantic units from each unique example in the training set; typically simple phrases or clauses. We experiment with 3 kinds of strategies (Figure 1a) to expand the training set, besides considering our augmentation in isolation: let C = {xi,k} c k=1 represent the c non-overlapping phrases extracted from an input xi: - **adjacent spans** are sampled by taking each unique pair in C such that there is no overlap between inputs; - **overlapping and adjacent spans** are sampled by taking (potentially) overlapping pairs in C; - **overlapping, adjacent, and subsuming** spans are sampled by recursively partitioning the elements of C in half, i.e. maximizing the lexical overlap of extracted input samples. Impact on the representation space. A consequence of expanding the training set with subsamples is the presence of harder in-batch negatives. Prior work has demonstrated that this is generally beneficial to contrastive learning (Robinson et al., 2021b; Kalantidis et al., 2020; Zhang and Stratos, 2021). Following Gao et al. (2021), we measure the uniformity and alignment of representations obtained for the development set of STS-B to understand the effect of training with additional sub2https://github.com/seq-to-mind/DMRST_Parser samples. STS-B is comprised of pairs of sentences accompanied by a score between 1-5 indicating degree of semantic similarity. We take all pairs as pdata, and pairs with a score greater than 4 as ppos. Both metrics are measured every 10 steps for 500 training steps, to understand the direction in which each of our strategies drives the encoder. As shown in Figure 2, any of the subsampling strategies can bring non-trivial improvements over unsupervised SimCSE in both alignment and uniformity. Specifically, expanding the training set with subsamples (*+ adjacent, + overlapping, +* subsuming) encourages a more uniform embedding distribution. On the other hand, forgoing subsampling for just the compositional augmentation (*naive partition*) achieves the better alignment while retaining the uniformity of SimCSE. This is because we leave the self-prediction objective intact, while increasing its difficulty: although subsamples are potentially highly related, positive pairs are only curated from the exact same text. As a consequence, the underlying PLM is forced to effectively distinguish examples with high lexical overlap - which is precisely the intuition underlying DiffCSE Chuang et al. (2022b), and other discriminative pre-training objectives. ## 4 Experiment Setup. In our experiments, we modify the public PyTorch implementation3 of SimCSE to support our proposed augmentation and subsampling methods. All of our language models are initialized from pre-trained BERT/RoBERTa checkpoints (Devlin et al., 2019b; Liu et al., 2019), except the randomlyinitialized MLP over the [CLS]representation. For all models, we employ the scheme illustrated in Figure 1 and report the best results after training with or without the 3 subsampling strategies. We keep the best checkpoints after evaluating on the development set of STS-B every 125 steps during training. Batch size is fixed at 64 for all models; for base and large sized models, learning rates are fixed to 3e-5 and 1e-5 respectively. Besides those covered in 5, extensive hyperparameter searches were not conducted in this work. Data. We use the same 1 million randomly sampled sentences4as SimCSE for training, be-3https://github.com/princeton-nlp/SimCSE 4https://huggingface.co/datasets/princeton-nlp/datasetsfor-simcse PLM Method STS12 STS13 STS14 STS15 STS16 STS-B SICK-R Avg. SimCSE♣ 68.40 82.41 74.38 80.91 78.56 76.85 72.23 76.25 L2P-CSR♡ 70.21 83.25 75.42 82.34 78.75 77.8 72.65 77.20 DCLR♠ 70.81 83.73 75.11 82.56 78.44 78.31 71.59 77.22 MoCoSE♢ 71.58 81.40 74.47 83.45 78.99 78.68 72.44 77.27 ArcCSE† 72.08 84.27 76.25 82.32 79.54 79.92 72.39 78.11 PCL‡ 72.74 83.36 76.05 83.07 79.26 79.72 72.75 78.14 ∗SimCSE (*w/ comp.*) 72.14 84.06 75.38 **83.82** 80.43 80.29 71.12 78.18 ESimCSE○ **73.40** 83.27 **77.25** 82.66 78.81 80.17 72.30 78.27 SNCSE$ 70.67 **84.79** 76.99 83.69 **80.51 81.35 74.77 78.97** | BERTbase BERTlarge RoBERTabase RoBERTalarge | |-----------------------------------------------| SimCSE ♣ 70.88 84.16 76.43 84.50 79.76 79.26 73.88 78.41 DCLR ♠ 71.87 84.83 77.37 84.70 79.81 79.55 74.19 78.90 L2P-CSR ♡ 71.44 85.09 76.88 84.71 80.00 79.75 74.55 78.92 MoCoSE ♢ 74.50 84.54 77.32 84.11 79.67 80.53 73.26 79.13 ESimCSE○ 73.21 85.37 77.73 84.30 78.92 80.73 74.89 79.31 ArcCSE† 73.17 86.19 77.90 84.97 79.43 80.45 73.50 79.37 ∗SimCSE (*+ subsum.*) **75.10** 86.57 77.70 84.72 80.25 80.17 73.21 79.67 PCL‡ 74.89 85.88 78.33 85.30 80.13 81.39 73.66 79.94 SNCSE$ 71.94 **86.66 78.84 85.74 80.72 82.29 75.11 80.19** SimCSE♣ 70.16 81.77 73.24 81.36 80.65 80.22 68.56 76.57 ESimCSE○ 69.90 82.50 74.68 83.19 80.30 80.99 70.54 77.44 L2P-CSR♡ 71.69 82.43 74.55 82.15 **81.81** 81.36 70.22 77.74 DCLR♠ 70.01 83.08 75.09 83.66 81.06 81.86 70.33 77.87 ∗SimCSE (*w/ comp.*) **72.56** 83.33 73.67 83.36 81.14 80.71 70.39 77.88 PCL‡ 71.54 82.70 75.38 83.31 81.64 81.61 69.19 77.91 SNCSE$ 70.62 **84.42 77.24 84.85** 81.49 **83.07 72.92 79.23** RoBERTalarge ∗SimCSE (*w/ comp.*) 72.32 84.19 75.00 84.83 81.27 82.10 70.99 78.67 SimCSE♣ 72.86 83.99 75.62 84.77 81.80 81.98 71.26 78.90 DCLR♠ 73.09 84.57 76.13 85.15 81.99 82.35 71.80 79.30 PCL‡ **73.76** 84.59 76.81 85.37 81.66 82.89 70.33 79.34 ESimCSE○ 73.20 84.93 76.88 84.86 81.21 82.79 72.27 79.45 L2P-CSR♡ 73.29 84.08 76.65 85.47 82.70 82.15 72.36 79.53 SNCSE$ 73.71 **86.73 80.35 86.80 83.06 84.31 77.43 81.77** sides incorporating the subsampling strategies from §3. We evaluate on 7 semantic textual similarity tasks: STS 2012-2016, STS-Benchmark, SICKRelatedness (Agirre et al., 2012, 2013, 2014, 2015, 2016; Cer et al., 2017; Marelli et al., 2014) and report averaged Spearman's correlation across all available test subsets. We employ the modified SentEval5(Conneau and Kiela, 2018) package accompanying the source code of SimCSE for fair comparison with other works. Baselines. We compare our results with many contemporaries: ESimCSE (Wu et al., 2022c), SNCSE (Wang et al., 2022), PCL (Wu et al., 2022a), DCLR (Zhou et al., 2022), ArcCSE (Zhang et al., 2022), MoCoSE (Cao et al., 2022), and L2P-CSR (Zhou et al., 2023). We consider SimCSE (Gao et al., 2021) as our baseline, since we leave its training objective and network architecture intact. Results. We can observe in Table 1 that our methods bring non-trivial improvements to SimCSE with both BERT encoders, as well as RoBERTabase. In fact, we achieve an average F1 score within 0.8 points of SNCSE-BERTbase (Wang et al., 2022). SNCSE exploits biases in test sets by engineering hard negatives via explicitly negated sentences - the impact of this strategy is more apparent in the results utilizing RoBERTa, where there is parity in all works besides SNCSE. In the case of BERTlarge, the gap in performance between our approach and SNCSE is narrower at 0.52 points. A clear failure of the compositionaugmented objective presents itself in the results with RoBERTalarge. This could be attributed to poor hyperparameter settings, or a fundamental incompatibility between our approach and the model size/RoBERTa pre-training objective, since other works achieve better results with this PLM. ## 5 Ablation We ablate several aspects of the approach to understand their impact in isolation. We first consider the subsampling strategy, or lack thereof, in which each model achieves the best STS-B development set performance. These are then tied to each model in subsequent ablations. Including subsamples. In the process of designing DeCLUTR, Giorgi et al. (2021b) report gains from subsampling more than one anchor per input document. In our experiments, we find that the aligment-uniformity trade-off differs between BERTlarge and BERTbase, ie. different strategies can be better suited to different PLMs. In Table 2, we show that including subsamples is beneficial to the BERTlarge PLM, but harmful to BERTbase. This is likely a result of the difference in no. of parameters - the smaller PLM may not possess the expressive capacity to distinguish highly related texts without suffering a degeneration in alignment. With RoBERTabase, we observe that subsampling non-overlapping spans gives the best results, whereas none of our strategies appeared compatible with RoBERTalarge. | PLM | Method | STS-B | |------------------------------------|------------------------------------|---------| | SimCSE | 81.47 | | | w/ composition | 83.97 | | | Additional subsampling: + adjacent | 83.39 | | | + overlapping | 83.18 | | | + subsuming | 82.97 | | | BERTbase | SimCSE | 84.41 | | w/ composition | 84.79 | | | Additional subsampling: + adjacent | 84.84 | | | + overlapping | 85.01 | | | + subsuming | 85.06 | | | BERTlarge | SimCSE | 83.91 | | w/ composition | 84.14 | | | Additional subsampling: + adjacent | 84.00 | | | + overlapping | 84.10 | | | + subsuming | 82.92 | | | RoBERTabase | SimCSE | 85.07 | | w/ composition | 84.80 | | | RoBERTalarge | Additional subsampling: + adjacent | 83.91 | | + overlapping | 82.74 | | | + subsuming | 83.33 | | Table 2: Development set results of STS-B after varying the subsampling strategy on different-sized PLMs. Aggregration method. In SBERT (Reimers and Gurevych, 2019), and in historically effective works such as InferSent (Conneau et al., 2017), PLMs are fine-tuned with a cross entropy loss to predict whether two sentences u and v entail or contradict eachother. Their pooler is a concatenation of the two sentence embeddings, along with second-order features such as the element-wise difference, |u − v|. We experiment with these aggregration methods, as well as simpler choices such as element-wise sums/averages. We can see in Table 3 that simply interpolating the embeddings is preferable to other methods for BERT-based encoders. We postulate that this interpolation functions as a form of self-distillation, and amplifies the salience of desirable sparse features correlated with sentential context (Wen and Li, 2021). For RoBERTa, we find that concatenating the first and last halves of the representations is better. Since RoBERTa does not use the next-sentence prediction (NSP) objective, its embeddings will not encode sentential knowledge. Averaging RoBERTa embeddings may not correlate well with real tokens in its vocabulary, whereas concatenating the first and last halves of constituent embeddings retains localized token-level information, making it a better choice in this case. | Aggregration | STS-B | |-------------------------------------|---------| | BERTbase sum | 83.92 | | avg. | 83.97 | | concat first & last half | 83.01 | | concat + project | 69.24 | | concat w/ abs. difference + project | 68.79 | | RoBERTabase sum | 84.00 | | avg. | 84.08 | | concat first & last half | 84.14 | | concat + project | 65.02 | | concat w/ abs. difference + project | 65.44 | Table 3: Results of different aggregration methods for composing z + in the latent space. Results are based on BERTbase on the development set of STS-B. Composing z vs. z +. In our training objective, there are two sets of sentence representations, one derived from pure dropout noise, and the second by averaging the coordinates of constituent representations. However, for each sentence we can: 1) compose the anchor z in latent space, which means other in-batch examples are repelled from | BERTbase | Compose | z | z+ | Both | |------------|-----------|-------|-------|--------| | STS-B | 83.61 | 83.97 | 83.81 | | a synthetic example's coordinate, 2) compose the positive z +, which means synthetic coordinates are repelled from representations of real examples, or 3) compose both z and z + in the latent space. In Table 4, we can see that with BERTbase, we found the best results by directly embedding the anchor sentence, and composing z + from constituents. Table 4: Differences between having compositional anchors and positives. In the *Both* case, the model framework is symmetric in that both anchors and positives are composed of constituent representations. Results are based on BERTbase on the development set of STS-B. | BERTbase | Partitions | 2 | 3 | 4 | |------------|--------------|-----|-----|-----| Number of partitions. Within our framework, we can aggregrate the embeddings of two or more phrases. Increasing the number of phrases increases the number of forward passes, and magnifies the impact of dropout noise. We find that partitioning into more than two bins is detrimental to the objective (Table 5), though perhaps this is the case because the evaluation data consists mostly of short-length sentences. Table 5: Impact of splitting examples into more than 2 bins. Results are based on BERTbase with the development set of STS-B. Hyperparameter d0. In our experiments with BERTbase, computing the contrastive loss on a subvector of (zi, z+ i ) is complementary to composing z + iin the latent space. When d0 → d, our training objective is the exact same as in all *CSE works, ie. computing the loss on all coordinates of (zi, z+ i ). For BERTbase, we search d0 ∈ {192, 256, 384} with the compositional augmentation in isolation (*w/ composition*); for BERTlarge, d0 ∈ {320, 384, 512} with the expanded training set of subsamples (*+ subsuming*). Our results in Table 6 indicate that taking a subvector to compute the loss is beneficial for BERTbase, but the entire vector is necessary for BERTlarge. With RoBERTa encoders, we aggregrate embeddings by concatenating the first and last halves of the phrase embeddings, so d0 is inapplicable. ## 6 Analysis | BERTbase | d0 | 192 | 256 | 384 | 768 | |------------|-------|-------|-------|-------|-------| | STS-B | 83.88 | 84.11 | 83.17 | 83.97 | | | BERTlarge | d0 | 320 | 384 | 512 | 1024 | | STS-B | 84.61 | 84.94 | 84.98 | 85.06 | | Stability and efficiency of training. Successors to SimCSE have incrementally improved STS performance while disproportionately driving up resource requirements. This limits accessibility to practitioners who wish to learn embeddings from their own corpora, perhaps in other languages. Differently, our approach relies on a single additional forward pass while converging much faster than SimCSE. In Figure 3, we compare our BERTbase model's evaluation curve to SimCSE's for 1000 training steps in the same setting. We observe that composition as augmentation greatly speeds up convergence, with evaluation metrics plateauing much faster, and more stably than SimCSE. In fact, on a single NVIDIA A100 GPU (40GB), our model can finish training in under 15 minutes. $$\begin{array}{r l r}{3}&{{}}&{4}\\ {83.48}&{{}}&{83.52}\end{array}$$ ![6_image_1.png](6_image_1.png) ![6_image_0.png](6_image_0.png) Text length as a feature. To investigate the structure of the learned space, In Figure 5, we visualize embeddings of sentences from the development set of STS-B after down-projecting to 2D Euclidean space. We employ UMAP (McInnes et al., 2018) with cosine distance as the metric to preserve local and global topological neighborhoods. The same ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) parameters are used to compute the embeddings in Figure 5a and 5b, which are derived from dropout noise, and composition-based augmentations (w/ composition) respectively. In Figure 5a, we can observe several clusters of dark points corresponding to shorter sentences. This corroborates our intuition that minimal augmentation to create positive pairs can lead to shortcut learning, wherein text length is relied upon to solve the training objective. In contrast, we see a more scattered distribution of points in Figure 5b, particularly with shorter sentences. Coupled with the improved performance on STS tasks, we can conclude that our framework is less prone to learning from spurious correlations. Learned similarity metric. Returning to the example initially posed in §3, we show in Figure 4 similarity scores for pairs of examples computed by our BERTbase model, as well as the corresponding DiffCSE and SimCSE variants. Notice that all three assign higher similarities between anchor: "A man is lifting weights in a garage", and phrases: "A man is lifting weights", "A man in a garage". However, despite their equal constitution in the anchor text, SimCSE incorrectly assesses a higher similarity between the anchor and the first phrase, whereas DiffCSE and our model better capture the equivalence in similarity. The same occurs with anchor: "We store it outside of the house", and texts: "A man is in a garage", "She parked on the driveway"; despite both being unrelated to the anchor, SimCSE spuriously assigns a higher affinity to the former. Overall, we observed parity in the similarity assessments given by our model and DiffCSE, which validates the ability of our approach to remedy the suboptimal alignment of SimCSE without explicit incentive. ## 7 Conclusion In summary, we proposed a new way to construct positive pairs for unsupervised contrastive learning frameworks relying on pre-trained language models. Our experiments on STS tasks verified the effectiveness of the approach, which achieved competitive results with more complex learning methods, with the benefit of stabilizing and reducing the overall cost of training. We provided empirical studies and qualitative examinations into our approach, verifying its ability to train sentence encoders with better alignment. We believe this work can foster new avenues of inquiry in contrastive learning, especially those that draw upon a *human* cognition of language. ## 8 Limitations There are several limitations in this work. First, we have not explored how to make use of compositionbased augmentations in the supervised setting. A second limitation is a lack of theoretical grounding in the impact of our latent space composition. Finally, we have not explored interoperability with other training objectives. ## 9 Note On Ethics We do not believe there are significant ethical considerations stemming from our work, except those that accompany the use of language models and unlabelled corpora in general. Pre-trained language models, including BERT and RoBERTa, are known to learn and reiterate harmful prejudices. Although our pre-training corpus is sourced from Wikipedia and cited in several related works, it cannot be feasibly vetted for explicit or inappropriate content. ## Acknowledgements We thank the anonymous reviewers for their valuable feedback and input. We also thank Haotian Xu, for his insight and suggestions that shaped the course of this work. We gratefully acknowledge support from National Science Foundation (NSF) via the awards IIS-1942918 and IIS-2127746. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. ## References Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Iñigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252–263, Denver, Colorado. Association for Computational Linguistics. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81–91, Dublin, Ireland. Association for Computational Linguistics. Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497–511, San Diego, California. Association for Computational Linguistics. Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In **SEM 2012:* The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385– 393, Montréal, Canada. Association for Computational Linguistics. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In *Second Joint* Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32–43, Atlanta, Georgia, USA. Association for Computational Linguistics. Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, and Nikunj Saunshi. 2019. A theoretical analysis of contrastive unsupervised representation learning. In 36th International Conference on Machine Learning, ICML 2019, 36th International Conference on Machine Learning, ICML 2019, pages 9904–9923. International Machine Learning Society (IMLS). Publisher Copyright: © 2019 International Machine Learning Society (IMLS).; 36th International Conference on Machine Learning, ICML 2019 ; Conference date: 09-06-2019 Through 15-06-2019. Rui Cao, Yihao Wang, Yuxin Liang, Ling Gao, Jie Zheng, Jie Ren, and Zheng Wang. 2022. Exploring the impact of negative samples of contrastive learning: A case study of sentence embedding. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3138–3152, Dublin, Ireland. Association for Computational Linguistics. Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In *Proceedings* of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning (ICML)*, pages 1597–1607. Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljacic, ShangWen Li, Scott Yih, Yoon Kim, and James Glass. 2022a. DiffCSE: Difference-based contrastive learning for sentence embeddings. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4207–4218, Seattle, United States. Association for Computational Linguistics. Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljaciˇ c, Shang- ´ Wen Li, Wen-tau Yih, Yoon Kim, and James Glass. 2022b. DiffCSE: Difference-based contrastive learning for sentence embeddings. Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association (ELRA). Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In *Empirical Methods in Natural Language Processing (EMNLP)*, pages 670–680. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language understanding. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. John Giorgi, Osvald Nitski, Bo Wang, and Gary Bader. 2021a. DeCLUTR: Deep contrastive learning for unsupervised textual representations. In *Association for* Computational Linguistics and International Joint Conference on Natural Language Processing (ACLIJCNLP), pages 879–895. John Giorgi, Osvald Nitski, Bo Wang, and Gary Bader. 2021b. DeCLUTR: Deep contrastive learning for unsupervised textual representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 879–895, Online. Association for Computational Linguistics. Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In *IEEE/CVF Conference on Computer* Vision and Pattern Recognition (CVPR), volume 2, pages 1735–1742. IEEE. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In *Proceedings of the* 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1367–1377, San Diego, California. Association for Computational Linguistics. Li Jing, Pascal Vincent, Yann LeCun, and Yuandong Tian. 2022. Understanding dimensional collapse in contrastive self-supervised learning. In International Conference on Learning Representations. Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, and Diane Larlus. 2020. Hard negative mixing for contrastive learning. In Neural Information Processing Systems (NeurIPS). Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems (NIPS), pages 3294–3302. J. Lee, Qi Lei, Nikunj Saunshi, and Jiacheng Zhuo. 2020. Predicting what you already know helps: Provable self-supervised learning. In *Neural Information* Processing Systems. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In International Conference on Learning Representations (ICLR). Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA). Leland McInnes, John Healy, and James Melville. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. Sosuke Nishikawa, Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka, and Isao Echizen. 2022. EASE: Entity-aware contrastive learning of sentence embedding. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3870–3885, Seattle, United States. Association for Computational Linguistics. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. Senthil Purushwalkam and Abhinav Gupta. 2020. Demystifying contrastive self-supervised learning: Invariances, augmentations and dataset biases. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, and Suvrit Sra. 2021a. Can contrastive learning avoid shortcut solutions? In *Advances in Neural Information Processing Systems*, volume 34, pages 4974–4986. Curran Associates, Inc. Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. 2021b. Contrastive learning with hard negative samples. In *International Conference on Learning Representations*. Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. 2020. What Makes for Good Views for Contrastive Learning? In Advances in Neural Information Processing Systems, volume 33, pages 6827–6839. Curran Associates, Inc. Christopher Tosh, Akshay Krishnamurthy, and Daniel J. Hsu. 2020. Contrastive learning, multi-view redundancy, and linear models. *ArXiv*, abs/2008.10150. Hao Wang, Yangguang Li, Zhen Huang, Yong Dou, Lingpeng Kong, and Jing Shao. 2022. SNCSE: Contrastive learning for unsupervised sentence embedding with soft negative samples. Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *International Conference on Machine Learning (ICML)*, pages 9929–9939. Zixin Wen and Yuanzhi Li. 2021. Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning. Mike Wu, Chengxu Zhuang, Milan Mosse, Daniel Yamins, and Noah Goodman. 2020. On Mutual Information in Contrastive Learning for Visual Representations. Qiyu Wu, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, and Daxin Jiang. 2022a. PCL: Peercontrastive learning with diverse augmentations for unsupervised sentence embeddings. Xing Wu, Chaochen Gao, Zijia Lin, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2022b. InfoCSE: Information-aggregated contrastive learning of sentence embeddings. Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2022c. ESimCSE: Enhanced sample building method for contrastive learning of unsupervised sentence embedding. Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 5065–5075. Linhan Zhang Yuxin Jiang and Wei Wang. 2022. Improved universal sentence embeddings with promptbased contrastive learning and energy-based learning. Wenzheng Zhang and Karl Stratos. 2021. Understanding hard negatives in noise contrastive estimation. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1090–1101, Online. Association for Computational Linguistics. Yuhao Zhang, Hongji Zhu, Yongliang Wang, Nan Xu, Xiaobo Li, and Binqiang Zhao. 2022. A contrastive framework for learning sentence representations from pairwise and triple-wise perspective in angular space. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4892–4903, Dublin, Ireland. Association for Computational Linguistics. Kun Zhou, Beichen Zhang, Xin Zhao, and Ji-Rong Wen. 2022. Debiased contrastive learning of unsupervised sentence representations. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6120– 6130, Dublin, Ireland. Association for Computational Linguistics. Kun Zhou, Yuanhang Zhou, Xin Zhao, and Ji-Rong Wen. 2023. Learning to perturb for contrastive learning of unsupervised sentence representations. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? We provided a negative result in Section 4. We also dedicated a section to this end: Section 8. ✓ A2. Did you discuss any potential risks of your work? Provided in the appendix, although it is mostly a general statement on the use of pre-trained language models and unlabelled corpora. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Sections 1-2: "Vector representations of natural language are ubiquitous in search, retrieval and reranking applications. Various methods based on contrastive learning have been proposed to learn textual representations from unlabelled data. These approaches maximize agreement between minimally perturbed texts, while uniformly repelling examples within a broader corpus." Section 3: "Different from previous works, we propose to maximize agreement between sentences and a composition of their semantic constituents. We consider several interpretations of this objective and elaborate the impact on resultant embeddings in each case." Section 4: "Experimental results on semantic textual similarity tasks demonstrate improvements over baselines that are on par with contemporary approaches." Sections 1-6: "Moreover, this work is the first to do so without incurring costs in auxiliary training objectives or additional network parameters." ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 4-6. ✓ B1. Did you cite the creators of artifacts you used? Sections 4-6. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We include a footnote which links to a GitHub page where these statistics are available in detail; see Section 5. Related works do not report these statistics in the body or appendix of their paper. ## C ✓ **Did You Run Computational Experiments?** Sections 3-6. ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We utliize BERT and RoBERTa, and cite their papers where these parameters are listed. Additionally, we report on efficiency as a benefit of our approach in Section 6. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We did not conduct extensive hyperparameter searches and did not repeat experiments due to time and resource constraints. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We provide footnotes with links to all source codes utilized in this work; Sections 3-5 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
shaham-etal-2023-causes
Causes and Cures for Interference in Multilingual Translation
https://aclanthology.org/2023.acl-long.883
Multilingual machine translation models can benefit from synergy between different language pairs, but also suffer from interference. While there is a growing number of sophisticated methods that aim to eliminate interference, our understanding of interference as a phenomenon is still limited. This work identifies the main factors that contribute to interference in multilingual machine translation. Through systematic experimentation, we find that interference (or synergy) are primarily determined by model size, data size, and the proportion of each language pair within the total dataset. We observe that substantial interference occurs mainly when the model is very small with respect to the available training data, and that using standard transformer configurations with less than one billion parameters largely alleviates interference and promotes synergy. Moreover, we show that tuning the sampling temperature to control the proportion of each language pair in the data is key to balancing the amount of interference between low and high resource language pairs effectively, and can lead to superior performance overall.
# Causes And Cures For Interference In Multilingual Translation Uri Shahamτ Maha Elbayadµ **Vedanuj Goswami**Μ Omer Levyτµ **Shruti Bhosale**Μ τ The Blavatnik School of Computer Science, Tel Aviv University µ Meta AI ## Abstract Multilingual machine translation models can benefit from synergy between different language pairs, but also suffer from interference. While there is a growing number of sophisticated methods that aim to eliminate interference, our understanding of interference as a phenomenon is still limited. This work identifies the main factors that contribute to interference in multilingual machine translation. Through systematic experimentation, we find that interference (or synergy) are primarily determined by model size, data size, and the proportion of each language pair within the total dataset. We observe that substantial interference occurs mainly when the model is very small with respect to the available training data, and that using standard transformer configurations with less than one billion parameters largely alleviates interference and promotes synergy. Moreover, we show that tuning the sampling temperature to control the proportion of each language pair in the data is key to balancing the amount of interference between low and high resource language pairs effectively, and can lead to superior performance overall. ## 1 Introduction Multilingual machine translation models can benefit from transfer between different language pairs (*synergy*), but may also suffer from *interference* (Ha et al., 2016; Firat et al., 2016; Aharoni et al., 2019; Arivazhagan et al., 2019). While there are methods to reduce interference and achieve better performance (Wang et al., 2020a; Kreutzer et al., 2021; Wang et al., 2021), such approaches are often compute intensive, and do not always work (Xin et al., 2022). In this work, we demonstrate that interference in multilingual translation largely occurs when the model is very small compared to the abundance of training data, and that the simple principled approach of enlarging the model and tuning the data sampling temperature provides a consistent solution to the interference problem that can even promote synergy. This work methodically deduces the most simple ways of reducing interference in multilingual translation. We begin by inquiring what are the dominant factors that may interfere with learning to translate a particular language pair of focus s → t, in the context of learning a multilingual translation model with many different language pairs. Controlled experiments show that besides model size and number of s → t training examples, the main factor that correlates with the level of interference is the proportion of *focus pair* examples (s → t) observed out of the *total* number of examples (all language pairs) seen at each training step on average. Surprisingly, aspects like language similarity or number of translation directions have a much smaller effect. In model and data scaling experiments, we observe that interference mainly occurs in extreme parameter poverty, when the language pair of focus is data-rich, but has to "share" a crowded parameter space with large quantities of other data. Enlarging the model to standard model sizes in machine translation literature alleviates interference and even facilitates synergy. For context, given a language pair of 15M sentence pairs that accounts for 20% of the total training data (75M), we observe severe levels of interference with 11M- and 44M-parameter transformers, but no interference when scaling the model to 176M parameters (the "big" model of Vaswani et al. (2017)) and significant synergy with 705M parameters. Interestingly, when the model is large enough, we find that increasing the amount of non-focus data to a certain point can further increase synergy. Finally, given the evidence that data sizes and ratios strongly correlate with interference, we experiment with a natural lever that controls the proportion of each dataset in the overall mix in the simplest way: sampling temperature. Indeed, we 15849 find that calibrating the distribution of language pairs via temperature can substantially reduce the amount of interference in both high- and lowresource language pairs. Our results demonstrate the importance of tuning the temperature hyperparameter in multitask training, and suggest that previously reported accounts of severe interference in multilingual translation models might stem from suboptimal hyperparameter configurations. ## 2 Measuring Interference We assume a common multilingual translation setup that involves L language pairs s → t, where the source is always the same language s (English), and the target language t varies (English-to-many), or vice versa (many-to-English). The overall training data is a union of these training subsets, we note their sizes by Ds→t. Sampling a training example x follows the distribution: $$P(x\in s\to t)\propto\left({\frac{D_{s\to t}}{\sum\limits_{s^{\prime},t^{\prime}}D_{s^{\prime}\to t^{\prime}}}}\right)^{\frac{1}{T}}\qquad(1)$$ Where T is the temperature hyperparameter (Devlin et al., 2019; Arivazhagan et al., 2019). T = 1 maintains the original data proportions, 0 *< T <* 1 starves low resource language pairs, and T > 1 increases their representation in the training distribution. We mostly focus on the English-to-many setting in which interference is more apparent.1 We define interference as a negative interaction between different translation directions in a multilingual translation model. It is measured for a specific translation direction s → t by the relative difference in performance (test-set cross-entropy loss) between a bilingual model trained to translate only from s to t (L bi s→t ) and a multilingual counterpart that is trained to translate other additional directions (L multi s→t ): $${\mathcal{I}}_{s\to t}={\frac{{\mathcal{L}}_{s\to t}^{\mathrm{bi}}-{\mathcal{L}}_{s\to t}^{\mathrm{multi}}}{{\mathcal{L}}_{s\to t}^{\mathrm{bi}}}}\qquad\qquad(2)$$ Negative values of Is→tindicate interference, while positive values indicate synergy. ## 3 Experimental Setup Models We train encoder-decoder Transformer (Vaswani et al., 2017) models of 4 different sizes 1Section 4.3 also includes many-to-English experiments, where we observe higher levels of synergy. | Size | Hidden | FFN | Attn Heads | Params | |--------|----------|-------|--------------|----------| | XS | 256 | 1024 | 4 | 11M | | S | 512 | 2048 | 8 | 44M | | M | 1024 | 4096 | 16 | 176M | | L | 2048 | 8192 | 32 | 704M | throughout our experiments. We use the original2 transformer-base and transformer-big variants, as well as a smaller and a larger versions by adjusting the width of the architecture (Table 1). Data We use the multilingual benchmark introduced by Siddhant et al. (2020) based on WMT data. This benchmark includes a diverse set of 15 languages, each paired with English. The number of training examples is also diverse, ranging from 155K sentence pairs in Gujarati to 51M examples in Czech.3 Table 2 provides additional dataset statistics. Tokenization We build a shared vocabulary of 64K BPE tokens with sentencepiece (Kudo and Richardson, 2018) using a sampling temperature of 5 to increase the lower resource languages' representation. We use this vocabulary for all our experiments. We also add language ID tokens to our vocabulary, which are prepended to each source and target sequence to indicate the target language (Johnson et al., 2017). Training We use Fairseq (Ott et al., 2019) to train transformer models with the Adam optimizer (Kingma and Ba, 2015) for up to 100K steps, with a dropout rate of 0.1, inverse square root learning rate schedule up to a maximum of 0.004, 8K warmup steps, and a batch size of 256K tokens. We choose the best checkpoint according to the average validation loss of all language pairs. ## 4 What Impacts Interference In Multilingual Translation? | Language | ID | #Sentences (M) | Test Set | |------------|------|------------------|------------| | Czech | cs | 51.769 | WMT18 | | French | fr | 40.853 | WMT14 | | Russian | ru | 38.492 | WMT19 | | Chinese | zh | 25.987 | WMT19 | | Spanish | es | 15.177 | WMT13 | | Finnish | fi | 6.587 | WMT19 | | German | de | 4.509 | WMT14 | | Estonian | et | 2.176 | WMT18 | | Latvian | lv | 0.638 | WMT17 | | Lithuanian | lt | 0.631 | WMT19 | | Romanian | ro | 0.610 | WMT16 | | Hindi | hi | 0.306 | WMT14 | | Kazakh | kk | 0.224 | WMT19 | | Turkish | tr | 0.207 | WMT18 | | Gujarati | gu | 0.156 | WMT19 | ## (1) Model Size (2) Training data size of s → t, Ds→t (3) Proportion of s → t examples observed during training P(x ∈ s → t) (4) Total number of languages L (5) Similarity between s → t and other pairs4 In the experiments we describe next, we provide empirical evidence that indicate the last two factors do not actually have a significant effect on the level of interference, and can therefore be pruned away. Subsequent experiments reveal that interference is indeed a function of model size, data size, and data proportion. Most striking is the fact that, across various data settings, enlarging the model to standard sizes consistently alleviates interference and may even promote synergy. ## 4.1 Does Language Similarity Matter? Intuitively, data from languages that humans perceive as similar (e.g. languages that have some degree of mutual intelligibility, exhibit similar linguistic properties, or have shared vocabularies) should have a more positive effect on translation quality comparing to data from distinct languages (Lin et al., 2019; Wang et al., 2020b). To test this, we fix a *focus* language, and train *trilingual* models to translate from English to two languages, the focus language and an additional *interfering* language. We then look at interference trends as we vary the | Focus | Other | | | |----------|-----------|--------------|-----------| | Language | #Examples | Language | #Examples | | es | 15.177M | fr⋆/cs/ru/zh | 15.177M | | es | 0.118M | fr⋆/cs/ru/zh | 15.177M | | et | 2.176M | fi⋆/fr/ru/zh | 6.587M | | et | 0.118M | fi⋆/fr/ru/zh | 6.587M | interfering language while controlling the amount of training data for each language pair. Setup We run two sets of experiments, one with Spanish (es, 15.2M parallel sentences) as the focus language, and another with Estonian (et, 2.2M examples). For each focus language, we select one of four interfering languages; Spanish is paired with French,5 Czech, Russian, and Chinese, while Estonian is paired with Finnish,6 French, Russian, and Chinese. To control the effects of data size in the English-Spanish experiments, we randomly sample 15.2M examples from each interfering language pair, making the ratio between focus and interfering languages 1:1. Similarly, in the English-Estonian experiments, we sample 6.6M examples from each interfering language to create a data ratio of 1:3. We also conduct similar experiments when we use only 118K focus language examples, to see the trends when the focus language pair is extremely low resource.7 Table 3 provides an overview of the language similarity experiments. Results Figure 1a shows the interference rate for every model size when Spanish has only 118K parallel examples (left) and when using the full English-Spanish dataset (right). The variance in results somewhat correlates with language similarity when the dataset is very small, which aligns with previous work (Lin et al., 2019); French seems to help Spanish more than other languages when the model is big enough, while Chinese helps less. However, when training with the full dataset, the differences between other languages diminish for all model sizes. Concurrently, Fernandes et al. (2023) also found no significant difference for using French or Chinese as a third language combined with English-German in a very high resource ![3_image_0.png](3_image_0.png) en-es interference ![3_image_1.png](3_image_1.png) setting (600M examples per language pair). We observe similar trends when Estonian is the focus language. Figure 1b shows that when Estonian only has 118K training examples, combining with Finnish data seems to have some positive effect. However, this effect also shrinks when using all of the English-Estonian train set (only 2.2M examples, compared to the 15.2M of EnglishSpanish) and a model that is not too small.8 ## 4.2 Does The Number Of Languages Matter? Do we get more interference when training with one interfering language pair or fourteen? We train models with varying numbers of language pairs while controlling for the overall number of interfering examples. We find that splitting the interfering data across more language pairs has a mild positive effect, which diminishes as the amount of focuslanguage data and/or model parameters scales up. | Focus | Other | | | |----------------|-----------|-------------|-----------| | Language | #Examples | Languages | #Examples | | cs/fr/ru/zh | 15.177M | | | | es | 15.177M | cs+fr+ru+zh | 15.177M | | cs+...+gu (14) | 15.177M | | | | cs/fr/ru/zh | 15.177M | | | | es | 0.118M | cs+fr+ru+zh | 15.177M | | cs+...+gu (14) | 15.177M | | | | fi/fr/ru/zh | 6.587M | | | | et | 2.176M | fi+fr+ru+zh | 6.587M | | cs+...+gu (14) | 6.587M | | | | fi/fr/ru/zh | 6.587M | | | | et | 0.118M | fi+fr+ru+zh | 6.587M | | cs+...+gu (14) | 6.587M | | | Setup We train multilingual models on EnglishSpanish data alongside English to 1, 4, or 14 interfering languages. The interfering data always sums ![4_image_0.png](4_image_0.png) (a) Models trained with 118K (left) and 15.2M (right) en-es training examples and 15.2M training examples for non-es languages. ![4_image_1.png](4_image_1.png) up to a fixed 15.2M examples budget, distributed as evenly as possible among the different languages.9 We repeat these experiments when Estonian is the focus language and the interfering example budget is 6.6M. Table 4 provides an overview of these experiments. Results Figure 2a shows that more than one interfering language pair somewhat helps when EnglishSpanish has few training examples, but this effect largely disappears in the full training set and with larger models. We see similar trends for Estonian in Figure 2b, even though its full training set has only 2.2M examples. This phenomenon might be related to the fact that when the data distribution is sharp (i.e. one high resource paired with one very low resource) there is not enough incentive for the model to pay attention to the focus language's identifier token, compared to when the distribution is much more uniform. This result also corroborates similar findings for pretrained multilingual models (Conneau et al., 2020), although those experiments did not control the total quantity of data as in ours.10 ## 4.3 The Impact Of Model And Data Size Seeing that language similarity and the number of interfering languages have only a limited effect on interference, we design a controlled setup to measure interference as a function of the remaining three factors: model size, focus language data size, and its proportion in the total amount of data seen during training. Setup We train models using all the available 15.2M English-Spanish examples, with an increasing example budget for interfering language pairs, ranging from 1/8 (1.9M) to 8 times (122M) the English-Spanish data, divided as evenly as possible between French, Czech, Russian, and Chinese.11 To observe trends across Ds→t sizes, we 10See Figure 6 in Appendix A for the results of these experiments with absolute BLEU scores. 11Since Chinese has only 26M examples (less than 122M/4), we use all of its train set in the 122M (8.0X) case, ![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png) ![5_image_2.png](5_image_2.png) rerun these experiments with a quarter (3.8M) of the English-Spanish data, while keeping the ratios with the rest of the data similar. Finally, we also conduct these experiments in the many-to-English setting. Results Figures 3a and 3b show the interference and synergy for English-Spanish using a varying number of interfering examples. For smaller models (XS and S), increasing the amount of interfering data (i.e. decreasing the proportion of focus data) exacerbates interference. However, larger models appear to benefit from significant quantities of interfering examples; for instance, when training with Ds→t = 3.8M, a large model (L) can gain over 10% relative loss improvement when there is 32 times more interfering data than focus data (P(x ∈ s → t) ≈ 3%). Interestingly, we also observe that interference is sensitive to the ratio between model parameters and focus data, as the M model trained on 15.2M focus examples produces a similar curve to that of the 4-times smaller S model trained on 3.8M examples, both intersecting the synergy/interference line at the same point. Finally, Figures 3c and 3d show that when translating *into* English, interference is much less of an issue, occurring only in the XS model when the total amount of training data significantly exceeds the model's capacity. Scaling up the model not only improves the absolute performance (Appendix A), but also introduces substantial gains from synergy. Our results align with trends observed on cross lingual transfer when scaling pretrained multilingual models to 3.5 and 10 billion parameters (Goyal et al., 2021). ## 4.4 Tuning Interference With Temperature In the previous sections we demonstrated that the dominant factors impacting interference are the model size, the amount of focus language pair data Ds→t, and the proportion of focus pair examples observed during training P(x ∈ s → t). In a 15854 practical situation where both model size and multilingual data are fixed, how can one control the level of interference? Recalling Equation 1, we observe that the proportion of focus pair examples P(x ∈ s → t) is controlled via the temperature hyperparameter T. Although previous literature has largely used a value of T = 5 following Arivazhagan et al. (2019), our systematic experiments with different temperatures across three different data distributions and four model sizes suggest that this value can be sub-optimal and induce a substantial amount of interference, especially for model sizes that alleviate significant amounts of interference (M and L). Conversely, tuning the temperature shows that lower values (T = 1, 2) are typically able to reduce high-resource interference without harming low-resource synergy in our standard multilingual translation setting. Setup We train models of four sizes with temperature ranging from 1 to 5 on three training distributions: (1) all available training data, (2) discarding 3 high resource languages (Czech, French and Russian), (3) discarding 4 low resource languages (Latvian, Lithuanian, Romanian and Hindi). When illustrating the results, we assign languages to high and low resource according to whether their relative data proportion decreases or increases when going from T = 1 to T = 2. Results Figure 4 shows the trade-offs between the lower and higher resource languages, as defined above. First, we can see a clear trade-off for the smaller models (XS and S) from T = 1 to T = 4 in most cases. Increasing T helps promote synergy for low resource languages at the cost of increasing interference for the high resource languages. However, the larger models (M and L) clearly degrade when using T ≥ 3; in fact, values of T = 1 and T = 2 are often better for high- and low-resource language pairs than the commonlyused T = 5. These results align with recent work Xin et al. (2022) showing that tuned scalarization is key to achieving strong bilingual baselines that often outperform more complicated multitask optimization methods.12 ## 5 Related Work Scaling Laws in Machine Translation Previous work also looked at scaling trends of data and 12See Table 5 in Appendix A for the results of these experiments with absolute BLEU scores. ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) (b) Trained w/o 4 low resource languages ![6_image_2.png](6_image_2.png) models sizes for machine translation. Gordon et al. (2021) proposed scaling laws in the data and model parameters and demonstrated their ability to predict the validation loss of bilingual translation models from Russian, Chinese, and German to English. Ghorbani et al. (2022) found scaling laws for different configurations for the encoder and decoder, independently varying the number of layers in each of them. Bansal et al. (2022) examined different architectures and described data size scaling laws for machine translation in a large scale for English to German and English to Chinese. While all of these works focused on the bilingual setting, we unveil trends for multilingual translation, which has increased complexity. Concurrently to our work, Fernandes et al. (2023) proposed scaling laws for multilingual machine translation, focusing on trilingual models trained on English-German with English-Chinese or French ## Multitask Methods For Multilingual Machine Translation Multitask methods have been proposed extensively to enhance the performance of multilingual translation models. Some utilize validation based signals to determine which language pairs should be prioritized throughout training, either with adaptive scheduling (Jean et al., 2019), gradient similarities to the validation set Wang et al. (2020a), or a multi-armed bandits model (Kreutzer et al., 2021). Zhu et al. (2021) added dedicated embedding and layer adapter modules to the Transformer, and Lin et al. (2021) suggested learning a binary mask for every model parameter and every language pair, both requiring further training after the base multilingual model converges. Li and Gong (2021) used per language gradients geometry to rescale gradients of different language pair to improve performance on low resource languages. Wang et al. (2021) extended PCGrad (Yu et al., 2020) to create Gradient Vaccine, a method that attempts to deconflict different language pairs gradients by replacing them with more similar vectors in terms of cosine similarity. While the motivation for these methods is clear and intuitive, they are usually more complex and computationally expensive than the baseline. Moreover, their efficacy is often demonstrated using relatively small13 models, while modestly increasing the model size can both strengthen the bilingual baselines and reduce the interference problem significantly. Critical Takes on Multitask Optimization Methods Multitask optimization methods were recently under scrutiny. Kurin et al. (2022) experimented with many of those for image classification and reinforcement learning problems, and found that none of them consistently outperformed a well tuned baseline with proper use of known regular-13Transformer-base or big from Vaswani et al. (2017). ization techniques. Similarly, Xin et al. (2022) showed that despite their increased complexity, no popular multitask method was superior to a sweep over scalarization weights for a baseline trilingual translation model. This work complements this line of research by examining *multilingual* translation models and how can modest scale and calibrated temperature reduce problems associated with multitasking. ## 6 Conclusion This work examines the dominant factors that influence interference in multilingual machine translation. Namely, the model size, the amount of parallel data for the focus language pair, and the proportion of examples from the focus language pair with respect to the total data seen during training. While specialized multitask techniques are sometimes demonstrated on small transformer models, we find that a standard baseline model of 176M parameters reduces the interference problem significantly, and further scaling up results in synergy among the different language pairs. We further demonstrate the importance of tuning the temperature at which different language pairs are sampled during training; while existing literature largely relies on high temperatures, which indeed improve low-resource performance in parameter-poor settings, larger models benefit from a more natural distribution that reflects the raw training data. These simple strategies for addressing interference call into question the necessity and perhaps even the validity of recentlyproposed complex anti-interference methods and reaffirm the tried-and-true method of increasing model capacity to accommodate for higher data diversity. ## 7 Limitations One limitation of this work is the focus on Englishto-many and many-to-English settings, while previous studies also went beyond English-centric translation (Freitag and Firat, 2020; Fan et al., 2022). Second, we experiment with a WMT based benchmark that has a total of 15 languages and 200M training examples, when translation models were also trained on larger datasets (Aharoni et al., 2019; Arivazhagan et al., 2019; NLLB Team et al., 2022). We leave questions about the amount of scale that will be required to effectively mitigate interference in massively (many-to-many, billions of parallel sequences) multilingual settings for future work. Additionally, the data collected from high resource languages may be of higher quality compared to that collected from low resource languages. Further research is needed to determine the impact of low quality training data on interference and synergy. Finally, while we explore trends when scaling models width, deeper models (Ghorbani et al., 2022) might help mitigating interference even further. ## Acknowledgments This research is supported by the Yandex Initiative in Machine Learning. We thank Maor Ivgi, Yilin Yang, Jean Maillard, and Ves Stoyanov for their valuable feedback. ## References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874–3884, Minneapolis, Minnesota. Association for Computational Linguistics. N. Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George F. Foster, Colin Cherry, Wolfgang Macherey, Z. Chen, and Yonghui Wu. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. *ArXiv*, abs/1907.05019. Yamini Bansal, Behrooz Ghorbani, Ankush Garg, Biao Zhang, Colin Cherry, Behnam Neyshabur, and Orhan Firat. 2022. Data scaling laws in NMT: The effect of noise and architecture. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of *Proceedings of Machine Learning Research*, pages 1466–1482. PMLR. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2022. Beyond english-centric multilingual machine translation. J. Mach. Learn. Res., 22(1). Patrick Fernandes, Behrooz Ghorbani, Xavier Garcia, Markus Freitag, and Orhan Firat. 2023. Scaling laws for multilingual neural machine translation. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In *Proceedings* of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866–875, San Diego, California. Association for Computational Linguistics. Markus Freitag and Orhan Firat. 2020. Complete multilingual neural machine translation. In Proceedings of the Fifth Conference on Machine Translation, pages 550–560, Online. Association for Computational Linguistics. Behrooz Ghorbani, Orhan Firat, Markus Freitag, Ankur Bapna, Maxim Krikun, Xavier Garcia, Ciprian Chelba, and Colin Cherry. 2022. Scaling laws for neural machine translation. In *International Conference on Learning Representations*. Mitchell A Gordon, Kevin Duh, and Jared Kaplan. 2021. Data and parameter scaling laws for neural machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5915–5922, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, and Alexis Conneau. 2021. Larger-scale transformers for multilingual masked language modeling. In *Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)*, pages 29–33, Online. Association for Computational Linguistics. Thanh-Le Ha, Jan Niehues, and Alex Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. In Proceedings of the 13th International Conference on Spoken Language Translation, Seattle, Washington D.C. International Workshop on Spoken Language Translation. Sébastien Jean, Orhan Firat, and Melvin Johnson. 2019. Adaptive scheduling for multi-task learning. *ArXiv*, abs/1909.06434. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. *Transactions of the Association for Computational Linguistics*, 5:339–351. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Julia Kreutzer, David Vilar, and Artem Sokolov. 2021. Bandits don't follow rules: Balancing multi-facet machine translation with multi-armed bandits. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3190–3204, Punta Cana, Dominican Republic. Association for Computational Linguistics. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, and M. Pawan Kumar. 2022. In defense of the unitary scalarization for deep multi-task learning. In *Advances in Neural Information Processing Systems*. Xian Li and Hongyu Gong. 2021. Robust optimization for multilingual translation with imbalanced data. In Advances in Neural Information Processing Systems. Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics. Zehui Lin, Liwei Wu, Mingxuan Wang, and Lei Li. 2021. Learning language specific sub-network for multilingual machine translation. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 293–305, Online. Association for Computational Linguistics. NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling humancentered machine translation. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Association for Computational Linguistics. Aditya Siddhant, Ankur Bapna, Yuan Cao, Orhan Firat, Mia Chen, Sneha Kudugunta, Naveen Arivazhagan, and Yonghui Wu. 2020. Leveraging monolingual data with self-supervision for multilingual neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2827–2835, Online. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Xinyi Wang, Yulia Tsvetkov, and Graham Neubig. 2020a. Balancing training for multilingual neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8526–8537, Online. Association for Computational Linguistics. Zirui Wang, Zachary C. Lipton, and Yulia Tsvetkov. 2020b. On negative interference in multilingual models: Findings and a meta-learning treatment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4438–4450, Online. Association for Computational Linguistics. Zirui Wang, Yulia Tsvetkov, Orhan Firat, and Yuan Cao. 2021. Gradient vaccine: Investigating and improving multi-task optimization in massively multilingual models. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Derrick Xin, Behrooz Ghorbani, Justin Gilmer, Ankush Garg, and Orhan Firat. 2022. Do current multi-task optimization methods in deep learning even help? In Advances in Neural Information Processing Systems. Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. 2020. Gradient surgery for multi-task learning. In *Advances in Neural Information Processing Systems*, volume 33, pages 5824–5836. Curran Associates, Inc. Yaoming Zhu, Jiangtao Feng, Chengqi Zhao, Mingxuan Wang, and Lei Li. 2021. Counter-interference adapter for multilingual machine translation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 2812–2823, Punta Cana, Dominican Republic. Association for Computational Linguistics. ## A Bleu Scores Throughout the paper we calculate interference in terms of test loss values. We additionally provide the test BLEU scores achieved by our models. We generate using beam search with 5 beams, without length penalty. We use SacreBLEU (Post, 2018) to calculate test sets BLEU (Papineni et al., 2002) scores. Language similarities Figure 5 shows BLEU scores of models from experiments in Section 4.1. They reflect similar trends, as the variance between different interfering languages when the focus language has only 118K examples diminish when a decent amount of training data is available. Number of languages Figure 6 shows BLEU scores of models from experiments in Section 4.2. They also demonstrate that low resource pairs benefit when there are more interfering languages, but this effect disapper with a decent amount of training data. ![11_image_0.png](11_image_0.png) en-es BLEU ![11_image_1.png](11_image_1.png) ![11_image_2.png](11_image_2.png) ![11_image_3.png](11_image_3.png) Size Tmp cs fr ru zh es fi de et lv lt ro hi kk tr gu bi **19.6 35.1 24.2 27.3 31.7 17.7 24.1 17.5** 12.1 9.2 **22.4** 6.5 0.5 7.7 1.6 1 16.7 31.9 20.5 20.8 27.4 12.8 15.7 10.4 8.0 6.1 15.6 4.8 1.0 4.9 2.5 2 15.5 30.8 19.0 20.3 27.4 14.1 17.6 13.1 11.3 9.4 20.4 9.1 2.3 9.1 6.4 3 15.2 30.2 18.6 19.6 27.3 14.6 18.0 13.5 11.9 9.8 21.5 11.0 3.2 10.2 7.6 4 14.8 30.1 18.2 19.4 27.1 14.7 18.1 13.7 12.4 10.0 21.5 11.7 3.2 10.7 8.7 5 14.5 29.9 17.6 19.0 27.1 14.6 18.1 13.6 **12.6 10.3** 21.7 **11.9 3.5 10.8 9.2** bi **22.1 38.4 27.2 29.9 33.8 19.8 26.1** 17.4 12.0 8.5 22.1 4.8 0.5 7.2 1.8 1 20.3 36.2 24.7 26.4 31.0 16.8 20.8 14.5 12.1 9.8 21.0 7.9 1.7 7.6 4.9 2 19.9 35.7 24.1 25.7 31.4 18.5 22.4 17.3 14.9 12.2 24.1 14.1 4.6 12.1 11.6 3 19.2 35.6 23.5 25.6 31.2 18.4 22.5 17.6 15.3 12.5 24.5 15.2 5.6 12.9 13.1 4 19.1 35.2 23.7 25.0 30.9 17.5 22.6 **17.7 15.4 12.8 25.0** 15.3 5.7 13.3 13.1 5 18.5 34.8 23.4 25.1 30.9 18.1 22.3 17.5 15.3 12.5 24.9 **15.4 5.9 13.8 13.5** bi **23.1 40.1 28.8 30.7 34.2** 19.6 25.9 17.1 11.5 7.8 21.6 4.0 0.4 5.9 1.0 1 22.4 39.6 27.3 29.8 33.6 19.1 24.1 18.0 14.6 12.0 23.9 12.4 3.6 10.7 8.2 2 22.1 39.3 26.5 29.7 33.5 19.5 25.7 19.3 17.1 13.8 **26.5 15.9 6.3** 14.1 **14.2** 3 21.8 38.0 26.1 29.6 33.4 20.1 **26.1 20.2 17.4** 13.8 **26.5** 15.2 5.8 **14.2** 14.1 4 21.3 38.0 25.9 29.0 33.4 **20.3** 25.8 20.1 16.9 **14.1 26.5** 14.6 5.5 13.7 12.2 5 21.1 37.7 26.2 28.6 32.8 19.9 25.6 19.4 16.8 13.9 26.3 14.6 5.2 13.8 12.3 | XS S M L | |------------| bi 22.9 40.0 28.5 30.7 34.4 18.6 25.8 16.9 10.8 8.5 21.4 3.8 0.4 5.4 1.3 1 **23.4 40.7 29.4 31.4** 34.8 20.7 26.5 19.2 16.3 13.4 26.1 **14.4** 4.6 12.5 10.3 2 23.0 40.4 29.1 31.1 34.7 20.6 **28.0** 20.2 **17.9 14.2 26.7** 14.2 **4.7 14.2** 12.4 3 22.9 39.8 28.4 31.1 **34.9 21.3** 27.7 **20.5** 17.4 **14.2** 26.2 13.5 4.6 14.0 12.2 4 22.1 39.2 26.5 29.8 34.0 20.5 26.7 20.3 17.3 **14.2** 26.4 13.8 4.7 14.0 12.1 5 21.9 38.9 27.5 30.1 34.1 21.1 26.7 20.4 17.2 13.7 25.8 13.6 3.8 13.9 **13.0** ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✗ A2. Did you discuss any potential risks of your work? Our work does not add new risks involving translation models ✓ A3. Do the abstract and introduction summarize the paper's main claims? Sections 0,1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 ✓ B1. Did you cite the creators of artifacts you used? 3 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The WMT data used in our experiment is a common machine translation dataset and is publicly available research purposes. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The usage was consistant with the artifacts intended use. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The WMT data used in our experiment is a common machine translation dataset and is publicly available research purposes. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Mostly languages in section 3. Regarding the rest, adding justification from above: The WMT data used in our experiment is a common machine translation dataset and is publicly available research purposes. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Left Blank. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 3 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3, Appendix A ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
fang-feng-2023-understanding
Understanding and Bridging the Modality Gap for Speech Translation
https://aclanthology.org/2023.acl-long.884
How to achieve better end-to-end speech translation (ST) by leveraging (text) machine translation (MT) data? Among various existing techniques, multi-task learning is one of the effective ways to share knowledge between ST and MT in which additional MT data can help to learn source-to-target mapping. However, due to the differences between speech and text, there is always a gap between ST and MT. In this paper, we first aim to understand this modality gap from the target-side representation differences, and link the modality gap to another well-known problem in neural machine translation: exposure bias. We find that the modality gap is relatively small during training except for some difficult cases, but keeps increasing during inference due to the cascading effect. To address these problems, we propose the Cross-modal Regularization with Scheduled Sampling (Cress) method. Specifically, we regularize the output predictions of ST and MT, whose target-side contexts are derived by sampling between ground truth words and self-generated words with a varying probability. Furthermore, we introduce token-level adaptive training which assigns different training weights to target tokens to handle difficult cases with large modality gaps. Experiments and analysis show that our approach effectively bridges the modality gap, and achieves significant improvements over a strong baseline in all eight directions of the MuST-C dataset.
# Understanding And Bridging The Modality Gap For Speech Translation Qingkai Fang1,2**, Yang Feng**1,2∗ 1Key Laboratory of Intelligent Information Processing Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS) 2University of Chinese Academy of Sciences, Beijing, China {fangqingkai21b, fengyang}@ict.ac.cn ## Abstract How to achieve better end-to-end speech translation (ST) by leveraging (text) machine translation (MT) data? Among various existing techniques, multi-task learning is one of the effective ways to share knowledge between ST and MT in which additional MT data can help to learn source-to-target mapping. However, due to the differences between speech and text, there is always a gap between ST and MT. In this paper, we first aim to understand this modality gap from the target-side representation differences, and link the modality gap to another well-known problem in neural machine translation: exposure bias. We find that the modality gap is relatively small during training except for some difficult cases, but keeps increasing during inference due to the cascading effect. To address these problems, we propose the Cross-modal Regularization with Scheduled Sampling (C**RESS**) method. Specifically, we regularize the output predictions of ST and MT, whose target-side contexts are derived by sampling between ground truth words and self-generated words with a varying probability. Furthermore, we introduce token-level adaptive training which assigns different training weights to target tokens to handle difficult cases with large modality gaps. Experiments and analysis show that our approach effectively bridges the modality gap, and achieves promising results in all eight directions of the MuST-C dataset.1 ## 1 Introduction End-to-end speech translation (ST) aims to translate speech signals to text in another language directly. Compared to traditional cascaded methods, which combine automatic speech recognition (ASR) and machine translation (MT) models in a pipeline manner, end-to-end ST could avoid ∗Corresponding author: Yang Feng. 1Code is publicly available at https://github.com/ ictnlp/CRESS. error propagation and high latency (Sperber and Paulik, 2020). Recently, end-to-end ST models have achieved comparable or even better results than cascaded ST models (Bentivogli et al., 2021; Anastasopoulos et al., 2021, 2022). However, due to the scarcity of ST data, it is difficult to directly learn a mapping from source speech to the target text. Previous works often leverage MT data to help the training with multi-task learning (Ye et al., 2022; Tang et al., 2021a). By sharing encoder and decoder between ST and MT, the model tends to learn similar representations from different modalities. In this way, the auxiliary MT task can help build the source-to-target mapping. However, there remains a gap between ST and MT due to the differences between speech and text. In this paper, we measure the *modality gap* with representation differences of the last decoder layer between ST and MT, because the representation of this layer will be mapped into the embedding space to obtain the final translation. A significant modality gap potentially causes different predictions, which makes ST lag behind MT. Thanks to multi-task learning, we observe that when training with teacher forcing, where both ST and MT use ground truth words as target-side contexts, the modality gap is relatively small except for some difficult cases. However, the exposure bias problem can make things worse. During inference, both ST and MT predict the next token conditioned on their previously generated tokens, which may be different due to the modality gap. Moreover, different predictions at the current step may lead to even more different predictions at the next step. As a result, the modality gap will increase step by step due to this cascading effect. To solve these problems, we propose the Crossmodal Regularization with Scheduled Sampling (C**RESS**) method. To reduce the effect of exposure bias, we introduce scheduled sampling during training, where the target-side contexts are sampled be15864 tween ground truth words and self-generated words with a changing probability. Based on this, we propose to regularize ST and MT in the output space to bridge the modality gap by minimizing the Kullback-Leibler (KL) divergence between their predictions. This will encourage greater consistency between ST and MT predictions based on partial self-generated words, which is closer to the inference mode. Besides, to handle the difficult cases, we introduce token-level adaptive training for C**RESS**, where each target token is given a varying weight during training according to the scale of the modality gap. In this way, those cases with significant modality gaps will be emphasized. We conduct experiments on the ST benchmark dataset MuST-C (Di Gangi et al., 2019a). Results show that our approach significantly outperforms the strong multi-task learning baseline, with 1.8 BLEU improvements in the base setting and 1.3 BLEU improvements in the expanded setting on average. Further analysis shows that our approach effectively bridges the modality gap and improves the translation quality, especially for long sentences. ## 2 Background 2.1 End-To-End Speech Translation End-to-end speech translation (ST) directly translates speech in the source language to text in the target language. The corpus of ST is usually composed of triplet data D = {(s, x, y)}. Here s = (s1*, ..., s*|s|) is the sequence of audio wave, x = (x1*, ..., x*|x|) is the transcription and y = (y1*, ..., y*|y|) is the translation. Similar to previous work (Ye et al., 2021; Fang et al., 2022), our ST model is composed of an acoustic encoder and a translation model. The acoustic encoder is a pre-trained HuBERT (Hsu et al., 2021) model followed by two convolutional layers, which are used to reduce the length of the speech sequence. The translation model follows standard Transformer (Vaswani et al., 2017) encoder-decoder architecture, where the encoder contains N Transformer encoder layers, and the decoder contains N Transformer decoder layers. We first pre-train the translation model with MT data, and then optimize the whole model by minimizing a cross-entropy loss: $$\begin{array}{c}{{{\mathcal{L}}_{\mathrm{ST}}=-\sum_{i=1}^{|{\bf y}|}\log p(y_{i}|{\bf s},{\bf y}_{<i}),}}\\ {{p(y_{i}|{\bf s},{\bf y}_{<i})\propto\exp({\bf W}\cdot f({\bf s},{\bf y}_{<i})),}}\end{array}\quad\mathrm{(1)}$$ where f is a mapping from the input speech s and target prefix y<i to the representation of the last decoder layer at step i. W is used to transform the dimension to the size of the target vocabulary. ## 2.2 Multi-Task Learning For St Multi-task learning (MTL) has been proven useful for sharing knowledge between text translation and speech translation (Tang et al., 2021a), where an auxiliary MT task is introduced during training: $$\mathcal{L}_{\mathrm{MT}}=-\sum_{i=1}^{|\mathbf{y}|}\log p(y_{i}|\mathbf{x},\mathbf{y}_{<i}),$$ (3) $\binom{4}{2}$ . Note that both modalities (i.e., speech and text) share all transformer encoder and decoder layers. Finally, the training objective is written as follows: $${\mathcal{L}}_{\mathrm{MTL}}={\mathcal{L}}_{\mathrm{ST}}+{\mathcal{L}}_{\mathrm{MT}}.$$ $$({\mathfrak{H}})$$ ## 3 Preliminary Studies On Modality Gap With multi-task learning, most of the knowledge of MT can be transferred to ST. However, the performance gap between ST and MT still exists. In this section, we first conduct some preliminary studies with our multi-task learning baseline model to understand where this gap comes from. ## 3.1 Definition Of The Modality Gap The gap between ST and MT is related to the prediction difference at each decoding step, while the prediction depends only on the representation of the last decoder layer. Therefore, we define the modality gap at the i-th decoding step as follows: $$G({\bf s},{\bf y}_{<i}\|{\bf x},{\bf y}_{<i})=1-c o s(f({\bf s},{\bf y}_{<i}),f({\bf x},{\bf y}_{<i})),\tag{6}$$ where cos is the cosine similarity function cos(a, b) = a⊤b/∥a∥∥b∥. A larger cosine similarity indicates a smaller modality gap. To understand the extent of the modality gap, we count the distribution of G(s, y<i∥x, y<i) based on all triples (s, x, y<i) in the MuST-C (Di Gangi et al., 2019a) En→De dev set. As shown in Figure 1, the modality gap is relatively small (< 10%) in most cases, which proves the effectiveness of multi-task learning in sharing knowledge across ST and MT. However, we also observe a long-tail problem: there is a large difference between ST and MT representations in some difficult cases. ![2_image_0.png](2_image_0.png) ## 3.2 Connection Between Exposure Bias And Modality Gap Exposure bias, a discrepancy between training and inference, is a well-known problem in neural machine translation (Bengio et al., 2015; Ranzato et al., 2016; Wang and Sennrich, 2020; Arora et al., 2022). During training with *teacher forcing*, both ST and MT predict the next token conditioned on the ground truth target prefix y<i. However, during inference, the predictions of ST and MT depend on their previously generated tokens by the model itself (denoted as yb s <i and yb x <i for ST and MT respectively), which might be different due to the modality gap. Furthermore, different predictions at the current decoding step result in different target prefixes for ST and MT, potentially causing even more different predictions at the next step. Such cascading effect will enlarge the modality gap step by step during inference. To prove our hypothesis, we present the curves of the modality gap with decoding steps under teacher forcing, *beam search*, and *greedy search* strategies, respectively. As shown in Figure 2, with teacher forcing, there is no significant difference in the modality gap across steps, as both ST and MT depend on the same target prefix at any step. Hence, the modality gap G(s, y<i∥x, y<i) only comes from the difference between input speech s and text x. However, when decoding with *greedy search*, due to the cascading effect mentioned above, the self-generated target prefix yb s <i and yb x <i become increasingly different, making the modality gap G(s, yb s <i∥x, yb x <i) keep increasing with decoding steps. A simple way to alleviate this problem is beam search, which considers several candidate ![2_image_1.png](2_image_1.png) ![2_image_2.png](2_image_2.png) tokens rather than a single one at each decoding step. When there is an overlap between candidate tokens of ST and MT, the cascading effect will be reduced, thus slowing down the increase of the modality gap. ## 4 Method: C**Ress** Our preliminary studies in Section 3 show that: - The modality gap will be enlarged during inference due to exposure bias. - The modality gap may be significant in some difficult cases. Inspired by these, we propose the Cross-modal Regularization with Scheduled Sampling (C**RESS**) method to bridge the modality gap, especially in inference mode (Section 4.1). Furthermore, we propose a token-level adaptive training method for C**RESS** to handle difficult cases (Section 4.2). ## 4.1 Cross-Modal Regularization With Scheduled Sampling (C**Ress**) To bridge the modality gap during inference, we adopt scheduled sampling for both ST and MT to approximate the inference mode at training time. After that, we add a regularization loss between the predictions of ST and MT based on the part of their self-generated words as context. This allows for more consistent predictions between ST and MT during inference, thus reducing the performance gap between ST and MT. Figure 3 illustrates the main framework of our method. ![3_image_0.png](3_image_0.png) Scheduled Sampling *Scheduled sampling* (Bengio et al., 2015), which samples between ground truth words and self-generated words, i.e., *predicted words*, with a certain probability as targetside context, has proven helpful in alleviating exposure bias. In general, the input at the {i + 1}-th decoding step should be the ground truth word yi during training. With scheduled sampling, it can also be substituted by a predicted word. Next, we describe how to select the predicted word yb s i for ST and yb x i for MT. For ST, we follow Zhang et al. (2019) to select the predicted word yb s i by sampling from the word distribution p(yi|s, y<i) in Equation (2) with *Gumbel-Max* technique (Gumbel, 1954; Maddison et al., 2014), a method to draw a sample from a categorical distribution: $$\eta=-\log(-\log u),\eqno(7)$$ $$\widehat{y}_{i}^{s}=\arg\max\left({\bf W}\cdot f({\bf s},{\bf y}_{<i})+\eta\right),\eqno(8)$$ where η is the Gumbel noise calculated from the uniform noise u ∼ U(0, 1). Similarly, for MT, there is: $${\widehat{y}}_{i}^{x}=\arg\operatorname*{max}\left(\mathbf{W}\cdot f(\mathbf{x},\mathbf{y}_{<i})+\eta\right).\quad(9)$$ Note that we may omit the superscript and denote the predicted word for both ST and MT by ybiin the following. How to select between the ground truth word yi and the predicted word ybi? Similar to Bengio et al. (2015); Zhang et al. (2019), we randomly sample from both with a varying probability. We denote the probability of selecting from the ground truth word as p∗. At the beginning of training, since the model is not yet well trained, we select more from the ground truth words (with larger p∗) to help the model converge. In the later stages of training, we select more from the predicted words (with smaller p∗), which is closer to the situation during inference. To achieve this, we decrease p∗ with a function of the index of training epochs e: $$p^{*}=\frac{\mu}{\mu+\exp(e/\mu)},\qquad\qquad(10)$$ where µ is a hyper-parameter. With scheduled sampling, the target-side context becomes ye = (ye1*, ...,* ye|y|), where $$\widetilde{y}_{i}=\begin{cases}y_{i},&p\leq p^{*}\\ \widehat{y}_{i},&p>p^{*}\end{cases},\qquad\qquad(11)$$ where p is sampled from the uniform distribution U(0, 1). Using ye sand ye xto denote the targetside context of ST and MT respectively, the loss functions of ST and MT become: $${\mathcal{L}}_{\mathrm{ST}}^{\mathrm{CRES}}=-$$ $${\mathcal{L}}_{\mathrm{MT}}^{\mathrm{CRES}}=-$$ $$\begin{array}{l}{{\sum_{i=1}^{|{\bf y}|}\log p(y_{i}|{\bf s},\widetilde{{\bf y}}_{<i}^{s}),}}\\ {{\sum_{i=1}^{|{\bf y}|}\log p(y_{i}|{\bf x},\widetilde{{\bf y}}_{<i}^{x}),}}\end{array}$$ (12) $\binom{13}{2}$ (13) ... <i), (12) <i), (13) Cross-modal Regularization To bridge the modality gap in inference mode, we expect the predictions of ST and MT with scheduled sampling to be consistent. Inspired by recent works of consistency training (Liang et al., 2021; Guo et al., 2022), we regularize ST and MT in the output space. Specifically, we minimize the bidirectional Kullback-Leibler (KL) divergence between the output distributions of ST and MT at each step: $$\begin{split}\mathcal{L}_{\mathrm{Reg}}^{\mathrm{CreSS}}&=\sum_{i=1}^{|\mathbf{y}|}\frac{1}{2}(\mathcal{D}_{\mathrm{KL}}(p(y_{i}|\mathbf{s},\widetilde{\mathbf{y}}_{<i}^{s})\|p(y_{i}|\mathbf{x},\widetilde{\mathbf{y}}_{<i}^{x}))\\ &\quad+\mathcal{D}_{\mathrm{KL}}(p(y_{i}|\mathbf{x},\widetilde{\mathbf{y}}_{<i}^{x})\|p(y_{i}|\mathbf{s},\widetilde{\mathbf{y}}_{<i}^{s}))).\end{split}\tag{14}$$ With the translation loss in Equation (12) and (13), the final training objective is: $$\mathcal{L}^{\text{CRES}}=\mathcal{L}^{\text{CRES}}_{\text{ST}}+\mathcal{L}^{\text{CRES}}_{\text{MT}}+\lambda\mathcal{L}^{\text{CRES}}_{\text{Reg}},\tag{15}$$ where $\lambda$ is the hyper-parameter to control the weight $\alpha$ of $\alpha$CRES. weight of L CRESS Reg . ## 4.2 Token-Level Adaptive Training For C**Ress** As mentioned above, the modality gap might be significant in some difficult cases. Inspired by the idea of token-level adaptive training (Gu et al., 2020; Xu et al., 2021b; Zhang et al., 2022a), we propose to treat each token adaptively according to the scale of the modality gap. The training objectives in Equation (12), (13), and (14) are modified as follows: L CRESS ST = − X |y| i=1 wi· log p(yi|s, ye s <i), (16) L CRESS MT = − X |y| i=1 wi· log p(yi|x, ye x <i), (17) L CRESS Reg = X |y| i=1 1 2 wi(DKL(p(yi|s, ye s <i)∥p(yi|x, ye x <i)) + DKL(p(yi|x, ye x <i)∥p(yi|s, ye s <i))), (18) where wiis the token-level weight defined by a linear function of the modality gap: $$w_{i}=B+S\cdot G({\bf s},\widetilde{\bf y}_{<i}^{s}\|{\bf x},\widetilde{\bf y}_{<i}^{x}),$$ <i), (19) where B (base) and S (scale) are hyper-parameters to control the lower bound and magnitude of change of wi. In this way, cases with a large modality gap will be assigned a larger weight and thus emphasized during training. Note that the modality gap is computed on-the-fly during training. ## 5 Experiments 5.1 Datasets ST Datasets We conduct experiments on MuSTC (Di Gangi et al., 2019a) dataset, a multilingual ST dataset containing 8 translation directions: English (En) to German (De), French (Fr), Spanish (Es), Romanian (Ro), Russian (Ru), Italian (It), Portuguese (Pt) and Dutch (Nl). It contains at least 385 hours of TED talks with transcriptions and translations for each direction. We use dev set for validation and tst-COMMON set for evaluation. External MT Datasets We also introduce external MT datasets to pre-train our translation model in the expanded setting. For En→De/Fr/Es/Ro/Ru directions, we introduce data from WMT (Buck and Koehn, 2016). For En→It/Pt/Nl, we introduce data from OPUS1002(Zhang et al., 2020). Table 4 in Appendix A lists the statistics of all datasets. ## 5.2 Experimental Setups Pre-processing For *speech* input, we use the raw 16-bit 16kHz mono-channel audio wave. For *text* input, all sentences in ST and external MT datasets are tokenized and segmented into subwords using SentencePiece3. For each translation direction, the vocabulary is learned from the source and target texts from the ST dataset, with a size of 10K. For the external MT datasets, we filter out parallel sentence pairs whose length ratio exceeds 1.5. Model Setting We use the pre-trained HuBERT model4to encode the input audio. Two 1dimensional convolutional layers after HuBERT are set to kernel size 5, stride size 2, and padding 2. For the translation model, we employ Transformer architecture with the base configuration, which contains 6 encoder layers and 6 decoder layers, with 512 hidden states, 8 attention heads, and 2048 feedforward hidden states for each layer. The translation model is first pre-trained with MT task using transcription-translation pairs from the ST dataset (**base setting**), and also sentence pairs from the external MT dataset (**expanded setting**). During MT pre-training, each batch has up to 33k text tokens. The maximum learning rate is set to 7e-4. During fine-tuning, each batch contains up to 16M audio frames. The maximum learning rate 2http://opus.nlpl.eu/opus-100.php 3https://github.com/google/sentencepiece 4https://dl.fbaipublicfiles.com/hubert/hubert_ base_ls960.pt | Models | BLEU | | | | | | | | | |----------------------------------------|--------|--------|--------|--------|--------|--------|--------|--------|------| | En→De | En→Fr | En→Es | En→Ro | En→Ru | En→It | En→Pt | En→Nl | Avg. | | | Base setting (w/o external MT data) | | | | | | | | | | | XSTNet (Ye et al., 2021) | 25.5 | 36.0 | 29.6 | 25.1 | 16.9 | 25.5 | 31.3 | 30.0 | 27.5 | | STEMM (Fang et al., 2022) | 25.6 | 36.1 | 30.3 | 24.3 | 17.1 | 25.6 | 31.0 | 30.1 | 27.5 | | ConST (Ye et al., 2022) | 25.7 | 36.8 | 30.4 | 24.8 | 17.3 | 26.3 | 32.0 | 30.6 | 28.0 | | MTL | 25.3 | 35.7 | 30.5 | 23.8 | 17.2 | 26.0 | 31.3 | 29.5 | 27.4 | | CRESS | 27.2** | 37.8** | 31.9** | 25.9** | 18.7** | 27.3** | 33.0** | 31.6** | 29.2 | | Expanded setting (w/ external MT data) | | | | | | | | | | | Chimera (Han et al., 2021) | 27.1 | 35.6 | 30.6 | 24.0 | 17.4 | 25.0 | 30.2 | 29.2 | 27.4 | | XSTNet (Ye et al., 2021) | 27.1 | 38.0 | 30.8 | 25.7 | 18.5 | 26.4 | 32.4 | 31.2 | 28.8 | | STEMM (Fang et al., 2022) | 28.7 | 37.4 | 31.0 | 24.5 | 17.8 | 25.8 | 31.7 | 30.5 | 28.4 | | ConST (Ye et al., 2022) | 28.3 | 38.3 | 32.0 | 25.6 | 18.9 | 27.2 | 33.1 | 31.7 | 29.4 | | †STPT (Tang et al., 2022) | - | 39.7 | 33.1 | - | - | - | - | - | - | | †SpeechUT (Zhang et al., 2022b) | 30.1 | 41.4 | 33.6 | - | - | - | - | - | - | | MTL | 27.7 | 38.5 | 32.8 | 24.9 | 19.0 | 26.5 | 32.0 | 30.8 | 29.0 | | CRESS | 29.4** | 40.1** | 33.2* | 26.4** | 19.7** | 27.6** | 33.6** | 32.3** | 30.3 | is set to 1e-4. We use Adam optimizer (Kingma and Ba, 2015) with 4k warm-up steps. We set dropout to 0.1 and label smoothing to 0.1. The training will early stop if the BLEU score on the dev set did not increase for 10 epochs. During inference, we average the checkpoints of the last 10 epochs for evaluation. We use beam search with a beam size of 8. The length penalty is set to 1.2, 1.8, 0.6, 1.4, 0.8, 1.0, 1.4, and 1.0 for En→De, Fr, Es, Ro, Ru, It, Pt and Nl, respectively. We use scareBLEU5 (Post, 2018) to compute case-sensitive detokenized BLEU (Papineni et al., 2002) scores and the statistical significance of translation results with paired bootstrap resampling6(Koehn, 2004). We implement our model with *fairseq*7(Ott et al., 2019). All models are trained on 4 Nvidia RTX 3090 GPUs. For scheduled sampling, the decay parameter is set to µ = 15. For cross-modal regularization, the weight parameter is set to λ = 1.0. For tokenlevel adaptive training, we did a grid search for base and scale parameters on MuST-C En→De dev set with B ∈ {0.6, 0.7, 0.8, 0.9, 1.0} and S ∈ {0.05, 0.10, 0.20, 0.50, 1.00}. Finally, we set B = 0.7 and S = 0.05 for all translation directions. We start applying token-level adaptive training after the 20th epoch during training. Baseline Systems We include several strong endto-end ST models for comparison: Chimera (Han et al., 2021), XSTNet (Ye et al., 2021), STEMM (Fang et al., 2022), ConST (Ye et al., 2022), STPT (Tang et al., 2022), and SpeechUT (Zhang et al., 2022b). Besides, the multi-task learning baseline in Section 2.2 is also included as a strong baseline, which is denoted as MTL. We use C**RESS** to denote our method with token-level adaptive training. Among these models, Chimera, XSTNet, STEMM, and ConST combine pre-trained Wav2vec 2.0 (Baevski et al., 2020) and pre-trained translation model together, and then fine-tune the whole model on ST datasets. Our implemented MTL and C**RESS** follow a similar design, but we use HuBERT instead of Wav2vec 2.0 as we find HuBERT gives a stronger baseline (See Table 5 for details). STPT and SpeechUT jointly pre-train the model on speech and text data from scratch, which achieve better performance but also bring higher training costs8. ## 5.3 Main Results On Must-C Dataset Table 1 shows the results on MuST-C tst-COMMON set in all eight directions. First, we find that our implemented MTL is a strong baseline compared with existing approaches. Second, our proposed C**RESS** significantly outperforms MTL in both settings, with 1.8 BLEU improvement in the base set8For example, the pre-training of SpeechUT takes 3 days with 32 V100 GPUs. ![6_image_0.png](6_image_0.png) ting and 1.3 BLEU improvement in the expanded setting on average, demonstrating the superiority of our approach. Besides, we report ChrF++ scores on MuST-C in Appendix E, and we also provide results on CoVoST 2 (Wang et al., 2020a) En→De dataset in Appendix C. ## 6 Analysis And Discussion Results in Section 5.3 show the superiority of our method. To better understand C**RESS**, we explore several questions in this section. All analysis experiments are conducted on MuST-C En→De dataset in the expanded setting. ## (1) Do Scheduled Sampling, Cross-Modal Regularization, And Token-Level Adaptive Training all matter? Scheduled sampling, regularization, and token-level adaptive training are effective techniques to improve the performance of translation models. To understand the role of each, we conduct ablation experiments in Table 2. When only applying token-level adaptive training (\#5), we observe a performance decline of 0.2 BLEU since only adaptive training can not bridge the modality gap. When training with scheduled sampling only (\#4), we observe a slight improvement of 0.3 BLEU, probably due to the alleviation of exposure bias. When training with cross-modal regularization only (\#3), which encourages the consistency between predictions of ST and MT with ground truth target contexts, we observe an improvement of 0.7 BLEU. If we combine both (\#2), we obtain a much more significant boost of 1.3 BLEU, proving that both scheduled sampling and cross-modal regularization play a crucial role in our method. Furthermore, with token-level adaptive training (\#1), the improvement comes to 1.7 BLEU, which shows the benefit of treating different tokens differently according to the modality gap. (2) Does CRESS successfully bridge the modal- ![6_image_1.png](6_image_1.png) ![6_image_2.png](6_image_2.png) ![6_image_3.png](6_image_3.png) ity gap? To validate whether our approach successfully bridges the modality gap between ST and MT, we revisit the experiments in Section 3. Figure 4 shows the distribution of the modality gap with teacher forcing. We observe a general decrease in the modality gap compared with MTL. We also plot the curves of the modality gap with decoding steps of C**RESS** under teacher forcing, greedy search, and beam search strategies. As shown in Figure 5, our approach significantly slows down the increase of the modality gap compared with MTL baseline, suggesting that the predictions of ST and MT are more consistent during inference, demonstrating the effectiveness of our method in bridging the modality gap. (3) How base and scale hyper-parameters influence token-level adaptive training? B (base) and S (scale) are two important hyper-parameters ![7_image_1.png](7_image_1.png) in token-level adaptive training. We investigate how different combinations of B and S influence performance. As shown in Figure 6, token-level adaptive training can bring improvements in most cases. In particular, it usually performs better with smaller B and smaller S, leading to a boost of up to 0.4 BLEU. We conclude that treating different tokens too differently is also undesirable. We use B = 0.7 and S = 0.05 for all translation directions. (4) Does CRESS **successfully reduce the performance gap between ST and MT?** As shown in Table 3, our method not only brings improvements to ST, but also gives a slight average boost of 0.3 BLEU to MT. We suggest that this may be due to the effect of regularization. More importantly, we find that the performance gap between ST and MT for C**RESS** is significantly reduced compared to the MTL baseline (6.0→5.0), which further demonstrates that the improvement in ST is mainly due to the effective reduction of the modality gap. (5) Is CRESS **more effective for longer sentences?** The autoregressive model generates the translation step by step, making the translation of long sentences more challenging. We divide the MuST-C En→De dev set into several groups according to the length of target sentences, and compute the BLEU scores in each group separately, as shown in Figure 7. We observe that C**RESS** achieve significant improvements over the baseline in all groups, especially for sentences longer than 45, which shows the superiority of our method when translating long sentences. (6) How the decay parameter in scheduled ![7_image_0.png](7_image_0.png) ![7_image_2.png](7_image_2.png) sampling influence the performance? In scheduled sampling, the probability of selecting the ground truth word p∗ keeps decreasing during training as the function in Equation (10). Here, the hyper-parameter µ is used to control the shape of the function. As µ increases, the probability p∗ decreases more slowly, and vice versa. We investigate the impact of µ in Figure 8, and find that (1) the model performs worse when p∗ drops too quickly, and (2) when µ is within a reasonable range, there is not much impact on the final BLEU score. We use µ = 15 in our experiments. ## 7 Related Work End-to-end Speech Translation End-to-end speech translation (Bérard et al., 2016; Weiss et al., 2017) has shown great potential for overcoming error propagation and reducing latency compared to traditional cascaded ST systems (Salesky et al., | Models | Task | BLEU | | | | | | | | | | |----------|--------|--------|-------|-------|-------|-------|-------|-------|------|------|-----| | En→De | En→Fr | En→Es | En→Ro | En→Ru | En→It | En→Pt | En→Nl | Avg.↑ | ∆ ↓ | | | | MTL | ST | 27.7 | 38.5 | 32.8 | 24.9 | 19.0 | 26.5 | 32.0 | 30.8 | 29.0 | 6.0 | | MT | 33.5 | 46.6 | 38.3 | 30.9 | 22.1 | 33.0 | 38.6 | 36.7 | 35.0 | | | | CRESS | ST | 29.4 | 40.1 | 33.2 | 26.4 | 19.7 | 27.6 | 33.6 | 32.3 | 30.3 | 5.0 | | MT | 34.1 | 46.6 | 38.1 | 31.1 | 22.4 | 33.3 | 39.5 | 37.6 | 35.3 | | | 2019; Di Gangi et al., 2019c,b; Bahar et al., 2019a). One challenge in training end-to-end ST models is the scarcity of ST data. To address this problem, researchers employed MT data to help training with techniques like pre-training (Bansal et al., 2019; Stoian et al., 2020; Wang et al., 2020b,c; Alinejad and Sarkar, 2020; Le et al., 2021; Dong et al., 2021a; Zheng et al., 2021; Xu et al., 2021a; Tang et al., 2022), multi-task learning (Le et al., 2020; Dong et al., 2021b; Ye et al., 2021; Tang et al., 2021a,b; Indurthi et al., 2021), knowledge distillation (Liu et al., 2019; Inaguma et al., 2021), and data augmentation (Jia et al., 2019; Bahar et al., 2019b; Lam et al., 2022; Fang and Feng, 2023). However, due to the *modality gap* between speech and text, it is still difficult to fully exploit MT data with the above techniques. To overcome the modality gap, Han et al. (2021) projects features of both speech and text into a shared semantic space. Fang et al. (2022); Zhou et al. (2023) mixes up features of speech and text to learn similar representations for them. Ye et al. (2022) brings sentence-level representations closer with contrastive learning. Bapna et al. (2021, 2022); Chen et al. (2022); Tang et al. (2022); Zhang et al. (2022b) jointly train on speech and text and design methods to align two modalities. Different from previous work, in this work, we understand the modality gap from the target-side representation differences, and show its connection to exposure bias. Based on this, we propose the Cross-modal Regularization with Scheduled Sampling (C**RESS**) method to bridge the modality gap. Exposure Bias Exposure bias indicates the discrepancy between training and inference. Several approaches employ Reinforcement Learning (RL) (Ranzato et al., 2016; Shen et al., 2016; Bahdanau et al., 2017) instead of Maximum Likelihood Estimation (MLE) to avoid this problem. However, Wu et al. (2018) shows that RL-based training is unstable due to the high variance of gradient estimation. An alternative and simpler approach is scheduled sampling (Bengio et al., 2015), which samples between ground truth words and self-generated words with a changing probability. Zhang et al. (2019) extends it with Gumbel noise for more robust training. In this paper, we adopt this approach to approximate the inference mode due to its training stability and low training cost. Output Regularization for MT Regularization in the output space has proved useful for MT. Liang et al. (2021) proposes to regularize the output predictions of two sub-models sampled by dropout. Guo et al. (2022) regularizes the output predictions of models before and after input perturbation. In this paper, we regularize the output predictions across modalities, which encourages more consistent predictions for ST and MT. Token-level Adaptive Training Token-level adaptive training for MT is first proposed in Gu et al. (2020), which assigns larger weights to lowfrequency words to prevent them from being ignored. Xu et al. (2021b); Zhang et al. (2022a) computes the weight with bilingual mutual information. In this paper, we compute the weights with the modality gap between ST and MT. ## 8 Conclusion In this paper, we propose a simple yet effective method C**RESS** to regularize the model predictions of ST and MT, whose target-side contexts contain both ground truth words and self-generated words with scheduled sampling. Based on this, we further propose a token-level adaptive training method to handle difficult cases. Our method achieves promising results on MuST-C benchmark. Further analysis shows that our method can effectively bridge the modality gap and improve the translation quality, especially for long sentences. In the future, we will explore how to apply our method to other tasks. ## Limitations Although our proposed method achieves promising results and outperforms most baseline systems on the ST benchmark, it still has some limitations: (1) the performance of our method still lags behind a recent work SpeechUT, although our approach has the advantage of consuming less time and resources; (2) we observe that the modality gap is still not eliminated and the effect of exposure bias on the modality gap still exists; (3) the performance of our approach on larger datasets and larger models remains to be explored; (4) how to apply our approach to other tasks also needs to be further investigated. ## Ethics Statement In this paper, we present an effective method C**RESS** for speech translation. While our model achieves superior performance on the widely used ST benchmark MuST-C, applying it directly to real scenarios is still risky. This is due to the fact that our training corpus only contains hundreds of hours of audio recordings from TED talks, which is far from covering all domains of the real world. Besides, the datasets we used in this paper (MuST-C, WMT, and OPUS-100) are all publicly available. We also release the code implemented with a wellknown framework fairseq. These guarantee the reproducibility of our work. ## Acknowledgements We thank all the anonymous reviewers for their insightful and valuable comments. This work was supported by National Key R&D Program of China (NO. 2018AAA0102502) ## References Ashkan Alinejad and Anoop Sarkar. 2020. Effectively pretraining a speech translation decoder with machine translation data. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8014–8020, Online. Association for Computational Linguistics. Antonios Anastasopoulos, Loïc Barrault, Luisa Bentivogli, Marcely Zanon Boito, Ondˇrej Bojar, Roldano Cattoni, Anna Currey, Georgiana Dinu, Kevin Duh, Maha Elbayad, Clara Emmanuel, Yannick Estève, Marcello Federico, Christian Federmann, Souhir Gahbiche, Hongyu Gong, Roman Grundkiewicz, Barry Haddow, Benjamin Hsu, Dávid Javorský, Vera Kloudová, Surafel Lakew, Xutai Ma, Prashant ˘ Mathur, Paul McNamee, Kenton Murray, Maria Nadejde, Satoshi Nakamura, Matteo Negri, Jan ˇ Niehues, Xing Niu, John Ortega, Juan Pino, Elizabeth Salesky, Jiatong Shi, Matthias Sperber, Sebastian Stüker, Katsuhito Sudoh, Marco Turchi, Yogesh Virkar, Alexander Waibel, Changhan Wang, and Shinji Watanabe. 2022. Findings of the IWSLT 2022 evaluation campaign. In Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 98–157, Dublin, Ireland (in-person and online). Association for Computational Linguistics. Antonios Anastasopoulos, Ondˇrej Bojar, Jacob Bremerman, Roldano Cattoni, Maha Elbayad, Marcello Federico, Xutai Ma, Satoshi Nakamura, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Sebastian Stüker, Katsuhito Sudoh, Marco Turchi, Alexander Waibel, Changhan Wang, and Matthew Wiesner. 2021. FINDINGS OF THE IWSLT 2021 EVALUATION CAMPAIGN. In Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021), pages 1–29, Bangkok, Thailand (online). Association for Computational Linguistics. Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2020. Common voice: A massivelymultilingual speech corpus. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 4218–4222. European Language Resources Association. Kushal Arora, Layla El Asri, Hareesh Bahuleyan, and Jackie Cheung. 2022. Why exposure bias matters: An imitation learning perspective of error accumulation in language generation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 700–710, Dublin, Ireland. Association for Computational Linguistics. Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2022. XLS-R: self-supervised cross-lingual speech representation learning at scale. In *Interspeech 2022,* 23rd Annual Conference of the International Speech Communication Association, Incheon, Korea, 18-22 September 2022, pages 2278–2282. ISCA. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33. Parnia Bahar, Tobias Bieschke, and Hermann Ney. 2019a. A comparative study on end-to-end speech to text translation. In *Proc. of ASRU*, pages 792–799. IEEE. Parnia Bahar, Albert Zeyer, Ralf Schlüter, and Hermann Ney. 2019b. On using specaugment for end-to-end speech translation. In *Proc. of IWSLT*. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In *International* Conference on Learning Representations. Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2019. Pre-training on high-resource speech recognition improves lowresource speech-to-text translation. In *Proc. of* NAACL-HLT, pages 58–68. Ankur Bapna, Colin Cherry, Yu Zhang, Ye Jia, Melvin Johnson, Yong Cheng, Simran Khanuja, Jason Riesa, and Alexis Conneau. 2022. mslam: Massively multilingual joint pre-training for speech and text. *CoRR*, abs/2202.01374. Ankur Bapna, Yu-an Chung, Nan Wu, Anmol Gulati, Ye Jia, Jonathan H Clark, Melvin Johnson, Jason Riesa, Alexis Conneau, and Yu Zhang. 2021. Slam: A unified encoder for speech and language modeling via speech-text joint pre-training. arXiv preprint arXiv:2110.10329. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc. Luisa Bentivogli, Mauro Cettolo, Marco Gaido, Alina Karakanta, Alberto Martinelli, Matteo Negri, and Marco Turchi. 2021. Cascade versus direct speech translation: Do the differences still make a difference? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2873–2887, Online. Association for Computational Linguistics. Alexandre Bérard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text translation. In NIPS workshop on End-to-end Learning for Speech and Audio Processing. Christian Buck and Philipp Koehn. 2016. Findings of the WMT 2016 bilingual document alignment shared task. In *Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers*, pages 554–563, Berlin, Germany. Association for Computational Linguistics. Zhehuai Chen, Yu Zhang, Andrew Rosenberg, Bhuvana Ramabhadran, Pedro J. Moreno, Ankur Bapna, and Heiga Zen. 2022. Maestro: Matched speech text representations through modality matching. In *INTERSPEECH*. Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019a. MuST-C: a Multilingual Speech Translation Corpus. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2012–2017, Minneapolis, Minnesota. Association for Computational Linguistics. Mattia A. Di Gangi, Matteo Negri, Roldano Cattoni, Roberto Dessi, and Marco Turchi. 2019b. Enhancing transformer for end-to-end speech-to-text translation. In *Proceedings of Machine Translation Summit XVII* Volume 1: Research Track, pages 21–31. Mattia A. Di Gangi, Matteo Negri, and Marco Turchi. 2019c. Adapting transformer to end-to-end spoken language translation. In *Proc. of INTERSPEECH*, pages 1133–1137. International Speech Communication Association (ISCA). Qianqian Dong, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, and Lei Li. 2021a. Consecutive decoding for speech-to-text translation. In The Thirty-fifth AAAI Conference on Artificial Intelligence, AAAI. Qianqian Dong, Rong Ye, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, and Lei Li. 2021b. Listen, understand and translate: Triple supervision decouples end-to-end speech-to-text translation. In *Proceedings* of the AAAI Conference on Artificial Intelligence. Qingkai Fang and Yang Feng. 2023. Back translation for speech-to-text translation without transcripts. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Qingkai Fang, Rong Ye, Lei Li, Yang Feng, and Mingxuan Wang. 2022. STEMM: Self-learning with speech-text manifold mixup for speech translation. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 7050–7062, Dublin, Ireland. Association for Computational Linguistics. Mark J. F. Gales, Kate M. Knill, Anton Ragni, and Shakti P. Rath. 2014. Speech recognition and keyword spotting for low-resource languages: Babel project research at CUED. In 4th Workshop on Spoken Language Technologies for Under-resourced Languages, SLTU 2014, St. Petersburg, Russia, May 1416, 2014, pages 16–23. ISCA. Shuhao Gu, Jinchao Zhang, Fandong Meng, Yang Feng, Wanying Xie, Jie Zhou, and Dong Yu. 2020. Tokenlevel adaptive training for neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1035–1046, Online. Association for Computational Linguistics. Emil Julius Gumbel. 1954. Statistical theory of extreme values and some practical applications: a series of lectures. In *Nat. Bur. Standards Appl. Math. Ser.*, volume 33. US Government Printing Office. Dengji Guo, Zhengrui Ma, Min Zhang, and Yang Feng. 2022. Prediction difference regularization against perturbation for neural machine translation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*. Chi Han, Mingxuan Wang, Heng Ji, and Lei Li. 2021. Learning shared semantic space for speech-to-text translation. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2214–2225, Online. Association for Computational Linguistics. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. *IEEE/ACM Trans. Audio, Speech* and Lang. Proc., 29:3451–3460. Hirofumi Inaguma, Tatsuya Kawahara, and Shinji Watanabe. 2021. Source and target bidirectional knowledge distillation for end-to-end speech translation. In *Proceedings of NAACL*, pages 1872–1881. Sathish Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Hyojung Han, Seokchan Ahn, Sangha Kim, Chanwoo Kim, and Inchul Hwang. 2021. Task aware multi-task learning for speech to text tasks. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7723–7727. Ye Jia, Melvin Johnson, Wolfgang Macherey, Ron J. Weiss, Yuan Cao, Chung-Cheng Chiu, Naveen Ari, Stella Laurenzo, and Yonghui Wu. 2019. Leveraging weakly supervised data to improve end-to-end speech-to-text translation. In *Proc. of ICASSP*, pages 7180–7184. Jacob Kahn, Morgane Rivière, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, Tatiana Likhomanenko, Gabriel Synnaeve, Armand Joulin, Abdelrahman Mohamed, and Emmanuel Dupoux. 2020. Libri-light: A benchmark for ASR with limited or no supervision. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2020, Barcelona, Spain, May 4-8, 2020, pages 7669–7673. IEEE. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *ICLR (Poster)*. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics. Tsz Kin Lam, Shigehiko Schamoni, and Stefan Riezler. 2022. Sample, translate, recombine: Leveraging audio alignments for data augmentation in end-toend speech translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 245– 254, Dublin, Ireland. Association for Computational Linguistics. Hang Le, Juan Pino, Changhan Wang, Jiatao Gu, Didier Schwab, and Laurent Besacier. 2020. Dual-decoder transformer for joint automatic speech recognition and multilingual speech translation. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 3520–3533, Barcelona, Spain (Online). International Committee on Computational Linguistics. Hang Le, Juan Pino, Changhan Wang, Jiatao Gu, Didier Schwab, and Laurent Besacier. 2021. Lightweight adapter tuning for multilingual speech translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 817–824, Online. Association for Computational Linguistics. Xian Li, Changhan Wang, Yun Tang, Chau Tran, Yuqing Tang, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021. Multilingual speech translation from efficient finetuning of pretrained models. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 827–838, Online. Association for Computational Linguistics. Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, and TieYan Liu. 2021. R-drop: Regularized dropout for neural networks. In *Advances in Neural Information* Processing Systems, volume 34, pages 10890–10905. Curran Associates, Inc. Yuchen Liu, Hao Xiong, Jiajun Zhang, Zhongjun He, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019. End-to-End Speech Translation with Knowledge Distillation. In *Proc. Interspeech 2019*, pages 1128– 1132. Chris J. Maddison, Daniel Tarlow, and Tom Minka. 2014. A∗ sampling. In *Advances in Neural Information Processing Systems*, volume 27. Curran Associates, Inc. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Maja Popovic. 2017. ´ chrF++: words helping character n-grams. In *Proceedings of the Second Conference on Machine Translation*, pages 612–618, Copenhagen, Denmark. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, and Ronan Collobert. 2020. MLS: A largescale multilingual dataset for speech research. In *Interspeech 2020, 21st Annual Conference of the International Speech Communication Association, Virtual* Event, Shanghai, China, 25-29 October 2020, pages 2757–2761. ISCA. Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In *4th International Conference on Learning Representations,* ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Elizabeth Salesky, Matthias Sperber, and Alexander Waibel. 2019. Fluent translations from disfluent speech in end-to-end speech translation. In Proc. of NAACL-HLT, pages 2786–2792. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683–1692, Berlin, Germany. Association for Computational Linguistics. Matthias Sperber and Matthias Paulik. 2020. Speech translation and the end-to-end promise: Taking stock of where we are. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 7409–7421, Online. Association for Computational Linguistics. Mihaela C. Stoian, Sameer Bansal, and Sharon Goldwater. 2020. Analyzing asr pretraining for low-resource speech-to-text translation. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7909–7913. IEEE. Yun Tang, Hongyu Gong, Ning Dong, Changhan Wang, Wei-Ning Hsu, Jiatao Gu, Alexei Baevski, Xian Li, Abdelrahman Mohamed, Michael Auli, and Juan Pino. 2022. Unified speech-text pre-training for speech translation and recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1488–1499, Dublin, Ireland. Association for Computational Linguistics. Yun Tang, Juan Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. 2021a. Improving speech translation by understanding and learning from the auxiliary text translation task. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4252–4261, Online. Association for Computational Linguistics. Yun Tang, Juan Pino, Changhan Wang, Xutai Ma, and Dmitriy Genzel. 2021b. A general multi-task learning framework to leverage text data for speech to text tasks. In *ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal* Processing (ICASSP), pages 6209–6213. Jörgen Valk and Tanel Alumäe. 2021. VOXLINGUA107: A dataset for spoken language recognition. In *IEEE Spoken Language Technology Workshop,* SLT 2021, Shenzhen, China, January 19-22, 2021, pages 652–658. IEEE. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, and Emmanuel Dupoux. 2021a. VoxPopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 993–1003, Online. Association for Computational Linguistics. Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, and Alexis Conneau. 2021b. Largescale self- and semi-supervised learning for speech translation. In *Interspeech 2021, 22nd Annual Conference of the International Speech Communication* Association, Brno, Czechia, 30 August - 3 September 2021, pages 2242–2246. ISCA. Changhan Wang, Anne Wu, and Juan Miguel Pino. 2020a. Covost 2: A massively multilingual speechto-text translation corpus. *CoRR*, abs/2007.10310. Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3544–3552, Online. Association for Computational Linguistics. Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, and Ming Zhou. 2020b. Bridging the gap between pretraining and fine-tuning for end-to-end speech translation. In *Proc. of AAAI*, volume 34, pages 9161–9168. Chengyi Wang, Yu Wu, Shujie Liu, Ming Zhou, and Zhenglu Yang. 2020c. Curriculum pre-training for end-to-end speech translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3728–3738, Online. Association for Computational Linguistics. Ron J. Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to-sequence models can directly translate foreign speech. In *Proc.* of INTERSPEECH, pages 2625–2629. Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai, and TieYan Liu. 2018. A study of reinforcement learning for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3612–3621, Brussels, Belgium. Association for Computational Linguistics. Chen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Shen Huang, Qi Ju, Tong Xiao, and Jingbo Zhu. 2021a. Stacked acoustic-and-textual encoding: Integrating the pre-trained models into speech translation encoders. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2619–2630, Online. Association for Computational Linguistics. Yangyifan Xu, Yijin Liu, Fandong Meng, Jiajun Zhang, Jinan Xu, and Jie Zhou. 2021b. Bilingual mutual information based adaptive training for neural machine translation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 511–516, Online. Association for Computational Linguistics. Rong Ye, Mingxuan Wang, and Lei Li. 2021. End-toend speech translation via cross-modal progressive training. In *Proc. of INTERSPEECH*. Rong Ye, Mingxuan Wang, and Lei Li. 2022. Crossmodal contrastive learning for speech translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5099–5113, Seattle, United States. Association for Computational Linguistics. Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628– 1639, Online. Association for Computational Linguistics. Songming Zhang, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jian Liu, and Jie Zhou. 2022a. Conditional bilingual mutual information based adaptive training for neural machine translation. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 2377–2389, Dublin, Ireland. Association for Computational Linguistics. Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019. Bridging the gap between training and inference for neural machine translation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4334– 4343, Florence, Italy. Association for Computational Linguistics. Ziqiang Zhang, Long Zhou, Junyi Ao, Shujie Liu, Lirong Dai, Jinyu Li, and Furu Wei. 2022b. Speechut: Bridging speech and text with hidden-unit for encoder-decoder based speech-text pre-training. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Online and Abu Dhabi. Renjie Zheng, Junkun Chen, Mingbo Ma, and Liang Huang. 2021. Fused acoustic and text encoding for multimodal bilingual pretraining and speech translation. In *Proc. of ICML*. Yan Zhou, Qingkai Fang, and Yang Feng. 2023. CMOT: Cross-modal mixup via optimal transport for speech translation. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics*. ## A Statistics Of All Datasets Table 4: Statistics of all datasets. \#sents refers to the number of parallel sentence pairs. | ST (MuST-C) | External MT | | | | |---------------|---------------|--------|---------|--------| | En→ | hours | #sents | name | #sents | | De | 408 | 234K | WMT16 | 3.9M | | Fr | 492 | 280K | WMT14 | 31.2M | | Es | 504 | 270K | WMT13 | 14.2M | | Ro | 432 | 240K | WMT16 | 0.6M | | Ru | 489 | 270K | WMT16 | 1.9M | | It | 465 | 258K | OPUS100 | 0.7M | | Pt | 385 | 211K | OPUS100 | 0.7M | | Nl | 442 | 253K | OPUS100 | 0.7M | ## B Impact Of Different Acoustic Encoders Our model is composed of an acoustic encoder and a translation model. To investigate the impact of different acoustic encoders, we also conduct experiments using Wav2vec 2.09(Baevski et al., 2020) as the acoustic encoder. As shown in Table 5, we find that (1) HuBERT performs slightly better than Wav2vec 2.0 with an improvement of 0.5 BLEU, and (2) our proposed C**RESS** achieves consistent improvements with different acoustic encoders. In practice, we use HuBERT to build our systems, since we believe that developing on a strong baseline will make our results more convincing and demonstrate the robustness of our approach. Table 5: BLEU scores on MuST-C En→De tst-COMMON set (expanded setting) with different acoustic encoders. | Acoustic Encoder | MTL | CRESS | |------------------------------------|-------|---------| | HuBERT (Hsu et al., 2021) | 27.5 | 29.4 | | Wav2vec 2.0 (Baevski et al., 2020) | 27.0 | 28.9 | C Results on CoVoST 2 En→De We also conduct experiments on CoVoST 2 (Wang et al., 2020a) to examine the performance of our approach on large datasets. CoVoST 2 is a largescale multilingual speech translation corpus that covers translations from 21 languages into English and from English into 15 languages. It is one of the 9https://dl.fbaipublicfiles.com/fairseq/ wav2vec/wav2vec_small.pt largest open ST datasets available currently. In this paper, we evaluate our approach on the En→De direction, which contains 430 hours of speech with annotated transcriptions and translations. We use dev set for validation and test set for evaluation. We use the same pre-processing, model configuration, and hyper-parameters as MuST-C (see details in Section 5.2). The results are shown in Table 6. First, we find our C**RESS** significantly outperforms the MTL baseline, with 1.8 BLEU improvement in the base setting and 1.6 BLEU improvement in the expanded setting, which demonstrates the effectiveness and generalization capability of our method across different datasets, especially on the large-scale dataset. Second, our result is competitive with previous methods, although they use larger audio datasets (≥60K hours) and larger model size (≥300M), while we only use 960 hours of audio data and 155M model parameters. ## D Discussion About The Training Speed During training, our approach requires an additional forward pass to select predicted words compared with the baseline, which will impair the training speed. Practically, we find the training time for 1 epoch of C**RESS** is 1.12 times longer than MTL, which is actually negligible. This is because the step of selecting predicted words is fully parallel and has no gradient calculation, which does not incur a significant time overhead. ## E Chrf++ Scores On Must-C Dataset We also report ChrF++ score (Popovic´, 2017) using sacreBLEU toolkit10 on MuST-C dataset in Table 7. We observe that C**RESS** outperforms MTL with 1.4 ChrF++ improvement in the base setting and 1.0 ChrF++ improvement in the expanded setting. | Models | Audio Datasets | #Params | BLEU | |-----------------------------------------------------------|---------------------------|-----------|-------------| | wav2vec-2.0 (LS-960) (Wang et al., 2021b) | LS-960 | 300M | 20.5 | | wav2vec-2.0 (LV-60K) (Wang et al., 2021b) | LV-60K | 300M | 25.5 | | wav2vec-2.0 + Self-training (LV-60K) (Wang et al., 2021b) | LV-60K | 300M | 27.1 | | LNA (Joint Training) (Li et al., 2021) | LV-60K | 1.05B | 25.8 | | SLAM-TLM (Bapna et al., 2021) | LV-60K | 600M | 27.5 | | XLS-R (0.3B) (Babu et al., 2022) | VP-400K, MLS, CV, VL, BBL | 317M | 23.6 | | XLS-R (1B) (Babu et al., 2022) | VP-400K, MLS, CV, VL, BBL | 965M | 26.2 | | XLS-R (2B) (Babu et al., 2022) | VP-400K, MLS, CV, VL, BBL | 2162M | 28.3 | | MTL (base setting) | LS-960 | 155M | 21.4 | | CRESS (base setting) | LS-960 | 155M | 23.2 (+1.8) | | MTL (expanded setting) | LS-960 | 155M | 25.1 | | CRESS (expanded setting) | LS-960 | 155M | 26.7 (+1.6) | Table 6: BLEU scores on CoVoST 2 En→De test set. LS-960: LibriSpeech (Panayotov et al., 2015) (960 hours). LV-60K: Libri-Light (Kahn et al., 2020) (60K hours). VP-400K: VoxPopuli (Wang et al., 2021a) (372K hours). MLS: Multilingual LibriSpeech (Pratap et al., 2020) (50K hours). CV: CommonVoice (Ardila et al., 2020) (7K hours). VL: VoxLingua107 (Valk and Alumäe, 2021) (6.6K hours). BBL: BABEL (Gales et al., 2014) (1K hours). Models **ChrF++** En→De En→Fr En→Es En→Ro En→Ru En→It En→Pt En→Nl Avg. Base setting (w/o external MT data) MTL 52.4 60.4 56.4 50.9 41.7 52.6 57.3 56.1 53.5 CRESS 54.0** 62.0** 57.6** 52.4** 43.1** 53.8** 58.5** 57.6** **54.9** MTL 54.9 62.6 58.6 51.9 44.2 53.4 57.9 56.9 55.0 CRESS 56.1** 63.7** 58.9* 53.1** 44.5* 54.2** 59.3** 58.3** **56.0** | Base setting (w/o external MT data) | |----------------------------------------| | Expanded setting (w/ external MT data) | Table 7: ChrF++ scores on MuST-C tst-COMMON set. The external MT datasets are only used in the expanded setting. * and ** mean the improvements over MTL baseline are statistically significant (p < 0.05 and p < 0.01, respectively). ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5 ✓ B1. Did you cite the creators of artifacts you used? Section 5 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We just use for research purposes, no commercial use and no derivative works of the original data. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Our use of dataset and pre-trained models is consistent with their intended use, which does not require much discussion. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The data we used are publicly available on the website, and widely used in the research community. We cannot change the training/test data in order to be consistent with previous work. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 5 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix ## C ✓ **Did You Run Computational Experiments?** Section 5, Section 6, Appendix ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 5, Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5, Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5, Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
khalifa-etal-2023-shot
Few-shot Reranking for Multi-hop {QA} via Language Model Prompting
https://aclanthology.org/2023.acl-long.885
We study few-shot reranking for multi-hop QA (MQA) with open-domain questions. To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on language model prompting for multi-hop path reranking. PromptRank first constructs an instruction-based prompt that includes a candidate document path and then computes the relevance score between a given question and the path based on the conditional likelihood of the question given the path prompt according to a language model. PromptRank yields strong retrieval performance on HotpotQA with only 128 training examples compared to state-of-the-art methods trained on thousands of examples {---} 73.6 recall@10 by PromptRank vs. 77.8 by PathRetriever and 77.5 by multi-hop dense retrieval.
# Few-Shot Reranking For Multi-Hop Qa Via Language Model Prompting Muhammad Khalifa∗, Lajanugen Logeswaran†**, Moontae Lee**†‡, Honglak Lee∗†**, Lu Wang**∗ University of Michigan∗, LG AI Research†, University of Illinois at Chicago‡ ## Abstract We study few-shot reranking for multi-hop QA (MQA) with open-domain questions. To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PROMPTRANK, which relies on language model prompting for multi-hop path reranking. PROMPTRANK first constructs an instruction-based prompt that includes a candidate document path and then computes the relevance score between a given question and the path based on the conditional likelihood of the question given the path prompt according to a language model. PROMPTRANK yields strong retrieval performance on HotpotQA with only 128 training examples compared to state-of-theart methods trained on thousands of examples - 73.6 recall@10 by PROMPTRANK vs. 77.8 by PathRetriever (Asai et al., 2020) and 77.5 by multi-hop dense retrieval (Xiong et al., 2021).1 ## 1 Introduction Many information-seeking queries are in the form of multi-hop questions. For instance, to answer the question "What 1988 Christmas comedy film did Brian-Doyle Murray star in?", we need to (i) search for movies starring Brian Murray, then **(ii)** identify which of them were released in 1988 during Christmas. Evidence required to answer such questions is often dispersed in different documents, requiring sequential, multi-step reasoning to reach the answer (Perez et al., 2020), typically referred to as multi-hop question answering (MQA). Given a multi-hop question and a large document corpus, existing MQA systems largely follow a *retrieve-then-read* pipeline, where a retriever module first identifies relevant documents from the corpus, and a reader module produces the answer based on the retrieved output (Asai et al., ![0_image_0.png](0_image_0.png) 2020; Li et al., 2021; Singh et al., 2021; Qi et al., 2021). The retriever module is trained to predict the ground-truth evidence document(s) given the question (Karpukhin et al., 2020; Qi et al., 2021; ). However, curating large datasets of question-document pairs is expensive, especially for low-resource languages or domains that require unique expertise (e.g., medical or legal documents), thus creating a bottleneck for building QA pipelines (Ram et al., 2022). Moreover, resorting to heuristics for data labeling can lead to incorrect annotation (Izacard and Grave, 2021). This difficulty is further exacerbated in the case of multi-hop questions, as they need to be annotated with multiple support documents. The majority of existing data-efficient retrieval and reranking methods are restricted to *singlehop* QA, and it is unclear how to extend them to the *multi-hop* setting. For instance, Ram et al. (2022) proposed "recurrent span retrieval" to obtain psuedo question-document pairs in an unsupervised way for single-hop QA. However, in the multi-hop 15882 ![1_image_0.png](1_image_0.png) case, it is less likely that we can retrieve recurrent spans from multiple documents that follow a valid reasoning trajectory. Moreover, their method requires intensive pretraining on the obtained corpus. Seonwoo et al. (2021) focus on weakly supervised multi-hop QA retrieval, yet their method uses corpus-specific (e.g., Wikipedia) heuristics and also requires pretraining. This motivates the need for data-efficient multi-hop retrieval methods that (i) work out-of-the-box without requiring additional (pre)training, and **(ii)** do not rely on hand-designed heuristics for data collection and annotation. To this end, we present PROMPTRANK, which leverages the power of large language models (LLMs) for few-shot multi-hop retrieval. PROMPTRANK combines a simple unsupervised retrieval method i.e., TF-IDF similarity, with an LLM reranker that scores the relevance of document paths to a question based on the *conditional* likelihood of generating the question given the path. Our approach makes use of instruction-based prompting (Sanh et al., 2021; Ouyang et al., 2022) to steer the LLM towards assigning higher scores to more relevant support document chains.2 To calibrate the model's reranking scores and alleviate prompt sensitivity (Zhao et al., 2021b), we borrow techniques from the literature such as temperature scaling (Kull et al., 2019) and *instruction ensembling* (Schick and Schütze, 2021a). We also utilize *demonstration ensembling* to leverage more examples than what can fit into the context of transformer LLMs by combining reranking probabilities computed with different demonstrations. 2We use *path* and *chain* interchangeably throughout the paper. We evaluate few-shot PROMPTRANK on HotpotQA (Yang et al., 2018), a standard MQA benchmark, and show that it compares favorably against state-of-the-art models while using orders of magnitude fewer examples. More precisely, with only 128 training examples, PROMPTRANK outperforms DrKit (Dhingra et al., 2020) and is only 4.2 Recall@10 points lower than multi-hop dense retrieval (MDR) (Xiong et al., 2021) (see Figure 1). We also showcase PROMPTRANK as part of a QA pipeline, again, displaying close QA performance to fully-supervised retrievers—only 4.1 F1 points lower than MDR. In summary, our contributions in this paper are: 1. We propose PROMPTRANK, a few-shot reranking approach for multi-hop QA that reranks a given document path based on the likelihood of generating the question given a path prompt. 2. PROMPTRANK exhibits strong few-shot retrieval performance with as few as 128 examples and compares favorably to fully supervised methods (§3.1). 3. PROMPTRANK leads to strong QA performance when combined with a pretrained reader module, performing close to fully-supervised retrievers (§3.2). ## 2 Promptrank An overview of the full retrieval system is displayed in Figure 2: Given a question q, the system expands sequences of supporting documents into paths of length H, which are used to answer the question. At each step, we first use TF-IDF similarity to obtain an initial set of supporting document paths.3 We then use PROMPTRANK to rerank the current document chains based on their relevance to the question (§2.1). Concretely, we start with retrieving F candidate documents using TF-IDF for the 'first hop'. These '1-hop' candidates are scored by PROMPTRANK and K1 top-ranked documents are kept and further expanded based on their hyperlinks to obtain 2-hop reasoning paths.4 These 2-hop reasoning chains are again reranked and the most promising K2 candidates are further expanded. The process repeats until we obtain paths of length H, where H can be a hyperparameter.5 As the document graph can have a high branching factor, we only keep the top-L hyperlinks as reranking candidates based on TF-IDF similarity between the hyperlink document and the question. We have found this pruning step to improve efficiency without much performance drop. This process is shown in Figure 2(a). ## 2.1 Path Reranking With Promptrank Given a question q and a reasoning path or chain c, we use an LM to score c according to its relevance to q. Concretely, we measure the likelihood of the question given the path as follows: $$\mathrm{Score}_{q}(c)=P_{\mathrm{LM}}(q|\tau_{c})$$ Scoreq(c) = PLM(q|τc) (1) where PLM(q|τc) is the conditional probability of generating the question given a prompt τc containing path τc using an LM. Our initial experiments show that using PLM(q|τc) works *substantially* better than PLM(c|τq) for a question-containing prompt τq, which agrees with the findings in dos Santos et al. (2020).6 We argue that two factors contribute to this gap. First, LMs can be sensitive to the *surface form* (Holtzman et al., 2021) of reasoning paths, making it difficult to reliably compare the probabilities of different reasoning paths using PLM(c|τq). For instance, PLM(c|τq) tends to be higher for shorter paths. On the other hand, PLM(q|τc) does not suffer from this issue since we compare the probabilities of the same string (i.e., the question) by conditioning on different 3PROMPTRANK is agnostic to the retrieval approach and can be combined with other retrieval techniques. 4We assume the presence of hyperlinks following previous work (Asai et al., 2020; Qi et al., 2021) although PROMPTRANK is agnostic to how a candidate path is obtained. 5This process can be viewed as a variation of beam search. 6Earlier experiments showed that the recall of PLM(q|τc) was at least 60% better than that of PLM(c|τq). reasoning paths. Second, the prompt format using PLM(q|τc)—the question follows a documentagrees more with the web data used for LM pretraining, where documents are usually followed by FAQs, questionnaires, and surveys, rather than the other way around. We further add a temperature parameter to scale the model output logits before computing P(q|τc). This can be seen as an instance of model calibration (Guo et al., 2017; Desai and Durrett, 2020; Jiang et al., 2021) with the goal of improving the reranking scores. We show that temperature scaling boosts reranking performance in §3.1. Constructing Prompt τc As shown in Figure 2 (b), the prompt consists of an instruction along with the document path. The instruction's goal is to encourage higher scores for more relevant paths by eliciting the LM reasoning ability (Ouyang et al., 2022). We note that the instruction part is fixed across all prompts constructed for different candidate paths. The path is expressed in the prompt by *concatenating* all documents in the chain and prepending each document with a fixed prefix, such as "Document:" or *"Passage:"*. The concatenation of path documents significantly improves reranking by *simultaneously* considering all hops, which allows the LM to do a context-aware evaluation of path relevance. ## 2.2 Instruction Search And Ensembling Although instructions can be manually engineered to trigger the LM to accomplish the task (e.g., "Read the following documents and generate a question"), this requires human expertise and can be sub-optimal. Therefore, we leverage automated instruction search Gao et al. (2021), where we use an encoder-decoder LM, e.g., a T5-Base model (Raffel et al., 2020), that is trained to fill masked text spans to generate instructions. Specifically, we fill in the template "Task: <X> documents and <Y> *question. Question:"*, where <X> and <Y> are the masked spans expected to be filled in by the model (e.g., for a human-written instruction example, <X> = "*Read the following*" and <Y> = "*answer the*"). We consider two variations of this template corresponding to the cases where the document path appears before/after the template. We constrained the template to contain the words 'documents' and 'question' to ensure that the model generates relevant prompts. We have found that using a less specific template without such tokens leads to more diverse but less relevant instructions. The exact templates used are in Appendix A.2. Previous work has shown that mixing multiple prompts can improve few-shot performance (Gao et al., 2021; Schick and Schütze, 2021b). Similarly, such ensembling could produce more regularized path scores by alleviating prompt sensitivity (Zhao et al., 2021b). Given a path, we combine the scores of obtained through different instructions. We experiment with both *mean* and max ensembling. ## 2.3 Demonstration Ensembling We employ in-context learning (ICL) (Brown et al., 2020) to teach the LLM to do reranking by showing the model examples i.e., demonstrations of questions and their gold paths. A major obstacle to this approach is the *input length limit* in standard transformer LMs. Since paths are comprised of multiple documents, in most cases we cannot feed more than two demonstrations without exceeding the limit of 1024 tokens, a standard setup for pretrained LMs. To workaround that, we utilize *demonstration ensembling*, where different in-context demonstrations are used to compute scores for a given path, and the scores are combined by a mean or max operation. ## 3 Experiments Data We evaluate our method on **HotpotQA** (Yang et al., 2018), which consists of two-hop questions over diverse topics. We focus on the *fullwiki* setting in which two Wikipedia passages are required to answer the questions. Since the gold passages for the test set are not available, we follow prior work and evaluate PROMPTRANK on the development set, which has 7,405 questions. There are two main question types in HotpotQA: (1) *comparison* questions usually require contrasting two entities and (2) *bridge* questions can be answered by following a connecting entity that links one document to another. We also evaluate fewshot PROMPTRANK on the **2WikiMQA** dataset (Ho et al., 2020) in Appendix D.1. Compute Infrastructure All our reranking experiments are run on a workstation with a single Nvidia A40 GPU and 256GB of RAM. Our QA experiments in §3.2 are run on a workstation with two Nvidia Quadro RTX 8000 GPUs and 128GB of RAM. Models We use HuggingFace implementations (Wolf et al., 2020) of GPT2-XL (1.5B) (Brown et al., 2020), T5-Base (220M), T5-Large (770M) and T5-XL (3B) (Raffel et al., 2020) in our experiments. We use the 'LM adapted' version of T5 models since they have been shown to work better for prompt-based learning (Lester et al., 2021). We report additional results with the OPT-30B model (Zhang et al., 2022) in §4.3. Hyperparameters For PROMPTRANK, we use a path length of H = 2 for all experiments. For pruning the search space we use K1 = 5 and L = 3. We use the TF-IDF index implemented by Asai et al. (2020) and initially retrieved F = 100 documents from TF-IDF. We truncate path documents to 230 tokens before constructing the prompt and limit the prompt length to 600 tokens. When using in-context demos, we use the maximum length of 1024 tokens. Metrics Retrieval performance is measured using both Recall (R@k) and Answer Recall (AR@k), with k ∈ {2, 10, 20}. R@k measures whether the two gold documents are present in the top-k retrieved documents and AR@k is the recall of the answer string in the top-k retrieved documents. For HotpotQA, we only compute AR over questions with span answers (we ignore yes/no and comparison questions). Since we do not have access to the HotpotQA test set, we report results on the original development set provided by Yang et al. (2018). Document Scores We compute document scores from path scores as follows. Similar to Das et al. (2019), we take a document score to be the *maximum* of all its path scores. We find this change to yield better recall than using path scores, with details elaborated in Appendix B. Instruction Search and Temperature For instruction search, we generate 200 different instructions as described in §2.2 using top-k sampling with k = 10. Then, we select the best instruction based on R@2 over our development set of 128 examples. The same process is used to select the optimal temperature value. Table A1 shows the best 10 instructions identified. Baselines We compare our reranker to the following baselines. **TF-IDF** retrieves top similar documents to the question using TF-IDF similarity and TF-IDF + BM25 adds an extra step where retrieved documents and their hyperlinks are reranked using | # Ex. | R@2 | R@10 | R@20 | AR@2 | AR@10 | AR@20 | | |----------------------------------------|-------|-----------|-----------|-----------|-----------|-----------|-----------| | Unsupervised Baselines TF-IDF | - | 9.9 | 27.6 | 35.0 | 37.6 | 53.8 | 60.2 | | TF-IDF + BM25 | - | 19.1 | 54.7 | 61.8 | 49.5 | 74.7 | 79.9 | | Fully-supervised Baselines DrKit | ~90K | 38.3 | 67.2 | 71.0 | - | - | - | | MDR | ~90K | 65.9 | 77.5 | 80.2 | - | - | - | | PathRetriever | ~90K | 66.4 | 77.8 | 78.7 | 82.2 | 90.5 | 90.5 | | PROMPTRANK, no ICL GPT2-XL† | - | 36.6 | 60.5 | 65.9 | 63.0 | 83.9 | 87.4 | | T5-XL† | - | 42.8 | 68.9 | 74.1 | 69.3 | 86.8 | 89.0 | | + best inst. | 128 | 47.8 | 71.4 | 76.0 | 74.0 | 87.9 | 89.7 | | + temp. scaling | 128 | 49.7 | 71.9 | 76.2 | 76.2 | 88.4 | 89.9 | | + inst. ensemble | 128 | 51.3 | 72.0 | 76.4 | 77.6 | 88.5 | 90.3 | | PROMPTRANK, with ICL T5-XL, Ndemos = 2 | 128 | 52.3 (.7) | 73.1 (.2) | 77.1 (.2) | 78.6 (.7) | 88.7 (.0) | 90.3 (.1) | | T5-XL, Ndemos = 8 | 128 | 54.5 (.7) | 73.6 (.3) | 76.9 (.1) | 79.1 (.6) | 89.0 (.1) | 90.5 (.0) | | T5-XL, Ndemos = 10 | 128 | 54.4 (.5) | 73.5 (.3) | 76.9 (.1) | 78.9 (.4) | 88.9 (.1) | 90.5 (.0) | BM25 (Robertson et al., 1995). **PathRetriever** (Asai et al., 2020) is a graph-based retriever trained to expand an initial pool of documents based on Wikipedia links and searches for the best path using beam search.7 **DrKIT** (Dhingra et al., 2020) is an end-to-end trained dense retrieval approach that starts from question entities and traverses a virtual knowledge base to find the relevant entities. Multihop Dense Retrieval **(MDR)** (Xiong et al., 2021) encodes the question and the documents retrieved by each step into a dense vector and uses maximum inner-product search (MIPS) to find the next hop. Below, we start with the evaluation of the zeroand few-shot reranking of PROMPTRANK (§3.1). Then, we move to evaluate downstream MQA performance in the few-shot setting (§3.2). ## 3.1 Retrieval Performance Table 1 shows the performance of PROMPTRANK and other comparisons in zero- and few-shot settings. Zero-shot Performance We start with discussing the retrieval performance of zero-shot PROMPTRANK on HotpotQA. First, we observe that simple TF-IDF performs poorly in terms of different recall metrics, while TF-IDF + BM25 7We run PathRetriever on HotpotQA with original hyperparameters except for an initial TF-IDF pool size=100 to allow for fair comparison to our approach. ![4_image_0.png](4_image_0.png) performs much better, yet still worse than fullysupervised approaches. Next, we look at the performance of the zero-shot PROMPTRANK (T5-XL) which uses no instructions, i.e., the prompt consists of only the document path. These models obtain better recalls than TF-IDF + BM25 and even outperform the fully-supervised DrKit. Although this approach does not use any labeled data, it is only 3.7 AR@10 points worse than PathRetriever, which is trained on ~90K examples. These findings demonstrate PROMPTRANK's effectiveness at reranking paths of documents. Few-shot Performance The zero-shot performance of PROMPTRANK can be further improved with access to a small set of labeled examples (in our case, we only used 128 examples from HotpotQA) for instruction search and finding temperature value. We observe a substantial boost of 11.6% (42.8 → 47.8) in R@2 of PROMPTRANK when using the *best instruction* found by instruction search. Furthermore, temperature scaling with T = 1.4 also provides a boost of 3.9% (47.8 → 49.7) points in R@2. We also observe that instruction ensembling gives a further performance boost, reaching 51.3 R@2 with PROMPTRANK. We show the performance of *max ensembling*, which we have found to perform better than mean ensembling in terms of R@2. We hypothesize that max ensembling computes an *upper bound* on the path scores, compensating for any underestimation of path scores that can happen when using a single instruction. In-context learning We experiment with an Nshot setting while making sure that the two demonstrations cover *both* question types in HotpotQA (bridge and comparison). Figure 3 shows that both R@2 and AR@2 improve as we use more demonstrations. With only 2 examples, we observe a large boost of 6.3% (49.2 → 52.3) in R@2. Since we cannot fit more than 2 demonstrations in the 1024 context window, we use demonstration ensembling (§2.3). For instance, 6-shot ensembling scores a path by combining 3 different contexts, each obtained using 2 demonstrations. We use max ensembling as it is found to work best. Figure 3 shows the in-context learning performance with a different number of demonstrations. We observe a steady increase in R@2 until N = 8. AR@2 also improves with more demonstrations but drops slightly with N = 10. Interestingly, demonstration ensembling has enabled us to leverage more examples than permitted by the context window size of T5-XL. We leave it to future work to study the applicability of this technique to other tasks. ## 3.2 Full Qa Performance We analyze the performance of PROMPTRANK when used as the retriever in a QA pipeline. We adopt an extractive reader model based on ELECTRA Large (Clark et al., 2020) with two heads to predict the start and end of the answer span. We use the checkpoint provided by Xiong et al. (2021), and the same inference setting. Details on the inference hyperparameters for the reader are in Appendix C.1. In Table 2, we compare the QA performance on | Retriever | EM | F1 | |-------------------------------------------|-----------|-----------| | Fully-supervised MDR (Xiong et al., 2021) | 62.3 | 75.1 | | Zero-shot TF-IDF | 39.6 | 49.4 | | PROMPTRANK, no inst | 55.7 | 67.7 | | Few-shot PROMPTRANK, (Ndemos = 2) | 57.8 (.1) | 70.0 (.1) | | PROMPTRANK, (Ndemos = 10) | 58.3 (.0) | 70.5 (.1) | Table 2: Answer EM and F1 on HotpotQA development set. PROMPTRANK results are aggregated over 3 runs with different demonstrations. We show metrics mean and (std). To allow for a fair comparison, only the retriever is varied over these systems while the reader module is the *same*. | Retriever | EM | F1 | |-----------------------------------|------|------| | DrKit (Dhingra et al., 2020) | 42.1 | 51.7 | | PathRetriever (Asai et al., 2020) | 60.0 | 73.0 | | MDR (Xiong et al., 2021) | 62.3 | 75.3 | | PROMPTRANK, (Ndemos = 2) | 58.1 | 71.1 | Table 3: Answer EM and F1 on HotpotQA test set. MDR and PROMPTRANK use the same ELECTRA reader, while other systems use different readers. HotpotQA **development set** with PROMPTRANK as the retriever against a *fully-supervised* retriever, namely MDR (Xiong et al., 2021) as well as unsupervised TF-IDF. PROMPTRANK with Ndemos = 10 is only 4.6 F1 points worse than MDR, which is using the *same* reader module. Table 3 shows performance on HotpotQA **test set** with different *fullysupervised* systems compared to PROMPTRANK (Ndemos = 2), where PROMPTRANK is only 1.9 and 4.2 EM points below PathRetriever and MDR, respectively. ## 4 Analysis 4.1 Comparison To Single-Hop Reranking The key idea behind our approach is to conduct joint reasoning with documents in the path using the LM, as opposed to reranking each document in the path separately (*single-hop reranking*). More specifically, in single-hop reranking, we expand paths using the same setup of PROMPTRANK but rerank each document d separately using p(q|τd), for a given document prompt τd. To assess whether our multi-hop reranking approach offers the advantage of global reasoning, we compare both approaches by running two experiments with identical settings except for how documents are reranked. For evaluation, we use a set of 4K questions from HotpotQA and T5-Large, and | Re-ranking | R@2 | R@10 | AR@2 | AR@10 | |--------------|-------|--------|--------|---------| | Single-hop | 22.8 | 52.0 | 54.9 | 73.8 | | Multi-hop | 46.9 | 67.6 | 75.4 | 87.9 | no instruction is used, i.e., the prompt only contains the document(s). Table 4 shows the retrieval performance of both approaches. Interestingly, a large gap in recall scores is observed between single-hop and multi-hop reranking. This supports our hypothesis that jointly considering multiple documents in the path helps the LM better model documents' relevance to the question. ## 4.2 Role Of Instruction Our goal here is to investigate (i) how useful is the presence of the instruction in the prompt, **(ii)** how much benefit (if any) automated instruction search provides over manual instructions, and **(iii)** whether the instruction's location in the prompt matters. To answer these questions, we analyze the recall over 200 different instructions generated using the method described in §2.2 and using 1K examples from HotpotQA with different LM sizes: T5-XL, T5-Large, and T5-Base, with results displayed in Figure 4. This analysis uses an initial set of TFIDF documents of size F = 30. Usefulness of Instruction We can see that using no instruction consistently yields poorer performance than using an instruction of any sort, across all variants of T5. Interestingly, without the instruction, the three model sizes have almost the same R@2. The difference in their performances becomes apparent when an instruction is added. Strikingly, in the no instruction case, T5-Large performs worse than T5-Base in terms of AR@2, showing that scaling does not consistently help recall when no instructions are used. This hints at the fact that instructions play a major role in harnessing the full power of LLMs, at least for our task. Benefit of Automated Instruction Search Next, we compare a human-written instruction against an instruction found through automated instruction search on a labeled set of 128 examples. The manual instruction we use is *"Please write a question* based on these passages.", which is used by Sachan ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) et al. (2022).8In Figure 4, we compare the recall when using these instructions. Interestingly, the search-based instruction outperforms the manual one in almost all cases. We also observe that the manual instruction performs poorly for AR@2 on T5-base, even worse than no instruction. These observations hint at the utility of automated instruction search for path reranking. However, it is worth noting that the best instruction on a relatively small held-out set will not necessarily generalize during test time: The search-based instruction produces AR@2 and R@2 that are almost the same or worse than the median instruction, respectively with T5- Large. Location of Instruction We study the performance of two different kinds of prompts, where the instruction appears *before* and *after* the path. Figure 5 shows the R@2 and AR@2 in both cases for T5 models of different sizes. We observe that placing the instruction after the path performs *consistently better* than placing before it, across all model variants. We hypothesize this to be an instance of the *recency bias* exhibited by LMs (Zhao et al., 2021b), *i.e.,* placing the instruction right before where the model is asked to generate the question better primes the LM for the task and produces better calibrated path scores. We expect such finding to generalize to other tasks where instruction-based prompting is used. ## 4.3 Choice Of Language Model Table 5 compares the reranking performance of GPT2-XL and OPT-30B (Zhang et al., 2022) mod-8We average recall of the two cases where the instruction falls before and after the path. See the next paragraph for more context. | Model | R@2 | R@10 | AR@2 | AR@10 | |---------|-------|--------|--------|---------| | OPT-30B | 36.9 | 65.4 | 61.0 | 82.0 | | GPT2-XL | 47.2 | 70.3 | 57.1 | 85.7 | Table 5: Document and answer recall of GPT2 and OPT models based on 1000 questions from HotpotQA. ![7_image_0.png](7_image_0.png) els. Despite having an order of magnitude more parameters, we observe that the OPT model is generally worse compared to the smaller GPT2-XL model. We suspect this is due to domain mismatch between pre-training data and task relevant data. Pre-training data of GPT2 models is potentially more biased towards Wikipedia data compared to the OPT models which are trained on more diverse data. Importantly, this shows that scaling up the language model doesn't necessarily guarantee better reranking performance and domain gap is an important consideration. ## 4.4 Further Analysis And Comparisons We further analyze the inference cost of PROMPTRANK compared to PathRetriever and MDR in Appendix D.2. In Appendix D.3, we study PROMPTRANK's recall sensitivity to document order in the prompt τc by comparing performance using two different document ordering schemes in the prompt. Lastly, we compare PROMPTRANK to few-shot PathRetriever and LOUVRE (Seonwoo et al., 2021) in Appendix D.4. ## 5 Related Work Multi-hop Retrieval The majority of approaches for multi-hop question answering rely on two main components: a retriever and a reader. The retriever component can be a sparse index or heuristic-based such as TF-IDF or BM25 (Chen et al., 2017; Nie et al., 2019) or dense index (Karpukhin et al., 2020; Xiong et al., 2021; Zhao et al., 2021a). Other approaches aimed to improve the retriever with an additional *reranking* step on top of a simple retriever (Wang et al., 2018; Lee et al., 2018; Htut et al., 2018). Asai et al. (2020) combined TF-IDF retriever with a recurrent graph retriever and used the reader module to rerank paths based on answer confidence. Qi et al. (2021) used a single transformer model to perform retrieval, reranking and reading in an iterative fashion. However, the good performance of previous work comes mainly from training on a large number of examples and is likely to fail in low-data settings. To treat this issue, Seonwoo et al. (2021) proposed to pretrain MDR (Xiong et al., 2021) on a large number of weakly-supervised examples of questions and the corresponding document paths. Although promising in low-data settings, their pretraining is computationally expensive as it is done on millions of examples. On the other hand, our approach requires no task-specific pretraining. Language Models Prompting Prompt-based learning aims to construct better inputs, *i.e.,* prompts to language models to elicit better zeroor few-shot performance (Brown et al., 2020; Liu et al., 2021). Recently, instruction tuning, where a language model is trained to follow natural language instruction, has shown impressive zero-shot performance on unseen tasks (Wei et al., 2021; Ouyang et al., 2022). In our work, we use instructions to guide to model toward assigning better scores to more relevant document paths. LM-based Reranking Our scoring function is related to query likelihood retrieval (Lavrenko and Croft, 2017; Ponte and Croft, 2017) and is in line with previous work that employed generative language models for passage reranking (Nogueira et al., 2020). dos Santos et al. (2020) performed single-hop reranking using question likelihood given the passage, but their setting was limited to fully-supervised, single-hop QA. Concurrent with our work is (Sachan et al., 2022), where the authors leverage LLMs for unsupervised passage reranking for QA. While their focus is on single passages, we study the reranking of multi-passage paths, which is more challenging. Moreover, their exploration of prompting is limited to a single manual instruction, whereas we provide an in-depth analysis of the effect of different prompting aspects on the recall such as instruction importance, location in the prompt, and manual vs. automated. ## Conclusion We introduced PROMPTRANK, a method to perform few-shot reranking of multi-document paths for multi-hop question answering based on large language models. Experiments on a standard multihop QA benchmark show the strong performance of PROMPTRANK in the few-shot setting compared to fully-supervised multi-hop reranking systems. Future avenues of exploration include combining PROMPTRANK with efficient tuning techniques such as prefix tuning and efficient strategies for instruction search. ## Limitations One limitation to LM-based reranking is the computational overhead involved in reranking paths. Our approach requires a forward pass through the LM to rerank each path, which can become expensive when using relatively large models such as GPT-3 or when dealing with more hop count that creates combinatorially more paths. Another limitation of PROMPTRANK is imposed by the transformer context window length. Since PROMPTRANK requires the prompt to include all path documents, it could be infeasible to fit all path documents into the prompt for paths with a larger hop count. A potential direction to workaround this is to condense or summarize the path documents beforehand. We leave it to future work to explore this and other techniques. ## Acknowledgements This work is supported by LG AI Research. Additionally, we would like to thank Sewon Min for providing feedback on the paper. We also thank the anonymous reviewers for their valuable suggestions. ## References RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs | OpenReview. Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over wikipedia graph for question answering. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1870–1879. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Rajarshi Das, Ameya Godbole, Manzil Zaheer, Shehzaad Dhuliawala, and Andrew McCallum. 2019. Chains-of-reasoning at textgraphs 2019 shared task: Reasoning over chains of facts for explainable multihop inference. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 101–117. Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 295–302. Association for Computational Linguistics. Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, and William W. Cohen. 2020. Differentiable reasoning over a virtual knowledge base. In *8th International* Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Cicero dos Santos, Xiaofei Ma, Ramesh Nallapati, Zhiheng Huang, and Bing Xiang. 2020. Beyond [cls] through ranking by generation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1722–1727. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting* of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3816–3830. Association for Computational Linguistics. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321–1330. PMLR. Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multihop QA dataset for comprehensive evaluation of reasoning steps. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 6609–6625, Barcelona, Spain (Online). International Committee on Computational Linguistics. Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competition: Why the highest probability answer isn't always right. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 7038–7051, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Phu Mon Htut, Samuel R. Bowman, and Kyunghyun Cho. 2018. Training a ranking function for opendomain question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 2-4, 2018, Student Research Workshop, pages 120–127. Association for Computational Linguistics. Gautier Izacard and Edouard Grave. 2021. Distilling knowledge from reader to retriever for question answering. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net. Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibration of language models for question answering. *Transactions of the Association for Computational Linguistics*, 9:962–977. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 7(3):535–547. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In *Proceedings of* the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6769–6781. Association for Computational Linguistics. Meelis Kull, Miquel Perello Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, and Peter Flach. 2019. Beyond temperature scaling: Obtaining wellcalibrated multi-class probabilities with dirichlet calibration. *Advances in neural information processing* systems, 32. Victor Lavrenko and W Bruce Croft. 2017. Relevancebased language models. In *ACM SIGIR Forum*, volume 51, pages 260–267. ACM New York, NY, USA. Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 565–569. Association for Computational Linguistics. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 3045– 3059. Association for Computational Linguistics. Shaobo Li, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Chengjie Sun, Zhenzhou Ji, and Bingquan Liu. 2021. Hopretriever: Retrieve hops over wikipedia to answer complex questions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13279–13287. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586. Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Revealing the importance of semantic retrieval for machine reading at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2553–2566. Association for Computational Linguistics. Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 708–718. Association for Computational Linguistics. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. *arXiv preprint* arXiv:2203.02155. Ethan Perez, Patrick S. H. Lewis, Wen-tau Yih, Kyunghyun Cho, and Douwe Kiela. 2020. Unsupervised question decomposition for question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8864– 8880. Association for Computational Linguistics. Jay M Ponte and W Bruce Croft. 2017. A language modeling approach to information retrieval. In ACM SIGIR Forum, volume 51, pages 202–208. ACM New York, NY, USA. Peng Qi, Haejun Lee, Tg Sido, and Christopher D Manning. 2021. Answering open-domain questions of varying reasoning steps from text. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3599–3614. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, and Amir Globerson. 2022. Learning to retrieve passages without supervision. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2687–2700, Seattle, United States. Association for Computational Linguistics. Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at trec-3. *Nist Special Publication Sp*, 109:109. Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer. 2022. Improving passage retrieval with zero-shot question generation. *arXiv preprint* arXiv:2204.07496. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207. Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 255–269. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2021b. Few-shot text generation with natural language instructions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 711 November, 2021, pages 390–402. Association for Computational Linguistics. Yeon Seonwoo, Sang-Woo Lee, Ji-Hoon Kim, JungWoo Ha, and Alice Oh. 2021. Weakly supervised pretraining for multi-hop retriever. In *Findings of the* Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of *Findings of ACL*, pages 694– 704. Association for Computational Linguistics. Devendra Singh, Siva Reddy, Will Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for opendomain question answering. In Advances in Neural Information Processing Systems, volume 34, pages 25968–25981. Curran Associates, Inc. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018. R 3: Reinforced ranker-reader for open-domain question answering. In *Thirty-Second AAAI Conference on* Artificial Intelligence. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. *arXiv preprint* arXiv:2109.01652. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45. Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick S. H. Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021. Answering complex opendomain questions with multi-hop dense retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2369–2380. Association for Computational Linguistics. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Chen Zhao, Chenyan Xiong, Jordan L. Boyd-Graber, and Hal Daumé III. 2021a. Multi-step reasoning over unstructured text with beam dense retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 4635–4641. Association for Computational Linguistics. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021b. Calibrate before use: Improving few-shot performance of language models. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 12697–12706. PMLR. ## A Instructions A.1 Best Instructions Table A1 shows the top 10 performing instructions found by instruction search (§2.2) based on R@2 and using T5-XL. ## A.2 Instruction Search The actual templates we feed T5 are "Task: <X> documents <Y> question based on them. Question:" and "Task: <X> *previous documents and* <Y> *question based on them. Question:"*. We have found using the phrase "based on them" to be essential in directing the model to generate sensible instructions. Otherwise, the model would generate something like "Read the documents in question..". However, we remove that phrase from the obtained instructions". ## B Document Scores It is not immediately obvious how to compute a final score for each document since PROMPTRANK is mainly used to score document. The main issue is that a document can fall on multiple paths at the same time (some of which could be incomplete or not fully expanded yet) and therefore could have multiple such scores. For example, assume a path A → B → C of consisting of the documents A, B, and C, respectively. Considering the document B, we see that two scores are associated with B: score of the subpath A → B and score of the full A → B → C path. To compute the final score of B, we could either just take the score of the longest path, or combine the two scores using mean, minimum, or maximum operations. What we found to work best compared to other alternatives is to take *maximum*, which is similar to what is done in (Das et al., 2019). We use this formulation when computing our recall metrics in §3.1. ## C Hyperparameters C.1 Electra Reader We use the same reader setting as in Xiong et al. (2021), where the top-100 retrieved paths are fed to the reader to obtain an answer from each path. Answers are then sorted based on a linear combination of path score and answer confidence, and the top answer is returned. We use the default hyperparameters for HotpotQA from (Xiong et al., 2021) in their codebase.9 We use a maximum path length of 512 tokens, maximum question length of 64, and answer length of 30. In their experiments, Xiong et al. (2021) combine the answer confidence along with a ranking score using linear interpolation with a hyperparameter λ. For our experiments, we use the path scores produced by PROMPTRANK instead and learn λ on a held-out development set. The value we end up using for λ is 0.9. ## D Further Results And Analysis D.1 Results On 2Wikimqa Table A2 shows the performance with few-shot PROMPTRANK with only one setting: few-shot, best instruction and with temperature scaling on the 2WikiMQA dataset (Ho et al., 2020). We compare it to two unsupervised baselines, namely TFIDF and TF-IDF+BM25 . PROMPTRANK is significantly outperforming the baselines while using only 128 examples for tuning the instruction and the temperature parameter. ## D.2 Inference Cost Here, we analyze inference cost in terms of latency per query. We run retrieval using each method over 100 queries and then compute the average time per query. Inference was run over a single Nvidia Quadro RTX 8000 GPU. We run each method with the maximum batch size that fits within the GPU. One parameter that highly affects the speed for both PathRetriever and MDR is the beam size. We use the default beam size for PathRetriever, which is 8, and we use a beam size of 5 for MDR, to closely match PROMPTRANK's pruning parameter K1 = 5. Other than beam size, we use the default parameters for each method. Table A3 shows the number of parameters of each method and the average time per query in seconds. First, we note that PROMPTRANK uses the most number of parameters since it is based on T5-XLwhile PathRetriever and MDR both rely on much smaller LMs such as BERT and RoBERTa. Interestingly, however, we can see that PROMPTRANK without ensembling has lower latency than PathRetriever, which is slowed down by the beam search process since it has to expand and encode outgoing links from each passage in the beam at each step. As expected, ensembling almost multiplies the latency of PROMPTRANK by 9https://github.com/facebookresearch/ multihop_dense_retrieval ID Prompt 1 Document: [D1] Document: [D2] , ..., Review previous documents and ask some question. Question: 2 Document: [D1] Document: [D2] , ..., Review the previous documents and answer question. Question: 3 Document: [D1] Document: [D2] , ..., Read the previous documents and write the following question. Question: 4 Document: [D1] Document: [D2] , ..., Search previous documents and ask the question. Question: 5 To analyze the documents and ask question. Document: [D1] Document: [D2] , ..., Question: 6 Document: [D1] Document: [D2] , ..., To read the previous documents and write a question. Question: 7 Document: [D1] Document: [D2] , ..., Read previous documents and write your exam question. Question: 8 Document: [D1] Document: [D2] , ..., Read the previous documents and ask this question. Question: 9 Read two documents and answer a question. Document: [D1] Document: [D2] , ..., Question: 10 Identify all documents and ask question. Document: [D1] Document: [D2] , ..., Question: | R@2 | R@10 | R@20 | AR@2 | AR@10 | AR@20 | | |-----------------------------------------------------|--------|--------|--------|---------|---------|------| | Unsupervised Baselines TF-IDF | 5.8 | 8.9 | 23.8 | 22.2 | 37.8 | 44.2 | | TF-IDF+BM25 | 7.6 | 34.1 | 45.8 | 20.7 | 44.9 | 53.0 | | PROMPTRANK, no ICL T5-XL, best inst., temp. scaling | 19.3 | 58.6 | 62.7 | 33.9 | 60.0 | 64.2 | Table A2: Retrieval performance on 2WikiMQA (Ho et al., 2020). the number of ensembles. Lastly, we note that MDR has significantly lower latency than both models, i.e., about 60x faster than PathRetriever and 46x than PROMPTRANK, which is mainly due to the fast implementation of the exact inner product search (Johnson et al., 2019). It is worth noting, however, that MDR requires an expensive indexing step where *every* document in the corpus (Wikipedia in our case) is encoded using the document encoder. PROMPTRANK, on the other hand, can work directly out-of-the-box without requiring such expensive indexing. ## D.3 Sensitivity To Document Order Here, study PROMPTRANK's recall sensitivity to the document order in the prompt τc by running a simple experiment comparing two document ordering schemes: **link-based** and **inverted link-based**. Link-based ordering is the standard approach used in PROMPTRANK, which orders the documents in | System | #params | Avg. query time(s) | |------------------|-----------|----------------------| | PathRetriever | 110M | 1.95 | | MDR | 125M | 0.03 | | PROMPTRANK | 3B | 1.38 | | PROMPTRANK (ens) | 3B | 5.22 | Table A3: Inference cost and number of parameters of three systems comparing PROMPTRANK to PathRetriever and MDR. Query time is obtained by averaging the time to process 100 queries. | Doc. ordering | R@2 | R@10 | AR@2 | AR@10 | |---------------------|-------|--------|--------|---------| | T5-Large Link-based | 44.9 | 66.9 | 73.6 | 88.0 | | Inverted | 44.5 | 67.7 | 72.6 | 87.8 | | T5-XL | | | | | | Link-based | 44.6 | 67.9 | 74.1 | 88.2 | | Inverted | 45.7 | 69.0 | 74.4 | 88.3 | the path based on their Wikipedia hyperlink traversal order. The inverted scheme, *reverses* the order of the documents in the prompt. No instruction is used for this experiment. Table A4 shows the retrieval performance with both orderings. Interestingly, reversing the order of the documents in the path does not seem to have a tangible effect on the reranking performance. While it is expected that p(q|τc) will change by reversing the document order in the prompt, it appears that the ranks of different paths remain almost unchanged, which explains why the recall is hardly affected. In other words, the path scores output by T5-XL does not appear to be sensitive to the document order prompt and can still. This might point to another benefit of LM-based path reranking: Since the performance is hardly affected by the document order, we do not have to worry about finding paths in the correct order (if such order exists) since the LM will still be able to assess the path relevance given different orders. ## D.4 Comparison To Few-Shot Systems So far, we have mainly compared PROMPTRANK to systems trained on many more examples. Here we compare PROMPTRANK to few-shot LOUVRE (Seonwoo et al., 2021) and PathRetriever (Asai et al., 2020). To this end, we train PathRetriever on N examples from HotpotQA for N ∈ 50, 100, 500, 1000 and compare its performance to PROMPTRANK (Ndemos = 10). Since we were unable to obtain good performance by fine-tuning LOUVRE on few examples, we directly compare to the results reported in their paper, where 1% of training data is used (~90 examples). Table A5 shows performance of both few-shot systems compared to PROMPTRANK. While PathRetriever's performance improves as we add more examples, we can see that it is much less data efficient than PROMPTRANK. Even with 1K examples i.e., around 10x more data than PROMPTRANK, it performs significantly worse across all metrics. We also observe that PROMPTRANK performs better than LOUVRE in terms of R@2 and AR@2 (more than 6 points better) and very close with respect to other metrics even though PROMPTRANK does not involve any (pre)training. | Approach | # Ex | R@2 | R@10 | AR@2 AR@10 | | |-------------------------------------------------|------------------------------------------------------------------------------------------------|----------------------------------|--------|--------------|----| | 50 | 7.1 (4.4) | 14.5 (4.6) 29.5 (6.2) 40.0 (3.7) | | | | | 100 10.8 (1.1) 19.1 (0.3) 34.8 (1.5) 43.1 (0.6) | | | | | | | PathRetriever | 500 15.7 (0.3) 22.4 (0.3) 40.4 (0.3) 46.4 (0.4) 1K 17.7 (0.4) 23.6 (0.3) 41.8 (0.3) 47.1 (0.3) | | | | | | LOUVRE | 1% | 53.5 | 75.5 | 72.3 | - | | PROMPTRANK | 128 54.4 | 73.5 | 78.9 | 88.9 | | | (Ndemos = 10) | | | | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
ma-etal-2023-dice
{DICE}: Data-Efficient Clinical Event Extraction with Generative Models
https://aclanthology.org/2023.acl-long.886
Event extraction for the clinical domain is an under-explored research area. The lack of training data along with the high volume of domain-specific terminologies with vague entity boundaries makes the task especially challenging. In this paper, we introduce DICE, a robust and data-efficient generative model for clinical event extraction. DICE frames event extraction as a conditional generation problem and introduces a contrastive learning objective to accurately decide the boundaries of biomedical mentions. DICE also trains an auxiliary mention identification task jointly with event extraction tasks to better identify entity mention boundaries, and further introduces special markers to incorporate identified entity mentions as trigger and argument candidates for their respective tasks. To benchmark clinical event extraction, we compose MACCROBAT-EE, the first clinical event extraction dataset with argument annotation, based on an existing clinical information extraction dataset MACCROBAT. Our experiments demonstrate state-of-the-art performances of DICE for clinical and news domain event extraction, especially under low data settings.
# Dice: Data-Efficient Clinical Event Extraction With Generative Models Mingyu Derek Ma∗ Alexander K. Taylor∗ **Wei Wang Nanyun Peng** Computer Science Department University of California, Los Angeles {ma, ataylor2, weiwang, violetpeng}@cs.ucla.edu ## A 45 - Year - Old Lady Sought Dermatology Consultation For Severely Tender Erythematous Vesicles And Bullae Over Back , Chest And Arms . Sign Symptom Detailed Description Texture Biological Structure Biological Structure Biological Structure Abstract Event extraction for the clinical domain is an under-explored research area. The lack of training data along with the high volume of domainspecific terminologies with vague entity boundaries makes the task especially challenging. In this paper, we introduce DICE, a robust and data-efficient generative model for clinical event extraction. DICE frames event extraction as a conditional generation problem and introduces a contrastive learning objective to accurately decide the boundaries of biomedical mentions. DICE also trains an auxiliary mention identification task jointly with event extraction tasks to better identify entity mention boundaries, and further introduces special markers to incorporate identified entity mentions as trigger and argument candidates for their respective tasks. To benchmark clinical event extraction, we compose MACCROBAT-EE, the first clinical event extraction dataset with argument annotation, based on an existing clinical information extraction dataset, MACCROBAT (Caufield et al., 2019). Our experiments demonstrate state-of-the-art performances of DICE for clinical and news domain event extraction, especially under low data settings. ## 1 Introduction Event extraction (EE) is an information extraction task that aims to identify event triggers and arguments from unstructured texts (Ahn, 2006). The EE task consists of two subtasks: 1) event detection, in which the model extracts trigger text and predicts the event type; and 2) event argument extraction, in which the model extracts argument text and predicts the role of each argument given an event trigger and associated event type. Clinical EE aims to extract clinical events, which are occurrences at specific points in time during a clinical process, such as diagnostic procedures, symptoms, etc. The arguments for such events are ∗Equal contribution. | Sign_symptom | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | A man presented with an abnormal nodule measuring 0.8 x 1.5 cm in the left upper lung lobe imaged through chest computed tomography scanning. Diagnostic_procedure Event trigger nodule Event type Sign_symptom Detailed description abnormal Area 0.8 x 1.5 cm Biological structure left upper lung lobe Event trigger computed tomography Event type Diagnostic_ procedure Biological structure chest | Figure 1: Illustration of a SIGN_SYMPTOM event triggered by "nodule" with multiple arguments including an AREA argument "0.8x1.5cm", and a DIAGNOSTIC_PROCEDURE event whose predicate is "computed tomography" described by argument "chest" of role BIOLOGICAL_STRUCTURE. entities that modify or describe properties of these events (Caufield et al., 2019). Figure 1 shows an example sentence with two clinical events. The overwhelming volume and details of clinical information necessitate clinical EE, which benefits many downstream tasks such as adverse medical event detection (Rochefort et al., 2015), drug discovery (Wang et al., 2009), clinical workflow optimization (Hsu et al., 2016), and automated clinical decision support (Yadav et al., 2013). However, there are several non-trivial challenges of clinical EE compared to general domain EE. First, most triggers and arguments of clinical events consist of domain-specific terms that are more than 50% longer than the general domain on average, as shown in Table 1, and have vague boundaries because most clinical mentions1contain several descriptors. For instance, given the text span "massive heart attack", "heart attack" should be identified as the trigger (instead of "massive heart attack" or "attack") because it refers to a specific condition, and "massive" is an argument of the role type SEVER-ITY. However, when we consider "right common carotid artery", the entire text span describes a biological structure, and thus it functions as an argument of the role type BIOLOGICAL_STRUCTURE despite "right" and "common" being descriptors 1Clinical mentions are defined as meaningful text spans of occurrences or their properties (Caufield et al., 2019). 15898 for "carotid artery". The second challenge is the diversity and density of clinical arguments: there are on average 10 unique argument roles for each clinical event type compared to 3.7 in the general domain. Finally, it is challenging to obtain highquality annotated data for clinical events due to both patient privacy concerns and the cost of expert annotations. Due to these challenges, there have been no clinical EE datasets with argument annotations to the best of our knowledge. In this paper, we present DICE, a Data-effIcient generative model for Clinical Event extraction.2 We build upon existing prompt-based generative event extraction models to formulate EE as a sequence-to-sequence text generation task (Hsu et al., 2022; Ma et al., 2023b). To handle the special challenges of clinical EE, DICE 1) introduces a mention identification-enhanced EE model, which specializes in clinical mention identification by performing contrastive learning to distinguish correct mentions from the ones with perturbed mention boundary, training an auxiliary mention identification module to learn implicit mention properties, and adding explicit mention markers to hint mention boundaries; 2) performs independent queries for each argument role to better handle long-tail argument roles. To address the training data availability issue, we introduce MACCROBAT-EE, the first clinical event extraction dataset with argument information, which we derive from clinical experts' annotation on PubMed clinical case reports. We benchmark DICE on MACCROBAT-EE against several recent event extraction models. Experiments show that DICE achieves state-of-the-art clinical event extraction results on MACCROBATEE, and we observe a larger performance gain under low-resource settings. Moreover, DICE also achieves better performance on the ACE05 dataset, demonstrating its generalizability to other domains. Our contributions are threefold: 1) We develop DICE, a mention-enhanced clinical event extraction model that better identifies mention boundaries and is scalable to many argument roles; 2) We construct the first clinical event extraction dataset with argument annotations; 3) Our model achieves stateof-the-art performance on clinical and news EE and demonstrates more significant performance gains under low-resource settings. 2Please refer to https://derek.ma/DICE for code and data. ## 2 Related Works 2.1 General Domain Event Extraction Many prior works formulate EE as token-level classification tasks and trained in an ED-EAE pipeline-style (Wadden et al., 2019; Yang et al., 2019; Ma et al., 2021b) or optimized jointly (Li et al., 2013; Yang and Mitchell, 2016; Lin et al., 2020; Nguyen et al., 2022a). Recent work formulates the EE task as text generation with transformer-based pre-trained language models that prompt the generative model to fill in synthetic (Paolini et al., 2021; Huang et al., 2021; Lu et al., 2021; Li et al., 2021) or natural language templates (Huang et al., 2022; Hsu et al., 2022; Ma et al., 2022; Ye et al., 2022). These generative EE models are not optimized to handle complicated domain-specific mentions. To our knowledge, there is no existing approach to clinical EE using a text generation formulation, which we hypothesize is due to both data unavailability and to the aforementioned domain challenges. ## 2.2 Event Extraction In Biomedical Domain Biomedical EE is a type of biomedical IE tasks (Soysal et al., 2017; Fu et al., 2020; Xu et al., 2023). Existing approaches to biomedical EE (Huang et al., 2020; Trieu et al., 2020; Wadden et al., 2019; Ramponi et al., 2020; Wang et al., 2020) typically focus on extracting interactions or relationships between biological components such as proteins, genes, drugs, diseases and outcomes related to these interactions (Ananiadou et al., 2010). The mentions in these biological component interactions are short, distinctive biomedical terms and do not have rich event type-argument role ontologies because of the lack of interaction types present in the datasets (Ohta et al., 2011; Kim et al., 2011, 2013; Pyysalo et al., 2011, 2012). Li et al. (2020) develop a clinical event extraction model, but it only handles single-word events without considering arguments (Bethard et al., 2016). Our work addresses these concerns by introducing MACCROBAT-EE as well as providing a benchmark in a previously under-explored domain. ## 3 Clinical Domain Event Extraction 3.1 Task Formulation We follow the framework of prior works that decomposes the EE task into Event Detection (ED) and Event Argument Extraction (EAE), while introducing our novel Mention Identification module as an auxiliary task performed alongside both the ED and EAE modules. ED subtask takes a sentence (passage) as input to extract event triggers and predict event types. The trigger must be a sub-sequence of the passage and the event type must be one of the nevent_*type* pre-defined types. The EAE subtask takes a tuple of (passage, event trigger, event type), and extracts arguments from passage and predicts the argument role. Each event type holds a pool of n event_*type* arg_*role* argument roles as defined in the event ontology. ## 3.2 The Maccrobat**-Ee Dataset** Due to high annotation costs and privacy concerns, dataset availability is a primary bottleneck for clinical EE. We propose a repurposing of an existing expert-annotated dataset, MACCROBAT (Caufield et al., 2019),3to compose a clinical EE benchmark, MACCROBAT-EE. The MACCROBAT dataset consists of 200 pairs of English clinical case reports from PubMed accompanying annotation files with partial event annotation provided by 6 annotators with prior experience in biomedical annotations. To our knowledge, this is the only openly accessible collection of clinical case reports annotated for entities and relations by human experts. Following existing sentence-level EE works (Lin et al., 2020), we construct an event extraction dataset with full event structure, MACCROBAT-EE, which contains annotated span information for entities, *event triggers*, event types, *event arguments* and *argument roles* for each sentence. Mentions are defined as meaningful text spans of occurrences and their properties (Caufield et al., 2019). We include all tagged mentions in MACCROBAT as *entities*, and further specify that mentions tagged as events and their respective types are included as *event triggers* and event types. To infer event arguments and their roles, which are not provided in MACCROBAT, we consider nonevent entities that hold a MODIFY relation with event triggers as arguments, and we use the assigned entity types as argument roles. We infer arguments via the MODIFY relation because its definition of an entity modifying an event matches well with the argument definition of further characterizing the properties of an event as shown in Appendix B.2. The entity type in MACCROBAT defines a type of fine-grained physical or procedure property, which matches the argument role definition of being a type of participant or attribute of an event. We traverse all (event type, argument role) pairs to obtain the argument roles possible for each event type to create an event ontology, as shown in Appendix B.3. The definitions of each event type and argument role written by clinical experts are provided. ## 3.3 Data Statistics ![2_image_0.png](2_image_0.png) In Table 1, we show the statistics for MACCROBAT-EE as well as the comparable values for two widely-used EE datasets, ACE05 (Doddington et al., 2004) and ERE-EN (Song et al., 2015). MACCROBAT-EE differs from general-domain EE datasets because it contains fewer sentences and the average occurrences of entities, triggers, and arguments per sentence are significantly higher. Note that the average length of the entities in MACCROBAT-EE is significantly longer. Besides single-span entities, there are also nested and discontinuous entities used as event arguments in MACCROBAT-EE. This demonstrates that MACCROBAT-EE fills a different niche than ACE05 and ERE-EN and provides a valuable benchmark for EE under a clinical setting with high mention density, and allowing for future work to adapt clinical case report domain-specific features. ## 3.4 Human Verification We conduct a human annotation to examine the coverage of the induced arguments and the correctness of their roles. Arguments and their roles in 96% out of 100 randomly sampled events are considered comprehensive and appropriate by both of the two annotators with consensus. ![3_image_0.png](3_image_0.png) ## 4 The Dice **Event Extraction Model** We formulate EE as a conditional generation task, so that we can incorporate domain knowledge such as event type and argument role definitions via natural language in the input prompt. To tackle the challenges of clinical EE, we 1) further enhance the EE model's specialization in mention identification by techniques introduced in §4.2 to handle long clinical mentions with vague boundaries; and 2) perform an independent query for each event type/argument role for better long-tail performance in settings with many event types/argument roles as introduced in §4.1. Figure 2 shows the model design. ## 4.1 Seq2Seq Components There are three components: 1) Mention Identification (MI) which identifies the candidate pool of event triggers or event arguments, 2) Event Detection (ED) which extracts event triggers and predicts event types, and 3) Event Argument Extraction (EAE) which extracts arguments and predicts argument roles. We integrate these components to form the MI-ED-EAE pipeline (details in §4.3). We use pre-trained text generation model T5-large (Raffel et al., 2020) as the backbone LM. The input is a natural language sequence consisting of the original input *passage* and *prompt*. We design input-output formats with shared common elements across different tasks to enable synergistic joint optimization, as all three modules aim to generate a sub-sequence of the input passage. Mention Identification (MI). To better align the MI task with the ED and EAE tasks, the MI module extracts all mentions that are candidate event triggers or arguments from the input passage. The input is the *passage* and the output includes all trigger or argument candidates in the input passage separated by a special token "[SEP]" following the prefix "Mentions are". If there are no mentions, a placeholder is generated (i.e. "Mentions are <mention>"). We extract mentions by inputting the entire passage as well as sentence segments selected by a sliding window with a size of a few words, which enables shorter outputs and higher mention coverage. We enforce the condition that the order of output mentions match the order of their appearance in the passage. This consistency helps the generative model to learn its expected behavior as well as allows for prior mention predictions to inform subsequent mention predictions. We keep the full passages in addition to the sliced sub-sequence during both training and inference to ensure the longer dependencies are captured. Event Detection (ED). The ED module extracts event triggers from the passage. For a given passage, we construct nevent_*type* queries. For each query, we input the concatenation of *passage* and the following *prompt* segments: *event type name* and *event type description*. The output of the ED task is the concatenation of the event trigger texts predicted for the queried event type separated by a special token "[SEP]", following the prefix "Event triggers are". When there is no valid trigger for the queried event type (which are considered to be negative samples), a special placeholder is generated (i.e. "Event triggers are <trigger>"). The balance between positive and negative samples is a hyperparameter that may be tuned for a better precisionrecall trade-off. We decode the output sequence and obtain a list of (event type, trigger) pairs. Event Argument Extraction (EAE). The EAE module extracts event arguments from queries consisting of the input passage, a given role type, and a pair consisting of an event trigger and its event type. We perform n event_*type* arg_*roles* queries to extract arguments corresponding to each potential argument role where n event_*type* arg_*roles* is the number of unique argument roles for a certain event type. The input sequence contains passage, *event type name*, and event type description segments in addition to: - *Trigger markers* which are special tokens (i.e. "<trigger>" and "</trigger>") to wrap trigger text to explicitly provide the trigger position - *Trigger phrase* such as "Event trigger is plaque" - *Argument role name* for the queried argument role, such as "Argument role is Severity" - *Argument role description* The expected output begins with a reiteration (Ma et al., 2023a) of the querying argument role (e.g. "Severity is") followed by the concatenated predicted argument texts or a placeholder ("<argument>") if there are no valid predictions. ## 4.2 Mention Identification Enhanced Ee We propose techniques to enhance the generative model's ability to accurately identify long mentions with vague boundaries: 1) contrastive learning with instances of perturbed mention boundaries, 2) explicit boundary hints with markers and 3) implicit joint mention representation learning. Contrastive learning with mention boundary perturbation. Understanding the role of mention descriptors and distinguishing the subtle boundary difference are not specifically optimized during pre-training or fine-tuning with the text generation objective. We propose to create such a task and train the model specifically to recognize the mention with the correct boundaries from a pool of mentions with similar but shifted boundaries. Following the seq2seq formulation introduced in §4.1, we construct N input-output sequence pairs ⟨ini*, out*i⟩ where the input sequence ini consists of passage and prompt, and the gold output outi contains the ground-truth mentions, triggers or arguments for MI, ED or EAE respectively. For a certain input ini, we consider the ground-truth output outi as a positive output (e.g. "Mentions are ... right common carotid artery"). We create the k negative instances (i.e. n 1 i , ..., nk i ) of ini by perturbing the left and right boundaries of mentions in outito add/remove words (e.g. removing "right", removing "artery", or adding "the" before "right" etc.). We create the negative instances by perturbing output sequences without changing the input, and the contrastive learning objective applies to MI, ED and EAE. This results in a group of instances for iniincluding both positive and negative instances: Xi = {⟨outi*, in*i⟩, ⟨n 1 i , ini⟩ *, . . . ,* ⟨n k i , ini⟩}. Applying the process, we obtain instance groups for all input-output pairs X = {X1*, . . . ,* XN }. We use cross-entropy loss LCE to learn to generate the correct output outi given input ini. We introduce an InfoNCE loss (Oord et al., 2018) to learn to identify the positive output (items in the numerator) from a pool of output candidates with mention boundary perturbations (items in the denominator) (Ma et al., 2021a; Meng et al., 2021; Shen et al., 2020): $$\mathcal{L}_{N}=\frac{1}{\left|\mathbb{X}\right|}\sum_{\mathbf{X}_{i}\in\mathbb{X}}\left[\log\frac{f\left(out_{i},in_{i}\right)}{\sum_{\left\{n_{i}^{j},in_{i}\right\}\in\mathbf{X}_{i}\ f\left(n_{i}^{j},in_{i}\right)}\right]}\right]$$ where $j\in[0,1,2,...,k]$ and $n_{i}^{0}$ is the positive out where j ∈ [0, 1, 2*, ..., k*] and n i is the positive output outi. We define the function f (*s, in*i) as the probability of generating a sequence s given input ini, which is estimated by multiplying logits for each token of the output produced by the decoder under the teacher-forcing paradigm while iniis fed to the encoder. This estimation is normalized by the output length and produces the output value of f (*s, in*i). We combine the two losses into the final objective L(Θ) = LCE + LN . Explicit mention marker. Wrapping key spans with special token markers provides beneficial hints to the generative model that improve its understanding of how the components of the sentence are associated syntactically. We wrap trigger or argument mentions for the ED and EAE tasks, respectively, to provide a candidate pool for the identification task. To minimize the impact of error propagation of the imperfect MI module on downstream tasks, we consider two conditions: 1) the ED/EAE modules with markers must be robust enough to handle the compromised precision and incomplete coverage of the gold mentions and 2) the granularity of the candidate pool must not be too coarse or too fine. To address the first concern, we generate two versions of the data: one with mention markers and one with no markers, and train the ED/EAE module over the augmented data. This trains the model to be robust in cases where the MI module provides imprecise predictions. The second concern stems from the too broad a candidate pool making the markers less informative and too strict a candidate pool making it difficult for the MI module to correctly identify mentions. To account for this issue, we use trigger mentions for the ED task and argument mentions for the EAE task as candidate pools as opposed to using words of a certain part-of-speech or named entities type. The unique properties of triggers (describing an entire process or behavior that can be linked to a specific time) and arguments (concrete details or descriptive content) make them more useful as candidate sets. MI as an implicit auxiliary task. Existing works include a named-entity recognition task to provide additional supervision signals for EE (Zhao et al., 2019; Zhang et al., 2019; Sun et al., 2020; Wadden et al., 2019) for other formulations except for generative models. Since we design all three extraction tasks (ED, EAE and MI) as generation tasks, and ED and EAE can be considered as special MI with certain interest focus, identifying mentions is a synergistic capability contributing to performing ED and EAE. Thus, we add trigger MI and argument MI as auxiliary tasks to jointly optimize with the ED and EAE tasks, respectively. ## 4.3 Training And Inference Schedule sampling. To gently bridge the discrepancy between gold and predicted upstream results (ED results passed to EAE, trigger/argument MI results passed to ED/EAE), we adopt the scheduled sampling technique to perform curriculum learning (Bengio et al., 2015). We force the downstream model to deal with imperfect upstream results gradually by decaying the upstream results from the gold ones to the predicted ones linearly. We perform the decay at the beginning of each epoch. Training. We first train standalone trigger and argument MI modules to provide mention candidates. We then train ED+MI joint model and EAE+MI joint model with auxiliary trigger and argument MI modules respectively. We also add markers around trigger/argument mention candidates. For efficient training, the model uses downsampled negative instances (i.e. instances with mismatched trigger/argument and event type/argument role). Inference. We use the trigger and argument mention markers produced by the standalone trigger and argument MI modules in the downstream ED+MI and EAE+MI joint models. The event triggers and their types predicted by the ED+MI joint model are provided as input to the EAE+MI joint model in a pipeline fashion. ## 5 Experiments In The Clinical Domain We evaluate DICE on MACCROBAT-EE and compare it with existing event extraction models. ## 5.1 Experimental Setup Data splits. We divide the 200 MACCROBAT-EE documents according to an 80%/10%/10% split for the training, validation, and testing sets, respectively. For low-resource settings, we consider 10%, 25%, 50%, and 75% of the number of *documents* used to build the training dataset while retaining the original validation and testing sets for evaluation. Evaluation metrics. We follow previous EE works and report precision, recall and F1 scores for the following four tasks. 1) Trigger Identification: identified trigger span is correct. 2) Trigger Classification: identified trigger is correct and its predicted *event type* is correct. 3) Argument Identification: identified argument span is correct. 4) Argument Classification: identified event argument is correct and its predicted *argument role* is also correct. Variants. We term two variants of our model. We refer to pipelined ED and EAE modules *without* the mention enhancement techniques described in §4.2, *with* long-tail argument handling and text generation cross-entropy loss only as Vanilla DICE, and the full model as DICE. 4 Baselines. We benchmark the performance of the recent EE models on MACCROBAT-EE, including: **Text2Event** (Lu et al., 2021): a sequenceto-structure model that converts the input passage to a trie data structure to retrieve event arguments; OneIE (Li et al., 2013): a multi-task EE model trained with global features;5and **DEGREE** (Hsu et al., 2022): a prompt-based generative model that consists of distinct ED and EAE modules 4We show hyperparameters, implementation and baseline reproduction details in Appendix D. 5Note that additional entity annotation is used during training, while it is not used in other models. | Trigger | Argument | | | | | | | | | | | | | |-----------|--------------|----------------|----------------|----------------|----------------|-------|--------|-------|-------|--------|-------|-------|-------| | # | Model | Identification | Classification | Identification | Classification | | | | | | | | | | Prec. | Recall | F1 | Prec. | Recall | F1 | Prec. | Recall | F1 | Prec. | Recall | F1 | | | | 1 | Text2Event | - | - | - | 66.64 | 60.57 | 63.46 | - | - | - | 55.29 | 47.89 | 51.33 | | 2 | OneIE | 74.60 | 74.93 | 74.77 | 68.74 | 68.96 | 68.85 | 48.99 | 52.59 | 50.72 | 39.82 | 42.95 | 41.32 | | 3 | DEGREE | 71.91 | 66.33 | 69.01 | 67.59 | 62.59 | 65.00 | 46.84 | 24.31 | 32.02 | 44.75 | 23.23 | 30.58 | | 4 | Vanilla DICE | 65.03 | 74.08 | 69.26 | 60.51 | 70.28 | 65.03 | 49.10 | 53.60 | 51.25 | 45.95 | 50.76 | 48.24 | | 5 | DICE | 73.53 | 76.98 | 75.22 | 68.12 | 72.97 | 70.46 | 55.41 | 57.87 | 56.61 | 53.02 | 55.03 | 54.01 | | Trigger | Argument | | | | | | | | | | | | | |------------------------------------|----------------------------|----------------|----------------|----------------|-------|-------|--------|-------|-------|--------|-------|-------|-------| | # Mention-enhancing techniques | Identification | Classification | Identification | Classification | | | | | | | | | | | Prec. | Recall | F1 | Prec. | Recall | F1 | Prec. | Recall | F1 | Prec. | Recall | F1 | | | | 1 | Vanilla DICE | 65.03 | 74.08 | 69.26 | 60.51 | 70.28 | 65.03 | 70.76 | 76.48 | 73.51 | 66.47 | 72.71 | 69.45 | | 2 | Vanilla w/ aux. task | 69.54 | 74.59 | 71.98 | 65.02 | 71.00 | 67.88 | 73.24 | 76.48 | 74.83 | 68.31 | 73.03 | 70.59 | | 3 | Vanilla w/ marker | 72.91 | 70.71 | 71.79 | 68.58 | 67.70 | 68.14 | 74.27 | 76.91 | 75.57 | 69.66 | 72.82 | 71.20 | | 4 Vanilla w/ contrastive | 70.02 | 75.12 | 72.48 | 66.93 | 72.04 | 69.39 | 73.86 | 77.41 | 75.59 | 69.92 | 72.89 | 71.37 | | | 5 Vanilla w/ all three (Full DICE) | 73.53 | 76.98 | 75.22 | 68.12 | 72.97 | 70.46 | 75.73 | 77.62 | 76.66 | 71.14 | 73.91 | 72.50 | | | 6 | Vanilla w/ perfect marker† | 97.04 | 94.11 | 95.55 | 85.23 | 88.66 | 86.91 | 91.91 | 90.72 | 91.31 | 81.71 | 86.73 | 84.14 | that fill in event type-specific human written templates. To adapt DEGREE to the new dataset, we create the ED/EAE templates by concatenating event type/argument role phrases (e.g. "Biological_structure is artery"). ## 5.2 Overall Ed And Eae Results We show the superiority of DICE in both highresource and low-resource settings. High-resource results. Table 2 shows the results for high-resource settings. Among the baselines, OneIE and Text2Event achieve the best F1 score on trigger extraction and argument extraction respectively. DEGREE reports low performance on the argument extraction task due to the challenges of generating long sequences containing all argument roles. DICE outperforms the baselines on *both* trigger and argument extraction tasks, with 2.7 points F1 score improvements for argument classification. Low-resource results. We show the results of training in lower-resource settings in Figure 3 and Appendix C.3. We observe that DICE outperforms all baselines on all four tasks under all low-resource settings. The performance gap between DICE and the baselines increases in the lower training data percentage settings. In the argument classification task, DICE outperforms Text2Event by more than 8 (10%) and 9 (25%) points in F1 score. ![6_image_0.png](6_image_0.png) 10% 25% 50% 75% 100% ![6_image_2.png](6_image_2.png) ![6_image_3.png](6_image_3.png) 10% 25% 50% 75% 100% ![6_image_1.png](6_image_1.png) ## 5.3 Ablation Studies We show ablation studies about mention-enhancing techniques and MI module design in this section and more studies about input prompt segments and formulation in Appendix C.2. Mention-enhancing techniques. We analyze the effects of the proposed mention-enhancing techniques in Table 3. We observe contrastive learning, auxiliary task, and mention markers contribute improvements of 1.92, 1.14 and 1.75 in the F1 score on argument classification, respectively. We observe that DICE improves over vanilla DICE by 5.43 and 3.05 in the F1 score for trigger and argument classification, respectively. We include an oracle setting on Line 6 that provides ground-truth mention markers during inference to illustrate the influence of the accuracy of the MI module. Halluci Miss ![7_image_1.png](7_image_1.png) MI module design. We compare our MI module with the representative of sequence tagging model OneIE, which produces BIO label for each input token, and state-of-the-art generative named-entity recognition model Yan et al. (2021), which generates token indexes, on the entity identification task. We report the performance in Table 4. The results show that the sliding window technique significantly improves recall (Line 5 vs 3) and contrastive learning improves overall performance (Line 5 vs 4). Our MI module outperforms all baselines and achieves the best F1 score. ## 5.4 Error Analysis We analyze the errors propagated through the 4 steps in the pipeline for DICE using predicted triggers on the argument classification task which shows the culmination of the errors propagated through the pipeline. The results in Figure 4a indicate that the identification sub-tasks, especially trigger identification, are the performance bottlenecks. We further break identification errors into three types: 1) complete miss: the predicted span has no overlap with the ground-truth span; 2) partial miss: the predicted span is a subset of the ground-truth span; 3) hallucination: the predicted span partially overlaps with the ground-truth span, but also incorrectly includes additional tokens. As shown in Figure 4a, the majority of errors produced by the trigger identification step are complete misses, ![7_image_0.png](7_image_0.png) whereas argument identification suffers from both partial and complete misses. We also observe that the left boundaries of the trigger and argument spans are more difficult to identify as 76% of partial misses and 69% of hallucinations correctly identify the right boundary but miss the left boundary. This can be explained by that the dominant word of the entity is typically on the rightmost (e.g. "attack" in "heart attack"), whereas the left boundary requires separating the target entity from its descriptors (e.g. "massive heart attack"). We further compare the error types between the vanilla DICE and full version of DICE with mention identification enhancement techniques in Figure 4b. We observe that DICE produces fewer error cases across all error types in both trigger and argument identification steps, which supports our assertion that our mention identification enhancement techniques improve the identification of mentions with vague boundaries. 0.2 0.2 overall 0 overall 0 ## 5.5 Qualitative Analysis To identify challenges for future works, we summarize 4 types of common errors made by DICE and show examples in Table 5. In the first example, the MI module of DICE only identifies a subsequence of the true mention (e.g., "hearing loss" vs. "bilateral sensorineural high-frequency hearing loss"), leading to a partial miss that shows the 1 Task: ED **Passage**: An {audiology evaluation} showed {severe} {bilateral} {sensorineural} {high-frequency} {hearing loss} ( {-70 dB} ). Ground-truth: (bilateral sensorineural high-frequency hearing loss, Sign_symptom) Pred. of DICE: (hearing loss, Sign_symptom) 2 Task: EAE **Passage**: The patient underwent a {resection} of the { 15 cm segment IVb } mass [SIGN_SYMPTOM] in {June 2010} . Ground-truth: (15 cm, Distance), (segment IVb, Biological_structure) Pred. of DICE: (15 cm segment IVb, Biological_structure) 3 Task: ED **Passage**: Core biopsies from the {breast lump} showed {ductal carcinoma} in situ (sample labelled P1.1). Ground-truth: (biopsies, Diagnostic_procedure), (ductal carcinoma, Disease_disorder) Pred. of DICE: *None* 4 Task: EAE **Passage**: Serum total bilirubin and {tumor markers} , carcinoembryonic antigen [DIAGNOSTIC_PROCEDURE] ( {CEA} ) and carbohydrate antigen 19-9 [DIAGNOSTIC_PROCEDURE] ( {CA19-9} ), were all {within {normal ranges}} . Ground-truth: *None* Pred. of DICE: (within normal ranges, Lab_value) was predicted as the argument for both events. ED module mistakenly includes incorrect descriptors. In the second example, DICE hallucinates that a DISTANCE descriptor "15 cm" is part of the BIOLOGICAL_STRUCTURE "segment IVb", which indicates that the EAE module struggles to separate mention boundaries. In the third example, the first event "biopsies" is missed by both the ED module and the MI module. However, despite the MI module correctly identifying "ductal carcinoma" as a mention, the ED module does not identify it as an event trigger. In the fourth example, DICE identifies "within normal ranges" as the LAB_VALUE for the two DIAGNOSTIC_PROCEDURE events, which are not valid LAB_VALUE for tumor marker tests. ## 6 Experiments In The General Domain We evaluate DICE's generalizability by performing EE on the widely-used news-domain dataset ACE05 (Doddington et al., 2004), which contains 33 event types and 22 distinct argument roles. We perform both full-shot and low-resource experiments with 10% of the training data using the same data pre-processing, data splits and metrics as prior works (Wadden et al., 2019; Lin et al., 2020), and we compare with the same set of baselines introduced in §5.1. Baseline selection criteria and more results are presented in Appendix C.1. We show the result in Table 6. We observe that DICE achieves a better performance on both low and high-resource settings for both trigger and argument classification tasks. We observe that DEGREE's performance is much closer to our model than in the clinical domain, which is due to two factors. First, the benefit of the independent query design used in DICE is diminished because ACE05 has far fewer argument roles that need to be filled in for each event type (on average 4.73) compared with in MACCROBAT-EE (on average 10). Second, DEGREE benefits from the implicit argument role dependencies established in its human-created | Model | 10% | 100% | | | |------------|-------|--------|-------|-------| | Tri-C | Arg-C | Tri-C | Arg-C | | | Text2Event | 47.0‡ | 24.9‡ | 71.9† | 53.8† | | OneIE | 61.5‡ | 26.8‡ | 74.7† | 56.8† | | DEGREE | 66.1† | 42.1† | 72.2† | 55.8† | | DICE | 68.9 | 44.7 | 75.5 | 57.6 | event templates for ACE05, which were unavailable for the clinical domain. We also observe that mentions in the general domain are easier to identify as our MI module achieves 92% F1 score for entity identification on ACE05, while achieving 77% F1 score on MACCROBAT-EE. Although the mentions in the general domain are not as complex as clinical mentions, the performance of DICE supports our claim that mention-enhanced event extraction generalizes to the general domain. ## 7 Conclusion And Future Work We present DICE, a generative event extraction model designed for the clinical domain. DICE is adapted to tackle long and complicated mentions by conducting contrastive learning on instances with mention boundary perturbation, jointly optimizing EE tasks with the auxiliary mention identification task as well as the addition of mention boundary markers. We also introduce MACCROBAT-EE, the first clinical EE dataset with argument annotation as a testbench for future clinical EE works. Lastly, our evaluation shows that DICE achieves state-ofthe-art EE performance in the clinical and news domains. In the future, we aim to apply transfer learning from higher-resource domains. ## Acknowledgments Many thanks to I-Hung Hsu, Derek Xu, Tanmay Parekh and Masoud Monajatipoor for internal reviews, to lab members at PLUS lab, ScAi and UCLA-NLP for suggestions, and to the anonymous reviewers for their feedback. This work was partially supported by NSF 2106859, 2200274, AFOSR MURI grant \#FA9550-22-1-0380, Defense Advanced Research Project Agency (DARPA) grant \#HR00112290103/HR0011260656, and a Cisco Sponsored Research Award. ## Limitations This work presents a repurposing of an existing dataset, MACCROBAT, and a set of novel techniques for adapting event extraction to the clinical domain. Among these new techniques is the handling of long-tailed argument roles, in which we independently query each role type. This presents an issue with scalability to domains with yet more complexity, as training the full DICE while querying both all event types and all argument types present in MACCROBAT-EE requires considerable resources during inference. ## Ethical Statement Our experiments and proposed model framework are intended to encourage exploration in the clinical information extraction domain while avoiding the risk of privacy leakage. The data we use in this work is publicly available and fully de-identified. Though recent research has found it to be difficult to reconstruct protected personal information from such data, there remains some small risk that future models may be able to do so. We have not altered the content of data in any that would increase the likelihood of such an occurrence and are thus not risking private information leakage. ## References David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events, pages 1–8, Sydney, Australia. Association for Computational Linguistics. Sophia Ananiadou, Sampo Pyysalo, Jun'ichi Tsujii, and Douglas B Kell. 2010. Event extraction for systems biology by text mining the literature. *Trends in* biotechnology, 28(7):381–390. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. Advances in neural information processing systems, 28. Steven Bethard, Guergana Savova, Wei-Te Chen, Leon Derczynski, James Pustejovsky, and Marc Verhagen. 2016. SemEval-2016 task 12: Clinical TempEval. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1052– 1062, San Diego, California. Association for Computational Linguistics. J Harry Caufield, Yichao Zhou, Yunsheng Bai, David A Liem, Anders O Garlid, Kai-Wei Chang, Yizhou Sun, Peipei Ping, and Wei Wang. 2019. A comprehensive typing system for information extraction from clinical narratives. *medRxiv*, page 19009118. George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA). Xinya Du and Claire Cardie. 2020. Event extraction by answering (almost) natural questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 671–683, Online. Association for Computational Linguistics. Sunyang Fu, David Chen, Huan He, Sijia Liu, Sungrim Moon, Kevin J. Peterson, Feichen Shen, Liwei Wang, Yanshan Wang, Andrew Wen, Yiqing Zhao, Sunghwan Sohn, and Hongfang Liu. 2020. Clinical concept extraction: A methodology review. *Journal* of Biomedical Informatics, 109:103526. Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, and Ting Liu. 2020. Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1381– 1393, Online. Association for Computational Linguistics. I-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei Chang, and Nanyun Peng. 2022. DEGREE: A data-efficient generation-based event extraction model. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1890–1908, Seattle, United States. Association for Computational Linguistics. William Hsu, Simon X Han, Corey W Arnold, Alex AT Bui, and Dieter R Enzmann. 2016. A data-driven approach for quality assessment of radiologic interpretations. *Journal of the American Medical Informatics* Association, 23(e1):e152–e156. Kuan-Hao Huang, I-Hung Hsu, Prem Natarajan, KaiWei Chang, and Nanyun Peng. 2022. Multilingual generative language models for zero-shot crosslingual event argument extraction. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4633–4646, Dublin, Ireland. Association for Computational Linguistics. Kung-Hsiang Huang, Sam Tang, and Nanyun Peng. 2021. Document-level entity-based extraction as template generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5257–5269, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Kung-Hsiang Huang, Mu Yang, and Nanyun Peng. 2020. Biomedical event extraction with hierarchical knowledge graphs. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 1277– 1285, Online. Association for Computational Linguistics. Jin-Dong Kim, Yue Wang, Toshihisa Takagi, and Akinori Yonezawa. 2011. Overview of Genia event task in BioNLP shared task 2011. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 7–15, Portland, Oregon, USA. Association for Computational Linguistics. Jin-Dong Kim, Yue Wang, and Yamamoto Yasunori. 2013. The Genia event extraction shared task, 2013 edition - overview. In *Proceedings of the BioNLP* Shared Task 2013 Workshop, pages 8–15, Sofia, Bulgaria. Association for Computational Linguistics. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73–82, Sofia, Bulgaria. Association for Computational Linguistics. Sha Li, Heng Ji, and Jiawei Han. 2021. Document-level event argument extraction by conditional generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 894–908, Online. Association for Computational Linguistics. Zhijing Li, Chen Li, Yu Long, and Xuan Wang. 2020. A system for automatically extracting clinical events with temporal information. *BMC Medical Informatics and Decision Making*, 20(1):1–13. Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 7999–8009, Online. Association for Computational Linguistics. Xiao Liu, Heyan Huang, Ge Shi, and Bo Wang. 2022. Dynamic prefix-tuning for generative template-based event extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5216–5228, Dublin, Ireland. Association for Computational Linguistics. Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. 2021. Text2Event: Controllable sequence-tostructure generation for end-to-end event extraction. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2795–2806, Online. Association for Computational Linguistics. Mingyu Derek Ma, Muhao Chen, Te-Lin Wu, and Nanyun Peng. 2021a. HyperExpan: Taxonomy expansion with hyperbolic representation learning. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4182–4194, Punta Cana, Dominican Republic. Association for Computational Linguistics. Mingyu Derek Ma, Jiun-Yu Kao, Shuyang Gao, Arpit Gupta, Di Jin, Tagyoung Chung, and Nanyun Peng. 2023a. Parameter-efficient low-resource dialogue state tracking by prompt tuning. In *Proc. Interspeech* 2023. Mingyu Derek Ma, Jiao Sun, Mu Yang, Kung-Hsiang Huang, Nuan Wen, Shikhar Singh, Rujun Han, and Nanyun Peng. 2021b. EventPlus: A temporal event understanding pipeline. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations, pages 56– 65, Online. Association for Computational Linguistics. Mingyu Derek Ma, Xiaoxuan Wang, Po-Nien Kung, P. Jeffrey Brantingham, Nanyun Peng, and Wei Wang. 2023b. STAR: Boosting low-resource event extraction by structure-to-text data generation with large language models. *arXiv preprint arXiv:2305.15090*. Yubo Ma, Zehao Wang, Yixin Cao, Mukai Li, Meiqi Chen, Kun Wang, and Jing Shao. 2022. Prompt for extraction? PAIE: Prompting argument interaction for event argument extraction. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6759–6774, Dublin, Ireland. Association for Computational Linguistics. Yu Meng, Chenyan Xiong, Payal Bajaj, Paul Bennett, Jiawei Han, Xia Song, et al. 2021. Coco-lm: Correcting and contrasting text sequences for language model pretraining. *Advances in Neural Information* Processing Systems, 34:23102–23114. Minh Van Nguyen, Viet Dac Lai, and Thien Huu Nguyen. 2021. Cross-task instance representation interactions and label dependencies for joint information extraction with graph convolutional networks. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 27–38, Online. Association for Computational Linguistics. Minh Van Nguyen, Bonan Min, Franck Dernoncourt, and Thien Nguyen. 2022a. Joint extraction of entities, relations, and events via modeling inter-instance and inter-label dependencies. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4363–4374, Seattle, United States. Association for Computational Linguistics. Minh Van Nguyen, Bonan Min, Franck Dernoncourt, and Thien Nguyen. 2022b. Learning cross-task dependencies for joint extraction of entities, events, event arguments, and relations. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9349–9360, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Tomoko Ohta, Sampo Pyysalo, and Jun'ichi Tsujii. 2011. Overview of the epigenetics and posttranslational modifications (EPI) task of BioNLP shared task 2011. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 16–25, Portland, Oregon, USA. Association for Computational Linguistics. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. *ArXiv preprint*, abs/1807.03748. Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, RISHITA ANUBHAI, Cicero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation between augmented natural languages. In *International* Conference on Learning Representations. Sampo Pyysalo, Tomoko Ohta, Makoto Miwa, HanCheol Cho, Jun'ichi Tsujii, and Sophia Ananiadou. 2012. Event extraction across multiple levels of biological organization. *Bioinformatics*, 28(18):i575– i581. Sampo Pyysalo, Tomoko Ohta, Rafal Rak, Dan Sullivan, Chunhong Mao, Chunxia Wang, Bruno Sobral, Jun'ichi Tsujii, and Sophia Ananiadou. 2011. Overview of the infectious diseases (ID) task of BioNLP shared task 2011. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 26–35, Portland, Oregon, USA. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Alan Ramponi, Rob van der Goot, Rosario Lombardo, and Barbara Plank. 2020. Biomedical event extraction as sequence labeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5357–5367, Online. Association for Computational Linguistics. Christian M Rochefort, David L Buckeridge, and Alan J Forster. 2015. Accuracy of using automated methods for detecting adverse events from electronic health record data: a research protocol. *Implementation* Science, 10(1):1–9. Dinghan Shen, Mingzhi Zheng, Yelong Shen, Yanru Qu, and Weizhu Chen. 2020. A simple but toughto-beat data augmentation approach for natural language understanding and generation. arXiv preprint arXiv:2009.13818. Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ERE: Annotation of entities, relations, and events. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 89–98, Denver, Colorado. Association for Computational Linguistics. Ergin Soysal, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2017. CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines. *Journal of the American Medical Informatics Association*, 25(3):331–336. Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. 2020. Recurrent interaction network for jointly extracting entities and classifying relations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3722–3732, Online. Association for Computational Linguistics. Hai-Long Trieu, Thy Thy Tran, Khoa NA Duong, Anh Nguyen, Makoto Miwa, and Sophia Ananiadou. 2020. Deepeventmine: end-to-end neural nested event extraction from biomedical texts. *Bioinformatics*, 36(19):4910–4917. David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784– 5789, Hong Kong, China. Association for Computational Linguistics. Sijia Wang, Mo Yu, Shiyu Chang, Lichao Sun, and Lifu Huang. 2022. Query and extract: Refining event extraction as type-oriented binary decoding. In Findings of the Association for Computational Linguistics: ACL 2022, pages 169–182, Dublin, Ireland. Association for Computational Linguistics. Xiaoyan Wang, George Hripcsak, Marianthi Markatou, and Carol Friedman. 2009. Active computerized pharmacovigilance using natural language processing, statistics, and electronic health records: a feasibility study. Journal of the American Medical Informatics Association, 16(3):328–337. Xing David Wang, Leon Weber, and Ulf Leser. 2020. Biomedical event extraction as multi-turn question answering. In Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis, pages 88–96, Online. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Jiashu Xu, Mingyu Derek Ma, and Muhao Chen. 2023. Can NLI provide proper indirect supervision for lowresource biomedical relation extraction? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Toronto, Canada. Association for Computational Linguistics. Kabir Yadav, Efsun Sarioglu, Meaghan Smith, and Hyeong-Ah Choi. 2013. Automated outcome classification of emergency department computed tomography imaging reports. *Academic Emergency Medicine*, 20(8):848–854. Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808–5822, Online. Association for Computational Linguistics. Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context. In *Proceedings of the 2016 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289–299, San Diego, California. Association for Computational Linguistics. Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, and Dongsheng Li. 2019. Exploring pre-trained language models for event extraction and generation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5284– 5294, Florence, Italy. Association for Computational Linguistics. Hongbin Ye, Ningyu Zhang, Shumin Deng, Xiang Chen, Hui Chen, Feiyu Xiong, Xi Chen, and Huajun Chen. 2022. Ontology-enhanced prompt-tuning for fewshot learning. In *Proceedings of the ACM Web Conference 2022*, WWW '22, page 778–787, New York, NY, USA. Association for Computing Machinery. Junchi Zhang, Yanxia Qin, Yue Zhang, Mengchi Liu, and Donghong Ji. 2019. Extracting entities and events as a single task using a transition-based neural model. In *Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,* IJCAI-19, pages 5422–5428. International Joint Conferences on Artificial Intelligence Organization. Zixuan Zhang and Heng Ji. 2021. Abstract Meaning Representation guided graph encoding and decoding for joint information extraction. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 39–49, Online. Association for Computational Linguistics. Sendong Zhao, Ting Liu, Sicheng Zhao, and Fei Wang. 2019. A neural multi-task learning framework to jointly model medical named entity recognition and normalization. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):817–824. ## A Potential Questions What is the difference between the existing generative EE model DEGREE and DICE? Compared with DEGREE, our model: 1) further enhances the EE model's specialization in mention identification by three techniques to learn mentionrelated capabilities introduced in §4.2 to handle long clinical mentions with vague boundaries; and 2) performs an independent query for each argument role for better long-tail performance in settings with many argument roles as introduced in §4.1. Would training and inference efficiency be an issue? As we perform an independent query for each event type/argument role in the ED/EAE model, it is a tradeoff between performance and running cost. Though during training, we only sample a subset of negative instances to train the model for faster convergence. For example, to create seq2seq input-output pairs for a certain sentence for ED, we create 1 positive pair (i.e. there is an event in the sentence for the query event type) and k (instead of nevent_*type*, where k is much smaller than nevent_*type*) negative pairs (i.e. no event exists for the query event type). Why use standalone MI modules to produce mention candidates? We use standalone trigger and argument MI modules to create markers for downstream ED+MI and EAE+MI joint models, instead of using the MI module jointly trained in the ED+MI or EAE+MI models because the standalone one yields better performance. ## B Dataset Maccrobat**-Ee Details** B.1 Maccrobat **Annotation** MACCROBAT is annotated according to the Annotation for Case Reports using Open Biomedical Annotation Terms (ACROBAT) defined in (Caufield et al., 2019). ACROBAT describes events and entities as meaningful text spans, but differentiates events as occurrences that may be ordered chronologically and entities as objects that may modify or describe events. According to the annotation guideline, entity text spans are limited to the shortest viable length. Each event and entity is given a type such that certain events are associated with certain argument roles. According to ACROBAT, Entity text spans are limited to the shortest viable length. For example, the text span "mild asthma attack" would be annotated by labeling "asthma attack" as an event as that is the shortest span that conveys the occurrence of the event. "Mild" would be labeled an entity and the annotation would add a relation indicating that "mild" modifies "asthma attack". MACCROBAT contains 12 relation types, but for our purposes we only consider the MOD-IFY relation that occurs when an entity describes or characterizes an event. ## B.2 Details Of Inferring Event Arguments According to ACE2005 English Events Guidelines (AEEG),6the arguments of events are defined as entities and values within the scope of an event and only the closest entities and values will be selected, where a value is defined to be "a string that further characterizes the properties of some Entity or Event". The MODIFY relation in the MACCROBAT dataset connects 2 arguments, and it is defined as the "generic relationship in which one entity or event modifies another entity or event, including instances where an entity is identified following an event" (Caufield et al., 2019). The MODIFY relation satisfies the argument definition described by the AEEG by incorporating within-sentence relationships between an entity that modifies or describes an event. Thus, given a certain event trigger, we consider non-event entities that hold a MODIFY relation with the trigger as arguments of this event. We take the assigned type of the selected entity according to MACCROBAT as the role of the argument. To create an event ontology, which includes all possible event types and possible argument roles or each event type, we traverse all (event type, argument role) pairs to obtain the unique argument roles possible for each event type. ## B.3 Event Ontology We show the full event ontology, including all event types and their possible argument roles, in Table 12. ## C Additional Experimental Results C.1 **Additional Baselines For General Domain** Event Extraction Baseline selection criteria. We select published EE models reporting performance on the ACE05 dataset using ED and EAE training data *only* without using external resources (e.g. knowledge graph) or additional tasks (e.g. relation extraction, entity recognition) as our baselines for general domain EE experiments. We use the same data pre-processing, data splits and metrics as prior works (Wadden et al., 2019; Lin et al., 2020). Additional baselines. In addition to the baselines we introduced in §5.1, we compare with DyGIE++ (Wadden et al., 2019), a span graphenhanced classification model for EE; **BERT_QA** (Du and Cardie, 2020), which formulates EE as an extractive question answering task with a sequence tagging classifier; **TANL** (Paolini et al., 2021), which frames EE as a translation task between augmented natural languages; **BART-Gen** (Li et al., 2021), which uses a sequence tagging model (Hou et al., 2020) with additional keywords as input for ED and performs EAE by filling in event template with a conditional generation model; and **GTEE-DynPref** (Liu et al., 2022), which tunes dynamic prefix for generative EE models. We do not compare with Nguyen et al. (2022b, 2021); Zhang and Ji (2021) because they jointly learn additional tasks besides ED and EAE (i.e. entity recognition and relation extraction) and there is no codebase provided by the time of this work. We do not compare with Wang et al. (2022) since its performance is worse than DEGREE (Hsu et al., 2022) and GTEE-DynPref (Liu et al., 2022) according to Nguyen et al. (2022b). | # | Model | 10% | 100% | | | |-------|--------------|-------|--------|-------|-------| | Tri-C | Arg-C | Tri-C | Arg-C | | | | 1 | DyGIE++ | - | 15.7 § | 70.0‡ | 50.0‡ | | 2 | BERT_QA | 50.1‡ | 27.6‡ | 72.4† | 53.3† | | 3 | TANL | 54.8‡ | 29.0‡ | 68.4‡ | 47.6‡ | | 4 | BART-Gen | - | - | 71.1† | 53.7† | | 5 | GTEE-DynPref | - | - | 72.6† | 55.8† | | 6 | Text2Event | 47.0‡ | 24.9‡ | 71.9† | 53.8† | | 7 | OneIE | 61.5‡ | 26.8‡ | 74.7† | 56.8† | | 8 | DEGREE | 66.1† | 42.1† | 72.2† | 55.8† | | 9 | DICE | 68.9 | 44.7 | 75.5 | 57.6 | Experimental results. Table 7 shows the comparison with more baselines. ## C.2 Additional Ablation Studies Input prompt segments. We analyze the importance of prompt segments in Table 8. For ED, we find that event type name is more important. For EAE, removing either the event type description (Line 5) or the argument role description (Line 9) leads to the most significant performance decreases. These results emphasize the benefits of incorporating the rich semantic information contained in the names and definitions for both event type and argument roles. # Prompt segments Identification Classification ![14_image_1.png](14_image_1.png) ![14_image_2.png](14_image_2.png) ![14_image_3.png](14_image_3.png) P R F1 P R F1 ![14_image_0.png](14_image_0.png) 1 w/o type name 71.19 63.32 67.02 67.41 60.73 63.90 2 w/o type description 66.38 71.00 68.61 62.28 67.19 64.64 3 Vanilla DICE 65.03 74.08 69.26 60.51 70.28 65.03 4 w/o type name 69.34 77.35 73.13 64.17 73.03 68.31 5 w/o type description 67.80 77.45 72.31 62.94 73.46 67.79 6 w/o trigger phrase 71.20 77.89 74.39 66.21 73.79 69.80 7 w/o trigger marker 68.55 78.53 73.20 64.70 75.51 69.69 8 w/o arg. role name 70.13 77.99 73.85 65.20 73.79 69.23 9 w/o role description 75.22 70.54 72.81 67.91 65.39 66.63 10 Vanilla DICE 70.76 76.48 73.51 66.47 72.71 69.45 ![14_image_4.png](14_image_4.png) Extraction vs typing formulation. We formulate ED and EAE as conditional text generation tasks and consider two designs for our input and target format. The first is the DICE design in which we expect the model to **extract** content given queries with event type/argument role information. The second design formulates a **typing** task that provides a query to the generative model for each mention so that the expected output is the predicted event type or argument role for the querying mention. This approach is motivated by the notion that the output space of the typing formulation is much smaller than that of the extraction task. We formulate the ED and EAE tasks as typing tasks by querying each possible mention. For the ED task, we first use the standalone mention identification module introduced in §4.1 to extract all possible triggers detected by the MI module, and then we query the generative model with the following example input and output format: Input: ... calcified <query>plaque</query> ... artery. Output: Event type is Sign_symptom. The output is constrained to belong to the candidate pool of event types or the placeholder event type "<Type>" following the prefix "Event type is ". For the EAE task, we first extract all possible argument candidates and then query each candidate with the input sentence containing event trigger, event trigger marker, event type name and event type description: Input: ... densely <query>calcified</query> ![15_image_0.png](15_image_0.png) <trigger>plaque</trigger> ... artery. /n Event type is Sign_symptom. /n Any symptom or clinical finding. /n Event trigger is plaque. Output: Argument role is Detailed_description. Similarly, the output is constrained to the candidate pool of argument roles possible for the given event type following the prefix "Argument role is ". | # | Formulation | Identification | Classification | | | | | |---------------------------|---------------|------------------|------------------|-------|-------|-------|-------| | P | R | F1 | P | R | F1 | | | | Event Detection | | | | | | | | | 1 | Typing | 74.64 | 67.19 | 70.72 | 69.24 | 63.82 | 66.42 | | 2 | Extraction | 65.03 | 74.08 | 69.26 | 60.51 | 70.28 | 65.03 | | Event Argument Extraction | | | | | | | | | 3 | Typing | 58.59 | 44.95 | 50.87 | 53.63 | 41.14 | 46.56 | | 4 | Extraction | 70.76 | 76.48 | 73.51 | 66.47 | 72.71 | 69.45 | Table 9: Ablation study of generative task formulation. The results in Table 9 show that the typing formulation improves ED performance over extraction (though still worse than mention-enhanced DICE), but leads to a much worse EAE performance. This is likely due to the typing task becoming more difficult as the number of candidate class increases and complicated typing spaces varied by event types. ## C.3 Full Low-Resource Results We show the full low-resource experimental results illustrated in Figure 3 in Table 10. ## D Details Of Implementation And Experiments D.1 Implementation Details Mention Identification. The sliding window scans the passage from beginning to end with a pre-defined window size and step size, which significantly boosts the coverage of the predicted mentions. During both training and inference, we retain the original full-length input passage in addition to the sliding window segments. Table 10: Performance on the downsampled training sets. We report the F1 score for each task using different downsampled training data. We create three random splits for each proportion and report the average performance. | Model | 10% | 25% | 50% | 75% 10% | 25% | 50% | 75% | |------------------------|-------------------------------------------------------------------------------------------------|-------|-------|-----------|-------------------------|-------|-------| | Trigger Identification | Trigger Classification | | | | | | | | Text2Event | - | - | - | - | 52.54 53.72 59.21 62.78 | | | | OneIE | 68.22 71.28 73.73 74.47 61.46 65.08 67.54 68.50 | | | | | | | | DEGREE | 62.12 63.78 66.32 69.73 58.31 61.03 63.14 64.77 | | | | | | | | DICE | 71.47 72.79 74.07 74.88 65.82 66.54 67.91 68.72 Argument Identification Argument Classification | | | | | | | | Text2Event | - | - | - | - | 37.74 40.09 46.53 50.37 | | | | OneIE | 32.13 39.65 43.75 47.12 24.95 32.36 35.70 38.55 | | | | | | | | DEGREE | 26.60 30.41 31.06 31.63 26.60 28.43 29.48 29.59 | | | | | | | | DICE | 49.97 53.55 54.42 55.83 45.67 48.97 50.42 52.83 | | | | | | | Training and evaluation. We select the best epoch based on the highest F1 score of the most downstream MI/ED/EAE task on the validation set. When evaluating correctness, we only accept an exact match between the generated trigger/argument and the ground-truth trigger/argument as a correct prediction. We use beam search with 2 beams to generate the output sequences for all three generative tasks. The generation stops either when the "end_of_sentence" token is generated or the output length reaches 30. Frameworks. Our entire codebase is implemented in PyTorch.7 The implementations of the transformer-based models are extended from the Huggingface8codebase (Wolf et al., 2020). ## D.2 Experiments Details We report the median result for five runs with different random seeds by default. For the low-resources result shown in Figure 3, we sample different selections of training data of corresponding proportion for each run. All the models in this work are trained on NVIDIA A6000 GPUs on a Ubuntu 20.04.2 operating system. ## D.3 Baseline Reproduction Mention Identification. For results in Table 4, we use BART-large for Yan et al. because Yan et al. (2021) only supports a generative model with absolute position embedding. OneIE uses BERTlarge as its default and we use T5-large for our proposed DICE-MI module. 7https://pytorch.org/ 8https://github.com/huggingface/transformers ED and EAE. We use authors' codebases to produce baseline results. OneIE jointly learns ED, EAE, and MI tasks and we provide entity information to its MI module with event types and role types stripped to equate its training information with the training information provided to our model DICE. For DEGREE, human-written templates that organize the argument roles of an event type in a sentence are required by the model. We construct these templates using phrases such as "<Argument role> is <argument text>" for all potential argument roles of an event type as the template. ## D.4 Hyperparameters For the ED module, we define positive instances as (PASSAGE, EVENT TYPE) pairs where the passage contains one or more event triggers of this event type. Negative instances are pairs in which the passage contains no event triggers of the event type. We create 10 negative instances for each positive instance. For the EAE module, we define positive instance as the (PASSAGE, EVENT TRIGGER, EVENT TYPE, ARGUMENT ROLE) tuple that there exists an argument text contained in the passage that meets the query criteria. We create 10 negative instances for each positive instance. For the MI module, we use a window size of 10 words, with a sliding step of 4 words. We retain the original full sequence in both training and evaluation. We use an AdamW optimizer with a 1e-5 learning rate without gradient accumulation. We show the hyperparameter search ranges and the final choices in Table 11. | Hyperparameter | Search Range | Best | |--------------------------------------------------------------------------------------|-----------------------------------|-------------| | Negative instance # for ED | 1, 2, 3, 4, 5, 8, 10, all | 10 | | Negative instance # for EAE | 1, 2, 3, 4, 5, 8, 10, all | 10 | | MI module sliding window size | 4, 6, 8, 10, 12 | 10 | | MI module sliding window step | 2, 4, 6, 8, 10 | 4 | | MI module sliding window retains original long sequence during training | True, False | True | | MI module sliding window retains original long sequence during inference True, False | False | | | Batch size | 1, 2, 3, 4 | 4 | | Learning rate | 1e-4, 5e-5, 1e-5, 5e-6, 1e-6 1e-5 | | | Decoding method | beam search, greedy | beam search | | Max epochs | 70 | | Table 11: Hyperparameter search ranges and the best settings. | Event Type | Role | | | | |-----------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------|-----------------------|----| | Sign_symptom | Biological_structure, Detailed_description, Severity, Lab_value, Distance, Shape, Area, Color, Texture, Frequency, Volume, Quantitative_concept, Qualitative_concept, Biological_attribute, Subject, Other_entity, History, Mass | | | | | Diagnostic_procedure | Lab_value, | Biological_structure, | Detailed_description, | Qualita | | tive_concept, Nonbiological_location, Frequency,Distance, Subject, Shape, Quantitative_concept, Texture, Severity, Age, Color, Area, Volume, Administration, Mass | | | | | | Therapeutic_procedure | Detailed_description, Biological_structure, Lab_value, Dosage, Nonbiological_location, Frequency, Distance,Qualitative_concept, Subject, Quantitative_concept, Area, Administration, Other_entity | | | | | Disease_disorder | Detailed_description, Biological_structure, Severity, Lab_value, Quantitative_concept, Distance, Nonbiological_location, Shape, Volume, Qualitative_concept, Area, Subject, Biological_attribute | | | | | Medication | Dosage, Administration, Detailed_description, Frequency, Lab_value, Nonbiological_location, Quantitative_concept, Biological_structure, Volume | | | | | Clinical_event | Nonbiological_location, | Detailed_description, | Frequency, | Biologi | | cal_structure, Subject, Lab_value, Quantitative_concept, Volume | | | | | | Lab_value | Biological_structure, Detailed_description, Color, Severity, Frequency | | | | | Activity | Detailed_description, Nonbiological_location, Biological_structure, Other_entity, Frequency, Lab_value, Quantitative_concept | | | | | Other_event | Biological_structure, Quantitative_concept, Nonbiological_location, Severity, Detailed_description | | | | | Outcome | Nonbiological_location, Subject, Detailed_description, Age | | | | | Date | - | | | | | Time | - | | | | | Duration | - | | | | | Table 12: Event types and corresponding argument roles in MACCROBAT-EE, the argument roles are ordered by | | | | | Table 12: Event types and corresponding argument roles in MACCROBAT-EE, the argument roles are ordered by their appearance frequency. The most appeared roles are listed first. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitation section after Conclusion ✓ A2. Did you discuss any potential risks of your work? Ethical statement section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract section. Section 1: Introduction. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 For Data, Section 4 For Model ✓ B1. Did you cite the creators of artifacts you used? Section 3 for data, Section 4 for model B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We use a previously published dataset, the anonymization work has been done in previous work ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We use a previously published dataset, the anonymization work has been done in previous work ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3.2 The MACCROBAT-EE Dataset ## C ✓ **Did You Run Computational Experiments?** Section 5 And 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix D The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 and 6, Appendix D ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix C3 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix D D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 3.4 ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? The annotation task is simple, we provide a textual description of the task ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Authors are served as annotators directly without additional recruitment D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhang-etal-2023-xsemplr
{XS}em{PLR}: Cross-Lingual Semantic Parsing in Multiple Natural Languages and Meaning Representations
https://aclanthology.org/2023.acl-long.887
Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs) such as SQL, lambda calculus, and logic forms. However, existing CLSP models are separately proposed and evaluated on datasets of limited tasks and applications, impeding a comprehensive and unified evaluation of CLSP on a diverse range of NLs and MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual semantic parsing featured with 22 natural languages and 8 meaning representations by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains. We use XSemPLR to conduct a comprehensive benchmark study on a wide range of multilingual language models including encoder-based models (mBERT, XLM-R), encoder-decoder models (mBART, mT5), and decoder-based models (Codex, BLOOM). We design 6 experiment settings covering various lingual combinations (monolingual, multilingual, cross-lingual) and numbers of learning samples (full dataset, few-shot, and zero-shot). Our experiments show that encoder-decoder models (mT5) achieve the highest performance compared with other popular models, and multilingual training can further improve the average performance. Notably, multilingual large language models (e.g., BLOOM) are still inadequate to perform CLSP tasks. We also find that the performance gap between monolingual training and cross-lingual transfer learning is still significant for multilingual models, though it can be mitigated by cross-lingual few-shot training. Our dataset and code are available at \url{https://github.com/psunlpgroup/XSemPLR}.
# Xsem**Plr: Cross-Lingual Semantic Parsing In Multiple Natural** Languages And Meaning Representations Yusen Zhang1 Jun Wang2 Zhiguo Wang2 **Rui Zhang**1 1Penn State University 2AWS AI Labs {yfz5488,rmz5227}@psu.edu, {juwanga,zhiguow}@amazon.com ## Abstract Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs) such as SQL, lambda calculus, and logic forms. However, existing CLSP models are separately proposed and evaluated on datasets of limited tasks and applications, impeding a comprehensive and unified evaluation of CLSP on a diverse range of NLs and MRs. To this end, we present XSEMPLR, a unified benchmark for cross-lingual semantic parsing featured with 22 natural languages and 8 meaning representations by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains. We use XSEMPLR to conduct a comprehensive benchmark study on a wide range of multilingual language models including encoder-based models (mBERT, XLM-R), encoder-decoder models (mBART, mT5), and decoder-based models (Codex, BLOOM). We design 6 experiment settings covering various lingual combinations (monolingual, multilingual, cross-lingual) and numbers of learning samples (full dataset, few-shot, and zero-shot). Our experiments show that encoder-decoder models (mT5) achieve the highest performance compared with other popular models, and multilingual training can further improve the average performance. Notably, multilingual large language models (e.g., BLOOM) are still inadequate to perform CLSP tasks. We also find that the performance gap between monolingual training and cross-lingual transfer learning is still significant for multilingual models, though it can be mitigated by cross-lingual fewshot training. Our dataset and code are available at https://github.com/psunlpgroup/ XSemPLR. ## 1 Introduction Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs) (Li et al., 2020; Xu et al., 2020a; Dou et al., 2022; Sherborne and Lapata, 2021, 2022). As demonstrated in Figure 1, Cross-Lingual Semantic Parsing covers natural languages for geographically diverse users and various meaning representations, empowering applications such as natural language interfaces to databases, question answering over knowledge graphs, virtual assistants, smart home device control, human-robot interaction, and code generation. However, current research on CLSP has three drawbacks. First, most existing research focuses on semantic parsing in English (Zelle and Mooney, 1996; Wang et al., 2015; Yu et al., 2018), limiting the development of multilingual information access systems for users in other languages. Second, current datasets have a poor coverage of NLs and MRs. Although there are encouraging efforts in developing CLSP models (Li et al., 2020; Dou et al., 2022; Sherborne and Lapata, 2022), their experiments only cover a few NLs and MRs, impeding comprehensive and unified evaluation on a diverse range of tasks. Third, due to the lack of a comprehensive CLSP benchmark, the performance of multilingual language models on CLSP is understudied. Some pretrained language models are proposed to solve cross-lingual tasks such as XLM-R (Conneau et al., 2019) and mT5 (Xue et al., 2020), while other large language models are designed for code generation such as Codex (Chen et al., 2021a) and BLOOM (Scao et al., 2022). However, little research has focused on evaluating models on CLSP. In this paper, we propose XSEMPLR, a unified benchmark for cross-lingual semantic parsing featured with 22 natural languages and 8 meaning representations as summarized in Table 1. In order to cover a large variety of languages and meaning representations, we first select 9 high-quality CLSP datasets and then clean and format them in a unified manner. Then, we conduct a comprehensive benchmarking study on three categories of multilingual language models including pretrained encoder15918 ![1_image_0.png](1_image_0.png) based models augmented with pointer generator (mBERT, XLM-R), pretrained encoder-decoder models (mBART, mT5), and decoder-based large language models (Codex, BLOOM). To evaluate these models, we design 6 experiment settings covering various lingual combinations and learning sample scales, including Monolingual (and Monolingual Few-shot), Multilingual, and Cross-lingual Zero-Shot/Few-Shot Transfer. Our results show that the encoder-decoder model (mT5) yields the best performance on monolingual evaluation compared with other models. Then, we pick two models with the best monolingual performance (i.e., mT5 and XLM-R) to conduct fewshot and zero-shot cross-lingual transfer learning from English to other low-resource languages. Results show a significant performance gap between monolingual training (Taget NL -> Target NL1) and cross-lingual transfer learning (En -> Target NL). Furthermore, we find that this gap can be significantly reduced by few-shot learning on target NL. We further train these two models in a multilingual setting and find such training can boost the performance in some of the languages, while, however, it usually hurts the performance in English. Finally, we test two large language models Codex (Chen et al., 2021a) and BLOOM (Scao et al., 2022). We find the performance gap of cross-lingual transfer learning is significant for these two models as well. Our contributions are summarized as follows: (1) We propose XSEMPLR to unify and benchmark 9 datasets covering 5 tasks, 22 natural languages, and 8 meaning representations for cross-lingual semantic parsing; (2) We perform a holistic evaluation of 3 groups of state-of-the-art multilingual language models on XSEMPLR, demonstrating noticeable performance gaps of cross-lingual transfer models comparing English and other languages; (3) ![1_image_1.png](1_image_1.png) We show two effective strategies for boosting performance in low-resource languages: multilingual training and cross-lingual transfer learning. ## 2 Xsem**Plr Benchmark** Figure 2 shows the construction pipeline of XSEMPLR. We first select 9 CLSP datasets according to our design principles. Then, we collect other NLs of the selected datasets. Finally, we clean the datasets by removing outliers and performing alignment between different languages. ## 2.1 Design Principles We carefully pick 9 datasets from all available semantic parsing datasets to construct XSEMPLR according to two principles. First, the picked datasets need to have **high quality**, which means they are either annotated by humans or augmented with careful crafting (Moradshahi et al., 2020), and the translation of user inputs are provided by humans instead of machine translation models. Second, XSEMPLR needs to be **comprehensive** (Hu et al., 2020), which means including diverse NLs and MRs for a broad range of tasks and applications. ## 2.2 Data Collection Table 1 summarizes the characteristics and statistics of different datasets in XSEMPLR. Multilingual ATIS (MATIS) contains user questions for a flight-booking task. We collect the origi- Task Dataset Meaning Representation Language Executable Domain Train Dev Test NLI for Databases MATIS SQL 7 ✓ 1 4303 481 444 NLI for Databases MGeoQuery SQL,Lambda,FunQL,Prolog 8 ✓ 1 548 49 277 NLI for Databases MSpider SQL 3 ✓ 138 8095 1034 - NLI for Databases MNLmaps Functional Query Language 2 ✓ 1 1500 - 880 QA on Knowledge Graph MOvernight Lambda Calculus 3 ✓ 8 8754 2188 2740 QA on Knowledge Graph MCWQ SPARQL 4 ✓ 1 4006 733 648 QA on Web MSchema2QA ThingTalk Query Language 11 ✓ 2 8932 - 971 Task-Oriented DST MTOP Hierarchical Intent and Slot 6 ✗ 11 5446 863 1245 Code Generation MCoNaLa Python 4 ✓ 1 1903 476 896 nal English questions from ATIS (Price, 1990; Dahl et al., 1994) and add the translations from Xu et al. (2020b). For MRs, we focus on the task of Natural Language Interface (NLI) to databases and thus collect SQL from Iyer et al. (2017) and FineganDollak et al. (2018). Multilingual GeoQuery (MGeoQuery) contains user questions about US geography. We collect original English questions from GeoQuery (Zelle and Mooney, 1996) and add other translations (Lu and Ng, 2011; Jones et al., 2012; Susanto and Lu, 2017b). GeoQuery has several MRs available. We collect Prolog and Lambda Calculus from Guo et al. (2020), FunQL from Susanto and Lu (2017b), and SQL from Finegan-Dollak et al. (2018) 2. Multilingual Spider (MSpider) is a humanannotated complex and cross-domain text-to-SQL datasets. We collect Spider (Yu et al., 2018) with English questions and add other NLs from Min et al. (2019) and Nguyen et al. (2020). Multilingual NLmaps (MNLmaps) is a Natural Language Interface to query the OpenStreetMap database. We collect NLMaps (Lawrence and Riezler, 2016) in English, and add translations in German (Haas and Riezler, 2016). Multilingual Overnight (MOvernight) is a multidomain semantic parsing dataset in lambda DCS. We include English Overnight (Wang et al., 2015) and add translations from Sherborne et al. (2020). Multilingual Schema2QA (MSchema2QA) is a question answering dataset over schema.org web data in ThingTalk Query Language. We include training examples with all 11 available languages and pair them with the MR in the corresponding language following Moradshahi et al. (2020) and Xu et al. (2020a). To make the dataset size comparable to others, we include 5% of the training set. MCWQ is a multilingual knowledge-based question answering dataset grounded in Wikidata (Cui et al., 2021). We collect all questions in MCWQ in 4 languages. The split follows maximum compound divergence (MCD) (Keysers et al., 2020) so that the test set contains novel compounds to evaluate compositionality generalization ability. MTOP is a multilingual semantic parsing dataset for task-oriented dialogs with meaning representations of hierarchical intent and slot annotations (Gupta et al., 2018; Li et al., 2020). We include examples with all 6 languages and pair the translations with the compositional decoupled representation in the corresponding language. MCoNaLa is a multilingual code generation benchmark for Python by extending English CoNaLa (Yin et al., 2018; Wang et al., 2022). We include all 4 languages. ## 2.3 Data Alignment And Unification We perform data alignment and unification over 9 datasets to construct a unified high-quality benchmark. To be specific, for the first 6 datasets introduced in Section 2.2, because each of them has multiple parts proposed in different work, we merge these parts by aligning the same user question in different languages into the same meaning representation. For the other 3 datasets, we directly use the entire samples since no other parts need to be merged. We also try to unify the language of MRs (e.g., adopting a single form of SQL queries; keeping only one English MR when there is more than one in MTOP). We also remove a few samples in MATIS and MGeoQuery with no MRs. We provide more details in Appendix including the examples of each dataset (Table 5), data construction (Appendix A), natural languages (Appendix A), and meaning representations (Appendix A). ## 2.4 Evaluation Metrics We evaluate the predicted results using various automatic metrics. For the Spider dataset, we follow Yu et al. (2018) and use their proposed tool for evaluation 3. For the other datasets, we simply use exact matching, i.e., token-by-token string comparison, to see if the prediction is the same as the ground truth label. For a fair comparison with stateof-the-art models, we also use the metrics proposed in their models, including Execution Score, Denotation Accuracy, and Code BLEU (Section 4.2). ## 2.5 Data Analysis Natural Languages XSEMPLR contains diverse and abundant natural languages in both highresource and low-resource groups, including 22 languages belonging to 15 language families (Appendix A). Most state-of-the-art performances are achieved in English and a few other high-resource languages. However, the lack of information in the low-resource languages brings unanswered questions to model generalization. Therefore, both these 2 types of languages are included in XSEMPLR, to form a unified cross-lingual dataset for semantic parsing. Among these 22 languages, English is the most resourced language with many popular datasets in semantic parsing. Some languages spoken in Western Europe are also relatively high-resource languages, such as German and Spanish. We also involve many low-resource languages as well, such as Vietnamese and Thai. Meaning Representations XSEMPLR includes 8 meaning representations for different applications: Prolog, Lambda Calculus, Functional Query Language (FunQL), SQL, ThingTalk Query Language, SPARQL, Python, and Hierarchical intent and slot. All of them can be executed against underlying databases or knowledge graphs, except for the last one which is designed for complex compositional requests in task-oriented dialogues. The first four are domain-specific because they contain specific predicates defined for a given domain, while the last four are considered open-domain and open-ontology (Guo et al., 2020). It is also worth noting that these MRs are not equivalent 3All numbers reported in the paper is "Exact Set Match without Values" in https://yale-lily.github.io/ spider. to their general expressiveness. For example, the ThingTalk query language is a subset of SQL in expressiveness (Moradshahi et al., 2020), and FunQL is less expressive than Lambda Calculus partially due to the lack of variables and quantifiers. ## 3 Experiment Setup We describe our evaluation settings and models for a comprehensive benchmark study on XSEMPLR. ## 3.1 Evaluation Settings We consider the following 6 settings for training and testing. Translate-Test. We train a model on the English training data and translate target NL test data to English using the public Google NMT system (Wu et al., 2016). This setting uses one semantic parsing model trained on English but also relies on available machine translation models for other languages. This serves as a strong yet practical baseline for other settings. Monolingual. We train a monolingual model on each target NL training data. This setting creates one model per target NL. In addition to benchmarking them, we design this setting for two reasons: (1) It helps the comparison between monolingual and cross-lingual performance; (2) We pick the best models from this setting to further conduct cross-lingual and few-shot/zero-shot experiments. Additionally, since some target NL training data can be expensive to obtain, we also test a **Monolingual Few-shot** setting by training monolingual models with only 10% training data. Multilingual. Thanks to the progress in multilingual embeddings and pretrained multilingual language models, we can train one multilingual model on all NL training data. This setting uses only one model to serve all NLs. Cross-lingual Zero-shot Transfer. Models are trained only on English NL data and then tested on a target-NL test set. This setting uses one model for all target NLs and evaluates the cross-lingual transfer ability without any target-NL training data. Besides, to test the value of additional target NL training data, we finetune the model on 10% targetNL training data. This **Cross-lingual Few-shot** Transfer setting creates one model per target NL. We use these two settings to evaluate the capability | MATIS | MGeoQuery | MSpider | MNLmaps | MOvernight | MCWQ | MSchema2QA | MTOP | MCoNaLa‡ | Average | | |-------------------------------------------------------|-------------|-----------|-----------|--------------|--------|--------------|--------|------------|-----------|-------| | Translate-Test mT5 | 44.50 | 53.88 | 45.26 | 66.36 | 59.69 | 19.85 | 3.18⋆ | 29.78⋆ | 8.13 | 36.74 | | Monolingual mBERT+PTR | 30.63 | 72.18 | 40.40 | 83.82 | 57.47 | 23.46 | 52.53 | 75.41 | 5.87 | 49.09 | | XLM-R+PTR | 31.31 | 71.41 | 47.30 | 85.17 | 59.10 | 23.53 | 62.37 | 80.36 | 7.69 | 52.03 | | mBART | 41.93 | 62.29 | 33.31 | 83.19 | 59.60 | 30.02 | 50.35 | 75.76 | 6.78 | 49.25 | | mT5 | 53.15 | 74.26 | 50.73 | 91.65 | 66.29 | 30.15 | 65.16 | 81.83 | 10.29 | 58.16 | | Monolingual Few-Shot XLM-R+PTR 23.44 | 17.91 | 36.04 | 19.77 | 40.74 | 5.64 | 49.00 | 60.42 | 0.38 | 28.15 | | | mT5 | 24.85 | 25.48 | 38.10 | 26.93 | 53.59 | 7.68 | 33.27 | 61.90 | 1.05 | 30.32 | | Codex† | 18.02 | 31.93 | 30.66 | 34.26 | 3.43 | 2.93 | 21.62 | 10.08 | 13.87 | 18.53 | | BLOOM† | 0.00 | 17.84 | 2.13 | 12.16 | 0.62 | 0.00 | 5.21 | 5.16 | 8.40 | 5.72 | | Multilingual XLM-R+PTR | 39.72 | 71.35 | 40.20 | 85.91 | 61.03 | 30.79 | 61.82 | 81.68 | - | 59.06 | | mT5 | 54.45 | 76.57 | 32.30 | 91.31 | 67.55 | 28.51 | 60.92 | 82.95 | - | 61.82 | | Cross-lingual Zero-Shot Transfer XLM-R+PTR 6.05 39.85 | 18.53 | 60.23 | 36.77 | 4.27 | 20.22 | 51.46 | 0.12 | 26.39 | | | | mT5 | 31.85 | 27.35 | 41.93 | 34.89 | 52.68 | 4.06 | 44.04 | 50.18 | 0.77 | 31.97 | | Codex† | 16.31 | 28.53 | 27.56 | 32.05 | 2.99 | 2.16 | 19.57 | 14.08 | 8.35 | 16.84 | | BLOOM† | 0.00 | 11.29 | 1.70 | 7.05 | 0.38 | 0.00 | 3.93 | 1.67 | 6.16 | 3.58 | | Cross-lingual Few-Shot Transfer XLM-R+PTR 15.71 51.08 | 43.68 | 64.89 | 52.03 | 20.16 | 53.51 | 72.79 | - | 46.73 | | | | mT5 | 49.57 | 57.31 | 49.42 | 71.70 | 62.53 | 24.85 | 59.24 | 74.83 | - | 56.18 | of the model to transfer from a fine-tuned model of high-resource NL to a low-resource test set. ## 3.2 Models We evaluate three different groups of multilingual language models on XSEMPLR. ## Multilingual Pretrained Encoders With Pointerbased Decoders (Enc-Ptr). The First Group Is multilingual pretrained encoders with decoders augmented with pointers. Both encoders and decoders use Transformers (Vaswani et al., 2017). The decoder uses pointers to copy entities from natural language inputs to generate meaning representations (Rongali et al., 2020; Prakash et al., 2020). We use two types of multilingual pretrained encoders, mBERT (Devlin et al., 2018) and XLMR (Conneau et al., 2019), and both are trained on web data covering over 100 languages. Multilingual Pretrained Encoder-Decoder Models (Enc-Dec). The second group uses pretrained encoder-decoder models, including mBART (Chipman et al., 2022) and mT5 (Xue et al., 2020) which uses text-to-text denoising objective for pretraining over multilingual corpora. ## Multilingual Large Language Models (Llms). The third group is multilingual large language models based on GPT (Brown et al., 2020) including Codex (Chen et al., 2021a) and BLOOM (Scao et al., 2022). Codex is fine-tuned on publicly available code from GitHub. While it is not trained on a multilingual corpus, it has shown cross-lingual semantic parsing capabilities (Shi et al., 2022b). BLOOM is a 176B-parameter multilingual language model pretrained on 46 natural and 13 programming languages from the ROOTS corpus (Laurençon et al., 2022). We mainly use these models to evaluate the ability of few-shot learning using in-context learning without any further finetuning. Specifically, we append 8 samples and the test query to predict the MR. For Monolingual Fewshot, samples and the query are in the same NL, while for Cross-lingual Zero-shot Transfer, samples are in English and the query is in the target NL. ## 4 Results And Analysis Table 2 shows the performance of all 6 models on 6 settings. Our results and analysis aim to answer the following research questions: - RQ 1: What is the best model and training strategy for performance, and how does it compare with previous state-of-the-art? (Section 4.1, 4.2) - RQ 2: How capable are the current multilingual LLMs on the task of CLSP? (Section 4.3) - RQ 3: What is the effect of few-shot learning? (Section 4.4) - RQ 4: What is the effect of multilingual learning? (Section 4.5) - RQ 5: What is the effect of cross-lingual transfer learning? (Section 4.6) - RQ 6: How performance varies across different natural languages and meaning representations? (Section 4.7, 4.8) ## 4.1 Analysis Of Monolingual We obtain the following main findings on Monolingual setting: Enc-Dec (mT5) obtains the best performance. Among the two transformer-based pointer generators, XLM-R+Transformer (XLMR+PTR) (52.034) performs slightly better than mBERT+Transformer (mBERT+PTR) (49.09). Among mBART and mT5, mT5 (58.16) outperforms mBART (49.25) by a large margin. Besides, although mT5 outperforms XLM-R by 6.13, XLMR is still able to outperform mBART by 2.78. Thus, we pick mT5 among mT5/mBART, and XLMR among XLM-R/mBERT to conduct the experiments on the other settings. Next, we evaluate mT5 model on TranslationTest setting. As shown in the table, mT5 in Monolingual setting outperforms Translation-Test by a large margin (58.16 vs. 36.74). This shows that multilingual language models are more effective than Translation-Test methods. In other words, it is necessary to train a multilingual model even though we have a high-quality translation system. ## 4.2 Comparison With Sota Table 3 lists the performance of mT5 in Monolingual setting with the previous state-of-the-art. Some of the previous work use denotation accuracy and execution accuracy which are different from the exact match we use. To make our results comparable with previous work, we apply the evaluation tools of previous work to XSEMPLR. As shown in the table, Enc-Dec (mT5) outperforms previous work on all NLs of MSchema2QA, MCWQ, 4If not specified, the numbers in this section are the averaged exact matching scores across all NLs. ![5_image_0.png](5_image_0.png) MNLMaps, MATIS datasets and obtains comparable results on the others. ## 4.3 Analysis Of Codex And Bloom We evaluate Codex and BLOOM to test the performance of in-context learning of large language models. As shown in Table 2, LLMs (Codex and BLOOM) are outperformed by mT5 model by a large margin for both Few-shot (11.79/24.60) and Zero-shot (15.13/28.39) settings. This suggests that multilingual LLMs are still inadequate for crosslingual semantic parsing tasks. ## 4.4 Comparison Between Few-Shot Settings We also test the Enc-Dec (mT5) and Enc-PTR (XLM-R) models on two types of few-shot experiments, including Monolingual and Cross-lingual Few-Shot. As can be seen, mT5 of cross-lingual few-shot outperforms monolingual few-shot by a large 22.21 exact match score (excluding MCoNaLa), while XLM-R has a smaller gain of 15.12. We can summarize two observations: 1) pretraining on the English NL can significantly boost the performance of few-shot on target NLs (En + Target Few-shot -> Target NL), and 2) the model with higher crosslingual capability gains more improvement, such as mT5 gains more than XLM-R. Both observations demonstrate the capability of cross-lingual models to transfer knowledge from the source to the target NLs. ## 4.5 Analysis Of Multilingual Training We compare the performance of Monolingual and Multilingual settings. As can be seen in Table 2, | Dataset | Language | SOTA (Source) | XSEMPLR | Metric | |------------|------------------------------------|------------------------------------|---------------------|---------------------| | English | 77.10 (Li et al., 2023) | 67.60 | Exact Match | | | English | 81.00 (Li et al., 2023) | 69.10 | Execution | | | Vietnamese | 69.00 (Shi et al., 2022a) | 43.00 | Exact Match | | | Vietnamese | 64.50 (Shi et al., 2022a) | 42.00 | Execution | | | Chinese | 66.1⋆ (Shi et al., 2022a) | 39.90 | Exact Match | | | MSpider | Arabic | 29.17 (Moradshahi et al., 2020) | 53.55 | Exact Match | | German | 51.84 (Moradshahi et al., 2020) | 72.19 | Exact Match | | | Spanish | 56.01 (Moradshahi et al., 2020) | 68.69 | Exact Match | | | Farsi | 54.88 (Moradshahi et al., 2020) | 60.25 | Exact Match | | | Finnish | 52.43 (Moradshahi et al., 2020) | 68.28 | Exact Match | | | Italian | 54.87 (Moradshahi et al., 2020) | 67.97 | Exact Match | | | Japanese | 46.27 (Moradshahi et al., 2020) | 62.41 | Exact Match | | | Polish | 49.69 (Moradshahi et al., 2020) | 60.87 | Exact Match | | | Turkish | 56.84 (Moradshahi et al., 2020) | 70.03 | Exact Match | | | Chinese | 36.60 (Moradshahi et al., 2020) | 56.54 | Exact Match | | | MSchema2QA | English | 27.70 (Cui et al., 2022) | 39.29 | Exact Match | | Hebrew | 16.60 (Cui et al., 2022) | 33.02 | Exact Match | | | Kannada | 16.60 (Cui et al., 2022) | 23.74 | Exact Match | | | Chinese | 23.00 (Cui et al., 2022) | 24.56 | Exact Match | | | MCWQ | | | | | | MNLMaps | English | 85.70 (Duong et al., 2017) | 92.73 | Exact Match | | German | 83.00 (Duong et al., 2017) | 90.57 | Exact Match | | | English | 77.20 (Sherborne and Lapata, 2023) | 83.78 | Denotation accuracy | | | Farsi | 67.80 (Sherborne and Lapata, 2023) | 80.59 | Denotation accuracy | | | Portuguese | 66.10 (Sherborne and Lapata, 2023) | 78.60 | Denotation accuracy | | | Spanish | 64.10 (Sherborne and Lapata, 2023) | 76.58 | Denotation accuracy | | | German | 66.60 (Sherborne and Lapata, 2023) | 80.63 | Denotation accuracy | | | Chinese | 64.90 (Sherborne and Lapata, 2023) | 78.38 | Denotation accuracy | | | MATIS | English | 90.00 (Zou and Lu, 2018) | 79.06 | Denotation accuracy | | Thai | 86.10 (Zou and Lu, 2018) | 72.56 | Denotation accuracy | | | German | 76.80 (Zou and Lu, 2018) | 73.29 | Denotation accuracy | | | Greek | 83.20 (Zou and Lu, 2018) | 76.90 | Denotation accuracy | | | Chinese | 82.10 (Zou and Lu, 2018) | 75.81 | Denotation accuracy | | | Indonesian | 83.90 (Zou and Lu, 2018) | 80.14 | Denotation accuracy | | | Swedish | 83.90 (Zou and Lu, 2018) | 79.78 | Denotation accuracy | | | Farsi | 76.80 (Zou and Lu, 2018) | 69.68 | Denotation accuracy | | | MGeoQuery† | English | 81.90 (Sherborne and Lapata, 2021) | 69.38‡ | Denotation accuracy | | MOvernight | German | 66.20 (Sherborne and Lapata, 2021) | 66.90‡ | Denotation accuracy | | Chinese | 66.00 (Sherborne and Lapata, 2021) | 62.59‡ | Denotation accuracy | | | Russian | 9.56 (Wang et al., 2022) | 6.38 | Code BLEU-4 | | | MCoNaLa | Spanish | 2.64 (Wang et al., 2022) | 2.55 | Code BLEU-4 | | Japanese | 9.90 (Wang et al., 2022) | 7.66 | Code BLEU-4 | | Table 3: Comparison between mT5 monolingual and state-of-the-art models, except that MCoNaLa dataset uses cross-lingual zero-shot settings because the dataset only contains English training samples. mT5 obtains better or comparable performance on all datasets. ⋆ Previous SOTA model only contains exact match scores for Chinese. † The SOTA model of MGeoQuery uses Lambda as MR while XSEMPLR uses SQL. ‡ The SOTA model of MOvernight uses denotation accuracy and XSEMPLR uses exact match. ![7_image_0.png](7_image_0.png) mT5 improves by 2.31 on MGeoQuery, and XLMR improves by 8.41 on MATIS dataset. This demonstrates that Enc-Dec/Enc-PTR (mT5/XLMR) can be improved by training in a mixture of various languages. However, not all datasets can boost performance via such training. The average change of mT5/XLM-R is around -2/+2 points. We further explore the reason for the performance drop in multilingual training. As shown in Figure 3, most of the major NLs can obtain performance gain, except that English performance drops in 7 datasets and gains in 3 datasets. This is known as "Curse of Multilinguality" (Pfeiffer et al., 2022). Similarly in CLSP, performance of English (high resource NLs) is easier to drop in multilingual training. ## 4.6 Cross-Lingual Performance Gap To examine the transfer ability of the cross-lingual models, we investigate the performance difference between the Monolingual and Cross-lingual Few/Zero-shot for each dataset using mT5. As shown in Figure 4, by examining the distance between green and orange lines, we find that for the zero-shot setting, the cross-lingual transfer performance gap is significant, which is even larger than 50% on the NLmaps dataset, demonstrating the limitation of current cross-lingual models. However, by examining the difference between orange and blue lines, we also find that using even 10% of samples in target data, the transfer gap will be shortened rapidly. The few-shot gap usually shrinks to around half of the zero-shot gap, e.g., the Schema2QA dataset. For MATIS, the gap even shrinks to around 5 which is very close to the performance of the monolingual setting. ## 4.7 Analysis Over Natural Languages We pick the best model mT5 and analyze its performance in the zero-shot setting in Figure 5. Results show that the performance of Chinese transfer learning (En -> Zh) and English monolingual training (En -> En) usually is the largest compared with transfer learning of other NLs. On the other hand, German usually has the smallest transfer performance loss. This is probably because of two reasons. First, the Chinese data source is less than German when pretraining on mT5. Second, the language family of English is closer to German (IE: Germanic) compared with Chinese (Sino-Tibetan). This phenomenon is discussed in Hu et al. (2020), and we find this conclusion also holds for crosslingual semantic parsing tasks. ## 4.8 Analysis Over Meaning Representations Table 4 shows the performance of mT5 on various MRs in MGeoQuery. In almost all languages, FunQL outperforms the other three meaning representations, and SQL obtains the worst performance. This is consistent with the observation of Guo et al. (2020). We speculate that there are two possible reasons: (1) the grammar of SQL is more complex than the others, and FunQL enjoys much easier grammar (Li et al., 2022), and (2) FunQL contains a number of brackets that provide information of Table 4: Monolingual performance of mT5 on MGeoQuery. FunQL/SQL obtains the best/worst performance. | SQL | Prolog | Lambda | FunQL | | |------------|----------|----------|---------|-------| | English | 76.50 | 81.59 | 76.50 | 89.89 | | German | 68.23 | 64.26 | 72.20 | 71.83 | | Thai | 68.59 | 63.90 | 70.04 | 76.17 | | Chinese | 70.04 | 63.18 | 74.37 | 77.62 | | Farsi | 64.98 | 61.73 | 64.62 | 75.45 | | Greek | 71.84 | 75.81 | 78.70 | 85.92 | | Indonesian | 75.09 | 75.09 | 78.34 | 87.00 | | Swedish | 75.45 | 77.26 | 79.78 | 84.48 | | Average | 71.34 | 70.35 | 74.32 | 81.04 | structure to the models (Shu et al., 2021). ## 5 Related Work Cross-lingual Semantic Parsing Most semantic parsing datasets are originally in English such as GeoQuery (Zelle and Mooney, 1996), ATIS (Finegan-Dollak et al., 2018), Overnight (Wang et al., 2015), and Spider (Yu et al., 2018). Cross-lingual Semantic Parsing datasets are usually constructed by translating the English user questions into other languages (Dou et al., 2022; Athiwaratkun et al., 2022). For example, Lu and Ng (2011) translate GeoQuery English queries to create a Chinese version. Min et al. (2019) and Nguyen et al. (2020) create Chinese and the Vietnamese translation of Spider. However, existing CLSP datasets follow different formats and are independently studied as separate efforts. We aim to provide a unified benchmark and modeling framework to facilitate systematic evaluation and generalizable methodology. Multilingual Language Models There has been significant progress in multilingual language models. MUSE (Conneau et al., 2017) aligns monolingual word embeddings in an unsupervised way without using any parallel corpora. XLM (Lample and Conneau, 2019) is a pretrained language model based on RoBERTa (Liu et al., 2019) which offers cross-lingual contextualized word representations. Similarly, mBERT is developed as the multilingual version of BERT Devlin et al. (2018). XLMR (Conneau et al., 2019) outperforms mBERT and XLM in sequence labeling, classification, and question answering. Focusing on sequence-to-sequence tasks such as machine translation, mBART (Liu et al., 2020) extends BART by introducing multilingual denoising pretraining. mT5 (Xue et al., 2020) extends T5 by pretraining on the multilingual dataset mC4. Multilingual large language models have been proposed such as BLOOM (Scao et al., 2022) and XGLM (Lin et al., 2022). From multilingual embeddings to multilingual large language models, there have been more effective representations as well as more languages covered (Srivastava et al., 2022). We aim to systematically evaluate these models on CLSP, which is understudied by existing work. Cross-lingual NLP Benchmarks Cross-lingual benchmarks have been established in many NLP tasks. XNLI is a large-scale corpus aimed to provide a standardized evaluation set (Conneau et al., 2018). Hu et al. (2020) developed XTREME to evaluate how well multilingual representations in 40 languages can generalize. XGLUE is another dataset used to implement evaluation in various cross-lingual tasks (Liang et al., 2020). MLQA (Lewis et al., 2019), XQuAD (Artetxe et al., 2019), and XOR QA (Asai et al., 2020) are three evaluation frameworks for cross-lingual question answering. Sun and Duh (2020) introduce CLIRMatrix by collecting multilingual datasets from Wikipedia for cross-lingual information retrieval (Zbib et al., 2019; Oard et al., 2019; Zhang et al., 2019; Shi et al., 2021; Chen et al., 2021b). For cross-lingual summarization, NLCS was built by Zhu et al. (2019) to tackle the problem of the divided summarization and translation. Nonetheless, there is no unified benchmark for CLSP, and thus we are unable to calibrate the performance of multilingual language models on CLSP. ## 6 Conclusion We build XSEMPLR, a unified benchmark for cross-lingual semantic parsing with multiple natural languages and meaning representations. We conduct a comprehensive benchmark study on three representative types of multilingual language models. Our results show that mT5 with monolingual training yields the best performance, while notably multilingual LLMs are still inadequate to perform cross-lingual semantic parsing tasks. Moreover, the performance gap between monolingual training and cross-lingual transfer learning is still significant. These findings call for both improved semantic parsing capabilities of multilingual LLMs and stronger cross-lingual transfer learning techniques for semantic parsing. ## Limitations While we cover a wide range of different factors of cross-lingual semantic parsing (e.g., tasks, datasets, natural languages, meaning representations, domains), we cannot include all possible dimensions along with these aspects. Furthermore, we focus on the linguistic generalization ability for semantic parsing because the questions are translated from the English datasets. In the future, we will explore questions raised by native speakers in each language to study the model ability under variations in cultural backgrounds and information-seeking needs. ## Acknowledgment We thank Victoria Lin, Bailin Wang, Robin Jia, Ice Pasupat, Tianze Shi, Bing Xiang, Luke Zettlemoyer for their early feedback and discussions. We thank Peng Shi, Yucheng Nie, Junru Liu, Tom Sherborne, Harsh Maniar, Xiangyu Dong, Chen Wang, Songlin Hou, Haoran Zhang, Nan Zhang, and Sarkar Das for their valuable help and comments. ## References Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of monolingual representations. arXiv preprint arXiv:1910.11856. Akari Asai, Jungo Kasai, Jonathan H Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2020. Xor qa: Cross-lingual open-retrieval question answering. arXiv preprint arXiv:2010.11856. Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, et al. 2022. Multi-lingual evaluation of code generation models. *arXiv preprint arXiv:2210.14868*. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1533–1544. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021a. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Yanda Chen, Chris Kedzie, Suraj Nair, Petra Galušcáková, Rui Zhang, Douglas W Oard, and Kath- ˇ leen McKeown. 2021b. Cross-language sentence selection via data augmentation and rationale training. arXiv preprint arXiv:2106.02293. Hugh A Chipman, Edward I George, Robert E McCulloch, and Thomas S Shively. 2022. mbart: Multidimensional monotone bart. *Bayesian Analysis*, 17(2):515–544. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. *arXiv preprint* arXiv:1710.04087. Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. arXiv preprint arXiv:1809.05053. Ruixiang Cui, Rahul Aralikatte, Heather Lent, and Daniel Hershcovich. 2021. Multilingual compositional wikidata questions. arXiv preprint arXiv:2108.03509. Ruixiang Cui, Rahul Aralikatte, Heather Lent, and Daniel Hershcovich. 2022. Compositional generalization in multilingual semantic parsing over wikidata. *Transactions of the Association for Computational Linguistics*, 10:937–955. Deborah A Dahl, Madeleine Bates, Michael K Brown, William M Fisher, Kate Hunicke-Smith, David S Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: The atis-3 corpus. In *HUMAN LANGUAGE* TECHNOLOGY: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Longxu Dou, Yan Gao, Mingyang Pan, Dingzirui Wang, Wanxiang Che, Dechen Zhan, and Jian-Guang Lou. 2022. Multispider: Towards benchmarking multilingual text-to-sql semantic parsing. arXiv preprint arXiv:2212.13492. Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip R Cohen, and Mark Johnson. 2017. Multilingual semantic parsing and code-switching. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 379–389. Catherine Finegan-Dollak, Jonathan K Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-sql evaluation methodology. arXiv preprint arXiv:1806.09029. Jiaqi Guo, Qian Liu, Jian-Guang Lou, Zhenwen Li, Xueqing Liu, Tao Xie, and Ting Liu. 2020. Benchmarking meaning representations in neural semantic parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1520–1540. Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. *arXiv* preprint arXiv:1810.07942. Carolin Haas and Stefan Riezler. 2016. A corpus and semantic parser for multilingual natural language querying of openstreetmap. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 740–750. Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, et al. 2022. Folio: Natural language reasoning with firstorder logic. *arXiv preprint arXiv:2209.00840*. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In *International Conference on Machine Learning*, pages 4411–4421. PMLR. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. arXiv preprint arXiv:1704.08760. Bevan Jones, Mark Johnson, and Sharon Goldwater. 2012. Semantic parsing with bayesian tree transducers. In *Proceedings of the 50th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 488–496. Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2020. Measuring compositional generalization: A comprehensive method on realistic data. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, et al. 2019. Measuring compositional generalization: A comprehensive method on realistic data. *arXiv preprint arXiv:1912.09713*. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. *Advances in* Neural Information Processing Systems (NeurIPS). Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, et al. 2022. The bigscience roots corpus: A 1.6 tb composite multilingual dataset. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Carolin Lawrence and Stefan Riezler. 2016. Nlmaps: A natural language interface to query openstreetmap. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 6–10. Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian ˘ Riedel, and Holger Schwenk. 2019. Mlqa: Evaluating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475. Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2020. Mtop: A comprehensive multilingual task-oriented semantic parsing benchmark. arXiv preprint arXiv:2008.09335. Jinyang Li, Binyuan Hui, Reynold Cheng, Bowen Qin, Chenhao Ma, Nan Huo, Fei Huang, Wenyu Du, Luo Si, and Yongbin Li. 2023. Graphix-t5: Mixing pretrained transformers with graph-aware layers for textto-sql parsing. *arXiv preprint arXiv:2301.07507*. Zhenwen Li, Jiaqi Guo, Qian Liu, Jian-Guang Lou, and Tao Xie. 2022. Exploring the secrets behind the learning difficulty of meaning representations for semantic parsing. In *Proceedings of the 2022 Conference on* Empirical Methods in Natural Language Processing, pages 3616–3625, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, et al. 2020. Xglue: A new benchmark dataset for cross-lingual pretraining, understanding and generation. *arXiv* preprint arXiv:2004.01401. Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, et al. 2022. Few-shot learning with multilingual generative language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9019–9052. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Wei Lu and Hwee Tou Ng. 2011. A probabilistic forestto-string model for language generation from typed lambda calculus expressions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1611–1622. Qingkai Min, Yuefeng Shi, and Yue Zhang. 2019. A pilot study for chinese sql semantic parsing. *arXiv* preprint arXiv:1909.13293. Mehrad Moradshahi, Giovanni Campagna, Sina J Semnani, Silei Xu, and Monica S Lam. 2020. Localizing open-ontology qa semantic parsers in a day using machine translation. *arXiv preprint arXiv:2010.05106*. Anh Tuan Nguyen, Mai Hoang Dao, and Dat Quoc Nguyen. 2020. A pilot study of text-to-sql semantic parsing for vietnamese. arXiv preprint arXiv:2010.01891. Douglas W Oard, Marine Carpuat, Petra Galušcáková, ˇ Joseph Barrow, Suraj Nair, Xing Niu, Han-Chin Shing, Weijia Xu, Elena Zotkina, Kathleen McKeown, et al. 2019. Surprise languages: rapid-response cross-language ir. In *ACM NTCIR-14 Conference*, volume 10. Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022. Lifting the curse of multilinguality by pre-training modular transformers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3479–3495, Seattle, United States. Association for Computational Linguistics. Prafull Prakash, Saurabh Kumar Shashidhar, Wenlong Zhao, Subendhu Rongali, Haidar Khan, and Michael Kayser. 2020. Compressing transformer-based semantic parsing models using compositional code embeddings. *arXiv preprint arXiv:2010.05002*. Patti Price. 1990. Evaluation of spoken language systems: The atis domain. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza. 2020. Don't parse, generate! a sequence to sequence architecture for task-oriented semantic parsing. In *Proceedings of The Web Conference 2020*, pages 2962–2968. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´ Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. Tom Sherborne and Mirella Lapata. 2021. Zeroshot cross-lingual semantic parsing. arXiv preprint arXiv:2104.07554. Tom Sherborne and Mirella Lapata. 2022. Zero-shot cross-lingual semantic parsing. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4134–4153, Dublin, Ireland. Association for Computational Linguistics. Tom Sherborne and Mirella Lapata. 2023. Metalearning a cross-lingual manifold for semantic parsing. *Transactions of the Association for Computational Linguistics*, 11:49–67. Tom Sherborne, Yumo Xu, and Mirella Lapata. 2020. Bootstrapping a crosslingual semantic parser. *arXiv* preprint arXiv:2004.02585. Peng Shi, Linfeng Song, Lifeng Jin, Haitao Mi, He Bai, Jimmy Lin, and Dong Yu. 2022a. Cross-lingual textto-SQL semantic parsing with representation mixup. In *Findings of the Association for Computational* Linguistics: EMNLP 2022, pages 5296–5306, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Peng Shi, Rui Zhang, He Bai, and Jimmy Lin. 2021. Cross-lingual training of dense retrievers for document retrieval. In *Proceedings of the 1st Workshop* on Multilingual Representation Learning, pages 251– 253. Peng Shi, Rui Zhang, He Bai, and Jimmy Lin. 2022b. Xricl: Cross-lingual retrieval-augmented in-context learning for cross-lingual text-to-sql semantic parsing. *arXiv preprint arXiv:2210.13693*. Chang Shu, Yusen Zhang, Xiangyu Dong, Peng Shi, Tao Yu, and Rui Zhang. 2021. Logic-consistency text generation from semantic parses. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 4414–4426, Online. Association for Computational Linguistics. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *arXiv preprint* arXiv:2206.04615. Shuo Sun and Kevin Duh. 2020. Clirmatrix: A massively large collection of bilingual and multilingual datasets for cross-lingual information retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4160–4170. Raymond Hendy Susanto and Wei Lu. 2017a. Neural architectures for multilingual semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 38–44. Raymond Hendy Susanto and Wei Lu. 2017b. Semantic parsing with neural hybrid trees. In Proceedings of the AAAI Conference on Artificial Intelligence. Shyam Upadhyay, Manaal Faruqui, Gokhan Tür, Hakkani-Tür Dilek, and Larry Heck. 2018. (almost) zero-shot cross-lingual spoken language understanding. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6034–6038. IEEE. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *arXiv preprint arXiv:1706.03762*. Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332–1342. Zhiruo Wang, Grace Cuenca, Shuyan Zhou, Frank F Xu, and Graham Neubig. 2022. Mconala: A benchmark for code generation from multiple natural languages. arXiv preprint arXiv:2203.08388. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. *arXiv preprint arXiv:1609.08144*. Silei Xu, Giovanni Campagna, Jian Li, and Monica S Lam. 2020a. Schema2qa: High-quality and low-cost q&a agents for the structured web. In *Proceedings of* the 29th ACM International Conference on Information & Knowledge Management, pages 1685–1694. Weijia Xu, Batool Haider, and Saab Mansour. 2020b. End-to-end slot alignment and recognition for crosslingual nlu. *arXiv preprint arXiv:2004.14353*. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from stack overflow. In 2018 IEEE/ACM 15th international conference on mining software repositories (MSR), pages 476–486. IEEE. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. arXiv preprint arXiv:1809.08887. Rabih Zbib, Lingjun Zhao, Damianos Karakos, William Hartmann, Jay DeYoung, Zhongqiang Huang, Zhuolin Jiang, Noah Rivkin, Le Zhang, Richard Schwartz, et al. 2019. Neural-network lexical translation for cross-lingual ir from text and speech. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 645–654. John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In *Proceedings of the national conference* on artificial intelligence, pages 1050–1055. Luke S Zettlemoyer and Michael Collins. 2012. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. arXiv preprint arXiv:1207.1420. Rui Zhang, Caitlin Westerfield, Sungrok Shim, Garrett Bingham, Alexander Fabbri, Neha Verma, William Hu, and Dragomir Radev. 2019. Improving lowresource cross-lingual document retrieval by reranking with deep bilingual representations. *arXiv* preprint arXiv:1906.03492. Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. Ncls: Neural cross-lingual summarization. arXiv preprint arXiv:1909.00156. Yanyan Zou and Wei Lu. 2018. Learning cross-lingual distributed logical representations for semantic parsing. *arXiv preprint arXiv:1806.05461*. ## A Data Construction Details In this section, we introduce the details of data collection, natural languages, meaning representations, and dataset statistics. ## A.1 Data Collection Multilingual ATIS ATIS (Price, 1990; Dahl et al., 1994) contains user questions for a flightbooking task. The original user questions are in English. We add the translations in Spanish, German, French, Portuguese, Japanese, Chinese from Xu et al. (2020b). Furthermore, Upadhyay et al. (2018) provide translations in Hindi and Turkish but only for a subset of utterances. Susanto and Lu (2017a) provide translations in Indonesian and Chinese, and Sherborne et al. (2020) provide translations in Chinese and German, but neither is available through LDC. Therefore, we don't include these. For meaning representations, we focus on the task of NLI to databases and thus collect SQL from Iyer et al. (2017); Finegan-Dollak et al. (2018), while there are other formats available such as logical forms (Zettlemoyer and Collins, 2012) and BIO tags for slot and intent (Upadhyay et al., 2018). To unify SQL formats across datasets, we rewrite the SQL queries following the format of Spider (Yu et al., 2018). We follow the question splits from Finegan-Dollak et al. (2018). Through manual inspection, we discard 52 examples which do not have aligned translations from Xu et al. (2020b). This gives 5228 examples with 4303 training, 481 dev, and 444 test. Multilingual GeoQuery GeoQuery (Zelle and Mooney, 1996) contains user questions about US geography. The original user questions are in English. One of the earliest work on cross-lingual semantic parsing is on the Chinese version of GeoQuery created by Lu and Ng (2011). Later, Jones et al. (2012) create German, Greek, and Thai translations, and Susanto and Lu (2017b) create Indonesian, Swedish, and Farsi translations. We include all these 8 languages. Furthermore, GeoQuery has several meaning representations available. To include multiple meaning representations, we collect Prolog and Lambda Calculus from Guo et al. (2020), FunQL from Susanto and Lu (2017b), and SQL from Finegan-Dollak et al. (2018). To unify SQL formats across datasets, we rewrite the SQL queries following the format of Spider (Yu et al., 2018). We follow the question splits from FineganDollak et al. (2018). Through manual inspection, we discard 3 examples that do not have corresponding FunQL representations. This gives 874 examples with 548 training, 49 dev, and 277 test. Multilingual Spider Spider (Yu et al., 2018) is a human-annotated complex and cross-domain textto-SQL datasets. The original Spider uses English utterances and database schemas. To include utterances in other languages, we include the Chinese version (Min et al., 2019) and the syllable-level Vietnamese version (Nguyen et al., 2020). In this way, each SQL query is paired with a database schema in English and an utterance in three languages. Because the test set is not public, we include only the training and dev set. We also exclude GeoQuery examples from its training set because we use the full version of GeoQuery separately. This creates 8095 training examples and 1034 dev examples following the original splits (Yu et al., 2018). Multilingual NLmaps NLMaps (Lawrence and Riezler, 2016) is a Natural Language Interface to query the OpenStreetMap database about geographical facts. The original questions are in English, and later Haas and Riezler (2016) provide translations in German. The meaning representation is Functional Query Language designed for OpenStreetMap, which is similar to FunQL of GeoQuery. We follow the original split with 1500 training and 880 test examples. Multilingual Overnight Overnight (Wang et al., 2015) is a multi-domain semantic parsing dataset with lambda DCS logical forms executable in SEMPRE (Berant et al., 2013). The questions cover 8 domains in Calendar, Blocks, Housing, Restaurants, Recipes, Publications, Social, Basketball. The original dataset is in English, and Sherborne et al. (2020) provide translation in German and Chinese. They use machine translation for the training set and human translation on the dev and test sets. We include the Baidu Translation for Chinese and Google Translate for German. We merge all the domains together as a single dataset and follow the original split with 8754 training, 2188 dev, and 2740 test examples. MCWQ MCWQ (Cui et al., 2021) is a multilingual knowledge-based question answering dataset grounded in Wikidata. This is created by adapting the CFQ (Compositional Freebase Questions) dataset (Keysers et al., 2019) by translating the queries into SQARQL for Wikidata. The questions are in four languages including Hebrew, Kannada, Chinese, and English. The split follows maximum compound divergence (MCD) so that the test set contains novel compounds to test compositionality generalization ability. We follow the MCD3 splits with 4006 training, 733 dev, and 648 test examples. Multilingual Schema2QA Schema2QA (Xu et al., 2020a) is an open-ontology question answering dataset over scraped Schema.org web data with meaning representations in ThingTalk Query Language. Moradshahi et al. (2020) extend the original dataset with utterances in English, Arabic, German, Spanish, Farsi, Finnish, Italian, Japanese, Polish, Turkish, Chinese. The questions cover 2 domains in hotels and restaurants. The training examples are automatically generated based on templatebased synthesis, crowdsourced paraphrasing, and machine translation. The test examples are crowdsourced and manually annotated by an expert with human translations. We include training examples with all 11 languages available and pair the translations with the query in corresponding language. To make the dataset size comparable to others, we include 5% of the training set. This gives 8932 training examples and 971 test examples. We also include a no-value version of the query, because the entities in the translated utterances are localized in the new languages and thus do not align well with the values in English queries. MTOP MTOP (Li et al., 2020) is a multilingual task-oriented semantic parsing dataset with meaning representations based on hierarchical intent and slot annotations (Gupta et al., 2018). It covers 11 domains in Alarm, Calling, Event, Messaging, Music, News, People, Recipes, Reminder, Timer, Weather. It includes 6 languages in English, German, French, Spanish, Hindi, Thai. We include examples with all 6 languages available and pair the translations with the compositional decoupled representation in corresponding language. This gives 5446 training, 863 dev, 1245 test examples. MCoNaLa MCoNaLa (Wang et al., 2022) is a code generation benchmark which requires to generate Python code. It collects English examples from the English Code/Natural Language Challenge (CoNaLa (Yin et al., 2018)) dataset and further annotates a total of 896 NL-code pairs in three languages, including Spanish, Japanese, and Russian. The training and dev set contains 1903 and 476 English examples, separately. ## A.2 Language Details We assemble 9 datasets in various domains for 5 semantic parsing tasks. It covers 8 meaning representations: SQL, Lambda Calculus, Functional Query Language, Prolog, SPARQL, ThingTalk Query Language, Python, Hierarchical Intent and Slot. The questions covers 22 languages in 15 language families: Arabic(Afro-Asiatic), Chinese(Sino-Tibetan), English(IE: Germanic), Farsi(IE: Iranian), Finnish(Uralic), French(IE: Romance), German(IE: Germanic), Greek(IE: Greek), Hebrew(Afro-Asiatic), Hindi(IE: Indo-Aryan), Indonesian(Austronesian), Italian(IE: Romance), Japanese(Japonic), Kannada(Dravidian), Polish(IE: Slavic), Portuguese(IE: Romance), Russian(IE: Slavic), Spanish(IE: Romance), Swedish(IE: Germanic), Thai(Kra-Dai), Turkish(Turkic), Vietnamese(Austro-Asiatic). Each dataset has English for cross-lingual transfer over other languages. ## A.3 Meaning Representation Details Prolog uses first-order logic augmented with higherorder predicates for quantification and aggregation. Lambda Calculus is a formal system for computation, and it represents all first-order logic and naturally supports higher-order functions with constants, quantifiers, logical connectors, and lambda abstract. FunQL is a variable-free language, and it encodes compositionality using nested functionargument structures. SQL is the query language based upon relational algebra to handle relations among entities and variables in databases. The last two, ThingTalk Query Language (Xu et al., 2020a) and Hierarchical intent and slot (Gupta et al., 2018) are recently proposed for Question Answering on Web and Task-Oriented Dialogue State Tracking, respectively. Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. ## A.4 Dataset Statistics Figure 6 shows the statistics of the dataset. As can be seen, the top 3 NLs with the most samples in XSEMPLR are English, Chinese, and German, while the top 3 MRs are Lambda, SQL, and ThingTalk. ## B Experiment Details We introduce the training settings and input/output format for all experiments and settings in this section. ## B.1 Training Settings For experiments on LSTM model (Table 7), we use OpenNMT5as the implementation. For Transformer-PTR models, we use Pytorch6as the implementation. For Codex and BLOOM models, we use OpenAI API7and Huggingface API8, respectively, and for mT5 and mBART models, we leverage Huggingface9as implementation. For each model, we train 300 epochs on MGeoquery due to the less number of training instances in this 5https://opennmt.net/ 6https://pytorch.org/ 7https://platform.openai.com/docs/ api-reference 8https://huggingface.co/inference-api 9https://huggingface.co/ | Dataset | Utterance | Meaning Representation (MR) | | | | | | | |-------------------------------------------------------|---------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------|------------------------------------------------------|------------------------------------------|------|----|----| | ATIS | Liste todos os voos que chegam ao General Mitchell International de várias cidades | SELECT DISTINCT T3.FLIGHT_ID FROM CITY AS T1 JOIN AIRPORT_SERVICE AS T2 ON T1.CITY_CODE = T2.CITY_CODE JOIN FLIGHT AS T3 ON T3.FROM_AIRPORT = T2.AIRPORT_CODE JOIN AIRPORT AS T4 ON T3.TO_- AIRPORT = T4.AIRPORT_CODE WHERE T4.AIRPORT_CODE = "MKE" | | | | | | | | بزرگتری | | | | | | | | | | ن | | | | | | | | | | شهر لوئیزیانا | | | | | | | | | | | | | | | | | | | | GeoQuery | answer(A,largest(A,(city(A),loc(A,B),const(B,stateid(louisiana))))) | | | | | | | | | کدام اس | | | | | | | | | | ت | | | | | | | | | | ؟ | | | | | | | | | | Spider | 有多少摇滚歌手 | SELECT count(*) FROM singer WHERE genre = 'Rock' | | | | | | | | NLmaps | Wie viele japanische Restaurants gibt es in Paris? | query(area(keyval('name','Paris'), | keyval('is_in:country','France')), | | | | | | | nwr(keyval('cuisine','japanese')),qtype(count)) | | | | | | | | | | Overnight | what players made less than three assists over a season | ( call SW.listValue ( call SW.getProperty ( ( lambda s ( call SW.filter ( var s ) ( call SW.ensureNumericProperty ( string num_assists ) ) ( string < ) ( call SW.ensureNumericEntity ( number 3 assist ) ) ) ) ( call SW.domain ( string player ) ) ) ( string player ) ) ) | | | | | | | | MCWQ | !N | התחת | M0 | Mהא | | | | | | הילד | | | | | | | | | | של! | M1 !Mע | ASK WHERE ?x0 wdt:P749 M0 . ?x0 wdt:P26 M1 . FILTER ( ?x0 != M1 ) | | | | | | | | Schema2QA mostrami i ristoranti con più recensioni | now | => | ( | sort | param:aggregateRating.reviewCount:Number | desc | of | ( | | @org.schema.Restaurant.Restaurant ) ) [ 1 ] => notify | | | | | | | | | | MTOP | pMpEkn | rn | Eks | [IN:GET_CATEGORY_EVENT [SL:TITLE_EVENT pMpEkn rn ] ] | | | | | | þkAr kA iv\V h{ ? | | | | | | | | | | MCoNaLa | タプルdataを空白 | for i in data: print(' '.join(str(j) for j in i)) | | | | | | | | 区切りで表示する | | | | | | | | | Table 5: Examples of each dataset in XSEMPLR including diverse languages and meaning representations. ATIS: Portuguese-SQL, Geoquery: Farsi-Prolog, Spider: Vietnamese-SQL, NLmaps: German-FunQL, Overnight: English-Lambda Calculus, MCWQ: Hebrew-SPARQL, Schema2QA: Arabic-ThingTalk Query Language, MTOP: Hindi-Hierarchical Intent and Slot, MCoNaLa: Japanese-Python. dataset and 100 epochs on the rest of the datasets. The learning rate is chosen from {1e-5, 3e-5, 5e-5, 1e-4} according to the parameter search on the dev set. For Codex and BLOOM, the maximum length of the generated sequence is set to 256 tokens. For Codex, the temperature is set to 0. For BLOOM, if the generated result does not contain complete MR, we append the generated results to the input and redo the generation and repeat this process over again until the generated result is complete. However, the maximum API call of one sample is set to 5 times. After 5 calls, we use the generated result as the final result. We use default settings for the rest of the parameters. We run the model on 8 RTX A6000 GPUs, and it takes from hours to several days according to the data size. The model architecture from Huggingface is mT5-large, mBART-large, and mBERT-base. For Codex and BLOOM, we use code-davinci00210, and bigscience/bloom. The batch size is set to 16 for training mT5/mBART and 32 for train- ## Ing Transformer-Ptr Models. B.2 Input/Output Format For input of the Transformer-PTR models, we directly feed the query into the model. For MSpider, we append the table to the end of the sequence with the format "[CLS] Query [SEP] Schema name [SEP] Table 1 [SEP] Table 2 ...", each table is represented by "table name.column name". We add "table name.*" to each table to represent all columns. For instance11: [CLS] how many singers do we have? [SEP] * [SEP] stadium.* stadium.stadium_- id stadium.location stadium.name stadium.capacity stadium.highest stadium.lowest stadium.average [SEP] singer.* singer.singer_id singer.name singer.country singer.song_name singer.song_release_year singer.age singer.is_male [SEP] concert.* concert.concert_id concert.concert_- 11In these examples, we use "-" to connect the words crossing lines. ![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) name concert.theme concert.stadium_id concert.year [SEP] singer_in_concert.* singer_in_concert.concert_id singer_in_- concert.singer_id [SEP] As for the output, we scan the tokens in the label and replace the ones that appear in the source text with "@ptrN" where "N" is a natural number showing the index of the token in the source text. We remove the "FROM" clause in SQL. In this way, the pointer network can easily identify which tokens are copied from the source. For instance: [CLS] select count ( @ptr19 ) [SEP] concert_singer For mT5 and mBART, we use the tokenizers provided by Huggingface to tokenize the queries. And for MSipder dataset, we append the table columns one by one to the tail, separated by "||". For instance: $$\begin{array}{r l r}{\mathbf{d}}&{{}}&{|\ |}&{}&{\operatorname{stat}}\\ {}&{|\ |}&{}&{}&{\operatorname{stat}}\\ {\operatorname{est}}&{|\ |}&{}&{}&{\mathbf{s}}\\ {\operatorname{i}\mathbf{g}\mathbf{e}}&{|\ |}&{}&{}&{\mathbf{s}}\end{array}$$ how many singers do we have? || stadium.stadium_id || stadium.location || stadium.name || stadium.capacity || stadium.highest || stadium.lowest || stadium.average || singer.singer_- id || singer.name || singer.country singer.song_name || singer.song_- release_year || singer.age singer.is_- male || concert.concert_id || concert.concert_name || concert.theme || concert.stadium_id || concert.year || singer_in_concert.concert_id || singer_- in_concert.singer_id $${\textsf{t}}({\textsf{s i n}}$$ $$\mathrm{\boldmath{\pi}}\mathrm{\boldmath{d}}\ )\ \mathrm{\boldmath{f}}\mathrm{\boldmath{n}}$$ The output is simply the MR itself. select count ( singer_id ) from singer For Codex and BLOOM, we use 8-shot in-context learning (Han et al., 2022). Specifically, we concatenate 8 pairs of examples and a query as the input. For MSpider, we additionally list the information of the schema including table names and column names of each example. It is worth noting that the number of examples of BLOOM for in-context learning decrease to 4 on MATIS dataset and decreases to 1 on MSpider dataset because the number of tokens exceeds the input limit. The example of MSpider input is listed as follows: \# Translate the following sentences into sql: \# Question: \# Who performed the song named "Le Pop"? \# The information of tables: \# 0. Table name is: Songs. The table columns are as follows: SongId, Title \# 1. Table name is: Albums. The table columns are as follows: AId, Title, Year, Label, Type \# 2. Table name is: Band. The table columns are $$\mathrm{tr}19\quad)\quad\mathbb{D}$$ —- 3 Tables Ignored —- \# 6. Table name is: Vocals. The table columns are as follows: SongId, Bandmate, Type \# Translation results are as follows: \# SELECT T2.firstname , T2.lastname FROM Performance AS T1 JOIN Band AS T2 ON T1.bandmate = T2.id JOIN Songs AS T3 ON T3.SongId = T1.SongId WHERE T3.Title = "Le Pop" —- More Examples Ignored —- ![17_image_0.png](17_image_0.png) # Translate the following sentences $$\begin{array}{l l}{{\#\quad\mathrm{Translate}\quad\mathrm{the}}}\\ {{\mathrm{into}\;\;\mathrm{sql}\,\colon}}\end{array}$$ # Question: # Tell me the types of the policy used by the customer named "Dayana Robel". # The information of tables: $\frac{1}{2}$ $$\mathbf{\Sigma}=\mathbf{\Sigma}6\mathbf{\Sigma}$$ \# Translation results are as follows: The expected output is the MR with a starting symbol "\#". \# SELECT DISTINCT t3.policy_type_code FROM customers AS t1 JOIN customers_- policies AS t2 ON t1.customer_id = t2.customer_id JOIN available_policies AS t3 ON t2.policy_id = t3.policy_id WHERE t1.customer_name = "Dayana Robel" ## B.3 Experiment Path The experiments are done in the following order: we first evaluate 2 Enc-PTR and 2 Enc-Dec baseline models in the Monolingual setting. Then, we pick two of them with the best performance to evaluate on all the other settings. Finally, we evaluate LLMs using in-context learning in two finetuningfree settings. ## C Results And Discussions This section lists the results for each NL and MR and introduces the comparison with SOTA, training data size and few-shot learning, and error analysis. ## C.1 Results For Each Nl And Mr We list some of the results of our models on various datasets and languages. Table 7, 8, 9, 11, 10 show the Monolingual performance of LSTM, mBERT+PTR, XLM-R+PTR, mBART, and mT5. Table 12, 13, 14, 15 shows the Monolingual FewShot performance of XLM-R+PTR, mT5, Codex, and BLOOM. Table 16, and 17 show the Multilingual performance of XLM-R+PTR, and mT5. Table 18, 19, 20, 21 show the Cross-lingual ZeroShot Transfer performance of XLM-R+PTR, mT5, Codex, and BLOOM. Table 22, 23 show the Crosslingual Few-Shot Transfer performance of XLMR+PTR, and mT5. ## C.2 Training Data Size And Few-Shot Learning Figure 7 displays the averaged Exact Matching scores (EM) across all languages on MGeoQuery dataset, where each line represents a meaning representation, and each dot on the line represents a few-shot experiment using such meaning representation. The X-axis is the percentage of data we use to train the model. Results show that the performance was largely influenced by the number of samples in the training set. The performance can be as high as 70% if given sufficient data, while training on 10% of training data may lead to 0 scores. Besides, among all four MRs, the performance of FunQL increases most steadily, showing its robustness. ## C.3 Error Analysis We conduct error analysis on MGeoQuery dataset. First, we select the English split with SQL MR, and compare the golden MR and the predictions generated by mT5. We classify the errors into 4 types: - Syntax error: The prediction contains a syntax error. In other words, SQL engine can parse the predictions because of the grammar issues. | Error Type | Description | Proportion(%) | |-----------------------|---------------------------------------------------------|-----------------| | Syntax error | Incorrect program syntax (invalid grammar) | 17.14 | | Semantic error | 64.27 | | | Token | Incorrect or missing column/value/operator | 22.85 | | Structure | Incorrect program structure (valid grammar) | 41.42 | | Incorrect Exact Match | Incorrect exact match with the correct execution answer | 18.47 | Table 6: Error analysis on MGeoQuery English test set. The MR is SQL. - Token error: one of the two types of semantic errors. Predictions contain wrong column names, values (such as strings and numbers), and operators (not including keywords). - Structure error: one of the two types of semantic errors. Predictions contain wrong structures. It means some keywords of SQL are incorrect or missing. - Incorrect Exact Match: although the exact match shows the prediction is different from the golden one, the execution results are the same. As shown in Table 6, most of the errors are semantic errors (64.27%) in which the structure error is around two times of token error (41.42% vs. 22.85%). Syntax error and incorrect exact match occupy around 18% of errors respectively. MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ⋆ MSchema2QA MTOP English 48.9 76.8 15.8 72.2 22.4 92 48.1 78.6 Arabic - – - – - – 33.1 – Chinese 44.6 61.2 10.2 - 20.8 75.1 35.9 – Farsi - 52.0 - – - – 24.4 – Finnish - – - – - – 24.7 - French 47.5 - – - – - – 60.8 German 47.7 59.5 - 64.9 2.1 - 38.3 58.5 Greek - 51.4 - – - – - – Hebrew - – - – - 74.0 - – Hindi - – - – - – - 58.6 Indonesian - 69.3 - – - – - – Italian - – - – - – 33.8 – Japanese 2.7 - – - – - 49.6 – Kannada - – - – - 77.7 - – Polish - – - – - – 31.4 – Portuguese 46.4 - – - – - – – Spanish 7.2 - – - – - 44.5 63.8 Swedish - 63.3 - – - – - – Thai - 48.6 - – - – - 60.0 Turkish - – - – - – 41.4 – Vietnamese - – 8.6 - – - – – MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP MCoNaLa English 37.33 88.08 55.4 85.8 61.82 35.49 64.98 86.58 5.87 Arabic - – - – - – 48.09 - – Chinese 32.26 63.9 42.6 - 53.36 22.38 43.87 - – Farsi - 80.86 - – - – 46.65 - – Finnish - – - – - – 50.26 - – French 34.21 - – - – - – 75.18 – German 37.56 85.92 - 81.84 57.22 - 60.56 73.16 – Greek - 86.64 - – - – - – – Hebrew - – - – - 24.38 - – – Hindi - – - – - – - 70.97 – Indonesian - 84.84 - – - – - – - Italian - – - – - – 50.26 - – Japanese - – - – - – 48.97 - – Kannada - – - – - 11.57 - – – Polish - – - – - – 45.31 - – Portuguese 36.64 - – - – - – - – Russian - – - – - – - – – Spanish 5.76 - – - – - 62.51 77.2 - Swedish - 87.36 - – - – - – – Thai - 81.58 - – - – - 69.36 – Turkish - – - – - – 56.33 - – Vietnamese - – 23.2 - – - – - – Table 8: The performance of mBERT+PTR with Monolingual setting on different datasets and different languages. | MATIS | MGeoQuery | MSpider | MNLmaps | MOvernight | MCWQ | MSchema2QA | MTOP | MCoNaLa | | |------------|-------------|-----------|-----------|--------------|--------|--------------|--------|-----------|------| | English | 36.71 | 88.45 | 53.1 | 86.02 | 62.99 | 37.19 | 73.53 | 90.54 | 7.69 | | Arabic | - | - | - | - | - | - | 58.08 | - | - | | Chinese | 34.91 | 77.98 | 44.1 | - | 56.93 | 19.29 | 48.92 | - | - | | Farsi | - | 81.23 | - | - | - | - | 60.56 | - | - | | Finnish | - | - | - | - | - | - | 64.26 | - | - | | French | 38.31 | - | - | - | - | - | - | 78.58 | - | | German | 38.28 | 89.17 | - | 84.32 | 59.27 | - | 68.59 | 79.22 | - | | Greek | - | 85.92 | - | - | - | - | - | - | - | | Hebrew | - | - | - | - | - | 14.66 | - | - | - | | Hindi | - | - | - | - | - | - | - | 77.93 | - | | Indonesian | - | 88.81 | - | - | - | - | - | - | - | | Italian | - | - | - | - | - | - | 63.44 | - | - | | Japanese | - | - | - | - | - | - | 55.26 | - | - | | Kannada | - | - | - | - | - | 22.99 | - | - | - | | Polish | - | - | - | - | - | - | 55.82 | - | - | | Portuguese | 34.01 | - | - | - | - | - | - | - | - | | Russian | - | - | - | - | - | - | - | - | - | | Spanish | 5.63 | - | - | - | - | - | 68.59 | 81.16 | - | | Swedish | - | 89.17 | - | - | - | - | - | - | - | | Thai | - | 85.56 | - | - | - | - | - | 74.7 | - | | Turkish | - | - | - | - | - | - | 69 | - | - | | Vietnamese | - | - | 44.7 | - | - | - | - | - | - | MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP MCoNaLa English 45.72 74.01 52.32 86.82 65.18 38.12 57.16 87.87 6.78 Arabic - – - – - – 44.59 - – Chinese 45.72 65.34 16.48 - 56.93 25.35 42.95 - – Farsi - 59.57 - – - – 38.72 - – Finnish - – - – - – 54.48 - – French 47.97 - – - – - – 74.05 – German 50.23 57.76 - 79.55 56.68 - 59.11 75.18 – Greek - 49.46 - – - – - – – Hebrew - – - – - 33.95 - – – Hindi - – - – - – - 72.59 – Indonesian - 74.01 - – - – - – – Italian - – - – - – 46.65 - – Japanese - – - – - – 53.76 - – Kannada - – - – - 22.69 - – – Polish - – - – - – 43.56 - – Portuguese 43.47 - – - – - – - – Russian - – - – - – - – – Spanish 18.47 - – - – - 57.67 78.66 – Swedish - 68.95 - – - – - – – Thai - 58.12 - – - – - 66.21 – Turkish - – - – - – 46.65 - – Vietnamese - – 14.31 - – - – - – Table 10: The performance of mBART with Monolingual setting on different datasets and different languages. MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP MCoNaLa English 53.60 89.89 68.30 92.73 69.38 39.29 76.00 91.67 10.29 Arabic - – - – - – 53.55 - – Chinese 52.48 77.62 54.90 - 62.59 24.56 56.54 - – Farsi - 75.45 - – - – 60.25 - – Finnish - – - – - – 68.28 - – French 53.60 - – - – - – 82.30 – German 52.93 71.83 - 90.57 66.90 - 72.19 82.38 – Greek - 85.92 - – - – - – – Hebrew - – - – - 33.02 - – – Hindi - – - – - – - 78.98 - Indonesian - 87.00 - – - – - – – Italian - – - – - – 67.97 - – Japanese - – - – - – 62.41 - – Kannada - – - – - 23.74 - – - Polish - – - – - – 60.87 - – Portuguese 53.15 - – - – - – - – Russian - – - – - – - – - Spanish 53.13 - – - – - 68.69 83.91 - Swedish - 84.48 - – - – - – - Thai - 76.17 - – - – - 71.71 – Turkish - – - – - – 70.03 - – Vietnamese - – 57.15 - – - – - – Table 11: The performance of mT5 with Monolingual setting on different datasets and different languages. MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP MCoNaLa English 29.50 27.01 43.44 20.68 47.88 9.41 58.91 69.36 0.38 Arabic - – - – - – 48.71 - – Chinese 28.11 6.51 33.76 - 34.85 6.02 34.91 - – Farsi - 6.04 - – - – 37.69 - – Finnish - – - – - – 56.13 - – French 37.80 - – - – - – 58.21 – German 5.85 21.50 - 18.86 39.49 - 57.57 60.55 – Greek - 26.20 - – - – - – - Hebrew - – - – - 1.08 - – – Hindi - – - – - – - 59.66 – Indonesian - 25.47 - – - – - – – Italian - – - – - – 48.09 - – Japanese - – - – - – 41.55 - – Kannada - – - – - 6.02 - – – Polish - – - – - – 40.99 - – Portuguese 37.33 - – - – - – - – Russian - – - – - – - – – Spanish 2.02 - – - – - 54.48 62.09 – Swedish - 23.40 - – - – - – – Thai - 7.14 - – - – - 52.63 – Turkish - – - – - – 59.94 - – Vietnamese - – 30.92 - – - – - – Table 12: The performance of XLM-R+PTR with Monolingual Few-shot setting on different datasets and different languages. MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP MCoNaLa English 31.98 33.26 40.81 36.25 59.48 10.80 39.24 72.43 1.05 Arabic - – - – - – 24.20 - – Chinese 32.88 16.25 33.46 - 48.47 4.63 19.26 - – Farsi - 17.69 - – - – 23.27 - – Finnish - – - – - – 35.84 - – French 28.60 - – - – - – 62.81 – German 20.27 23.82 - 26.93 52.81 - 36.05 60.91 – Greek - 29.88 - – - – - – – Hebrew - – - – - 6.20 - – – Hindi - – - – - – - 61.20 - Indonesian - 30.42 - – - – - – – Italian - – - – - – 45.73 - – Japanese - – - – - – 29.66 - – Kannada - – - – - 9.10 - – - Polish - – - – - – 28.94 - – Portuguese 27.93 - – - – - – - – Russian - – - – - – - – - Spanish 7.43 - – - – - 47.89 59.90 - Swedish - 32.40 - – - – - – - Thai - 21.21 - – - – - 54.16 – Turkish - – - – - – 35.84 - – Vietnamese - – 37.04 - – - – - – Table 13: The performance of mT5 with Monolingual Few-shot setting on different datasets and different languages. MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP MCoNaLa English 17.79 34.39 34.43 36.82 4.34 4.48 22.97 20.21 13.87 Arabic - – - – - – 16.79 - – Chinese 16.89 31.77 27.85 - 2.74 3.86 18.85 - – Farsi - 27.71 - – - – 17.61 - – Finnish - – - – - – 21.52 - – French 18.47 - – - – - – 17.46 – German 18.24 31.59 - 31.70 3.21 - 20.60 18.51 – Greek - 33.03 - – - – - – - Hebrew - – - – - 2.47 - – – Hindi - – - – - – - 0.49 – Indonesian - 32.49 - – - – - – – Italian - – - – - – 24.30 - – Japanese - – - – - – 19.36 - – Kannada - – - – - 0.93 - – – Polish - – - – - – 20.70 - – Portuguese 18.24 - – - – - – - – Russian - – - – - – - – – Spanish 18.47 - – - – - 24.30 1.13 – Swedish - 33.85 - – - – - – – Thai - 30.60 - – - – - 2.67 – Turkish - – - – - – 30.79 - – Vietnamese - – 29.69 - – - – - – Table 14: The performance of Codex with Monolingual Few-shot setting on different datasets and different languages. | MATIS | MGeoQuery | MSpider | MNLmaps | MOvernight | MCWQ | MSchema2QA | MTOP | MCoNaLa | | |------------|-------------|-----------|-----------|--------------|--------|--------------|--------|-----------|------| | English | 0.00 | 21.66 | 2.22 | 15.23 | 0.91 | 0.00 | 9.68 | 7.03 | 8.40 | | Arabic | - | - | - | - | - | - | 5.87 | - | - | | Chinese | 0.00 | 20.76 | 2.71 | - | 0.62 | 0.00 | 4.43 | - | - | | Farsi | - | 11.64 | - | - | - | - | 1.96 | - | - | | Finnish | - | - | - | - | - | - | 3.71 | - | - | | French | 0.00 | - | - | - | - | - | - | 5.25 | - | | German | 0.00 | 19.86 | - | 9.09 | 0.33 | - | 8.24 | 5.66 | - | | Greek | - | 18.05 | - | - | - | - | - | - | - | | Hebrew | - | - | - | - | - | 0.00 | - | - | - | | Hindi | - | - | - | - | - | - | - | 5.50 | - | | Indonesian | - | 22.48 | - | - | - | - | - | - | - | | Italian | - | - | - | - | - | - | 5.77 | - | - | | Japanese | - | - | - | - | - | - | 4.02 | - | - | | Kannada | - | - | - | - | - | 0.00 | - | - | - | | Polish | - | - | - | - | - | - | 2.99 | - | - | | Portuguese | 0.00 | - | - | - | - | - | - | - | - | | Russian | - | - | - | - | - | - | - | - | - | | Spanish | 0.00 | - | - | - | - | - | 8.75 | 4.77 | - | | Swedish | - | 19.59 | - | - | - | - | - | - | - | | Thai | - | 8.66 | - | - | - | - | - | 2.75 | - | | Turkish | - | - | - | - | - | - | 1.96 | - | - | | Vietnamese | - | - | 1.45 | - | - | - | - | - | - | MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP English 40.05 76.42 36.63 85.91 63.69 32.72 61.32 89.57 Arabic - – - – - – 65.19 – Chinese 40.84 69.37 45.70 - 58.07 31.94 68.25 – Farsi - 66.85 - – - – 62.62 – Finnish - – - – - – 62.00 – French 41.30 - – - – - – 82.54 German 39.68 68.38 - 85.91 61.35 - 70.44 81.00 Greek - 73.80 - – - – - – Hebrew - – - – - 28.86 - – Hindi - – - – - – - 78.74 Indonesian - 75.24 - – - – - – Italian - – - – - – 57.88 – Japanese - – - – - – 59.32 – Kannada - – - – - 29.63 - – Polish - – - – - – 64.12 – Portuguese 42.46 - – - – - – – Spanish 34.03 - – - – - 54.58 81.73 Swedish - 74.52 - – - – - – Thai - 66.22 - – - – - 76.48 Turkish - – - – - – 54.27 – Vietnamese - – 38.28 - – - – – Table 16: The performance of XLM-R+PTR with Multilingual setting on different datasets and different languages. MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP English 58.45 82.04 36.07 92.27 70.33 29.94 69.52 90.61 Arabic - – - – - – 56.09 – Chinese 49.83 75.99 30.66 - 63.98 28.24 58.15 - Farsi - 71.48 - – - – 55.17 – Finnish - – - – - – 62.96 - French 55.00 - – - – - – 78.47 German 60.12 74.19 - 90.34 68.36 - 65.27 83.46 Greek - 79.61 - – - – - – Hebrew - – - – - 28.55 - – Hindi - – - – - – - 85.58 Indonesian - 80.42 - – - – - – Italian - – - – - – 58.10 – Japanese - – - – - – 62.55 – Kannada - – - – - 27.31 - – Polish - – - – - – 56.23 – Portuguese 48.47 - – - – - – – Spanish 54.85 - – - – - 63.31 84.12 Swedish - 79.33 - – - – - – Thai - 69.50 - – - – - 75.48 Turkish - – - – - – 62.78 – Vietnamese - – 30.17 - – - – – MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ-MCD3 MSchema2QA MTOP MCoNaLa Arabic - – - – - – 3.91 - – Chinese 0.92 12.83 20.30 - 23.80 2.16 0.51 - – Farsi - 17.80 - – - – 18.33 - – Finnish - – - – - – 26.98 - – French 2.15 - – - – - – 59.90 – German 1.61 51.13 - 60.23 49.74 - 40.37 56.27 - Greek - 58.44 - – - – - – - Hebrew - – - – - 5.56 - – – Hindi - – - – - – - 44.14 – Indonesian - 56.19 - – - – - – - Italian - – - – - – 32.96 - – Japanese - – - – - – 0.31 - 0.20 Kannada - – - – - 5.09 - – – Polish - – - – - – 29.97 - – Portuguese 0.23 - – - – - – - – Russian - – - – - – - – 0.07 Spanish 25.35 - – - – - 39.24 62.65 0.10 Swedish - 65.22 - – - – - – – Thai - 17.35 - – - – - 34.36 – Turkish - – - – - – 9.58 - – Vietnamese - – 16.76 - – - – - – Table 18: The performance of XLM-R+PTR with Cross-lingual Zero-Shot Transfer setting on different datasets and different languages. MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP MCoNaLa Arabic - – - – - – 38.31 - – Chinese 18.02 17.69 38.59 - 45.91 1.39 26.67 - – Farsi - 25.27 - – - – 41.40 - – Finnish - – - – - – 50.26 - – French 33.56 - – - – - – 61.92 – German 34.68 53.43 - 34.89 59.45 - 59.32 52.22 – Greek - 50.90 - – - – - – - Hebrew - – - – - 5.86 - – – Hindi - – - – - – - 35.89 – Indonesian - 42.24 - – - – - – – Italian - – - – - – 58.50 - – Japanese - – - – - – 11.64 - 1.43 Kannada - – - – - 4.94 - – – Polish - – - – - – 49.95 - – Portuguese 34.46 - – - – - – - – Russian - – - – - – - – 0.29 Spanish 38.51 - – - – - 55.82 61.36 0.59 Swedish - 68.23 - – - – - – – Thai - 18.05 - – - – - 39.53 – Turkish - – - – - – 48.51 - – Vietnamese - – 45.26 - – - – - – Table 19: The performance of mT5 with Cross-lingual Zero-Shot Transfer setting on different datasets and different languages. Table 20: The performance of Codex with Cross-lingual Zero-Shot Transfer setting on different datasets and different languages. | MATIS | MGeoQuery | MSpider | MNLmaps | MOvernight | MCWQ | MSchema2QA | MTOP | MCoNaLa | | |------------|-------------|-----------|-----------|--------------|--------|--------------|--------|-----------|-------| | Arabic | - | - | - | - | - | - | 17.82 | - | - | | Chinese | 12.61 | 26.62 | 27.18 | - | 2.70 | 3.55 | 17.40 | - | - | | Farsi | - | 25.36 | - | - | - | - | 16.79 | - | - | | Finnish | - | - | - | - | - | - | 22.35 | - | - | | French | 17.57 | - | - | - | - | - | - | 15.76 | - | | German | 18.24 | 30.23 | - | 32.05 | 3.28 | - | 20.19 | 17.87 | - | | Greek | - | 30.96 | - | - | - | - | - | - | - | | Hebrew | - | - | - | - | - | 1.54 | - | - | - | | Hindi | - | - | - | - | - | - | - | 7.92 | - | | Indonesian | - | 31.04 | - | - | - | - | - | - | - | | Italian | - | - | - | - | - | - | 23.48 | - | - | | Japanese | - | - | - | - | - | - | 16.48 | - | 12.86 | | Kannada | - | - | - | - | - | 1.39 | - | - | - | | Polish | - | - | - | - | - | - | 19.26 | - | - | | Portuguese | 17.57 | - | - | - | - | - | - | - | - | | Russian | - | - | - | - | - | - | - | - | 9.57 | | Spanish | 15.54 | - | - | - | - | - | 21.11 | 16.73 | 2.64 | | Swedish | - | 31.77 | - | - | - | - | - | - | - | | Thai | - | 23.74 | - | - | - | - | - | 12.13 | - | | Turkish | - | - | - | - | - | - | 20.80 | - | - | | Vietnamese | - | - | 27.95 | - | - | - | - | - | - | MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP MCoNaLa Arabic - – - – - – 5.66 - – Chinese 0.00 16.07 2.61 - 0.47 0.00 4.63 - – Farsi - 3.34 - – - – 1.54 - – Finnish - – - – - – 1.13 - – French 0.00 - – - – - – 1.54 – German 0.00 16.43 - 7.05 0.29 - 6.49 1.94 – Greek - 9.84 - – - – - – – Hebrew - – - – - 0.00 - – - Hindi - – - – - – - 1.78 – Indonesian - 18.50 - – - – - – – Italian - – - – - – 5.66 - – Japanese - – - – - – 2.37 - 0.08 Kannada - – - – - 0.00 - – – Polish - – - – - – 3.71 - – Portuguese 0.00 - – - – - – - – Russian - – - – - – - – 0.09 Spanish 0.00 - – - – - 7.83 2.26 0.04 Swedish - 14.62 - – - – - – - Thai - 0.27 - – - – - 0.81 - Turkish - – - – - – 0.31 - – Vietnamese - – 0.79 - – - – - – Table 21: The performance of BLOOM with Cross-lingual Zero-Shot Transfer setting on different datasets and different languages. Table 22: The performance of XLM-R+PTR with Cross-lingual Few-Shot Transfer setting on different datasets and different languages. | ATIS | GeoQuery | Spider | NLmaps | Overnight | MCWQ-MCD3 | Schema2QA | MTOP | | |------------|------------|----------|----------|-------------|-------------|-------------|--------|-------| | Arabic | - | - | - | - | - | - | 53.66 | - | | Chinese | 4.16 | 23.22 | 44.12 | - | 46.61 | 14.35 | 37.49 | - | | Farsi | - | 29.00 | - | - | - | - | 46.55 | - | | Finnish | - | - | - | - | - | - | 57.16 | - | | French | 24.40 | - | - | - | - | - | - | 75.10 | | German | 23.27 | 65.31 | - | 64.89 | 57.44 | - | 61.77 | 73.81 | | Greek | - | 70.91 | - | - | - | - | - | - | | Hebrew | - | - | - | - | - | 22.53 | - | - | | Hindi | - | - | - | - | - | - | - | 72.35 | | Indonesian | - | 71.90 | - | - | - | - | - | - | | Italian | - | - | - | - | - | - | 58.29 | - | | Japanese | - | - | - | - | - | - | 39.79 | - | | Kannada | - | - | - | - | - | 23.61 | - | - | | Polish | - | - | - | - | - | - | 53.45 | - | | Portuguese | 23.27 | - | - | - | - | - | - | - | | Spanish | 3.46 | - | - | - | - | - | 63.72 | 78.33 | | Swedish | - | 68.38 | - | - | - | - | - | - | | Thai | - | 28.82 | - | - | - | - | - | 64.35 | | Turkish | - | - | - | - | - | - | 63.23 | - | | Vietnamese | - | - | 43.24 | - | - | - | - | - | MATIS MGeoQuery MSpider MNLmaps MOvernight MCWQ MSchema2QA MTOP Arabic - – - – - – 47.89 – Chinese 48.65 44.32 44.39 - 60.40 29.48 53.35 – Farsi - 44.23 - – - – 42.22 – Finnish - – - – - – 61.48 – French 50.45 - – - – - – 62.81 German 50.32 56.95 - 71.70 64.67 - 68.80 80.68 Greek - 60.11 - – - – - – Hebrew - – - – - 26.85 - – Indonesian - 58.40 - – - – - – Italian - – - – - – 66.63 – Japanese - – - – - – 45.73 – Kannada - – - – - 18.21 - – Polish - – - – - – 57.98 – Portuguese 49.32 - – - – - – – Spanish 49.10 - – - – - 65.81 83.51 Swedish - 64.71 - – - – - – Thai - 44.49 - – - – - 71.71 Turkish - – - – - – 69.00 – Vietnamese - – 54.45 - – - – – ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Last section before Appendix, no number ✗ A2. Did you discuss any potential risks of your work? In this paper, we mainly propose a benchmark and evaluate current SOTA models. Since every component is from previous work it is safe to use. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract is located at the beginning of the paper. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2 ✓ B1. Did you cite the creators of artifacts you used? Section 2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We maintain a list of licenses and ensure each of them can be used. We will publish the list upon acceptance. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 2 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? The document will be published upon acceptance ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In Table 1, we discuss the data splits and data statistics. ## C ✓ **Did You Run Computational Experiments?** Section 3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix D The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix D ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We run all experiments once with unified settings. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix D D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhu-etal-2023-ink
{INK}: Injecting k{NN} Knowledge in Nearest Neighbor Machine Translation
https://aclanthology.org/2023.acl-long.888
Neural machine translation has achieved promising results on many translation tasks. However, previous studies have shown that neural models induce a non-smooth representation space, which harms its generalization results. Recently, kNN-MT has provided an effective paradigm to smooth the prediction based on neighbor representations during inference. Despite promising results, kNN-MT usually requires large inference overhead. We propose an effective training framework INK to directly smooth the representation space via adjusting representations of kNN neighbors with a small number of new parameters. The new parameters are then used to refresh the whole representation datastore to get new kNN knowledge asynchronously. This loop keeps running until convergence. Experiments on four benchmark datasets show that INK achieves average gains of 1.99 COMET and 1.0 BLEU, outperforming the state-of-the-art kNN-MT system with 0.02x memory space and 1.9x inference speedup.
# Ink: Injecting K**Nn Knowledge In Nearest Neighbor Machine Translation** Wenhao Zhu1, Jingjing Xu2, Shujian Huang1, Lingpeng Kong3**, Jiajun Chen**1 1 National Key Laboratory for Novel Software Technology, Nanjing University 2 Shanghai AI Laboratory 3 The University of Hong Kong zhuwh@smail.nju.edu.cn, jingjingxu@pku.edu.cn huangsj@nju.edu.cn, lpk@cs.hku.hk, chenjj@nju.edu.cn ## Abstract Neural machine translation has achieved promising results on many translation tasks. However, previous studies have shown that neural models induce a non-smooth representation space, which harms its generalization results. Recently, kNN-MT has provided an effective paradigm to smooth the prediction based on neighbor representations during inference. Despite promising results, kNN-MT usually requires large inference overhead. We propose an effective training framework INK to directly smooth the representation space via adjusting representations of kNN neighbors with a small number of new parameters. The new parameters are then used to refresh the whole representation datastore to get new kNN knowledge asynchronously. This loop keeps running until convergence. Experiments on four benchmark datasets show that INK achieves average gains of 1.99 COMET and 1.0 BLEU, outperforming the state-of-the-art kNN-MT system with 0.02× memory space and 1.9× inference speedup1. ## 1 Introduction Neural machine translation (NMT) have achieved promising results in recent years (Vaswani et al., 2017; Ng et al., 2019; Qian et al., 2021b). The target of NMT is to learn a generalized representation space to adapt to diverse scenarios. However, recent studies have shown that neural networks, such as BERT and GPT, induce non-smooth representation space, limiting the generalization abilities (Gao et al., 2018; Ethayarajh, 2019; Li et al., 2020). In NMT, we also observe a similar phenomenon in the learned representation space where low-frequency tokens disperse sparsely, even for a strong NMT model (More details are described in Section Experiments). Due to the sparsity, many ![0_image_0.png](0_image_0.png) "holes" could be formed. When it is used to translate examples from an unseen domain, the performance drops sharply (Wang et al., 2022a,b) Recently, k-Nearest-Neighbor Machine Translation (kNN-MT) (Khandelwal et al., 2021) provides an effective solution to smooth predictions by equipping an NMT model with a key-*value* datastore. For each entry, the *value* is the target token and key is the contextualized representation at the target position. It requires a training set to record tokens and representations. By aggregating nearest neighbors during inference, the NMT model can achieve decent translation results (Khandelwal et al., 2021; Zheng et al., 2021; Jiang et al., 2022). Despite the success, kNN-MT also brings new issues with the increasing scale of training data. Retrieving neighbors from a large datastore (Wang et al., 2022a) at each decoding step is time-consuming (Martins et al., 2022a; Meng et al., 2022). Furthermore, once the datastore is constructed, representations can not be easily updated, limiting the performance ceiling of kNNMT. Given above strengths and weaknesses of kNNMT, we propose to directly smooth the representation space with a small number of parameters. In this paper, we propose a training framework INK, 1Code will be released at https://github.com/ OwenNJU/INK 15948 to iteratively refine the representation space with the help of extracted kNN knowledge (Fig. 1). Specifically, we adjust the representation distribution by aligning three kinds of representations with Kullback-Leibler (KL) divergence to train a small number of adaptation parameters. First, we align the contextualized representation and its target embedding to keep semantic meanings. Second, we align the contextualized representations of a target token and align the extracted kNN contextualized representations to address the sparsely dispersing problem. After a training epoch, we refresh the datastore asynchronously with refined models to update kNN representations. During inference, we only load the off-the-shelf NMT model and tune adaptation parameters. We conduct experiments on four benchmark datasets. Experiment results show that our framework brings average gains of 1.99 COMET and 1.0 BLEU. Compared with the state-of-the-art kNNMT method (i.e. Robust kNN-MT; Jiang et al. 2022), INK achieves better translation performance with 0.02× memory space and 1.9× inference speed. Our contributions can be summarized below: - We propose a training framework to smooth the representation space according to kNN knowledge. - We devise an inject-and-refine training loop in our framework. Experiments show that refreshing the datastore asynchronously matters. - Our INK system achieves promising improvements and beats the state-of-the-art kNN-MT system. ## 2 Background This section briefly introduces the working process of kNN-MT and the architecture of adapter (Bapna and Firat, 2019). For the latter, we will use it to improve the representation space in our framework. ## 2.1 K**Nn-Mt** Given an off-the-shelf NMT model M and training set C, kNN-MT memorizes training examples explicitly with a key-*value* datastore D and use D to assist the NMT model during inference. Memorize representations into datastore Specifically, we feed training example (*X, Y* ) in C into M in a teacher-forcing manner (Williams and Zipser, 1989). At time step t, we record the contextualized representation2 ht as key and the corresponding target token yt as *value*. We then put the *key-value* pair into the datastore. In this way, the full datastore D can be created through a single forward pass over the training dataset C: $$D=\{(h_{t},y_{t})\mid\forall y_{t}\in Y,(X,Y)\in{\mathcal{C}}\}$$ where each datastore entry explicitly memorizes the mapping relationship between the representation ht and its target token yt. Translate with memorized representations During inference, the contextualized representation of the test translation context (*X, Y*<t) will be used to query the datastore for nearest neighbor representations and their corresponding target tokens Nk = {(h, ˆ yˆ)} k 1 . Then, the retrieved entries are converted to a distribution over the vocabulary: $$p_{\rm knn}(y|X,Y_{<t})\propto\sum_{(\hat{h},\hat{y})\in{\cal N}_{k}}1(y=\hat{y})e^{\frac{-d(h_{+},\hat{h})}{T}}\tag{2}$$ $\frac{1}{2}$ . where ht denotes h(*X, Y*<t) for short, d measures Euclidean distance and T is the temperature. ## 2.2 Adapter Previous research shows that adapter can be an efficient plug-and-play module for adapting an NMT model (Bapna and Firat, 2019). In common, the adapter layer is inserted after each encoder and decoder layer of M. The architecture of the adapter layer is simple, which includes a feed-forward layer and a normalization layer. Given the output vector z ∈ Rd of a specific encoder/decoder layer, the computation result of the adapter layer can be written as: $$\widetilde{z}=W_{2}^{T}[W_{1}^{T}\cdot f(z)]+z$$ 1 · f(z)] + z (3) where f denotes layer-normalization, W1 ∈ Rd×d′, W2 ∈ Rd′×dare two projection matrices. d′is the inner dimension of these two projections. Bias term and activation function is omitted in the equation for clarity. ze is the output of the adapter layer. ## 3 Approach: Ink This section introduces our training framework INK. The key idea of the proposed approach is 2By default, the last decoder layer's output is used as the contextualized representation of the translation context (*X, Y*<t). ![2_image_0.png](2_image_0.png) to use kNN knowledge to smooth the representation space. The training process is built on a cycled loop: extracting kNN knowledge to adjust representations via a small adapter. The updated parameters are then used to refresh and refine the datastore to get new kNN knowledge. We define three kinds of alignment loss to adjust representations, which are described in Section 3.1, Section 3.2, and Section 3.3. An illustration of the proposed framework is shown in Figure 2. ## 3.1 **Align Contextualized Representations And** Token Embeddings The basic way to optimize the adapter to minimize the KL divergence between the NMT system's prediction probability pnmt and the one-hot golden distribution pgold: $\mathcal{L}_{t}^{a}=D_{\text{KL}}[\,p_{\text{gold}}(y|X,Y_{<t})\,\|\,p_{\text{mnt}}(y|X,Y_{<t})\,]$ $=-\log\frac{\sum_{(w,v)\in\mathcal{E}}\,\mathbb{I}(v=y_{t})\kappa(h_{t},w)}{\sum_{(w,v)\in\mathcal{E}}\kappa(h_{t},w)}$ where E is the embedding matrix. w and v denote the token embedding and its corresponding token respectively. ht denotes the contextualized representation h(*X, Y*<t). yt denotes the target token. κ(ht, w) = e h T t w. Following the widely-accepted alignment-and-uniformity theory (Wang and Isola, 2020), this learning objective aligns the contextualized representation ht with the tokens embedding of its corresponding target token. ## 3.2 **Align Contextualized Representations And** K**Nn Token Embeddings** Previous research in kNN-MT has shown that the nearest neighbors in the representation space can produce better estimation via aggregating kNN neighbors (Khandelwal et al., 2021; Zheng et al., 2021; Yang et al., 2022). Apart from the reference target token, the retrieval results provide some other reasonable translation candidates. Taking the translation case in Figure 2 as an example, retrieval results provide three candidate words, where both "happens" and "occurs" are possible translations. Compared with the basic one-hot supervision signal, the diverse kNN knowledge in the datastore can be beneficial for building a representation space with more expressive abilities. Therefore, we extract kNN knowledge by using the contextualized representation htto query the datastore for nearest neighbors Nk = {(h, ˆ yˆ)} k 1 (illustrated in Fig. 2). For more stable training, we reformulate the computation process of kNN distribution as kernel density estimation (KDE) (Parzen, 1962). Formulation The general idea of KDE is to estimate the probability density of a point by referring to its neighborhood, which shares the same spirit with kNN-MT. The computation of kNN distribution can be written as: $$p_{\text{knn}}(y|X,Y_{<t})=\frac{\sum_{(\hat{h},\hat{y})\in\mathcal{N}_{k}}\mathbb{1}(y=\hat{y})\kappa(h_{t},\hat{h})}{\sum_{(\hat{h},\hat{y})\in\mathcal{N}_{k}}\kappa(h_{t},\hat{h})}\tag{4}$$ where κ can be set as any kernel function. Thus, Equation 2 can be seen as a special case of Equation 4 by setting κ(·, ·) = e−d(·,·)/T. After extracting kNN knowledge, we use it to smooth the representation space by by minimizing the KL divergence between the kNN distribution pknn and NMT distribution pnmt: $\mathcal{L}_{t}^{i}=D_{\mathrm{KL}}[\,p_{\mathrm{km}}(y|X,Y_{<t})\,\|\,\,p_{\mathrm{nm}}(y|X,Y_{<t})\,]$ $$=-\sum_{\vec{y}\in\mathcal{Y}}p_{\mathrm{km}}(\vec{y})\cdot\log\frac{\sum_{(w,v)\in\mathcal{E}}\mathbb{I}\,(v=\vec{y})\kappa(h_{t},w)}{\sum_{(w,v)\in\mathcal{E}}\kappa(h_{t},w)\cdot p_{\mathrm{km}}(\vec{y})}$$ where Y denotes identical tokens in nearest neighbors Nk and pknn(¯y) denotes pknn(y = y¯|*X, Y*<t) for short. E is the embedding matrix. w and v denote the token embedding and its corresponding token respectively. ht denotes h(*X, Y*<t) for short. κ is the kernel function. Following the widely-accepted alignment-and-uniformity theory (Wang and Isola, 2020), this learning objective encourages htto align with the embeddings of retrieved reasonable tokens, e.g., "occurs", "happens". ## 3.3 Align Contextualized Representations Of The Same Target Token Although kNN knowledge could provide fruitful translation knowledge, it is also sometimes noisy (Zheng et al., 2021; Jiang et al., 2022). For example, in Figure 2, the retrieved word "works" is a wrong translation here. To address this problem, we propose to adjust local representation distribution. Specifically, our solution is to optimize the kNN distribution towards the reference distribution by minimizing the KL divergence between the gold distribution pgold and kNN distribution pknn. Thanks to the new formulation (Eq. 4), we can choose kernel function here to achieve better stability for gradient optimization. In the end, we find that exponential-cosine kernel works stably in our framework: $$\kappa(h,h_{t})=e^{\cos(h,h_{t})}$$ cos(h,ht)(5) Therefore, the loss function can be written as: $\mathcal{L}_{t}^{r}=D_{\mathrm{KL}}[\ p_{\mathrm{gold}}(y|X,Y_{<t})\ \|\ p_{\mathrm{km}}(y|X,Y_{<t})\ ]$ $=-\log\frac{\sum_{(\hat{h},\hat{y})\in\mathcal{N}_{k}}\mathbb{1}(\hat{y}=y_{t})\kappa(h_{t},\hat{h})}{\sum_{(\hat{h},\hat{y})\in\mathcal{N}_{k}}\kappa(h_{t},\hat{h})}$ $\sum_{(\hat{h},\hat{y})\in\mathcal{N}_{k}}\kappa(h_{t},\hat{h})$ $\sum_{(\hat{h},\hat{y})\in\mathcal{N}_{k}}\kappa(h_{t},\hat{h})$ where Nk is the retrieved k nearest neighbors. hˆ and yˆ denotes the neighbor representations and the corresponding target token. ht denotes h(*X, Y*<t) for short. Following the widelyaccepted alignment-and-uniformity theory (Wang and Isola, 2020), this learning objective aligns the contextualized representation of the same target token. With this goal, we can make the kNN knowledge less noisy in the next training loop by refreshing the datastore with the updated representations. ## 3.4 Overall Training Procedure The combined learning objective To summarize, we adjust representation space via a small adapter with the combination of three alignment loss L a t , L it , L r t . Given one batch of training examples B = {(*X, Y* )}, the learning objective is minimizing the following loss: $$\mathcal{L}=\frac{1}{|\mathcal{B}|}\sum_{(X,Y)\in\mathcal{B}}\sum_{t}(\mathcal{L}_{t}^{a}+\alpha\mathcal{L}_{t}^{i}+\beta\mathcal{L}_{t}^{r})\tag{6}$$ $\mathcal{L}$ is the $i$-th solution, $\alpha$ is the $i$-th solution. where α, β is the interpolation weight. We notice that, in general, all three learning objective pull together closely related vectors and push apart less related vectors in the representation space, which has an interesting connection to contrastive learning (Lee et al., 2021; An et al., 2022) by sharing the similar goal. Refresh datastore asynchronously In our training loop, once the parameters are updated, we refresh the datastore with the refined representation. In practice, due to the computation cost, we refresh the datastore asynchronously at the end of each training epoch to strike a balance between efficiency and effectiveness As the training reaches convergence, we drop the datastore and only use the optimized adapter to help the off-the-shelf NMT model for the target domain translation. ## 4 Experiments 4.1 Setting We introduce the general experiment setting in this section. For fair comparison, we adopt the same setting as previous research of kNN-MT (Khandelwal et al., 2021; Zheng et al., 2021; Jiang et al., 2022), e.g., using the same benchmark datasets and NMT model. For training INK, we tune the weight α and β among {0.1, 0.2, 0.3}. More implementation details are reported in the appendix. Target Domain Data We use four benchmark German-English dataset (Medical, Law, IT, Koran) (Tiedemann, 2012) and directly use the preprocessed data3released by Zheng et al. (2021). Statistics of four datasets are listed in Table 1. 3https://github.com/zhengxxn/adaptive-knn-mt | Dataset | # Train | # Dev | # Test | |-----------|-----------|---------|----------| | Medical | 248,099 | 2,000 | 2,000 | | Law | 467,309 | 2,000 | 2,000 | | IT | 222,927 | 2,000 | 2,000 | | Koran | 17,982 | 2,000 | 2,000 | NMT Model We choose the winner model4(Ng et al., 2019) of WMT'19 German-English news translation task as the off-the-shelf NMT model for translation and datastore construction, which is based on the big Transformer architecture (Vaswani et al., 2017). Baselines For comparison, we consider three kNN-MT systems, which use datastore in different fashions. We report the translation performance of the adapter baseline to show the effectiveness of our training framework. Besides, we report the translation performance of kNN-KD, which is another work using kNN knowledge to help NMT. - V-kNN (Khandelwal et al., 2021), the vanilla version of k-nearest-neighbor machine translation. - A-kNN (Zheng et al., 2021), an advanced variants of kNN-MT, which dynamically decides the usage of retrieval results and achieve more stable performance. - R-kNN (Jiang et al., 2022), the state-of-theart kNN-MT variant, which dynamically calibrates kNN distribution and control more hyperparameters, e.g. temperature, interpolation weight. - **Adapter** (Bapna and Firat, 2019), adjusting representation by simply align contextualized representation and token embeddings. - k**NN-KD** (Yang et al., 2022), aiming at fromscratch train a NMT model by distilling kNN knowledge into it. Metric To evaluate translation performance, we use the following two metrics: - **BLEU** (Papineni et al., 2002), the standard evaluation metric for machine translation. We report case-sensitive detokenized *sacrebleu*5. 4https://github.com/facebookresearch/fairseq/ tree/main/examples/wmt19 5https://github.com/mjpost/sacrebleu - **COMET** (Rei et al., 2020), a recently proposed metric, which has stronger correlation with human judgement. We report COMET score computed by publicly available *wmt20-* comet-da6 model. Approximate Nearest Neighbor Search We follow previous kNN-MT studies and use Faiss7index (Johnson et al., 2019) to represent the datastore and accelerate nearest neighbors search. Basically, the key file can be removed to save memory space once the index is built. But, it is an exception that R-kNN relies on the key file to re-compute accurate distance between query representation and retrieved representations. ## 4.2 Main Results We conduct experiments to explore the following questions to better understand the effectiveness of our proposed framework and relationship between two ways of smoothing predictions: - RQ1: *Can we smooth the representation* space via small adapter and drop datastore aside during inference? - RQ2: *How much improvement can be brought* by using k*NN knowledge to adjust the representation distribution?* - RQ3: *Will together using adapter and datastore bring further improvement?* ## Ink System Achieves The Best Performance By Smoothing The Representation Space Table 2 presents the comparison results of different systems. Due to the poor quality of representation space, the off-the-shelf NMT model does not perform well. The performance of kNN-KD is unstable, e.g., it performs poorly on IT dataset. kNNMT systems generate more accurate translation. Among them, R-kNN achieves the best performance, which is consistent with previous observation (Jiang et al., 2022). Our INK system achieves the best translation performance with the least memory space. Compared with the strongest kNNMT system, i.e. R-kNN, INK achieves better performance on three out of four domains (Medical, IT, Koran). In average, INK outperforms R-kNN with an improvement of 4.84 COMET and 0.31 BLEU while occupying 0.02× memory space. 6https://github.com/Unbabel/COMET 7https://github.com/facebookresearch/faiss | Systems | Mem. | Medical | Law | IT | Koran | Avg. | | | | | | |---------------------------------|------------|------------|--------|-------|------------|--------|--------|--------|--------|-------|-------| | COMET BLEU | COMET BLEU | COMET BLEU | COMET | BLEU | COMET BLEU | | | | | | | | Off-the-shelf NMT | - | 46.87 | 40.00 | 57.52 | 45.47 | 39.22 | 38.39 | -1.32 | 16.26 | 35.57 | 35.03 | | kNN-KD | - | 56.20 | 56.37 | 68.60 | 60.65 | -1.57 | 1.48 | -13.05 | 19.60 | 27.55 | 34.53 | | NMT + Datastore Augmentation | | | | | | | | | | | | | V-kNN | ×1.7 | 53.46 | 54.27 | 66.03 | 61.34 | 51.72 | 45.56 | 0.73 | 20.61 | 42.98 | 45.45 | | A-kNN | ×1.7 | 57.45 | 56.21 | 69.59 | 63.13 | 56.89 | 47.37 | 4.68 | 20.44 | 47.15 | 46.79 | | R-kNN† | ×1.7 | 58.05 | 54.16 | 69.10 | 60.90 | 54.60 | 45.61 | 3.99 | 20.04 | 46.44 | 45.18 | | R-kNN | ×43.8 | 57.70 | 57.12 | 70.10 | 63.74 | 57.65 | 48.50 | 5.28 | 20.81 | 47.68 | 47.54 | | NMT + Representation Refinement | | | | | | | | | | | | | Adapter | ×1.0 | 60.14 | 56.88 | 70.87 | 60.64 | 66.86 | 48.21 | 4.23 | 21.68 | 50.53 | 46.85 | | INK (ours) | ×1.0 | 61.64∗ | 57.75∗ | 71.13 | 61.90∗ | 68.45∗ | 49.12∗ | 8.84∗ | 23.06∗ | 52.52 | 47.85 | ![5_image_0.png](5_image_0.png) ## Representation Refinement According To Knn knowledge brings large performance improvement In Table 2, compared with the adapter baseline that simply align the contextualized representations and word embeddings, INK outperforms it by 1.99 COMET and 1.00 BLEU in average, which demonstrates the effectiveness of adjusting representation distribution with kNN knowledge. To better show the effect of INK framework, we use adapters of different sizes to refine the representation space. Figure 3 shows the BLEU scores and added memory of different systems on four datasets. We can see that representation-refined system occupies much less memory than the datastoreenhanced system. In general, INK systems locates on the top-right of each figure, which means that INK achieves higher BLEU scores with less memory space. In most cases, INK outperforms adapter with a large margin, which demonstrates the superiority of our training framework. Jointly applying adapter and datastore can further smooth predictions Given the fact that both INK and datastore can smooth predictions, we take a step further and explore to use them together as a hybrid approach. Specifically, on top of our INK system, we follow the fashion of R-kNN to use an additional datastore to assist it during inference. Experiment results are shown in Figure 4. On three out of four datasets, we can observe further improvements over INK. On the Law dataset, | Mean kNN Acc (%) | Systems | [0, 1k) | [1k, 5k) | [5k, 10k) | [10k, 20k) | [20k, 30k) | [30k, 42k) | |--------------------|-----------|-----------|------------|-------------|--------------|--------------|--------------| | k=8 | NMT | 77.75 | 73.25 | 71.88 | 66.00 | 64.38 | 51.13 | | INK | 84.25 | 79.00 | 77.63 | 72.25 | 70.50 | 84.13 | | | k=16 | NMT | 76.25 | 70.88 | 69.13 | 63.19 | 61.31 | 34.06 | | INK | 83.81 | 77.31 | 75.75 | 70.00 | 67.88 | 79.50 | | | k=32 | NMT | 74.59 | 68.06 | 66.25 | 60.19 | 57.31 | 30.13 | | INK | 83.41 | 75.41 | 73.50 | 67.44 | 54.84 | 57.09 | | | k=64 | NMT | 72.97 | 64.89 | 62.97 | 56.67 | 52.22 | 28.13 | | INK | 83.20 | 73.16 | 70.80 | 64.31 | 60.38 | 43.05 | | ![6_image_0.png](6_image_0.png) the performance improvement even reaches 4.19 BLEU. On the Medical and IT dataset, the performance improvement is 0.71 BLEU and 0.79 BLEU respectively. Such phenomenon indicates that the representation space of the NMT model is not fully refined by the adapter. If a more effective framework can be designed, the benefit of smoothing representation space will be further revealed. The results on the Koran dataset is an exception here. We suggest that it is because of the sparse training data, which makes it difficult to accurately estimate kNN distribution during inference. ## 5 Analysis And Discussion We conduce more analysis in this section to better understand our INK system. INK greatly refines the representation space of the NMT model Inspired by Li et al. (2022), we evaluate the quality of the representation space by computing mean kNN accuracy, which measures the ratio of k-nearest representations sharing the same target token with the query representation. Ideally, all of the representations in a neighborhood should share the same target token. Here, we use the contextualized representations from the unseen development set as the query. For each query, the nearest representations from the training set will be checked. Table 3 shows the evaluation results on medical dataset. INK achieves higher accuracy than the NMT model consistently. For low frequency tokens, the representation quality gap is especially large. | Systems | BLEU | ∆ | |---------------------------|--------|-------| | INK w/o datastore refresh | 56.95 | -0.80 | | INK w/o L r t | 57.25 | -0.50 | | INK w/o L i t | 57.26 | -0.49 | | INK | 57.75 | - | Ablation study To show the necessity of different proposed techniques in our INK framework, we conduct ablation study in this section. In Table 4, we can see that keeping the datastore frozen degenerates the translation performance most, which demonstrates the necessity of refreshing datastore asynchronously during training. Removing either of the two alignment loss (L it and L r t ) would cause the translation performance to decline, which validates their importance for adjusting the representation distribution. INK enjoys faster inference speed After refining the representation space, our adapted system no longer need to querying datastore during inference. We compare the inference speed 8 of INK and R-kNN. Considering that decoding with large batch size is a more practical setting (Helcl et al., 2022), we evaluate their inference speed with increasing batch sizes. To make our evaluation results more reliable, we repeat each experiment three times and report averaged inference speed. Table 5 shows the results. As the decoding batch size grows, the speed gap between the two adapted system becomes larger. Our INK can achieve up to 1.9× speedup. Besides, due to the fact that neural parameters allows highly parallelizable computation, the inference speed of INK may be further accelerated in the future with the support of non-autoregressive decoding (Qian et al., 2021a; Bao et al., 2022). | Systems | Batch=8 | Batch=32 | Batch=128 | |-----------|-----------|------------|-------------| | R-kNN | 14.0 | 26.1 | 29.4 | | INK | 19.9 | 46.4 | 55.1 | | Speedup | 1.4× | 1.8× | 1.9× | Table 5: Inference speed (sents/s) of MT systems on Law dataset. Compared with R-kNN, INK enjoys up to 1.9× speedup on inference speed. ## 6 Related Work Nearest Neighbor Machine Translation kNNMT presents a novel paradigm for enhancing the NMT system with a symbolic datastore. However, kNN-MT has two major flaws: (1) querying the datastore at each decoding step is time consuming and the datastore occupies large space. (2) the noise representation in the datastore can not be easily updated, which causes the retrieval results to include noise. Recently, a line of work focuses on optimizing system efficiency. Martins et al. (2022a) and Wang et al. (2022a) propose to prune datastore entries and conduct dimension reduction to compress the datastore. Meng et al. (2022) propose to in-advance narrow down the search space with word-alignment to accelerate retrieval speed. Martins et al. (2022b) propose to retrieve a chunk of tokens at a time and conduct retrieval only at a few decoding steps with a heuristic rule. However, according to their em-8We evaluate the inference speed on a single NVIDIA Titan-RTX. pirical results, the translation performance always declines after efficiency optimization. To exclude noise in the retrieval results, Zheng et al. (2021) propose to dynamically decide the usage of retrieved nearest neighbors with a meta-k network. Jiang et al. (2022) propose to dynamically calibrate the kNN distribution and control more hyperparameters in kNN-MT. Li et al. (2022) propose to build datastore with more powerful pre-trained models, e.g. XLM-R (Conneau et al., 2020). However, all of this methods rely on a full datastore during inference. When the training data becomes larger, the inference efficiency of these approaches will becomes worse. Overall, it remains an open challenge to deploy a high-quality and efficient kNN-MT system. Using k**NN knowledge to build better NMT models** As datastore stores a pile of helpful translation knowledge, recent research starts exploring to use kNN knowledge in the datastore to build a better NMT model. As an initial attempt, Yang et al. (2022) try to from scratch train a better NMT model by distilling kNN knowledge into it. Different from their work, we focus on smoothing the representation space of an off-the-shelf NMT model and enhancing its generalization ability via a small adapter. Besides, in our devised injectand-refine training loop we keep datastore being asynchronously updated, while they use a fixed datastore. ## 7 Conclusion In this paper, we propose a novel training framework INK, to iteratively refine the representation space of the NMT model according to kNN knowledge. In our framework, we devise a inject-andrefine training loop, where we adjust the representation distribution by aligning three kinds of representation and refresh the datastore asynchronously with the refined representations to update kNN knowledge. Experiment results on four benchmark dataset shows that INK system achieves an average gain of 1.99 COMET and 1.0 BLEU. Compared with the state-of-the-art kNN system (Robust kNN-MT), our INK also achieves better translation performance with 0.02× memory space and 1.9× inference speed up. ## 8 Limitation Despite promising results, we also observe that refreshing and querying the datastore during training is time-consuming. Our proposed training framework usually takes 3× ∼ 4× training time. In future work, we will explore methods to improve training efficiency. We include a training loop to dynamically use the latest datastore to inject knowledge into neural networks. However, we still find that the kNN knowledge still helps the inference even after our training loops, demonstrating that there still remains space to improve the effectiveness of knowledge injection. ## Acknowledgement We would like to thank the anonymous reviewers for their insightful comments. Shujian Huang is the corresponding author. This work is supported by National Science Foundation of China (No. 62176120), the Liaoning Provincial Research Foundation for Basic Research (No. 2022-KF-2602). ## References Chenxin An, Jiangtao Feng, Kai Lv, Lingpeng Kong, Xipeng Qiu, and Xuanjing Huang. 2022. Cont: Contrastive neural text generation. *arXiv preprint* arXiv:2205.14690. Yu Bao, Hao Zhou, Shujian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, and Lei Li. 2022. latent-GLAT: Glancing at latent variables for parallel text generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In *Proceedings of the Conference on Empirical Methods in* Natural Language Processing and the International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the Annual Meeting of the Association* for Computational Linguistics (ACL). Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tieyan Liu. 2018. Representation degeneration problem in training natural language generation models. In *International Conference on Learning Representations (ICLR)*. Jindˇrich Helcl, Barry Haddow, and Alexandra Birch. 2022. Non-autoregressive machine translation: It's not as fast as it seems. In *Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)*. Hui Jiang, Ziyao Lu, Fandong Meng, Chulun Zhou, Jie Zhou, Degen Huang, and Jinsong Su. 2022. Towards robust k-nearest-neighbor machine translation. arXiv preprint arXiv:2210.08808. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data. Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In *International Conference* on Learning Representations (ICLR). Seanie Lee, Dong Bok Lee, and Sung Ju Hwang. 2021. Contrastive learning with adversarial perturbations for conditional text generation. In *ICLR*. Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Jiahuan Li, Shanbo Cheng, Zewei Sun, Mingxuan Wang, and Shujian Huang. 2022. Better datastore, better translation: Generating datastores from pre-trained models for nearest neural machine translation. arXiv preprint arXiv:2212.08822. Pedro Martins, Zita Marinho, and Andre Martins. 2022a. Efficient machine translation domain adaptation. In Proceedings of the Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge. Pedro Henrique Martins, Zita Marinho, and André FT Martins. 2022b. Chunk-based nearest neighbor machine translation. *arXiv preprint arXiv:2205.12230*. Yuxian Meng, Xiaoya Li, Xiayu Zheng, Fei Wu, Xiaofei Sun, Tianwei Zhang, and Jiwei Li. 2022. Fast nearest neighbor machine translation. In Findings of the Association for Computational Linguistics. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. In *Proceedings of the Conference on Machine Translation (WMT)*. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In Proceedings of the International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research. Emanuel Parzen. 1962. On estimation of a probability density function and mode. *The Annals of Mathematical Statistics*. Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. *Neural Computation*. Zhixian Yang, Renliang Sun, and Xiaojun Wan. 2022. Nearest neighbor knowledge distillation for neural machine translation. In *Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)*. Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. 2021a. Glancing transformer for non-autoregressive neural machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Xin Zheng, Zhirui Zhang, Junliang Guo, Shujian Huang, Boxing Chen, Weihua Luo, and Jiajun Chen. 2021. Adaptive nearest neighbor machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Lihua Qian, Yi Zhou, Zaixiang Zheng, Yaoming Zhu, Zehui Lin, Jiangtao Feng, Shanbo Cheng, Lei Li, Mingxuan Wang, and Hao Zhou. 2021b. The volctrans GLAT system: Non-autoregressive translation meets WMT21. In *Proceedings of the Conference on Machine Translation (WMT)*. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In *Proceedings of the Eighth International Conference on Language Resources and* Evaluation (LREC). - *Fairseq (MIT-license)*, a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization and other text generation tasks. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems (NeurIPS)*. - *Faiss (MIT-license)*, a library for approximate nearest neighbor search. Dexin Wang, Kai Fan, Boxing Chen, and Deyi Xiong. 2022a. Efficient cluster-based k-nearest-neighbor machine translation. In *Proceedings of the Annual* Meeting of the Association for Computational Linguistics (ACL). Qiang Wang, Rongxiang Weng, and Ming Chen. 2022b. Learning decoupled retrieval representation for nearest neighbour neural machine translation. In *Proceedings of the International Conference on Computational Linguistics (COLING)*. ## A Used Scientific Artifacts B Implementation Details Below lists scientific artifacts that are used in our work. For the sake of ethic, our use of these artifacts is consistent with their intended use. We reproduce baseline systems with their released code. We implement our system with *fairseq* (Ott et al., 2019). Adam is used as the optimizer and inverse sqrt is used as the learning rate scheduler. We set 4k warm-up steps and a maximum learning rate as 5e-4. We set batch size as 4096 tokens. All INK systems are trained on a single Tesla A100. During inference, we set beam size as 4 and length penalty as 0.6. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? section 8 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? section 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? appendix a ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? appendix a B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. section 4 ## C ✓ **Did You Run Computational Experiments?** Section 4, 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? section 4, 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? section 4 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
sun-etal-2023-uncertainty
Uncertainty Guided Label Denoising for Document-level Distant Relation Extraction
https://aclanthology.org/2023.acl-long.889
Document-level relation extraction (DocRE) aims to infer complex semantic relations among entities in a document. Distant supervision (DS) is able to generate massive auto-labeled data, which can improve DocRE performance. Recent works leverage pseudo labels generated by the pre-denoising model to reduce noise in DS data. However, unreliable pseudo labels bring new noise, e.g., adding false pseudo labels and losing correct DS labels. Therefore, how to select effective pseudo labels to denoise DS data is still a challenge in document-level distant relation extraction. To tackle this issue, we introduce uncertainty estimation technology to determine whether pseudo labels can be trusted. In this work, we propose a Document-level distant Relation Extraction framework with Uncertainty Guided label denoising, UGDRE. Specifically, we propose a novel instance-level uncertainty estimation method, which measures the reliability of the pseudo labels with overlapping relations. By further considering the long-tail problem, we design dynamic uncertainty thresholds for different types of relations to filter high-uncertainty pseudo labels. We conduct experiments on two public datasets. Our framework outperforms strong baselines by 1.91 F1 and 2.28 Ign F1 on the RE-DocRED dataset.
# Uncertainty Guided Label Denoising For Document-Level Distant Relation Extraction Qi Sun1,2, Kun Huang1, Xiaocui Yang2,3**, Pengfei Hong**2, Kun Zhang1∗, **and Soujanya Poria**2∗ 1Nanjing University of Science and Technology 2Singapore University of Technology and Design, 3Northeastern University {319106003718, huangkun, zhangkun}@njust.edu.cn, {pengfei_hong, sporia}@sutd.edu.sg, yangxiaocui@stumail.neu.edu.cn ## Abstract Document-level relation extraction (DocRE) aims to infer complex semantic relations among entities in a document. Distant supervision (DS) is able to generate massive auto-labeled data, which can improve DocRE performance. Recent works leverage pseudo labels generated by the pre-denoising model to reduce noise in DS data. However, unreliable pseudo labels bring new noise, e.g., adding false pseudo labels and losing correct DS labels. Therefore, how to select effective pseudo labels to denoise DS data is still a challenge in document-level distant relation extraction. To tackle this issue, we introduce uncertainty estimation technology to determine whether pseudo labels can be trusted. In this work, we propose a Documentlevel distant Relation Extraction framework with Uncertainty Guided label denoising, UGDRE. Specifically, we propose a novel instancelevel uncertainty estimation method, which measures the reliability of the pseudo labels with overlapping relations. By further considering the long-tail problem, we design dynamic uncertainty thresholds for different types of relations to filter high-uncertainty pseudo labels. We conduct experiments on two public datasets. Our framework outperforms strong baselines by 1.91 F1 and 2.28 Ign F1 on the RE-DocRED dataset. 1 ## 1 Introduction Document-level Relation Extraction (DocRE) aims to extract relations among entities in a document. In contrast to the conventional RE task that mainly focuses on sentence-level (Zhou et al., 2016; Guo et al., 2019; Tian et al., 2021), DocRE is more challenging due to the complex semantic scenarios, discourse structure of the document, and long-distant interactions between entities. To understand complex inter-sentence entity relations, most existing methods employ transformer- ![0_image_0.png](0_image_0.png) Figure 1: An example of the DS document. We present two types of noise caused by pseudo labels. One is adding new false labels as shown by the solid red line. Another is losing the correct DS label as shown by the red dotted line. We also show the proposed instancelevel uncertainty estimation (UE) scores of pseudo labels. We present partly entities that are marked with different colors. based (Huang et al., 2021a; Zhou et al., 2021; Li et al., 2021) or graph-based models (Nan et al., 2020; Zeng et al., 2020, 2021) that aggregate effective entity representations. Although these methods achieve reasonable performance, they heavily rely on the large-scale human-annotated corpus, which is time-consuming and labor-intensive. Distant supervision mechanism (Mintz et al., 2009) provides large-scale distantly supervised (DS) data autolabeled by existing relational triples from knowledge bases (Xie et al., 2021). Recent works observe that leveraging DS data to pretrain DocRE models can improve performance by a great margin (Xu et al., 2021; Yang Zhou, 2022; Wang et al., 2022). Despite a great quantity of training data autolabeled by distant supervision can enhance the performance of the model, noisy labels in DS data are non-negligible. Yao et al. (2019) show that there are 61.8% noisy inter-sentence instances in ∗Corresponding author 1https://github.com/QiSun123/UGDRE 15960 their provided document-level distant relation extraction dataset. Current efforts (Xiao et al., 2020; Tan et al., 2022a) to alleviate the noise problem mainly employ a pre-denoising model. They train a DocRE model on human-annotated data first and then re-label DS data by the trained model. However, the above methods still persist the risk of noise induction in the DS data due to false positive re-labeling. Besides, false negative pseudo labels also lead to the loss of effective labels in DS data. As shown in Figure 1, we obtain an extra false instance *(The Last Days of Pompeii, Mario* Bonnard, composer) and lose the correct DS instance *(Mario Bonnard , Rome, place of birth)*, when merely relying on pseudo labels. Thus, how to mitigate noise caused by pseudo labels and take full advantage of DS data is still a challenge in document-level distant RE. In this work, we propose a Document-level distant Relation Extraction framework with Uncertainty Guided label denoising, UGDRE. We first train a pre-denoising DocRE model with both DS and human-annotated data to generate pseudo labels. Since false pseudo labels predicted by the pre-denoising model are inevitable, we introduce Uncertainty Estimation (UE) to determine whether model predictions can be trusted or not. As shown in Figure 1, we can remove the false positive pseudo instance (The Last Days of Pompeii, Mario Bonnard, composer) according to its high uncertainty score. In this way, we can abstain from unreliable decisions of the pre-denoising model, which can mitigate the risk of false pseudo labels. Considering there might be multiple relations between an entity pair, we propose an instance-level UE method to capture uncertainty scores for overlapping relations. Moreover, we design a re-labeling strategy with dynamic class uncertainty thresholds by taking the DocRE long-tail problem into account to obtain high-quality DS data. With the proposed uncertainty guided label denoising mechanism, we design a multi-phase training strategy to further boost the performance of our final DocRE model. The main contributions of our work are summarized as follows: - We propose a document-level relation distant extraction framework with uncertainty guided label denoising, which greatly improves the label quality of DS data. - We propose a novel instance-level uncertainty estimation method for overlapping relations to measure the reliability of instance-level pseudo labels. - We design an iterative re-label strategy with dynamic class uncertainty thresholds for the problem of long-tail in DocRE to filter high uncertainty pseudo labels. - The proposed framework achieves significant performance improvements over existing competitive baselines on two public datasets. Extensive experiments illustrate that the performance of baselines trained on our denoised DS (DDS) data is obviously improved. ## 2 Related Work Sentence-level Relation Extraction. Conventional works on RE mainly focus on sentence-level supervised relation extraction (Zhou et al., 2016; Guo et al., 2019; Sun et al., 2022). Although these models achieve great success in RE, they primarily rely on the large-scale human-annotated corpus that needs time-consuming labels. Therefore, early works prefer to use extra data that are autolabeled by distant supervision (DS) (Zeng et al., 2015; Huang et al., 2021b; Peng et al., 2022; Qin et al., 2018). However, the noisy labels caused by distant supervision will influence the performance of these models. Thus, various works are proposed to select effective instances, separate noisy data, and enhance the robustness of models. Most of them tend to perform attention mechanism(Li et al., 2020; Yuan et al., 2019; Han et al., 2018), negative training (Ma et al., 2021), reinforcement learning(Feng et al., 2018), and soft-label strategies (Liu et al., 2017). However, the above DS methods mainly focus on sentence-level RE, which can not be transferred to DocRE directly. Document-level Relation Extraction. DocRE aims to extract relations between each entity pair expressed by multiple mentions across the sentences in a document. Different from the conventional sentence-level RE, DocRE needs the ability to reason relations in a more complex semantic scene. Existing approaches employ transformerbased models to extract contextual information for aggregating entity representations (Yao et al., 2019; Huang et al., 2021a; Zhou et al., 2021; Li et al., 2021). To further capture non-local syntactic and semantic structural information, some works construct document-level graphs and aggregate graph representations by Graph Neural Net- ![2_image_0.png](2_image_0.png) works (GNN) (Sahu et al., 2019; Wang et al., 2020; Eberts and Ulges, 2021; Christopoulou et al., 2019; Nan et al., 2020; Zeng et al., 2020; Wang Xu and Zhao, 2021; Zeng et al., 2021; Sun et al., 2023). Recent works observe that utilizing large-scale auto-labeled data generated by distant supervision (Mintz et al., 2009) to pretrain the DocRE model can attain great performance improvements (Xu et al., 2021; Yang Zhou, 2022; Wang et al., 2022; Hogan et al., 2022). Most of them directly utilize the DS data and ignore the accuracy of DS labels. To obtain high-quality DS data, several methods introduce re-label strategies based on the predenoising RE model trained on human-annotated data (Xiao et al., 2020; Tan et al., 2022a). However, these methods ignore the noise caused by pseudo labels. In this work, we introduce uncertainty estimation to determine the reliability of pseudo labels, which can reduce the noisy pseudo labels to further improve the quality of DS data. ## 3 Methodology In this section, we introduce our proposed framework in detail. As shown in Figure 2, our UGDRE contains four key components: (1) Training a document-level pre-denoising model by the original DS and human-annotated training data; (2) Estimating uncertainty scores of pseudo labels generated by the pre-denoising model; (3) Denoising the DS data by pseudo labels and uncertainty scores; (4) Leveraging a multi-phase training strategy to iteratively train the DocRE model by denoised DS ## 3.1 Problem Formulation Given a document D = {si} t i=1, which is composed of t sentences. Each document contains a set of entities E = {ei} q i=1, where q is the number of entities. An entity might be mentioned multiple times in a document, formulated as ei = {mij} pi j=1, where piis the number of times the entity eiis mentioned. The aim of the document-level relation extraction is to predict relation types between entities, formulated as {(ei, ej , rk)|ei, ej ∈ *E, r*k ∈ R}, where R is the set of pre-defined relation types. In addition, there can be more than one relation type between a specific entity pair in a document. Thus, the DocRE task can be regarded as a multi-label classification task. In the document-level distant relation extraction setting, we have a clean humanannotated dataset and a distantly supervised dataset, while the quantity of DS data is significantly larger than the human-annotated data. ## 3.2 Document-Level Pre-Denoising Model In order to alleviate the noisy label problem in the DS data, we construct a pre-denoising DocRE model to generate pseudo labels. As shown in Figure 2(a), we adopt BERT (Devlin et al., 2019) to capture the contextual representation {zi} n i=1, where n is the number of tokens in a document. We also adopt a dropout layer to enhance the generalization ability of our DocRE model. To capture non-local dependency among entities, we construct a graph for each document. Specifically, we take all tokens in a document as nodes and connect them using the task-specific rules: (1) To capture the semantic information of mentions, tokens belonging to the same mention are connected. (2) To capture interactions between mentions, tokens of mentions belonging to the same entities are connected. (3) To capture the interactions of entities, tokens of entities that co-occur in a single sentence are connected. We construct the adjacency matrix according to the document graph and apply Graph Convolutional Networks (GCN) to capture graph representations {gi} n i=1, which is formulated as follows: $$g_{i}=\rho\left(\sum_{j=1}^{n}A_{i j}W z_{j}+b\right),\qquad\qquad(1)$$ where W ∈ R d×dand b ∈ R dare trainable parameters. zj is the contextual representation of j-th token, which is introduced above. Aij is the adjacency matrix of the document graph. ρ is the activation function. To obtain the global representations {hi} n i=1, we concatenate the contextual representations {zi} n i=1 and graph representations {gi} n i=1 as follows: $$h_{i}=[z_{i},g_{i}].$$ $$\left(2\right)$$ Following Zhou et al. (2021), we also apply logsumexp pooling (Jia et al., 2019) to obtain entity representations {ei} q i=1. Finally, group bilinear (Van Amersfoort et al., 2020) is utilized to obtain the probabilities {p c ij} Nc c=1 of each class c for the entity pair (ei, ej ) to predict relation types. ## 3.3 Instance-Level Uncertainty Estimation Uncertainty Estimation (UE) is a vital technology for misclassification detection (Vazhentsev et al., 2022), out-of-distribution instances detection (Van Amersfoort et al., 2020), and active learning (Burkhardt et al., 2018). In order to model the uncertainty in pre-denoising DocRE model, we introduce the Monte Carlo (MC) dropout (Gal and Ghahramani, 2016) technology into the DocRE task. As a popular UE technology, MC dropout is formally equivalent to approximate Bayesian inference in deep Gaussian processes (Gal and Ghahramani, 2016). This method requires multiple stochastic forward-pass predictions with activated dropout to capture the model uncertainty. ![3_image_0.png](3_image_0.png) Previous works based on MC dropout (Gal et al., 2017; Vazhentsev et al., 2022) calculate the uncertainty score of the model prediction as follows: $$u_{s}=\frac{1}{N_{c}}\sum_{c=1}^{N_{c}}(\frac{1}{N_{t}}\sum_{t=1}^{N_{t}}(p_{t}^{c}-\overline{{{p^{c}}}})^{2}),\qquad(3)$$ where Nc is the number of the class number. Nt is the number of stochastic forward passes. p c t is the probability of the c-th class at t-th stochastic forward passes. p c =1 Nt PNt t=1 p c t is the average probability of the c-th class. The above uncertainty estimation method provides one uncertainty score for each prediction. However, there exist multiple relations for one entity pair, which can be called overlapping relations. Intuitively, different overlapping relations should have their own uncertainty scores. As shown in Figure 3(a), there are two different types of relations between an entity pair *(The Last Days of Pompeii,* Mario Bonnard). It is hard to separate the false positive pseudo label *composer* and correct positive pseudo label *director* by previous UE methods (Gal et al., 2017; Vazhentsev et al., 2022). To solve this issue, we modify the estimation process to obtain the instance-level UE score for each positive pseudo label between an entity pair, which can be seen in Figure 3(b). Inspired by ATLOP (Zhou et al., 2021) that introduces a threshold class c˜ to separate positive and negative relation classes. We calculate the adaptive threshold score τij for entity pair (ei, ej ) as follows: $$\tau_{i j}=\frac{1}{N_{t}}\sum_{t=1}^{N_{t}}p_{i j t}^{\tilde{c}},\qquad\qquad(4)$$ where p c˜ ijt is the probability of the threshold class for entity pair (ei, ej ) at t-th stochastic forward passes. Ntis the number of stochastic forward passes. Then, we regard classes of which average ![4_image_0.png](4_image_0.png) probability p c ij =1 Nt PNt t=1 p c ijt are higher than the threshold τij as positive classes. If all the class probabilities are lower than the probability of the threshold class, we regard "NA (no relationship)" as the relationship type for the entity pair. Then, we calculate the uncertainty score of each positive class for entity pair (ei, ej ) as follows: $$u_{ij}^{c^{*}}=\frac{1}{N_{t}}\sum_{t=1}^{N_{t}}(p_{ijt}^{c^{*}}-\overline{p_{ij}^{c^{*}}})^{2},c^{*}\in\{\overline{p_{ij}^{c}}>\tau_{ij}\},\tag{5}$$ where p c∗ ijt is the probability of the positive class c∗at t-th stochastic forward passes. p c∗ ij = 1 Nt PNt t=1 p c∗ ij is the average probability of the positive class c∗. In this way, we can obtain each positive pseudo label with its uncertainty score between an entity pair, which is shown in Figure 2(b). Each pseudo instance is formulated as (ei, ej , rc∗ , uc∗ ij ). ## 3.4 Uncertainty Guided Label Denoising After obtaining instance-level pseudo labels and their corresponding uncertainty scores, we re-label DS data by the proposed uncertainty-guided denoising strategy (Figure 2(c)). We observe that the distribution of uncertainty scores for each relation class is obviously different, which is shown in Figure 4. Moreover, it can be observed that frequent classes usually contain lower average uncertainty than long-tail classes. Therefore, considering the long-tail problem in the DocRE task, we propose dynamic class uncertainty thresholds to filter pseudo labels with high uncertainty. For each class of relation, the corresponding uncertainty threshold is calculated as follows: $$\eta_{c^{*}}=\overline{{{u^{c^{*}}}}}+\sqrt{\frac{1}{N_{c^{*}}^{\eta}-1}\sum_{l=1}^{N_{c^{*}}^{\eta}}(u_{l}^{c^{*}}-\overline{{{u^{c^{*}}}}})^{2}},\quad\mathbf{(6)}$$ where u $$\mathbf{\partial}\cdot u_{l}^{c^{*}}{\mathrm{~is~tl}}$$ lis the uncertainty score of the l-th pseudo instance of class c∗. u c∗ =1 N η c∗ PN η c∗ l=1 u c∗ iis the average of uncertainty scores for class c∗in all pseudo instances. N η c∗ is the quantity of pseudo instances that belong to class c∗. In our re-label strategy (Figure 2(c)), for each entity pair (ei, ej ), we replace the original DS label with the pseudo label rc∗ that contain the lower uncertainty score u c∗ ij than its class uncertainty thresholds ηc∗ . In this way, we are able to reduce false positive pseudo labels and keep correct DS labels with high-quality pseudo labels. Besides, to reduce false negative pseudo labels, we keep the original DS positive labels where do not exist positive pseudo labels between an entity pair. Algorithm 1 Multi-phase Training Strategy Define: Human-annotated training and test data: HA and DT, DS data: DS, Iteration: K, DocRE Model: M, Pseudo labels with UE: P U, Denoised DS data: DDS, Relations: T R. D3 data: $DDS$, **R**calations. $T$**. **Require: $DS$, $HA$, $K$, $M$. **Ensurre: $K>0$**. $M_{pretrain}\gets Train\,(M,DS)$ $M_{finetune}\gets Train\,(M_{pretrain},HA)$ **for**$i=1$; $i<=K$; $i++$**do** $1.$ $PU\gets Predict\,(M_{finetune},DS)$ $2.$ $DDS\gets Denoise\,(DS,PU)$ $3.$ $M\gets Realized\,(M_{finetune})$ $4.$ $M_{pretrain}\gets Train\,(M,DDS)$ $5.$ $M_{finetune}\gets Train\,(M_{pretrain},HA)$ $6.$ $DS\gets DDS$ **end for** $T$_D $:=David\,(M_{predation},DT)$ return T R ## 1 Introduction The $\pi^{0}$-$\pi^{0}$ scattering is a very important tool to study the $\pi^{0}$-$\pi^{0}$ scattering. The scattering is a very important tool to study the $\pi^{0}$ scattering. ## 3.5 Multi-Phase Training Strategy In order to take full advantage of the DS data for further boosting the performance of the DocRE model, we design a multi-phase training strategy to iteratively re-label the DS data, which is shown in Algorithm 1. We introduce the overall process as follows. (1) We train the initial pre-denoising RE model with the original DS data and humanannotated data. (2) We leverage the pre-denoising | DocRED | Re-DocRED | | | | | | | | |-------------------------------|-------------|-------|--------|-------|--------|-------|--------|-------| | Model | Dev | Test | Dev | Test | | | | | | F1 | Ign F1 | F1 | Ign F1 | F1 | Ign F1 | F1 | Ign F1 | | | ATLOP (Zhou et al., 2021) | 63.42 | 61.57 | 63.48 | 61.43 | 74.34 | 73.62 | 74.23 | 73.53 | | DocuNet (Zhang et al., 2021) | 64.35 | 62.66 | 64.00 | 61.93 | 76.22 | 75.50 | 75.35 | 74.61 | | NCRL (Yang Zhou, 2022) | 63.87 | 61.65 | 63.45 | 60.98 | 75.85 | 74.91 | 75.90 | 75.00 | | SSR-PU (Wang et al., 2022) | 63.00 | 60.43 | 62.66 | 59.80 | 76.83 | 75.57 | 76.23 | 74.96 | | KD-NA* (Tan et al., 2022a) | 64.17 | 62.18 | 64.12 | 61.77 | 76.14 | 75.25 | 76.00 | 75.12 | | KD-DocRE* (Tan et al., 2022a) | 64.81 | 62.62 | 64.76 | 62.56 | 76.47 | 75.30 | 76.14 | 74.97 | | UGDRE (Ours) | 65.71 | 63.62 | 65.58 | 63.26 | 78.28 | 77.32 | 78.14 | 77.24 | RE model to generate pseudo instances with uncertainty scores on DS data. (3) We perform a re-label strategy to obtain denoised distantly supervised (DDS) data. (4) We re-initialize and train the pre-denoising DocRE with DDS data and humanannotated data to boost performance. We iterate the above (2), (3), and (4) phases until we obtain the best DocRE model. ## 4 Experiments 4.1 Dataset And Settings Dataset. DocRED (Yao et al., 2019) is a popular DocRE dataset with 96 pre-defined relation types, which is constructed from Wikipedia and Wikidata. It provides a distant-supervised dataset with 101873 documents and a large-scale human-annotated dataset with 5053 documents. Re-DocRED is a high-quality revised version of human-annotated documents of DocRED, which is provided by Tan et al. (2022b) recently. ReDocRED contains 3053, 500, and 500 documents for training, development, and testing. See Appendix A.1 for more details. Settings. Following previous works (Zhou et al., 2021; Tan et al., 2022a), we adopt BERT*base* (Devlin et al., 2019) as the context encoder. We use AdamW (Loshchilov and Hutter, 2019) as the optimizer. We set the learning rate to 3e-5. We apply warmup for the initial 6% steps. We set the batch size to 8 for both the training and test process. The rate of the dropout is 0.25. All hyper-parameters are tuned on the development set. The experiments are conducted on a single NVIDIA RTX A600048G GPU. DocRED and RE-DocRED both contain 3053 human-annotated training documents for finetuning and 101873 distantly supervised training documents for pretraining. Thus, for each dataset, our framework takes about 55 hours and consumes about 23G GPU memory for training. Following Yao et al. (2019), we use F1 and *IgnF*1 as the evaluation metrics. The *IgnF*1 represents F1 scores, which excludes the relational facts shared by the human-annotated training set. ## 4.2 Compared Methods We compare our UGDRE with several strong baselines that are trained on both DS and humanannotated data. **ATLOP** (Zhou et al., 2021) utilizes an adaptive thresholding loss to solve the overlapping relation problem, and adopts a localized context pooling to aggregate entity representations. DocuNet (Zhang et al., 2021) regards the DocRE task as a semantic segmentation task that provides a new view to extract document-level relations. NCRL (Yang Zhou, 2022) uses a multi-label loss that prefers large label margins between the "NA" relation class and each predefined class. **SSR-PU** (Wang et al., 2022) is a positive-unlabeled learning framework, which adapts DocRE with incomplete labeling. **KD-DocRE** (Tan et al., 2022a) attempts to overcome the differences between humanannotated and DS data by knowledge distillation. They also provide the **KD-NA** (Tan et al., 2022a), which is pretrained by DS data first and then finetuned by human-annotated data. ## 4.3 Experimental Results We compare our UGDRE framework with the above baselines, which are also based on BERT*base* (Devlin et al., 2019) and trained on both DS data and human-annotated data. As shown in Table 1, our framework UGDRE outperforms the previous baselines on both DocRED and RE-DocRED datasets. Specifically, our UGDRE achieves 65.71 | Origin | After Denoising | | | | | | | | | | |------------------|-------------------|-------|--------|-------|--------|-------|--------|-------|---------|--------| | Model | Improvement | | | | | | | | | | | Dev | Test | Dev | Test | | | | | | | | | F1 | Ign F1 | F1 | Ign F1 | F1 | Ign F1 | F1 | Ign F1 | ∆F1 | ∆Ign F1 | | | +DocRED ATLOP | 54.38 | 51.62 | 53.10 | 50.01 | 59.00 | 56.35 | 58.34 | 55.35 | +5.24 | +5.34 | | DocuNet | 53.79 | 50.91 | 52.96 | 49.69 | 59.03 | 56.17 | 58.05 | 54.85 | +5.09 | +5.16 | | NCRL | 54.53 | 51.66 | 53.26 | 50.03 | 59.39 | 56.71 | 58.50 | 55.46 | +5.24 | +5.43 | | KD-NA | 54.02 | 50.94 | 54.10 | 50.65 | 58.39 | 55.31 | 58.20 | 54.79 | +4.10 | +4.14 | | UGDRE (Ours) | 54.74 | 51.91 | 54.47 | 51.27 | 59.75 | 56.84 | 58.92 | 55.67 | +4.45 | +4.40 | | +RE-DocRED ATLOP | 43.48 | 42.69 | 42.59 | 41.77 | 75.99 | 74.86 | 75.29 | 74.16 | +32.70 | +32.39 | | DocuNet | 44.22 | 43.38 | 43.89 | 43.02 | 76.38 | 75.18 | 75.64 | 74.44 | +31.75 | +31.42 | | NCRL | 44.71 | 43.87 | 44.09 | 43.23 | 76.39 | 75.19 | 75.69 | 74.50 | +31.60 | +31.27 | | KD-NA | 45.55 | 44.58 | 45.38 | 44.41 | 76.11 | 74.78 | 75.37 | 74.00 | +29.99 | +29.59 | | UGDRE (Ours) | 45.56 | 44.71 | 44.76 | 43.94 | 76.47 | 75.24 | 75.57 | 74.32 | +30.81 | +30.38 | F1 and 78.14 F1 on the test set of DocRED and RE-DocRED datasets, respectively. Our UGDRE outperforms the KD-DocRE (Tan et al., 2022a) that leverages knowledge distillation to denoise by 2.00 F1 and 0.82 F1 on the test set of RE-DocRE and DocRED datasets. Moreover, our UGDRE significantly outperforms the latest strong baseline SSR-PU (Wang et al., 2022) by 1.91 F1 and 2.28 Ign F1 on the Re-DocRED dataset. This suggests the effectiveness of our uncertainty-guided denoise strategy. Besides, we observe that improvements on the RE-DocRED dataset are obviously higher than DocRED dataset, which can be caused by the following: 1) The RE-DocRED dataset is a revised version of the DocRED dataset by adding more positive instances. It alleviates the imbalance problem of positive and negative instances. 2) The predenoising model trained on RE-DocRED achieves a higher ability to discover relations, which will enhance the denoise process. ## 5 Analysis And Discussion In this section, we conduct extensive experiments to further analyze the effectiveness of our proposed denoising strategy and instance-level UE method. We also conduct the ablation study to discuss the contribution of each component of the framework. ## 5.1 Effectiveness Of The Denoising Strategy In order to intuitively demonstrate the effectiveness of our uncertainty-guided denoising strategy. We present experimental results of several DocRE baselines only trained on original DS data and our denoised DS (DDS) data. As shown in Table 2, we ![6_image_1.png](6_image_1.png) ![6_image_0.png](6_image_0.png) F1 ![6_image_2.png](6_image_2.png) ![6_image_3.png](6_image_3.png) can observe that all baselines trained on our DDS data obtain significant performance improvements on both DocRED and RE-DocRED. In contrast to the original DS data, the performance of baselines trained on our DDS data increases more than 4 F1 and 29 F1 on the test set of the DocRED and REDocRED datasets. This further demonstrates the effectiveness of our uncertainty guided denoising strategy. We observe that when training on original DS data, the performance of baselines on the REDocRED dataset is obviously lower than the DocRED dataset. This is because there are more positive instances in the RE-DocRED dataset than in the DocRED dataset, which makes the noise problem more obvious. Thus, the performance improvement of models trained on our denoised data will also be more obvious. The performance improvements of baselines that are fine-tuned on humanannotated data can be seen in Appendix A.2. In addition, we also evaluate the performance of the above models on each relation type. As shown in Figure 5, the performance improvement ![7_image_0.png](7_image_0.png) | Model | Dev | Test | | | |------------|--------|--------|--------|-------| | F1 | Ign F1 | F1 | Ign F1 | | | SR | 73.41 | 72.43 | 72.40 | 71.39 | | Entropy | 74.42 | 73.27 | 73.49 | 72.31 | | PV Dropout | 75.54 | 74.41 | 74.87 | 73.71 | | UGDRE | 76.47 | 75.24 | 75.57 | 74.32 | of the long-tail relation type *country of origin* is obviously higher than the frequent relation type publication date after training on our denoised data. This indicates the effectiveness of our dynamic class uncertainty thresholds designed for the longtail problem in DocRE task. ## 5.2 **Effectiveness Of Instance-Level Uncertainty** Estimation We compare our proposed instance-level UE with existing popular UE technologies (Vazhentsev et al., 2022) as follows: 1) Softmax Response (SR); 2) Entropy; 3) Probability Variance (PV) with MC dropout. The performance of the DocRE model trained on denoised DS data that performed different UE technology is shown in Table 3. It can be observed that the DocRE model based on our instance-level UE outperforms SR, entropy, and PV dropout based methods on the test set of the REDocRED dataset. This is because our instance-level UE provides specific UE scores for different overlapping relations, which enables our downstream uncertainty guided relabel strategy to separate the false pseudo label from the overlapping relations. | Model | Dev | Test | | | |--------------|--------|--------|--------|-------| | F1 | Ign F1 | F1 | Ign F1 | | | UGDRE | 78.28 | 77.32 | 78.14 | 77.24 | | w/o Pretrain | 74.25 | 73.36 | 74.10 | 73.21 | | w/o DDS | 76.91 | 76.00 | 76.16 | 75.23 | | w/o UE | 77.66 | 76.80 | 76.84 | 75.99 | The experimental results also demonstrate the effectiveness of our proposed instance-level UE method. ## 5.3 Case Study We present several samples of DS data that are denoised by our UGDRE framework in Figure 6. It can be observed that our framework denoises the DS data by 1) adding the extra correct positive instance, such as *(Johnnys, April 1962, inception)*; 2) Removing false DS instances, such as (My Grandfather's Clock, Henry Clay Work, lyrics by). Moreover, we also present pseudo labels with our instance-level UE scores to show the process of re-relabel strategy. As shown in the second and fourth samples of Figure 6, our framework is able to reduce the false pseudo labels by their high uncertainty scores, such as *(Kyu Sakamoto, Sukiyaki,* notable work) and (Glenn Frey, Eagles, member of). ## 5.4 Ablation Study To analyze the effectiveness of each component in our UGDRE framework, we conduct the ablation study by removing different components. As shown in Table 3, the performance decreases as ![8_image_0.png](8_image_0.png) removing each component, which demonstrates the effectiveness of our framework. When we remove the pretrain mechanism with DS data, the DocRE model trained by merely human-annotated data achieves 74.10 F1 and 73.21 Ign F1 on the test set of RE-DocRED dataset. This drop demonstrates that leveraging DS data can enhance the performance of the DocRE model. Removing the denoised distantly supervised (DDS) data leads to a 1.98 and 2.01 drop in terms of F1 and Ign F1 on the test set of RE-DocRED dataset. This indicates the significant effect of our uncertainty guided label denoising strategy. Our UGDRE framework is also effective on sentence-level distant RE, which can be seen in Appendix A.3. As shown in Figure 7, we also present the performance of each iteration of the model that is pretrained on DDS and fine-tuned on human-annotated data. We can observe that the final model performance achieves the best by the second iteration of Algorithm 1, which proves the effectiveness of our multi-phase training strategy. Moreover, the removal of our instance-level uncertainty estimation also causes an obvious drop, which illustrates the importance of estimating uncertainty in our framework. ## 6 Conclusion In this paper, we propose a Document-level distant Relation Extraction framework with Uncertainty Guided label denoising, UGDRE. Specifically, we propose instance-level uncertainty estimation to measure the reliability of pseudo labels. Considering the long-tail problem, we design dynamic class uncertainty thresholds to filter high-uncertainty pseudo labels. Our proposed uncertainty guided denoising strategy can improve the quality of DS data. Experimental results demonstrate that our UGDRE outperforms competitive baselines. Moreover, extensive experiments verify the effectiveness of our label denoising. There are various challenges in DocRE worth exploring, one is to research the lowresource relation extraction. ## Limitations In this section, we discuss the limitations of our proposed framework. Our UGDRE can reduce the false positive pseudo label by estimating the uncertainty of the model prediction. However, it is difficult to reduce the false negative pseudo labels by uncertainty estimation. Our framework also relies on human-annotated data to train the pre-denoising model, which causes the sensitivity of our framework to the quality of human-annotated data. Thus, the improvements of models that continue training on the DocRED dataset are not as well as on the RE-DocRED dataset. Moreover, iterative training introduces additional computing overhead, which makes the training process time-consuming. ## Acknowledgements Thanks to all co-authors for their hard work. The work is supported by the Chinese Scholarship Council, the National Program on Key Research Project of China (Project no. 2020XXXXXX6404), the Ministry of Education, Singapore, under its AcRF Tier-2 grant (Project no. T2MOE2008, and Grantor reference no. MOE-T2EP20220-0017), and A*STAR under its RIE 2020 AME programmatic grant (project reference no. RGAST2003). Any opinions, findings, and conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore. ## References Sophie Burkhardt, Julia Siekiera, and Stefan Kramer. 2018. Semisupervised bayesian active learning for text classification. In *Bayesian Deep Learning Workshop at NeurIPS*. Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Connecting the dots: Document-level neural relation extraction with edge-oriented graphs. In *Proceedings of EMNLP*, pages 4927–4938. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL*, pages 4171–4186. Markus Eberts and Adrian Ulges. 2021. An end-to-end model for entity-level relation extraction using multiinstance learning. In *Proceedings of EACL*, pages 3650–3660. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement learning for relation classification from noisy data. In *Proceedings of* AAAI. Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *Proceedings of ICML*, pages 1050–1059. Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep bayesian active learning with image data. In Proceedings of ICML, pages 1183–1192. Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In *Proceedings of ACL*, pages 241–251. Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. 2018. Hierarchical relation extraction with coarse-to-fine grained attention. In *Proceedings of* EMNLP, pages 2236–2245. William Hogan, Jiacheng Li, and Jingbo Shang. 2022. Fine-grained contrastive learning for relation extraction. In *Proceedings of EMNLP*, pages 1083–1095. Quzhe Huang, Shengqi Zhu, Yansong Feng, Yuan Ye, Yuxuan Lai, and Dongyan Zhao. 2021a. Three sentences are all you need: Local path enhanced document relation extraction. In *Proceedings of ACL*, pages 998–1004. Wenti Huang, Yiyu Mao, Liu Yang, Zhan Yang, and Jun Long. 2021b. Local-to-global gcn with knowledgeaware representation for distantly supervised relation extraction. *Knowledge-Based Systems*, page 107565. Robin Jia, Cliff Wong, and Hoifung Poon. 2019. Document-level n-ary relation extraction with multiscale representation learning. In Proceedings of NAACL, pages 3693–3704. Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren, and Donghong Ji. 2021. Mrn: A locally and globally mention-based reasoning network for document-level relation extraction. In *Findings of ACL*, pages 1359– 1370. Yang Li, Guodong Long, Tao Shen, Tianyi Zhou, Lina Yao, Huan Huo, and Jing Jiang. 2020. Self-attention enhanced selective gate with entity-aware embedding for distantly supervised relation extraction. In *Proceedings of AAAI*, 05, pages 8269–8276. Tianyu Liu, Kexiang Wang, Baobao Chang, and Zhifang Sui. 2017. A soft-label method for noise-tolerant distantly supervised relation extraction. In Proceedings of EMNLP, pages 1790–1795. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *Proceedings of ICLR*. Ruotian Ma, Tao Gui, Linyang Li, Qi Zhang, Xuan-Jing Huang, and Yaqian Zhou. 2021. Sent: Sentence-level distant relation extraction via negative training. In Proceedings of ACL, pages 6201–6213. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In *Proceedings of ACL*, pages 1003–1011. Guoshun Nan, Zhijiang Guo, Ivan Sekulic, and Wei Lu. 2020. Reasoning with latent structure refinement for document-level relation extraction. In Proceedings of ACL, pages 1546–1557. Tao Peng, Ridong Han, Hai Cui, Lin Yue, Jiayu Han, and Lu Liu. 2022. Distantly supervised relation extraction using global hierarchy embeddings and local probability constraints. *Knowledge-Based Systems*, page 107637. Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Dsgan: Generative adversarial training for distant supervision relation extraction. In Proceedings of ACL, pages 496–505. Sunil Kumar Sahu, Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Inter-sentence relation extraction with document-level graph convolutional neural network. In *Proceedings of ACL*, pages 4309– 4316. Qi Sun, Kun Zhang, Kun Huang, Tiancheng Xu, Xun Li, and Yaodi Liu. 2023. Document-level relation extraction with two-stage dynamic graph attention networks. *Knowledge-Based Systems*, 267:110428. Qi Sun, Kun Zhang, Laishui Lv, Xun Li, Kun Huang, and Ting Zhang. 2022. Joint extraction of entities and overlapping relations by improved graph convolutional networks. *Applied Intelligence*, 52(5):5212– 5224. Qingyu Tan, Ruidan He, Lidong Bing, and Hwee Tou Ng. 2022a. Document-level relation extraction with adaptive focal loss and knowledge distillation. In Findings of ACL, pages 1672–1681. Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng, and Sharifah Mahani Aljunied. 2022b. Revisiting docred - addressing the false negative problem in relation extraction. In *Proceedings of EMNLP*. Yuanhe Tian, Guimin Chen, Yan Song, and Xiang Wan. 2021. Dependency-driven relation extraction with attentive graph convolutional networks. In *Proceedings of ACL*, pages 4458–4471. Joost Van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. 2020. Uncertainty estimation using a single deep deterministic neural network. In *Proceedings of ICML*, pages 9690–9700. Artem Vazhentsev, Gleb Kuzmin, Artem Shelmanov, Akim Tsvigun, Evgenii Tsymbalov, Kirill Fedyanin, Maxim Panov, Alexander Panchenko, Gleb Gusev, Mikhail Burtsev, Manvel Avetisian, and Leonid Zhukov. 2022. Uncertainty estimation of transformer predictions for misclassification detection. In *Proceedings of ACL*, pages 8237–8252. Difeng Wang, Wei Hu, Ermei Cao, and Weijian Sun. 2020. Global-to-local neural networks for documentlevel relation extraction. In *Proceedings of EMNLP*, pages 3711–3721. Ye Wang, Xinxin Liu, Wenxin Hu, and Tao Zhang. 2022. A unified positive-unlabeled learning framework for document-level relation extraction with different levels of labeling. In *Proceedings of EMNLP*, page 4123–4135. Kehai Chen Wang Xu and Tiejun Zhao. 2021. Discriminative reasoning for document-level relation extraction. In *Findings of ACL*, pages 1653–1663. Chaojun Xiao, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Maosong Sun, Fen Lin, and Leyu Lin. 2020. Denoising relation extraction from documentlevel distant supervision. In *Proceedings of EMNLP*, pages 3683–3688. Chenhao Xie, Jiaqing Liang, Jingping Liu, Chengsong Huang, Wenhao Huang, and Yanghua Xiao. 2021. Revisiting the negative data of distantly supervised relation extraction. In *Proceedings of ACL*. Benfeng Xu, Quan Wang, Yajuan Lyu, Yong Zhu, and Zhendong Mao. 2021. Entity structure within and throughout: Modeling mention dependencies for document-level relation extraction. In Proceedings of AAAI, 16, pages 14149–14157. Wee Sun Lee Yang Zhou. 2022. None class ranking loss for document-level relation extraction. In *Proceedings of IJCAI*, pages 4538–4544. Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. Docred: A large-scale document-level relation extraction dataset. In *Proceedings of ACL*, pages 764–777. Yujin Yuan, Liyuan Liu, Siliang Tang, Zhongfei Zhang, Yueting Zhuang, Shiliang Pu, Fei Wu, and Xiang Ren. 2019. Cross-relation cross-bag attention for distantlysupervised relation extraction. In *Proceedings of* AAAI, pages 419–426. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In *Proceedings of EMNLP*, pages 1753–1762. Shuang Zeng, Yuting Wu, and Baobao Chang. 2021. Sire: Separate intra- and inter-sentential reasoning for document-level relation extraction. In *Findings* of EMNLP, pages 524–534. Shuang Zeng, Runxin Xu, Baobao Chang, and Lei Li. 2020. Double graph based reasoning for documentlevel relation extraction. In *Proceedings of EMNLP*, pages 1630–1640. Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, and Huajun Chen. 2021. Document-level relation extraction as semantic segmentation. In Proceedings of IJCAI, pages 3999–4006. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In *Proceedings of ACL*, pages 207–212. Wenxuan Zhou, Kevin Huang, Tengyu Ma, and Jing Huang. 2021. Document-level relation extraction with adaptive thresholding and localized context pooling. In *Proceedings of AAAI*, 16, pages 14612– 14620. ## A Appendix A.1 Statistics Of Datasets We present statistics of the public DocRE datasets, including DocRED (Yao et al., 2019), RE-DocRED (Tan et al., 2022b), and the distantly supervised data provided by Yao et al. (2019). | Dataset | # Document | Avg. # Instance | |----------------------|--------------|-------------------| | DocRED-Train | 3,053 | 12.5 | | DocRED-Dev | 1,000 | 12.3 | | DocRED-Test | 1,000 | 12.8 | | Re-DocRED-Train | 3,053 | 28.1 | | Re-DocRED-Dev | 500 | 34.6 | | Re-DocRED-Test | 500 | 34.9 | | Distantly Supervised | 101,873 | 14.8 | Table 5: Statistics of the Re-DocRED, DocRED, and distantly supervised dataset. ## A.2 Results Of Baselines With Fine-Tuning We present the results of baseline models that are pretrained on our denoised data and fine-tuned on the human-annotated data of the RE-DocRED dataset. As shown in Table 6, the final performance of most baseline models that are pretrained on our denoised data is significantly improved. ## A.3 Sentence-Level Relation Extraction Our framework can also be applied to the sentencelevel relation extraction task. We reconstruct a sentence-level relation extraction dataset from the distantly supervised training data and RE-DocRED datasets. The sentence-level RE dataset contains 231,107 DS training data, 10,033 human-annotated training data, 1,862 human-annotated development data, and 1,794 human-annotated test data. We perform our UGDRE on the sentence-level RE task in the same training way. As shown in Table | DS | After Denoising | | | | | | | | | | |--------------|-------------------|-------|--------|-------|--------|-------|--------|-------|---------|------| | Model | Improvement | | | | | | | | | | | Dev | Test | Dev | Test | | | | | | | | | F1 | Ign F1 | F1 | Ign F1 | F1 | Ign F1 | F1 | Ign F1 | ∆F1 | ∆Ign F1 | | | ATLOP | 74.34 | 73.62 | 74.23 | 73.53 | 77.30 | 76.63 | 76.95 | 76.28 | 2.72 | 2.75 | | DocuNet | 76.22 | 75.50 | 75.35 | 74.61 | 77.69 | 76.90 | 77.72 | 76.97 | 2.37 | 2.36 | | NCRL | 75.85 | 74.91 | 75.90 | 75.00 | 77.71 | 76.84 | 76.78 | 75.92 | 0.88 | 0.92 | | KD-NA | 76.14 | 75.25 | 76.00 | 75.12 | 78.16 | 77.23 | 77.73 | 76.86 | 1.73 | 1.74 | | UGDRE (Ours) | 76.91 | 76.00 | 76.16 | 75.23 | 78.28 | 77.32 | 78.14 | 77.24 | 1.98 | 2.01 | | Model | Dev | Test | | | |--------------|--------|--------|--------|-------| | F1 | Ign F1 | F1 | Ign F1 | | | UGDRE-SRE | 79.00 | 78.38 | 78.52 | 77.94 | | w/o Pretrain | 76.88 | 76.28 | 76.41 | 75.86 | | w/o DDS | 78.08 | 77.47 | 77.68 | 77.14 | Table 7: Experimental results of our framework on sentence-level relation extraction task. 7, the performance of the final sentence-level RE model UGDRE-SRE that pretrained on the DDS data is also improved. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Introduction (Sec.1) ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** Experiments(Sec.4), Analysis and Discussion(Sec.5) ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Dataset and Settings (Sec.4.1) The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Dataset and Settings (Sec.4.1) ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Experiments(Sec.4), Analysis and Discussion(Sec.5) C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
ramshetty-etal-2023-cross
Cross-Modal Attribute Insertions for Assessing the Robustness of Vision-and-Language Learning
https://aclanthology.org/2023.acl-long.890
The robustness of multimodal deep learning models to realistic changes in the input text is critical for applicability on important tasks such as text-to-image retrieval and cross-modal entailment. To measure robustness, several existing approaches edit the text data, but without leveraging the cross-modal information present in multimodal data. Such information from the visual modality, such as color, size, and shape, provides additional attributes that users can include in their inputs. Thus, we propose cross-modal attribute insertions as a realistic perturbation strategy for vision-and-language data that inserts visual attributes of the objects in the image into the corresponding text (e.g., {``}girl on a chair{''} to {``}little girl on a wooden chair{''}). Our proposed approach for cross-modal attribute insertions is modular, controllable, and task-agnostic. We find that augmenting input text using cross-modal insertions causes state-of-the-art approaches for text-to-image retrieval and cross-modal entailment to perform poorly, resulting in relative drops of {\textasciitilde}15{\%} in MRR and {\textasciitilde}20{\%} in F1 score, respectively. Crowd-sourced annotations demonstrate that cross-modal insertions lead to higher quality augmentations for multimodal data than augmentations using text-only data, and are equivalent in quality to original examples. We release the code to encourage robustness evaluations of deep vision-and-language models: \url{https://github.com/claws-lab/multimodal-robustness-xmai}
# Cross-Modal Attribute Insertions For Assessing The Robustness Of Vision-And-Language Learning Shivaen Ramshetty∗and **Gaurav Verma**∗and **Srijan Kumar** Georgia Institute of Technology {sramshetty3, gverma, srijan}@gatech.edu ## Abstract The robustness of multimodal deep learning models to realistic changes in the input text is critical for their applicability to important tasks such as text-to-image retrieval and cross-modal entailment. To measure robustness, several existing approaches edit the text data, but do so without leveraging the cross-modal information present in multimodal data. Information from the visual modality, such as color, size, and shape, provide additional attributes that users can include in their inputs. Thus, we propose cross-modal attribute insertions as a realistic perturbation strategy for vision-and-language data that inserts visual attributes of the objects in the image into the corresponding text (e.g., "girl on a chair" → "little girl on a wooden chair"). Our proposed approach for cross-modal attribute insertions is modular, controllable, and task-agnostic. We find that augmenting input text using cross-modal insertions causes state-of-the-art approaches for text-to-image retrieval and cross-modal entailment to perform poorly, resulting in relative drops of ∼ 15% in MRR and ∼ 20% in F1 score, respectively. Crowd-sourced annotations demonstrate that cross-modal insertions lead to higher quality augmentations for multimodal data than augmentations using text-only data, and are equivalent in quality to original examples. We release the code to encourage robustness evaluations of deep vision-and-language models: https://github.com/claws-lab/ multimodal-robustness-xmai. ## 1 Introduction The ability to model the interaction of information in vision and language modalities powers several web applications - text-to-image search (He et al., 2016), summarizing multimodal content (Zhu et al., 2018), visual question answering (Antol et al., ![0_image_0.png](0_image_0.png) Figure 1: We propose Cross-Modal Attribute Insertions (XMAI) - an approach that leverages cross-modal interactions in multimodal data to obtain meaningful text augmentations that methods using text-only information (e.g., CLARE) cannot provide. These augmentations highlight vulnerabilities of multimodal models; in this case, the corresponding image is retrieved at a worse rank (104 → 506) for the modified caption. 2015), and editing images using language commands (Shi et al., 2021). Ensuring satisfactory user experience within such applications necessitates the development of multimodal models that can *robustly* process text and image data, jointly. Existing research has demonstrated the brittle reasoning mechanism of text-only and image-only models by introducing variations in the inputs (Evtimov et al., 2020; Li et al., 2021a). Furthermore, prior work have established controlled generation methods for text (Ross et al., 2022), including counterfactuals for model assessment (Madaan et al., 2021; Wu et al., 2021). However, beyond applying modality-specific perturbations to multimodal (image + text) data (Qiu et al., 2022) , existing research has not studied the robustness of models to *likely* augmentations in text that leverage cross-modal interactions. Specifically, current research on text augmentation considers the following likely variations: skipping certain words, introducing typographical errors, inserting noun or verb modifiers, or using synonyms. Consequently, to study the robustness of deep models, several automated methods have been developed to intro- *Equal contribution. duce these variations in the text. However, while these *text-only* perturbations can cover more variations, they are by no means exhaustive with respect to multimodal data. In the context of multimodal data, the text accompanying an image can be meaningfully perturbed to include information from the image. For instance, users can issue a query on a search engine that specifies attributes of the desired image(s); '*a male driver posing with a red* car' instead of '*a driver posing with a car*.' Existing augmentation approaches can only model text-only data and cannot introduce relevant crossmodal information (like 'male' and 'red' in the above example) while generating augmentations. We propose novel text variations that leverage the image modality to insert relevant information into the text, which we call cross-modal attribute insertions. Our method inserts attributes of objects that are both present in the image and mentioned in the text. To do so, *cross-modal attribute* insertion uses object detection to capture objects and their attributes in the image (Anderson et al., 2018), and masked-language modeling to place those attributes prior to the object's mentions in the text (Devlin et al., 2019) (see Figure 1). Additionally, we use embedding similarities to expand the search space of possible augmentations, and introduce an adversarial component to estimate the robustness of multimodal models. Our proposed approach is highly modular, controllable, and task-agnostic. Different modules govern attribute selection from images, cross-modal object matching, attribute insertion in text, and adversarial strength of the augmented example. The contribution of these modules toward the final augmented text can be controlled using weights that can be tuned as hyper-parameters. Finally, our approach for generating augmentations does not involve any parameter training, which makes it task-agnostic and broadly applicable. We demonstrate the applicability of our crossmodal attribute insertion approach by generating augmentations for assessing the robustness of models for two different multimodal tasks - (a) text-toimage retrieval and (b) cross-modal entailment. Together, these two tasks are representative of ranking and classification multimodal tasks. Our evaluation comprises assessing the robustness of state-of-theart multimodal learning approaches for these tasks to our augmentations as well as quantifying the relevance of generated augmentations to unmodified examples. We contrast our cross-modal attribute insertions with several baseline approaches that model text-only information. Our key contributions and findings are: - We propose cross-modal attribute insertions as a new realistic variation in multimodal data. Our proposed approach introduces these variations in a modular, controllable, and task-agnostic manner. - We demonstrate that state-of-the-art approaches for text-to-image retrieval and cross-modal entailment are not robust to cross-modal attribute insertions, demonstrating relative drops of ∼ 15% and ∼ 20% in MRR and F1 score, respectively. - While being as effective as existing text-only augmentation methods in highlighting model vulnerabilities, our approach produces augmentations that human annotators perceive to be of better quality than the most competitive text-only augmentation method. Furthermore, our method matches the quality of unmodified textual examples, while being at least 9× faster than the most competitive baseline across the two multimodal tasks. Overall, we find that cross-modal attribute insertions produce novel, realistic, and human-preferred text augmentations that are complementary to current text-only perturbations, and effectively highlight the vulnerabilities of multimodal models. Future work could employ our augmentation strategy to evaluate and develop more robust multimodal models. ## 2 Related Work Text Augmentations in Multimodal Data: Existing research investigating the robustness of deep learning models for natural language processing has proposed several automated approaches to introduce plausible variations in the text. Ribeiro et al. (2020) and Naik et al. (2018) propose a comprehensive list of perturbations that NLP models should be robust to - including distracting phrases, URLs, word contractions and extensions. Many of these perturbations are task-agnostic and hence can be used to modify the text in multimodal (image + text) data as well. Similarly, other task-agnostic approaches to modify text data include random deletion, swapping, and insertion of words (Wei and Zou, 2019) and replacing, inserting, and merging words or phrases using masked language modeling (Li et al., 2021a). TextAttack (Morris et al., 2020) provides a comprehensive categorization of such methods and a framework to implement them. ![2_image_0.png](2_image_0.png) However, these methods lack in two critical ways: (i) automated text augmentations often compromise the semantic meaning to notable extents (Wang et al., 2021), and (ii) they only rely on the information contained in the text modality. In this work, we introduce augmentations in the textual part of multimodal data using TextAttack methods and consider them as *baseline* augmentations. Then to overcome the flaws mentioned, we propose an approach that leverages the information in the visual modality to insert visual attributes in the textual modality (i.e., cross-modal attribute insertions). Robustness of Multimodal Models: Previous studies independently introduce unimodal perturbations in the visual or textual part of the input to study multimodal robustness. This could involve introducing imperceptible adversarial noise in the images and independently modifying the text using the augmentation approaches discussed earlier (Li et al., 2021a; Ribeiro et al., 2020; Wei and Zou, 2019). For instance, Chen et al. (2020) synthesize counterfactual samples of multimodal data using language models to modify the text. To ensure the preservation of the semantic meaning in the augmented text, Sheng et al. (2021) and Li et al. (2021b) employ humans to perturb the textual questions to fool the state-of-the-art models for Visual Question Answering (Antol et al., 2015). In a step towards using cross-modal interactions in imagetext data to generate realistic variations, Verma et al. (2022) proposed adding relevant information from the image to expand the original textual description, and assess the robustness of multimodal classifiers. Our work proposes a different approach to leverage cross-modal associations to augment multimodal data. Instead of expanding the original text, we insert attributes of objects in the image that are also mentioned in the corresponding text to modify the original text. Additionally, our work considers more multimodal tasks by studying text-to-image retrieval and cross-modal entailment. ## 3 Cross-Modal Attribute Insertions Our approach for augmenting text in multimodal data involves identifying objects in the image that are also mentioned in the text, and inserting words similar to their attributes in the text at relevant places. An overview of our approach (XMAI) is depicted in Figure 2. We denote paired multimodal units as (I, T ), where I represents the input image and T is the text corresponding to that image. Our goal is to transform T into T′such that the text includes relevant information from I while effectively highlighting the target model's vulnerabilities. Our method to infuse object attributes in the text can be broken into four parts: (a) object and attribute detection in I, (b) BERT-based [MASK] prediction in T while ensuring (c) similarity of inserted tokens with detected object attributes, and (d) enforcing dissimilarity between modified text T′and I to obtain robustness estimates of multimodal models. Object and Attribute Detection: For each image I we use a pre-trained bottom-up attention model (Anderson et al., 2018; Yu et al., 2020) to extract objects and their associated attributes. The bottom-up attention model identifies objects and corresponding attributes with a one-to-one mapping. We use these objects and attributes to modify T , by intro15976 ducing masks (i.e., the [MASK] token), in front of the mentions of the objects in T . However, a strict matching criterion would ignore similar objects or alternatively named objects in T . To address this, whenever the text does not have any direct object matches we use a Parts of Speech (PoS) tagger to identify nouns that could represent objects in the image. These identified nouns are compared to objects using cosine similarity between the word embeddings. If the cosine similarity between a noun T and a detected object in I is above some threshold, t, then a [MASK] token is placed before that noun. Overall, this step ensures that insertions are made only for objects in T that are seen in the corresponding image I to obtain T′. Mask Prediction: Next, we aim to fill in the [MASK] tokens with contextually relevant object attributes. To do so, we use the pre-trained language model BERT (Devlin et al., 2019). We sample top-k predictions from the BERT model based on probability scores that also meet the following criteria: the predicted word should not be a stop word and should not exist in the 3-hop neighborhood of the current [MASK]. Furthermore, since T may contain more than one [MASK] token, we carry out this process sequentially for each [MASK] to utilize newly introduced contexts. Following this process, we obtain k candidate insertions that are contextually relevant for each of the identified objects in T that also exists in I. In the next step, to maintain cross-modal relevance, we consider the similarity of these candidate attributes with the attributes detected in I. Attribute Similarity: To better select a word for a specific mask that aligns well with information in I, we only consider predicted tokens similar to the attributes of the associated object detected in I. In order to do so, we utilize embedding-based similarities between each predicted token and the detected attribute string. The image attributes can describe the shape, size, color, or other characteristics (like '*floral* dress') of detected objects. Cross-Modal Dissimilarity for Estimating Robustness: Finally, to estimate the robustness of multimodal models, we explicitly include a component that encourages dissimilarity in the embeddings of the image I and the modified text T′. For each possible modified text T′, we compute the cosine distance between its embedding obtained using the CLIP model (Radford et al., 2021) and that of the corresponding image's CLIP embedding. While the mask prediction and attribute similarity steps ensure that the attribute insertions are semantically meaningful and maintain cross-modal relevance, the cross-modal dissimilarity between T′ and I ensures that we leverage the vulnerabilities in the encoding mechanism of multimodal models. We use CLIP as the encoder for this step as it is a strong representative of the state-of-the-art vision-language models. Text Augmentation Strategy: We now choose the final augmentation of T by combining the above four components ––– object and attribute detection, mask prediction, attribute similarity, and cross-modal dissimilarity for estimating robustness. After placing [MASK] tokens in front of the identified objects mentions or similar nouns in T , we consider the top-k BERT predictions for each of the [MASK] words, denoted by wi ∀ i ∈ {1*, . . . , k*}. We take the predicted probability scores of these k words and normalize them to sum to one, denoting each by pi. The attribute similarity step computes the similarities for wi with the corresponding attribute, which are then normalized to sum to one and denoted by si. Finally, we create k augmentations of T , each denoted by T′ i , and compute the cosine distance of their CLIP embeddings with that of the corresponding image I. The distances are also normalized to sum to one and denoted by di. Mathematically, the cumulative score for a predicted word wiis given as, $${\mathcal{S}}_{w_{i}}=\lambda_{1}\cdot p_{i}+\lambda_{2}\cdot s_{i}+\lambda_{3}\cdot d_{i}\qquad(1)$$ where, λ1, λ2, and λ3 are hyper-parameters that control the contribution of mask prediction using BERT, attribute similarity, and cross-modal dissimilarity, respectively. The word wi with the maximum score S is the word that is inserted in the place of the [MASK]. For text with multiple [MASK] tokens, we repeat this process iteratively in the order of their occurrence in T . By design, our cross-modal attribute insertion approach is modular as different components serve complementary functions toward the final objective of introducing semantically meaningful augmentations. It is also controllable using hyper-parameters λ1, λ2, λ3, k, and t. Finally, our approach is training-free and, therefore, can be applied to investigate several tasks and models. ## 4 Experiments We study the effect of cross-modal attribute insertions on two multimodal tasks: text-to-image retrieval (i.e., retrieving images for a textual description) and cross-modal entailment (i.e., predicting the relationship between textual hypothesis and visual premise). 4.1 Text → **Image Retrieval** Task: Given a set of text and image pairs as input, the goal is to retrieve the associated image for each text. The retrieval occurs for each text over a set of images, in our case we use a subset of 1000 textimage pairs, with the objective being to rank the original/ground-truth image the highest. Axiom for Retrieval: Given an image I in the search repository and two search queries Q1 and Q2, such that Q1 contains more specific details of objects than Q2, I should be retrieved at the same or higher rank for query Q1 than for Q2. Dataset: For this task, we use the MSCOCO dataset (Lin et al., 2014), which contains imagescaption pairs. Specifically, we use the imagecaption pairs from 2017's validation split. This subset of the data contains 5, 000 unique images and 25, 010 captions, where each image can have multiple captions. To assess robustness, we perform augmentations on 25, 010 captions one by one while ranking all the images for each caption. Model Under Investigation: For this task we consider the CLIP model (ViT-B/32) (Radford et al., 2021). CLIP is pretrained on 400 million imagecaption pairs using contrastive learning, resulting in image and text encoders that produce unimodal embeddings that lie in a common latent space. The CLIP model has demonstrated great generalizability to various downstream tasks, including zeroshot and few-shot image classification and crossmodal retrieval. We obtain the CLIP embeddings for each image and caption in the MSCOCO dataset and rank all the images for a given caption based on their cosine similarities. We then contrast the ranking performance of the CLIP model using the original and augmented captions as textual queries. ## 4.2 Cross-Modal Entailment Task: Cross-modal entailment aims to determine whether the relationship between a visual premise and a textual hypothesis is 'entailment,' 'contradiction,' 'neutral.' Specifically, 'entailment' is observed when the textual hypothesis is logically implied (true) by the image while 'contradiction' indicates that the textual hypothesis is not implied (false) by the visual premise. Finally, 'neutral' represents an inconclusive or uncertain relationship between the hypothesis and the premise. Axiom for Entailment: If the relationship between a visual premise and a textual hypothesis is 'entailment,' it should not change to 'contradictory' if the textual hypothesis is enriched with the information from the visual modality. Dataset: We perform this task on SNLI-VE (Xie et al., 2019), a visual entailment dataset. We use the test set of the dataset, comprising 17, 859 image (premise) & text (hypothesis) pairs. For robustness assessment, we augment all text hypotheses while keeping the visual premise the same. Model Under Investigation: We investigate the pre-trained METER model (Dou et al., 2022), which consists of vision-and-language transformers that are trained end-to-end. The model's comprises CLIP's vision encoder and RoBERTa (Liu et al., 2019) as the text encoder. The model pretraining objectives consist of masked language modeling and image-text-matching on four datasets: MSCOCO, Conceptual Captions (Sharma et al., 2018), SBU Captions (Ordonez et al., 2011), and Visual Genome (Krishna et al., 2017). In addition, METER is fine-tuned on SNLI-VE in order to achieve competitive performance. We contrast the performance of the model on the unmodified SNLIVE dataset with the performance on the augmented version of SNLI-VE dataset. ## 4.3 Baselines For Perturbations We compare our cross-modal attribute insertion approach (XMAI) with competitive baselines that are capable of introducing perturbations based on textonly information. We utilize the TextAttack (Morris et al., 2020) 1framework for implementing all the baseline perturbation strategies. Deletion: A perturbation strategy that randomly removes words from the text. EDA (Wei and Zou, 2019): This approach combines random deletion, random swapping, random insertion, and synonym replacement to modify each caption. We keep all parameters as default and set the percentage of words to swap to 20%. CheckList (Ribeiro et al., 2020): Developed to generate a diverse set of evaluation examples, CheckList works by coalescing name replacement, 1https://github.com/QData/TextAttack location replacement, number alteration, and word contraction/extension. CLARE (Li et al., 2021a): This perturbation strategy uses language models to replace, insert, and merge tokens in captions. We use TextAttack's default fast implementation of CLARE. ## 4.4 Xmai Implementation Details We choose k = 3 for the number of top-k predicted BERT words for each [MASK] token and flair/pos-english-fast for PoS tagging of text. Next, to compare the nouns in the text with the objects identified in the image, we use word embeddings produced by a Transformer-based model (bert-base-nli-mean-tokens on HuggingFace (Wolf et al., 2020)). We set the threshold, t, for cosine similarity between nouns in T and objects in I to be 0.7. For [MASK] filling, we use the bert-base-cased model on HuggingFace and the list of stopwords is adopted from NLTK. 2 To compute the similarity between attributes detected in I and BERT predictions, we employ SpaCy's pretrained tok2vec model (en_core_web_md), which contains 300dimensional embeddings for ∼ 500k words (Honnibal et al., 2020). Lastly, the pre-trained CLIP model (ViT-B/32) is used to compute image and text embeddings in a common latent space. For our main experiments, we set the values of λi as λ1 = 1, λ2 = 5, and λ3 = 5. ## 4.5 Evaluation Metrics We measure the impact of perturbations in the text on the capabilities of multimodal models using task-specific metrics. We quantify text-to-image retrieval performance using mean reciprocal rank (MRR). For cross-modal entailment, we report standard classification metrics (accuracy, precision, recall, and F1 score). While the effectiveness of perturbations is important for highlighting model vulnerabilities, it is also imperative to measure the relevance of augmented text T′ with original text T and image I. To this end, we compute mean cosine similarity Sim*T −T* ′ between original and modified texts (i.e., T & T′, respectively) using a sentence Transformer model (all-mpnet-base-v2) (Reimers and Gurevych, 2019). Similarly, we report BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005), using NLTK to further compare the 2https://www.nltk.org/ texts (considering n-grams of size upto 4 in the case of BLEU). Additionally, we compute the mean cosine similarity SimI−T ′ using CLIP (ViT-B/32) embeddings. ## 5 Results And Analysis Recall that our primary goal is to use XMAI to obtain a complementary and novel set of text augmentations that can highlight the vulnerabilities of multimodal models. To this end, we contrast the performance of the models under investigation on the original and the modified examples, and quantify the relevance of the modified text with respect to the original text and image. We recruit human annotators to compare the quality of the augmentations generated using our approach with (i) the ones generated using the most competitive baseline, and (ii) the original text. Robustness of Multimodal Models: Table 1 shows that for the text → image retrieval task, our cross-modal attribute insertions cause the greatest drop in observed MRR; the MRR drops from an original value of 0.632 to 0.536. Similarly, Table 2 shows that for the cross-modal entailment task our approach performs second only to CLARE - an observation that is consistent across all the metrics, accuracy, precision, recall, and F1. It is worth noting that our approach is the only one that uses the information from the image modality to introduce textual perturbations and hence the resultant perturbed examples are characteristically different than the ones that are produced using baseline methods like CLARE. We will revisit this using qualitative examples. Overall, results demonstrate that state-of-the-art vision-and-language learning approaches for text-to-image retrieval and crossmodal entailment tasks are not robust to our proposed augmentations. Relevance of Augmentations: Tables 1 and 2 show that XMAI produces augmentations T′that maintain high-relevance with the original text T and the image I, in terms of Sim*T −T* ′ and SimI−T ′. It is interesting to note that the BLEU scores for augmentations generated by XMAI are notably lower than that for the baseline augmentations. On the contrary, METEOR scores show that XMAI's augmentations are "closer" to the original texts compared to most baselines. XMAI's poor BLEU scores can be largely attributed to BLEU's tendency to penalize novel insertions severely compared to removals or replacements, as it is a precision- | Approach | MRR ↓ | SimT −T ′ ↑ | SimI−T ′ ↑ | BLEU ↑ | METEOR ↑ | |-------------|---------|---------------|--------------|----------|------------| | Original | 0.632 | 1.000 | 0.294 | 1.000 | 1.000 | | Deletion | 0.581 | 0.948 | 0.288 | 0.758 | 0.910 | | EDA | 0.570 | 0.914 | 0.287 | 0.647 | 0.931 | | Checklist | 0.630 | 0.996 | 0.294 | 0.982 | 0.993 | | CLARE | 0.567 | 0.899 | 0.285 | 0.749 | 0.947 | | XMAI (Ours) | 0.536 | 0.924 | 0.283 | 0.623 | 0.969 | | Approach | Acc. ↓ | Precision ↓ | Recall ↓ | F1 ↓ | SimT −T ′ ↑ | SimI−T ′ ↑ | BLEU ↑ | METEOR ↑ | |-------------|----------|---------------|------------|--------|---------------|--------------|----------|------------| | Original | 0.792 | 0.790 | 0.792 | 0.791 | 1.000 | 0.246 | 1.000 | 0.998 | | Deletion | 0.742 | 0.742 | 0.742 | 0.740 | 0.892 | 0.240 | 0.632 | 0.853 | | EDA | 0.719 | 0.719 | 0.719 | 0.718 | 0.878 | 0.241 | 0.541 | 0.900 | | Checklist | 0.791 | 0.790 | 0.791 | 0.790 | 0.973 | 0.246 | 0.993 | 0.986 | | CLARE | 0.632 | 0.652 | 0.631 | 0.618 | 0.804 | 0.238 | 0.592 | 0.911 | | XMAI (Ours) | 0.643 | 0.682 | 0.643 | 0.625 | 0.873 | 0.235 | 0.621 | 0.963 | Table 2: **Results on the cross-modal entailment task.** Augmentations that cause a greater drop in classification metrics are more effective at highlighting the lack of multimodal robustness, while the similarity metrics capture their relevance with the original example. The best results are in **bold** and the second best results are underlined. ![6_image_0.png](6_image_0.png) based metric (Banerjee and Lavie, 2005).3In Appendix A.3 (Table 4), we further note that, on average, XMAI inserts 1.660 (±0.947) new words in MSCOCO captions and 1.269 (±0.768) new words in SNLI-VE hypotheses. This is considerably higher than the rate of insertions made by other methods, especially Checklist, where an obvious consequence of making a fewer number of augmentations is better text similarity across a corpus. We thus attribute the poor performance of XMAI in terms of BLEU scores to BLEU's inability to handle insertions appropriately. This is further substantiated by the human assessments. ## 5.1 Human Assessment Of Augmentations We recruit annotators using Amazon Mechanical Turk (AMT) to answer the following two questions: (i) do cross-modal attribute insertions lead to better text augmentations than the most competitive baseline (i.e., CLARE), and *(ii)* are cross-modal attribute insertions as good as the original text accompanying the image. Please see Appendix A.1 for details on the recruitment filters and compensation on AMT. XMAI versus CLARE: We randomly sampled 100 examples from the validation set of the MSCOCO dataset and showed the modified captions using CLARE and XMAI. 5 annotators annotated each example. We asked annotators to indicate their agreement to the following question after seeing two captions for a given image using a 5point Likert scale (1: Strongly disagree, ..., 5: Strongly agree): Caption 2 is a better description of the shown image than *Caption 1* in terms of its quality and accuracy. Caption 1 and Caption 2 were randomly flipped between CLARE and XMAI to avoid any position bias. Furthermore, to ensure quality annotations, we randomly inserted some "attention check" examples that instructed annotators to ignore all previous instructions and mark specified responses on the Likert scale. We discarded responses from annotators who marked the attention-check examples incorrectly and re-collected the annotations. For 63% of the examples, a majority of annotators (i.e., at least 3 out of 5) preferred the captions modified using XMAI over CLARE. The captions modified using CLARE were preferred for 26% examples. The rest were either marked as 'equivalent' (i.e., 3: Neither disagree nor agree) or had ambiguous majority votes. Overall, the results demonstrate that the annotators preferred the captions modified using XMAI over the ones modified using CLARE, in terms of their accuracy in describing the image and their quality. We next assess how XMAI modified captions compare against the original captions. XMAI versus Original: We randomly sampled 100 examples from the validation set of the MSCOCO dataset and randomly chose 50 of them to be modified using XMAI while leaving the other 50 unmodified. We first primed the annotators to view 5 original image caption pairs, noting them as reference examples.4 We then asked the annotators to view a *list* of image-caption pairs and evaluate the caption quality using the following prompt: Rate the caption quality for the given image based on the reference examples shown earlier. A response of 1 on the 5-point Likert scale indicated 'extremely poor quality' whereas that of 5 indicated 'extremely good quality.' The shown list comprised randomly shuffled original image-caption pairs and modified image-caption pairs using XMAI, and a few attention-check examples. Each example received annotations from 5 different annotators. The unmodified captions received an average score of 4.12 (±0.37) whereas that for the modified caption using XMAI was 4.07 (±0.33). The observed inter-rater agreement was strong, with a Krippendorf's α of 0.78. Additionally, a two-sided t-test with unequal variances assumption failed to reject the null hypothesis (p > 0.05) that the average Likert scores for the original and modified captions are from different distributions. In sum, the perceived quality of the modified captions using XMAI is not statistically different from that of the original captions. Computational Efficiency: In Appendix Fig. 7 we demonstrate that our approach for inserting crossmodal attributes is 14.8× and 9.4× faster than the most competitive baseline approach (i.e., CLARE) on MSCOCO and SNLI-VE, respectively. Combined with the fact that XMAI augmentations are perceived to be of better quality than CLARE augmentations and are effective at highlighting model vulnerabilities, the increased computational efficiency allows for more rapid and realistic model validation. In the following section, we demonstrate via qualitative examples that, being the only approach that leverages cross-modal interactions in multimodal data, the augmentations produced by XMAI are novel to those produced by the text-only baselines. ## 5.2 Qualitative Analysis In Figure 3 we show illustrative examples of the insertions introduced using our approach and contrast them with existing text-only perturbations. We observe that our cross-modal insertions lead to a complementary set of augmentations that are not covered by text-only approaches. We note that our method does not remove any information present in the original caption/hypothesis. This prevents our method from drastically changing the original semantics, which has been a known shortcoming of text-only perturbations (Wang et al., 2021). In Figure 3(A), we note that EDA produces a grammatically incoherent augmentation ("*Image of next front the a house* of to...") and CLARE inserts an inaccurate attribute ("*round table*"). Whereas, our approach only in- ![8_image_0.png](8_image_0.png) λ1 λ1 ![8_image_1.png](8_image_1.png) λ3λ3λ3 serts relevant attributes to the original text ("The Image of the window front of a large house next to an outdoor image of a woman holding a small wooden table."). In Figure 3(B&D) we see that XMAI modifies the text using the information in the corresponding images - for instance, our approach identifies the neon LEDs and inserts 'neon' in front of 'sign.' However, EDA and CLARE introduce inaccurate details. XMAI is also capable of multiple meaningful insertions. Our work is the first to enable cross-modal insertion capabilities to obtain meaningful augmentations of multimodal (image + text) data. ## 5.3 Ablations For Λi **Sensitivity** In Figure 4, we visualize the change in retrieval performance with respect to independent changes in λ1, λ2, and λ3. In other words, we vary a given λi while keeping other hyper-parameters at their aforementioned values. We find that increasing λ1 and λ2 improves the relevance of augmentations but reduces their effectiveness in highlighting vulnerabilities. Intuitively, these components increase the likelihood that our approach picks insertions with high BERT prediction scores (controlled by λ1) and similarities with the identified image attribute (controlled by λ2). On the other hand, increasing λ3, which controls the contributions of the robustness assessment component, generates less relevant augmentations that are highly effective. This observation also aligns with our goal of exploiting the lack of robust encoding mechanisms to highlight model vulnerabilities. Overall, these results demonstrate that the individual components of our approach play significant roles, and can be controlled using the λi hyperparameters. Similar trends are observed for the cross-modal entailment task; see Appendix Fig. 6. We discuss the ablations pertaining to the similarity threshold for matching image objects and text nouns in Appendix A.2. ## 6 Conclusion A *robust* understanding of vision-and-language data is crucial for powering several applications. We propose cross-modal attribute insertions - i.e., adding the visual attributes to text, as a new variation that is likely in multimodal data and to which multimodal models should be robust. Our approach produces novel augmentations that are complementary to existing methods that model text-only data, and are preferred over them by human annotators. Using our augmentations we effectively highlight the vulnerabilities of state-of-the-art multimodal models for text-to-image retrieval and cross-modal entailment. In the future, we aim to empirically study the effect of including XMAI augmented data in task-specific training sets and expand to a broader set of multimodal tasks and metrics. ## 7 Limitations And Broader Perspective Limitations and bias of pre-trained models: Our work uses detected objects and their attributes in the images to introduce novel insertions in the corresponding text. To this end, it is important to address the limitations of the state-of-the-art object and attribute detection methods. The undesired artifacts of these methods could be categorized as inaccurate or biased. The detected objects could be incorrect, but since we only consider objects that are also mentioned in the text, the effect of incorrect object detections is non-existent in our augmentations. However, we notice that some of the detected attributes in images and BERT predictions reflect stereotypical associations and have been documented in prior works (Li and Xu, 2021; Kaneko and Bollegala, 2022). We acknowledge that the current state of deep learning research is limited, and the consequential shortcomings are reflected in our augmentations to some extent. Broader social impact: The authors do not foresee any negative social impacts of this work. We believe our cross-modal augmentations will enable an exhaustive evaluation of the robustness of visionand-language models, leading to more reliable multimodal systems. We release the code for our experiments to aid reproducibility and enable future research on this topic. Annotations, IRB approval, and datasets: The annotators for evaluations done in this study were recruited via Amazon Mechanical Turk. We specifically recruited 'Master' annotators located in the United States; and paid them at an hourly rate of 12 USD for their annotations. The human evaluation experiments were approved by the Institutional Review Board (IRB) at the authors' institution. The datasets used in this study are publicly available and were curated by previous research. We abide by their terms of use. ## 8 Acknowledgements This research/material is based upon work supported in part by NSF grants CNS-2154118, IIS-2027689, ITE-2137724, ITE-2230692, CNS-2239879, Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112290102 (subcontract No. PO70745), and funding from Microsoft, Google, and Adobe Inc. GV is partly supported by the Snap Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the position or policy of DARPA, DoD, SRI International, NSF and no official endorsement should be inferred. We thank the anonymous reviewers for their constructive comments and the CLAWS research group members for their help with the project. ## References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077–6086. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In *Proceedings of the IEEE international conference* on computer vision, pages 2425–2433. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Long Chen, Xin Yan, Jun Xiao, Hanwang Zhang, Shiliang Pu, and Yueting Zhuang. 2020. Counterfactual samples synthesizing for robust visual question answering. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 10800–10809. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, et al. 2022. An empirical study of training end-to-end vision-and-language transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18166–18176. Ivan Evtimov, Russel Howes, Brian Dolhansky, Hamed Firooz, and Cristian Canton Ferrer. 2020. Adversarial evaluation of multimodal models under realistic gray box assumption. *arXiv preprint* arXiv:2011.12902. Yonghao He, Shiming Xiang, Cuicui Kang, Jian Wang, and Chunhong Pan. 2016. Cross-modal retrieval via deep and bidirectional representation learning. *IEEE* Transactions on Multimedia, 18(7):1363–1377. Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spacy: Industrialstrength natural language processing in python. Masahiro Kaneko and Danushka Bollegala. 2022. Unmasking the mask–evaluating social biases in masked language models. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pages 11954–11962. Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, and Daniel S. Weld. 2021. GENIE: A leaderboard for human-in-the-loop evaluation of text generation. *CoRR*, abs/2101.06561. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32– 73. Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2021a. Contextualized perturbation for textual adversarial attack. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5053–5069, Online. Association for Computational Linguistics. Linjie Li, Jie Lei, Zhe Gan, and Jingjing Liu. 2021b. Adversarial vqa: A new benchmark for evaluating the robustness of vqa models. In *Proceedings of the* IEEE/CVF International Conference on Computer Vision, pages 2042–2051. Zhiheng Li and Chenliang Xu. 2021. Discover the unknown biased attribute of an image classifier. In *Proceedings of the IEEE/CVF International Conference* on Computer Vision, pages 14970–14979. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In *European conference on computer vision*, pages 740–755. Springer. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Nishtha Madaan, Inkit Padhi, Naveen Panwar, and Diptikalyan Saha. 2021. Generate your counterfactuals: Towards controlled counterfactual generation for text. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13516–13524. John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119–126. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In *The 27th International Conference on Computational Linguistics (COLING)*, Santa Fe, New Mexico, USA. Vicente Ordonez, Girish Kulkarni, and Tamara Berg. 2011. Im2text: Describing images using 1 million captioned photographs. *Advances in neural information processing systems*, 24. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Jielin Qiu, Yi Zhu, Xingjian Shi, Florian Wenzel, Zhiqiang Tang, Ding Zhao, Bo Li, and Mu Li. 2022. Are multimodal models robust to image and text perturbations? *arXiv preprint arXiv:2212.08044*. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763. PMLR. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902– 4912, Online. Association for Computational Linguistics. Alexis Ross, Tongshuang Wu, Hao Peng, Matthew Peters, and Matt Gardner. 2022. Tailor: Generating and perturbing text with semantic controls. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3194–3213, Dublin, Ireland. Association for Computational Linguistics. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565, Melbourne, Australia. Association for Computational Linguistics. Sasha Sheng, Amanpreet Singh, Vedanuj Goswami, Jose Magana, Tristan Thrush, Wojciech Galuba, Devi Parikh, and Douwe Kiela. 2021. Human-adversarial visual question answering. *Advances in Neural Information Processing Systems*, 34:20346–20359. Jing Shi, Ning Xu, Yihang Xu, Trung Bui, Franck Dernoncourt, and Chenliang Xu. 2021. Learning by planning: Language-guided global image editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13590–13599. Gaurav Verma, Vishwa Vinay, Ryan Rossi, and Srijan Kumar. 2022. Robustness of fusion-based multimodal classifiers to cross-modal content dilutions. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 360– 374, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, and Bo Li. 2021. Adversarial glue: A multi-task benchmark for robustness evaluation of language models. In *Advances in Neural Information Processing Systems*. Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language* Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 6382–6388, Hong Kong, China. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2021. Polyjuice: Generating counterfactuals for explaining, evaluating, and improving models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6707–6723, Online. Association for Computational Linguistics. Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fine-grained image understanding. *arXiv preprint* arXiv:1901.06706. Zhou Yu, Jing Li, Tongan Luo, and Jun Yu. 2020. A pytorch implementation of bottom-upattention. https://github.com/MILVLG/ bottom-up-attention.pytorch. Junnan Zhu, Haoran Li, Tianshang Liu, Yu Zhou, Jiajun Zhang, and Chengqing Zong. 2018. Msmo: Multimodal summarization with multimodal output. In *Proceedings of the 2018 conference on empirical methods in natural language processing*, pages 4154–4164. A Appendix ## A.1 Human Evaluation Details For our annotation tasks, we recruited annotators using Amazon Mechanical Turk. We set the criteria to 'Master' annotators who had at least 99% approval rate and were located in the United States. The rewards were set by assuming an hourly rate of 12 USD for all the annotators. In addition, the annotators were informed that the aggregate statistics of their annotations would be used and shared as part of academic research. Previous research has demonstrated the role of providing priming examples in obtaining highquality annotations (Khashabi et al., 2021). Therefore, we showed unmodified examples from the MSCOCO corpus to help annotators establish a reference for quality and accuracy. For both the crowdsourced evaluations, we inserted some "attentioncheck" examples to ensure the annotators read the text carefully before responding. This was done by explicitly asking the annotators to mark a randomlychosen score on the Likert scale regardless of the actual content. We discard the annotations from annotators who did not correctly respond to all the attention-check examples. ## A.2 Further Ablations Extended Variations in t: To illustrate the effect of changing the threshold, t, we plot our described MSCOCO metrics with respect to variations in t in Figure 5. We see that as the criterion for matching nouns in the text and objects in the image is made more stringent, the quality and relevance of the augmentations improve further. However, the effectiveness of the resulting augmentations in highlighting model vulnerabilities decreases. It is worth noting as the model becomes more selective in inserting attributes (due to fewer matched nouns and objects), we witness a stark increase in BLUE scores. Variations in t effectively capture the trade-off between maintaining the relevance of augmentations and effectively highlighting model vulnerabilities. Even though it is possible to construct augmentations that will be more effective in making the multimodal models perform poorly, it would sacrifice the relevance and quality of resulting augmentations. Our main results demonstrate that with t = 0.7, we obtain high-quality and human-preferred augmentations that are also effective in highlighting vulnerabilities. Variations in k: We perform another ablation by increasing the value of k for the top-k predictions made by pre-trained BERT model. Table 3 shows that increasing the search space for possible insertion tokens leads to a notable drop in the retrieval Mean Image- >Caption ![12_image_0.png](12_image_0.png) Time to Modify (s) lambda_a lambda_b lambda_c threshold Mean Caption 1 5 0 0.7 0.668 0.694 0.667 0.655 0.878 0.242 0.621 3510.933 λ λa λb λc 1 5 0.5 0.7 0.664 0.692 0.664 0.651 0.878 0.241 0.621 3504.548 0 0.621 0.611 0.655 1 5 1 0.7 0.662 0.69 0.661 0.648 0.878 0.241 0.621 3526.335 0.5 0.623 0.61 0.651 1 5 5 0.7 0.643 0.682 0.643 0.625 0.873 0.235 0.621 3569.84 1 0.625 0.61 0.648 1 5 10 0.7 0.632 0.674 0.631 0.61 0.868 0.231 0.621 3511.657 5 0.636 0.625 0.625 10 0.654 0.638 0.61 Varying Lambda vs Mean Hypothesis Relevance 0.5 5 5 0.7 0.642 0.68 0.641 0.623 0.873 0.235 0.621 2961.961 λ λa λb λc 1 5 5 0.7 0.643 0.682 0.643 0.625 0.873 0.235 0.621 3569.84 0 0.872 0.868 0.878 5 5 5 0.7 0.653 0.686 0.652 0.636 0.876 0.236 0.621 3097.684 0.5 0.873 0.868 0.878 10 5 5 0.7 0.668 0.696 0.668 0.654 0.88 0.239 0.621 2944.93 1 0.873 0.868 0.878 5 0.876 0.873 0.873 10 0.88 0.877 0.868 Figure 5: Ablations on threshold, t, across task-specific metrics for both tasks. λ λa λb λc 0 0.282 0.278 0.29 0.5 0.282 0.278 0.289 1 0.283 0.278 0.289 5 0.284 0.283 0.283 10 0.288 0.287 0.278 0.28 0.282 0.284 0.286 0.288 0.29 0.292 | 1 | 0.283 | 0.278 | 0.289 | | | | |-----|---------|-------------|------------|---------------------------------------------------------|----------|------------| | 5 | 0.284 | 0.283 | 0.283 | | | | | 10 | 0.288 | 0.287 | 0.278 | 0.286 0.284 0.282 0.28 Mean Caption to Image Similarity | | | | k | MRR ↓ | SimT −T ′ ↑ | SimI−T ′ ↑ | BLEU ↑ | METEOR ↑ | Time (s) ↓ | | 3 | 0.536 | 0.924 | 0.283 | 0.623 | 0.969 | 0.196 | | 5 | 0.506 | 0.919 | 0.278 | 0.623 | 0.970 | 0.311 | | 10 | 0.476 | 0.914 | 0.274 | 0.623 | 0.970 | 0.474 | Table 3: Ablations by varying the number of BERT predictions considered (i.e., k) for the text-to-image retrieval task on MSCOCO. The reported time is in seconds (per caption). Table 4: Mean (and standard deviation) of the number of novel insertions/replacements in modified MSCOCO captions and SNLI-VE hypotheses with respect to their original counterparts. | Method | MSCOCO Captions | SNLI-VE Hypotheses | |-------------|-------------------|----------------------| | µ (±σ) | µ (±σ) | | | Deletion | 0.000 (± 0.022) | 0.000 (± 0.015) | | EDA | 0.679 (± 0.784) | 0.562 (± 0.633) | | Checklist | 0.077 (± 0.284) | 0.087 (± 0.300) | | CLARE | 0.982 (± 0.161) | 0.990 (± 0.132) | | XMAI (Ours) | 1.660 (± 0.947) | 1.269 (± 0.768) | performance over resulting augmentations. However, the relevance values with the original imageand text drop too. Increasing the search space allows the model to explore potential insertions that | Method | MSCOCO Captions | SNLI-VE Hypotheses | | | |-------------|-------------------|----------------------|---------|---------| | r/i | any | r/i | any | | | Deletion | 12 | 25, 010 | 4 | 17, 857 | | EDA | 12, 492 | 24, 980 | 8, 876 | 17, 786 | | Checklist | 1, 817 | 1, 879 | 1, 464 | 1, 492 | | CLARE | 24, 456 | 24, 972 | 17, 619 | 17, 817 | | XMAI (Ours) | 24, 244 | 24, 970 | 16, 275 | 16, 512 | Ablation λ1 λ2 λ3 k t MRR ↓ SimT −T ′ ↑ SimI−T ′ ↑ BLEU ↑ **METEOR** ↑ Original 1 5 5 3 0.7 0.536 0.924 0.283 0.623 0.969 λ2 0 1 0 3 0.7 0.593 0.929 0.290 0.623 0.970 0 -1 0 3 0.7 0.588 0.922 0.290 0.623 0.968 λ3 0 0 1 3 0.7 0.499 0.917 0.277 0.623 0.969 0 0 -1 3 0.7 0.672 0.932 0.301 0.623 0.969 Table 6: Effect of variations in λ2, λ3, and k for the text-to-image retrieval on MSCOCO. 0 5 0.7 0.633 0.676 0.633 0.611 0.868 0.231 0.621 3128.107 ![13_image_0.png](13_image_0.png) 0.5 5 0.7 0.633 0.675 0.632 0.61 0.868 0.231 0.621 3005.911 1 5 0.7 0.632 0.675 0.632 0.61 0.868 0.231 0.621 3041.793 5 5 0.7 0.643 0.682 0.643 0.625 0.873 0.235 0.621 3569.84 10 5 0.7 0.654 0.686 0.654 0.638 0.877 0.239 0.621 2949.558 5 5 0.7 0.643 0.625 0.873 1 0.625 ![13_image_1.png](13_image_1.png) 5 10 0.7 0.631 0.61 0.868 5 0.636 Mean Sim ( T-T') 5 5 0.7 0.64 0.679 0.64 0.621 0.872 0.235 0.621 3032.666 5 5 0.7 0.642 0.68 0.641 0.623 0.873 0.235 0.621 2961.961 λ λa λb λc (D) (E) (F) 5 5 0.7 0.643 0.682 0.643 0.625 0.873 0.235 0.621 3569.84 0 0.872 0.868 0.878 could produce highly dissimilar cross-modal representations, thereby helping the adversarial component of our framework, but with the compromise being the relevancy of the augmentations. We also note that exploring more insertion possibilities increases the per-caption augmentation time taken by XMAI. Varying λ2 and λ3: To comprehensively understand the effect of variations in λi, specifically λ2 and λ3, we set each to 1 or −1 (one at a time) while setting the other two lambdas to 0. Results in Table 3 empirically show that both the attribute similarity and robustness assessment components are essential and can serve dual purposes; i.e., negative values of λ2 and λ3 allow the associated components to serve the opposite purpose. In contrast to their original objective, when the associated λ values are set to negative values, attribute similarity decreases performance and robustness assessment increases it. ## A.3 Number Of Insertions We report the number of insertions or replacements each augmentation method makes to the original text as well as the number of texts modified for each dataset. The results are reported in Table 4 and 5. We find that our XMAI approach introduces more novel words in the augmentation than any other approach, while also augmenting nearly the same amount of captions as most competitive baseline approaches. This observation, combined with the fact that human annotators prefer XMAI augmentations over baseline augmentations, shows that cross-modal insertions can be used to introduce new information without causing semantic deterioration of the text. Additionally, these results allow us to attribute the low BLEU scores observed in Table 1 to the higher number of insertions that XMAI makes. Note that BLEU computation is precision-based and hence penalizes novel insertions more severely. It is interesting to note that even though 'Deletion' is expected to have no insertions or replacements, we found that in very few cases, due to the adopted implementation as well as the noise in the text could result in fragmented or fused words that were being considered novel compared to the original text and therefore counted as an insertion/replacement. ## A.4 Augmentation Time In Figure 7 we show the average time to augment the input text for each of the methods. The results are plotted using a logarithmic scale to ensure a clearer depiction across methods that vary considerably in terms of computational time. Simpler approaches such as Deletion, EDA, and CheckList can modify tens or hundreds of samples each second. Intuitively, the reliance on simple ![14_image_0.png](14_image_0.png) Figure 7: Comparing the per-caption augmentation time of various methods across the two tasks. Results are shown in logarithmic scale due to the disparity between computational times across different methods. rules makes these approaches very efficient. On the other hand, context-aware methods such as CLARE and XMAI are slower due to their reliance on large language models and involved selection processes. However, XMAI can augment the text a magnitude faster than CLARE, even after using the fast implementation of CLARE from TextAttack. We don't consider the time to compute objects and attributes for XMAI for two reasons. First, the cost to perform this step was < 1 hour for our datasets and the relationship between methods remains the same. Secondly, objects and attributes only need to be computed once, so there is no additive cost for augmentation unless changes are made to the detection component. ## A.5 Compute Resources Our experiments were split between a single Tesla V100 for object detection (∼ 1 hour) and NVIDIA Tesla T4 GPUs for our augmentation (∼ 3 hours). 25 Algorithm 1: Algorithmic block describing the text augmentation method for XMAI. For details reference back to and follow along with Section 3. 1 // ▷ Cross-Modal Attribute Insertions Input :An image-text pair denoted by (I, T ) Output :Augmented text T′. 2 // ▷ Object and Attribute Detection 3 Detect objects and attributes in I 4 Introduce masks into T where direct matches exist 5 If no direct matches, use word similarity b/w detected objects in I & nouns in T 6 // ▷ Mask Prediction 7 for i = 1, ..., N[*MASK*] do 8 Use BERT to obtain top-k predictions for current mask 9 For current mask, maintain probability score vector p 10 // ▷ Attribute Similarity 11 for j = 1*, ...,* k do 12 Compute maximum attribute similarity between relevant object attributes and the current predicted word, sj 13 **end for** 14 // ▷ Cross-Modal Dissimilarity for Estimating Robustness 15 Create k candidate augmentations 16 Compute CLIP dissimilarity for each candidate augmentation, d1, ..., dk 17 // ▷ Text Augmentation Strategy 18 Compute the final score vector, Sw 19 Insert word with maximum score in Sw in place of current [MASK] 20 **end for** 21 Output text T′ with insertions ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✓ A2. Did you discuss any potential risks of your work? Section 7 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 ✓ B1. Did you cite the creators of artifacts you used? Sections 3 and 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 7 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 7 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Sections 4.1 and 4.2 ## C ✓ **Did You Run Computational Experiments?** Left Blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 5.1 and Appendix A.1 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 5.1 and Appendix A.1 ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix A.1 and Section 7 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix A.1 ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Section 7 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 7 and Appendix A.1
muennighoff-etal-2023-crosslingual
Crosslingual Generalization through Multitask Finetuning
https://aclanthology.org/2023.acl-long.891
Multitask prompted finetuning (MTF) has been shown to help large language models generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused on English data and models. We apply MTF to the pretrained multilingual BLOOM and mT5 model families to produce finetuned variants called BLOOMZ and mT0. We find finetuning large multilingual language models on English tasks with English prompts allows for task genrealization to non-English languages that appear only in the pretraining corpus. Finetuning on multilingual tasks with English prompts further improves performance on English and non-English tasks leading to various state-of-the-art zero-shot results. We also investigate finetuning on multilingual tasks with prompts that have been machine-translated from English to match the language of each dataset. We find training on these machine-translated prompts leads to better performance on human-written prompts in the respective languages. Surprisingly, we find models are capable of zero-shot generalization to tasks in languages they have never intentionally seen. We conjecture that the models are learning higher-level capabilities that are both task- and language-agnostic. In addition, we introduce xP3, a composite of supervised datasets in 46 languages with English and machine-translated prompts. Our code, datasets and models are freely available at \url{https://github.com/bigscience-workshop/xmtf}.
Crosslingual Generalization through Multitask Finetuning Niklas Muennighoff1 Thomas Wang1 Lintang Sutawika2,3 **Adam Roberts**4 Stella Biderman3,5 Teven Le Scao1 M Saiful Bari6 **Sheng Shen**7 Zheng-Xin Yong8 Hailey Schoelkopf 3,9 Xiangru Tang 9 **Dragomir Radev**9 Alham Fikri Aji10 Khalid Almubarak11 Samuel Albanie12 **Zaid Alyafeai**13 Albert Webson8 Edward Raff5 **Colin Raffel**1 1Hugging Face 2Datasaur.ai 3EleutherAI 4Google Research, Brain Team 5Booz Allen Hamilton 6NTU 7 UC Berkeley 8 Brown University 9 Yale University 10 MBZUAI 11 PSAU 12 University of Cambridge 13 KFUPM niklas@hf.co ## Abstract Multitask prompted finetuning (MTF) has been shown to help large language models generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused on English data and models. We apply MTF to the pretrained multilingual BLOOM and mT5 model families to produce finetuned variants called BLOOMZ and mT0. We find finetuning large multilingual language models on English tasks with English prompts allows for task generalization to non-English languages that appear only in the pretraining corpus. Finetuning on multilingual tasks with English prompts further improves performance on English and non-English tasks leading to various state-ofthe-art zero-shot results. We also investigate finetuning on multilingual tasks with prompts that have been machine-translated from English to match the language of each dataset. We find training on these machine-translated prompts leads to better performance on humanwritten prompts in the respective languages. Surprisingly, we find models are capable of zero-shot generalization to tasks in languages they have never intentionally seen. We conjecture that the models are learning higher-level capabilities that are both task- and languageagnostic. In addition, we introduce xP3, a composite of supervised datasets in 46 languages with English and machine-translated prompts. Our code, datasets and models are freely available at https://github.com/ bigscience-workshop/xmtf. ## 1 Introduction Large language models pretrained on vast amounts of text show some capability of solving tasks expressed in natural language, even without explicit training on these tasks (Brown et al., 2020). Finetuning on groups of language tasks has been shown to significantly boost this zero-shot task generalization of language models (Wei et al., 2021; Sanh et al., 2022; Min et al., 2021). For example, Sanh et al. (2022) finetune on tasks like summarization and question answering leading to better performance on unseen tasks like natural language inference. Previous work has focused on multitask finetuning in the context of large English language models and tasks. Multilingual large language models show the same zero-shot learning capabilities for both monolingual and crosslingual tasks (Goyal et al., 2021a; Lin et al., 2021; Patel et al., 2022; Soltan et al., 2022). However, zero-shot performance tends to be significantly lower than finetuned performance. Thus, task-specific or language-specific transfer learning via finetuning remains the predominant practice (Devlin et al., 2018; Conneau et al., 2019; Aribandi et al., 2021). This is particularly challenging for low-resource languages or tasks with limited data available, such as writing a fable that teaches a specified moral. In the spirit of multitask finetuning, it would be desirable to improve the zero-shot task generalization of multilingual models to make them usable on tasks from low-resource languages without requiring further finetuning. To address this goal, we focus on crosslingual multitask finetuning. Due to the difficulty of collecting supervised task data in low-resource languages, previous work typically aims to transfer capabilities learned from finetuning on English data, which can improve performance on nonEnglish language tasks (Wu and Dredze, 2019; Phang et al., 2020; Chalkidis et al., 2021; Vu et al., 2022). We investigate whether English-only multitask finetuning also improves performance on non-English held-out tasks using the multilingual BLOOM (Scao et al., 2022a) and mT5 (Xue et al., 2020) models. We find that after finetuning on the English-only multitask mixture used for T0 (Sanh et al., 2022) (P3), performance on a diverse set of 15991 ![1_image_0.png](1_image_0.png) ## Non-English Held-Out Tasks Increases. To investigate whether multilingual task data can further improve performance, we extend P3 to xP3 by adding datasets from 46 different languages that cover tasks previously not present in P3 (such as translation and program synthesis). Finetuning on xP3 leads to even better zero-shot task generalization in both English and non-English compared to the P3-trained baseline. Models finetuned on xP3 perform best on English prompts, even for non-English samples. Hypothesizing that better performance could be attained by training on nonEnglish prompts, we construct a variant of xP3 with machine-translated prompts called xP3mt. We find that finetuning on machine-translated prompts is enough to significantly increase performance on held-out tasks with non-English human-written prompts. However, reducing the number of English prompts in the finetuning also worsens English prompt performance on multilingual tasks. Notably, we also find that models finetuned on xP3 generalize to held-out tasks in languages never intentionally seen during pretraining nor finetuning. We conduct a contamination analysis and find that only small amounts of these languages were included in the pretraining corpus. Thus, we hypothesize the models learn some language- and task-agnostic capabilities. We publicly release all our datasets and models (URLs in Appendix §C). ## 2 Related Work 2.1 Multitask Learning Multitask finetuning (Sanh et al., 2022) (or instruction tuning (Wei et al., 2021)) has emerged as a ![2_image_0.png](2_image_0.png) recipe for improving the zero-shot task generalization of large language models. Typically, these works define a task as a collection of datasets that require a certain set of skills. To inform large language models which task to perform given an input, a prompt is used to add natural language instructions to dataset instances (Schick and Schütze, 2020; Scao and Rush, 2021). In this line of work, zero-shot task generalization refers to the ability to perform a held-out task based on prompted instructions alone. Our work builds on T0 (Sanh et al., 2022), a variant of T5 (Raffel et al., 2020) that underwent MTF and was subsequently shown to have strong zero-shot task generalization capabilities. Increasing the number and diversity of finetuning tasks and datasets has been shown to increase model performance (Min et al., 2021; Fries et al., 2022; Wang et al., 2022d; Scialom et al., 2022; Chung et al., 2022; Mishra et al., 2021b). PromptSource (Bach et al., 2022) is a software application that provides a framework for developing and applying prompts. PromptSource was used to construct P3, the training dataset of T0. While most prior work has focused on using English prompts on English datasets, Wang et al. (2022c) trained both English and multilingual models on prompted datasets. Their multilingual model, called mTkInstruct, attains strong crosslingual performance. In contrast with Wang et al. (2022c), our sole focus is crosslingual zero-shot generalization. Therefore, we consider a wider variety of prompting settings and perform a more detailed evaluation of multilingual capabilities. Separately, Radford et al. (2019) find that accidental inclusion of non-English text gave the GPT-2 model a limited ability to process and generate non-English text. We similarly discover that our finetuned models can process text in languages not intentionally trained on. ## 2.2 Multilingual Models Many language models are pretrained on English data only. Multilingual pretrained language models (Lample and Conneau, 2019; Conneau et al., 2019; Fan et al., 2021) aim to enable processing a wide variety of non-English languages. Unlike monolingual models, multilingual models can also be used for crosslingual tasks, such as translation. For language generation, recent efforts have focused on two different model architectures based on the Transformer (Vaswani et al., 2017). On the one hand, encoder-decoder transformers trained with a denoising objective such as mBART (Liu et al., 2020) and mT5 (Xue et al., 2020) learn to predict tokens masked out in the input sequence. Predicting masked tokens is only a pretraining task and these models are generally finetuned on downstream datasets before being used. On the other hand, decoder-only models pretrained on next token prediction such as mGPT (Shliazhko et al., 2022), XGLM (Lin et al., 2021) and BLOOM (Scao et al., 2022a) can be used to solve tasks expressed in natural language directly in a zero-shot or few-shot setting (Brown et al., 2020). XGLM demonstrated competitive few-shot performance even when the model was prompted in a language different than the sample being processed. In particular, using English prompts for multilingual datasets provides better performance with XGLM than human-translating the English prompt to the dataset language. In this work, we use the BLOOM models (Scao et al., 2022a,b), which were pretrained on the ROOTS corpus (Laurençon et al., 2022) in 46 natural languages and 13 programming languages. We also finetune mT5 (Xue et al., 2020) to compare encoder-decoder and decoder-only performance. mT5 is pretrained on a corpus sampled from mC4 covering 101 languages. ## 3 Finetuning Data And Models To study crosslingual multitask prompted finetuning, we create xP3 by extending the P3 dataset collection with additional non-English tasks. We finetune both BLOOM and mT5 models on xP3. We refer to Appendix §C for public links to released models and datasets. ## 3.1 Finetuning Data We build on the P3 (Sanh et al., 2022) task taxonomy and add 30 new multilingual datasets illustrated in Figure 1. We define four task clusters previously not present in P3: translation, simplification, program synthesis, and miscellaneous code datasets. As 11% of BLOOM's pretraining data is code, we add code datasets classified as program synthesis (text-to-code) or miscellaneous. The latter includes tasks such as estimating the computational complexity of a provided code snippet and generating a name for a given function. We extend the XWinograd dataset (Tikhonov and Ryabinin, 2021) with winograd schemas from CLUE (Xu et al., 2020) to increase its Chinese samples from 16 to 504. Similar to P3, a fraction of our prompts invert the task at hand. For example, a prompt may invert a closed-book QA sample by asking the model to generate a question given an answer. With xP3 we aim to replicate the language distribution of the ROOTS corpus (Laurençon et al., 2022) used to pretrain BLOOM. Thus, xP3 consists of the same 46 natural languages and code as ROOTS. ROOTS, xP3 and the mT5 corpus (Xue et al., 2020) language distributions are visualized in Figure 2. 39% of xP3 data is English, slightly more than the 30% of English data in ROOTS. Various African languages such as Twi (tw) and Bambara (bm) form the tail of xP3's language distribution. Many of them are not included in the mT5 pretraining corpus. In xP3, Twi and others are represented solely as a translation task using data from Flores200 (NLLB Team et al., 2022). To study the importance of non-English prompts, we construct a machine-translated variant of xP3, xP3mt. We translate prompts of monolingual datasets into the respective dataset language. For example, for the Chinese dataset C3 (Sun et al., 2020) prompts in xP3mt are in Chinese instead of English in xP3. For crosslingual datasets prompts remain in English in xP3mt (such as Wiki-Lingua, which involves producing a summary in one language based on text in another language). We use the Google Cloud API for machine translation1. Figure 3 compares the dataset variants we train on. ## 3.2 Models We use publicly available pretrained BLOOM models ranging from 560 million to 176 billion parameters. BLOOM models are large decoder-only language models pretrained for around 350 billion tokens with an architecture similar to GPT-3 (Brown et al., 2020). We finetune the models for an additional 13 billion tokens with loss only being computed on target tokens. For example, given the input "Translate to English: Je t'aime." and a space-separated target "I love you.", the model is trained to predict only the targets. As targets vary in length from just one to hundreds of tokens, we downscale the loss of each token by the length of the target it belongs to. This ensures short targets (e.g. for multiple-choice QA) get the same weight as long targets (e.g. for translation). We skip samples longer than 2048 tokens and use packing to train efficiently on multiple samples at a time (Kosec et al., 2021). We select the final checkpoint based on validation performance. For mT5 models, we finetune using the T5X (Roberts et al., 2022) framework on TPUs. mT5 uses the same encoder-decoder architecture, pretraining objective (masked language modeling), and pretraining length (1 trillion tokens) as T5 (Raffel et al., 2020). For finetuning mT5, we follow the same procedure as described above for BLOOM, except that inputs are fed into the encoder and thus are not space-separated from targets. We produce three core model variants available in different sizes: ## - **Bloomz-P3 / Mt0-P3:** Models Finetuned On The English-Only P3. - **BLOOMZ / mT0:** Models finetuned on xP3, which consists of multilingual datasets with English prompts. - **BLOOMZ-MT / mT0-MT:** Models finetuned on xP3mt, which consists of multi-1https://cloud.google.com/translate ![4_image_0.png](4_image_0.png) We evaluate on three held-out tasks: coreference resolution, sentence completion and natural language inference (NLI) as depicted in Figure 1. We also evaluate on HumanEval due to its popularity for code evaluations using the pass@k metric (Chen et al., 2021). For datasets that involve choosing the correct completion from several options, we follow prior work (Sanh et al., 2022; Brown et al., 2020) and use rank classification: We compute the log-likelihood of each possible completion and select the highest scoring option. For each evaluation dataset, we select 5 prompts at random from PromptSource and use them for all language splits of the dataset. We report the median of the 5 prompts for results per language split. Thus, in constrast to XGLM (Lin et al., 2021), we do not tune prompts based on performance on validation data. A selection of prompts can be found in Appendix §M. For evaluation on generative tasks, such as translation, we use lm-evaluation-harness (Gao et al., 2021) and report BLEU scores (Papineni et al., 2002). ## 4 Results We first examine generalization to new tasks in languages included in finetuning in §4.1. Then, in §4.2, we look at language generalization: Can models generalize to tasks in languages that (a) they have only seen during pretraining and (b) they have never seen intentionally? In §4.3, we investigate performance on multilingual prompts and finetuning on xP3mt. Scaling laws are analyzed in §4.4. Finally, §4.5 looks at performance on generative tasks and §4.6 at the effect of language ## 4.1 Task Generalization Previous work has shown that large language models finetuned on prompted multitask mixtures generalize to unseen tasks (Zhong et al., 2021; Wei et al., 2021; Mishra et al., 2021b,a; Wang et al., 2022c). In Figure 4, we show that the same applies to multilingual models: Finetuned BLOOMZ and BLOOMZ-P3 models significantly improve over BLOOM and XGLM on held-out tasks. Despite an order of magnitude fewer parameters, mT0 (13 billion parameters) is ahead of BLOOMZ (176 billion parameters). We attribute this to the encoderdecoder architecture paired with a masked language modeling pretraining objective (Wang et al., 2022a; Tay et al., 2022a) as well as the longer pretraining of mT5 (Hoffmann et al., 2022; Su et al., 2022) (1 trillion tokens for mT5 vs. 366 billion for BLOOM). Despite also having gone through crosslingual multitask finetuning, mTk-Instruct performs significantly worse than the same-sized mT0. We attribute this to our prompting style, which aims to replicate natural human communication. mTkInstruct is finetuned on more structured prompts with specific "Definition", "Input" and "Output" fields. Similarly, Wang et al. (2022c) find that T0 performs worse than Tk-Instruct on their prompts. We also find models finetuned on the 39% English xP3 (BLOOMZ, mT0-13B) outperform models finetuned on the 100% English P3 (BLOOMZP3, mT0-13B-P3) on *English tasks* (Appendix §B). Even the fully English T0-11B model (Sanh et al., 2022) is outperformed by our mT0-13B model on entirely *English tasks*. Ignoring embedding parameters T0-11B and mT0-13B have about the same size. This is likely due to xP3 adding additional ![5_image_0.png](5_image_0.png) tasks and prompts, which has been shown to help generalization (Chung et al., 2022; Iyer et al., 2022). mT0-13B beating T0-11B indicates that the benefit of scaling tasks is larger than the benefit of pretraining and finetuning on relatively more English tokens. ## 4.2 Language Generalization Here we add another layer of generalization: languages. Figure 4 already shows that finetuning on English data only (P3) leads to better performance on non-English data: For example, BLOOMZ-P3 improves by over 50% on multilingual sentence completion compared to BLOOM. Thus, zero-shot task performance in languages only seen during pretraining improves after finetuning on English. This has major practical benefits as it can be more difficult to collect data for low-resource languages. Next, we investigate performance on languages the model has *never intentionally seen*. Due to the scale of large language model pretraining, it is difficult to label tasks or languages as strictly unseen. It is likely that the training data unintentionally includes small fractions of these languages (just as many tasks might appear "implicitly" in the pretraining corpus (Sanh et al., 2022)). In Figure 5 we show that after multitask finetuning on xP3, the models can perform unseen tasks in languages that were not intentionally trained on. After probing the pretraining corpus of BLOOM, we do find small amounts of these languages that were unintentionally included (Appendix §D). However, for XNLI, performance increases across all languages, many of which only show up in tiny fractions in our language contamination analysis, such as Thai with 0.006%. If we extrapolate this proportion to the entire ROOTS corpus, the BLOOM models would have seen a mere 20 million tokens of Thai during pretraining. One possibility is that betterthan-random XNLI performance can be attained with little or no language understanding. In Appendix §H, we investigate edit distances of XNLI samples and find that there are differences across labels, however, likely not significant enough to enable this kind of generalization. ## 4.3 Multilingual Prompting Task Prompt Average accuracy BLOOMZ BLOOMZ-MT mT0-13B mT0-13B-MT XNLI EN **52.99** 49.01 48.24 **51.29** MT 37.56 **41.16** 39.31 **41.66** HT 40.4 **43.88** 44.95 **46.87** XCOPA EN 72.52 73.24 **81.4** 80.36 MT 70.04 71.84 **81.16** 79.64 XStoryCloze EN **81.73** 81.39 81.99 **82.3** MT 80.89 81.76 **83.37** 82.86 XWinograd EN **60.07** 59.15 70.49 **73.24** MT 58.48 **60.14** 66.89 **72.33** multilingual datasets), we created xP3mt, an extension with machine-translated prompts. To investigate performance on non-English prompts, we additionally human- and machine-translated the English evaluation prompts from Figure 4. In Table 1, we report performance on these. Results on machinetranslated prompts in languages that are not part of the finetuning corpus, such as those in Figure 5, are in Appendix §I. Table 1 shows that BLOOMZ performs much better on English than on nonEnglish prompts. BLOOMZ-MT, which is finetuned on xP3mt, significantly improves on multilingual prompts. On XNLI, BLOOMZ-MT raises the average performance on human-translated prompts from 41.13 to 45.55. This comes at the cost of a reduction in its performance on English prompts, from 53.58 to 49.74. For mT0, the MT version provides similar performance gains on XNLI and XWinograd non-English prompts, while results on XCOPA and XStoryCloze are mixed. Similar to Lin et al. (2021), we also find that models perform better on human-translated prompts than machine-translated ones for XNLI. ## 4.4 Scaling ![6_image_1.png](6_image_1.png) In Figure 4, the average performance of BLOOM is near the random baselines of 0.50 for Sentence Completion and Coreference Resolution and 0.33 for NLI. We think this is due to all of our experiments being zero-shot and using untuned prompts (Perez et al., 2021a). We find in Figure 6 that even at 560M parameters, multitask finetuning improves zero-shot generalization. The gap between pretrained and multitask finetuned models grows significantly as parameters increase. Scaling up parameters benefits all languages evaluated. ## 4.5 Generation Tasks ![6_image_0.png](6_image_0.png) In this section, we investigate the impact of multitask finetuning on generative tasks. In Figure 7, we plot validation performance throughout the training process. We find that while performance on natural language understanding tasks continues to increase, generative performance jumps initially and then decreases. Relatedly, in Table 2, we find that multitask finetuning does not improve performance on HumanEval (Chen et al., 2021). Only for small models, such as BLOOM-560M vs. BLOOMZ-560M, there are meaningful performance gains. When no code data is included in finetuning (BLOOMZ-P3) performance decreases significantly. mT0 models, which have not been pretrained on code, fail to solve any HumanEval problems (see full results in Appendix §K). Given a Python docstring, HumanEval requires models to complete a function. Inspecting generations reveals that the multitask finetuned models are biased towards short generations. In Appendix §E, we show example solutions from HumanEval and compute average length statistics. BLOOMZ tries to solve problems with 70% fewer characters than BLOOM. Pass@k k = 1 k = 10 k = 100 GPT-Neo 1.3B 4.79% 7.47% 16.30% GPT-Neo 2.7B 6.41% 11.27% 21.37% GPT-J 6B 11.62% 15.74% 27.74% GPT-NeoX 20B 15.4% 25.6% 41.2% Codex-300M 13.17% 20.37% 36.27% Codex-679M 16.22% 25.7% 40.95% Codex-2.5B 21.36% 35.42% 59.5% Codex-12B 28.81% 46.81% 72.31% BLOOM-560M 0.82% 3.02% 5.91% BLOOM-1.1B 2.48% 5.93% 9.62% BLOOM-1.7B 4.03% 7.45% 12.75% BLOOM-3B 6.48% 11.35% 20.43% BLOOM-7.1B 7.73% 17.38% 29.47% BLOOM 15.52% 32.20% 55.45% BLOOMZ-560M 2.18 % 4.11% 9.00% BLOOMZ-1.1B 2.63% 6.22% 11.68% BLOOMZ-1.7B 4.38% 8.73% 16.09% BLOOMZ-3B 6.29% 11.94% 19.06% BLOOMZ-7.1B 8.06% 15.03% 27.49% BLOOMZ 12.06% 26.53% 48.44% BLOOMZ-P3 6.13% 11.79% 18.73% This bias towards short answers and the performance drop on generative tasks come from finetuning on short texts. Most tasks in our finetuning dataset, xP3, are single sentences. We show in Appendix §G that finetuning on fewer short tasks via early stopping, adding long tasks or upweighting long tasks leads to longer generations and slightly better performance. We find it most effective, however, to force a minimum generation length at inference. This is done by ignoring any probability mass the model assigns to its end-of-sequence token for a desired number of tokens. Only after the generation has reached the desired length, can the model generate the end-of-sequence token, thus finishing the generation. Forcing a minimum generation length improves the BLEU score on a translation task by 9 points, see Appendix §G for quantitative and Figure 15 for qualitative results. ![7_image_0.png](7_image_0.png) ## 4.6 Effect Of Language Proportions In Figure 8, we find that finetuned BLOOM models perform better on languages seen extensively during pretraining. As the language distribution in the finetuning dataset, xP3, closely follows that of pretraining, these languages are also seen most frequently during finetuning. Specifically, XCOPA and XNLI show significantly better performance on these high-resource languages, such as English, Spanish or French, which all make up more than 10% of pretraining individually. The trend is less consistent for XWinograd. This may be caused by the fact that XWinograd language subsets are not translations of each other and have a significantly different number of samples. Thus, some language subsets of XWinograd may be inherently more difficult than others. ## 5 Conclusion In this work we investigated crosslingual multitask finetuning. We developed xP3, a corpus consisting of tasks in 46 languages. Further, we have extended xP3 to xP3mt with machine-translated prompts. We have finetuned pretrained BLOOM and mT5 models on the newly created corpora as well as the English-only P3 corpus to produce BLOOMZ and mT0 models. We found that English-only finetuning suffices for a multilingual pretrained large language model to generalize to tasks in other pretrained languages. However, finetuning on multiple languages using xP3 provided even better performance. We have further observed finetuned models to be capable of generalization to new tasks in languages they have never intentionally seen. We investigated multilingual prompting and found performance after finetuning on English prompts only to be poor. However, finetuning on a corpus with machinetranslated prompts (xP3mt) lead to significantly better performance on human-written non-English prompts. Comparing models from 560 million up to 176 billion parameters revealed that the performance gap between only pretraining and finetuning widens as parameters increase. Lastly, we found multitask finetuning on billions of short targets biases models to produce short answers, which can hurt performance on generative tasks. We proposed a simple workaround by forcing a minimum generation length at inference. To contribute to future progress on improving zero-shot generalization, we release all datasets and models introduced in this work. ## 6 Limitations We highlight several limitations of our work: Unnatural prompting format The choice to separate inputs and targets using a space character has proven effective to multitask finetune our decoder-only models. Nonetheless, poorly formatted prompts may result in undesirable behavior. For example, given the following prompt: "Translate to English: Je t'aime", the model may continue the input with additional French content before starting to solve the task, i.e. translating the input from French to English. This can be mitigated by improving the prompts with a trailing full stop or a newline symbol. Encoder-decoder models, such as our mT0, do not suffer from this problem, as inputs and targets are fed into different parts of the model. Limited languages in xP3 The pretraining corpus of mT0 contains more than 101 languages (Xue et al., 2020), however, we finetune on only 46 languages. Likely, finetuning on the full 101 languages mT0 has seen during pretraining would lead to better performance. However, we decided to use only the languages of BLOOM in order to study language generalization (§4.2). Similarly, one could likely attain better performance by enhancing xP3 with more datasets, such as via BIG-Bench (Srivastava et al., 2022; Suzgun et al., 2022), or more prompts, such as via NL-Augmenter (Dhole et al., 2021). We have released an extended version of xP3 dubbed xP3x that covers 277 languages and is around ten times larger than xP3, but are yet to finetune models on it. Performance While our models show strong capabilities of performing tasks zero-shot, there remain numerous failure modes that are common in large language models (Rae et al., 2021; Bommasani et al., 2021; Zhang et al., 2022; Smith et al., 2022; Ouyang et al., 2022; Taylor et al., 2022; Chowdhery et al., 2022; Biderman et al., 2023; Allal et al., 2023; Li et al., 2023). In Figure 16 of Appendix §F, BLOOMZ fails to understand the moral of a fable resulting in an undesirable generation. Similarly, in Figure 15, mT0-13B is asked to provide an explanation, but answers with a question. We have made several modifications to the multitask finetuning recipe, such as loss weighting, mixing in long tasks, and various multilingual aspects, leading to the strong zero-shot performance of our models. However, there are many other changes to the multitask finetuning procedure that are worth exploring to get better models (Honovich et al., 2022; Wang et al., 2022b; Longpre et al., 2023a; Liu et al., 2023; Dettmers et al., 2023; Yin et al., 2023). Further, the pre-trained models we use, BLOOM and mT5, are suboptimal in many aspects such as compute allocation (Hoffmann et al., 2022; Muennighoff et al., 2023), pretraining datasets (Longpre et al., 2023b; Touvron et al., 2023; Chung et al., 2023), pre-training objective (Tay et al., 2022b) and possibly model architecture (Komatsuzaki et al., 2022; Shen et al., 2023). Future work should investigate multitask finetuning better base models. Learning new languages during finetuning While we have investigated generalization to languages only seen during pretraining, we did not investigate generalization to languages only seen during finetuning. Our mT0 models are finetuned on several new languages not seen in pretraining (see Figure 2). Out of those, we only evaluated on code (HumanEval), where mT0 performed at the random baseline (0.00 in Table 10). We point to follow-up work that has investigated the question of teaching BLOOMZ new languages (Yong et al., 2022; Cahyawijaya et al., 2023) and work investigating adaptation of BLOOM (Ennen et al., 2023; Yong and Nikoulina, 2022). ## Acknowledgments This work was granted access to the HPC resources of Institut du développement et des ressources en informatique scientifique (IDRIS) du Centre national de la recherche scientifique (CNRS) under the allocation 2021-A0101012475 made by Grand équipement national de calcul intensif (GENCI). In particular, all the evaluations and data processing ran on the Jean Zay cluster of IDRIS, and we want to thank the IDRIS team for responsive support throughout the project, in particular Rémi Lacroix. We thank the XGLM team for providing access to XStoryCloze. We thank volunteers who humantranslated XNLI prompts. We thank Noah Constant and Douwe Kiela for feedback on drafts of this paper. We thank Victor Sanh, Stephen Bach, Sasha Rush and Jordan Clive for support throughout the project. ## References 2018. Neural code search evaluation dataset. page arXiv:1908.09804 [cs.SE]. 2020. Wikilingua: A new benchmark dataset for multilingual abstractive summarization. arXiv preprint arXiv:2010.03093. Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. 2023. Santacoder: don't reach for the stars! *arXiv preprint arXiv:2301.03988*. Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Prakash Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. 2021. Ext5: Towards extreme multi-task scaling for transfer learning. *CoRR*, abs/2111.10952. Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of monolingual representations. *CoRR*, abs/1910.11856. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. *arXiv* preprint arXiv:2108.07732. Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-David, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Alan Fries, Maged S. Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Xiangru Tang, Mike Tian-Jian Jiang, and Alexander M. Rush. 2022. Promptsource: An integrated development environment and repository for natural language prompts. Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023. Pythia: A suite for analyzing large language models across training and scaling. *arXiv preprint arXiv:2304.01373*. Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. 2022. Gpt-neox-20b: An open-source autoregressive language model. *arXiv preprint arXiv:2204.06745*. Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. Gpt-neo: Large scale autoregressive language modeling with mesh-tensorflow. If you use this software, please cite it using these metadata, 58. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. *arXiv preprint* arXiv:2108.07258. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Samuel Cahyawijaya, Holy Lovenia, Tiezheng Yu, Willy Chung, and Pascale Fung. 2023. Instructalign: Teaching novel languages with to llms through alignment-based cross-lingual instruction. arXiv preprint arXiv:2305.13627. Ilias Chalkidis, Manos Fergadiotis, and Ion Androutsopoulos. 2021. Multieurlex–a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. *arXiv preprint* arXiv:2109.00904. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. *arXiv preprint* arXiv:2107.03374. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Hyung Won Chung, Noah Constant, Xavier Garcia, Adam Roberts, Yi Tay, Sharan Narang, and Orhan Firat. 2023. Unimax: Fairer and more effective language sampling for large-scale multilingual pretraining. *arXiv preprint arXiv:2304.09151*. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Yiming Cui, Ting Liu, Li Xiao, Zhipeng Chen, Wentao Ma, Wanxiang Che, Shijin Wang, and Guoping Hu. 2018. A span-extraction dataset for chinese machine reading comprehension. arXiv preprint arXiv:1810.07366. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. *arXiv preprint arXiv:2305.14314*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Kaustubh D Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Srivastava, Samson Tan, et al. 2021. Nl-augmenter: A framework for task-sensitive natural language augmentation. *arXiv preprint arXiv:2112.02721*. Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. 2022. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. arXiv preprint arXiv:2203.06904. Philipp Ennen, Po-Chun Hsu, Chan-Jan Hsu, Chang-Le Liu, Yen-Chen Wu, Yin-Hsiang Liao, Chin-Tung Lin, Da-Shan Shiu, and Wei-Yun Ma. 2023. Extending the pre-training of bloom for improved support of traditional chinese: Models, methods and results. *arXiv* preprint arXiv:2303.04715. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric multilingual machine translation. *J. Mach. Learn. Res.*, 22(107):1–48. Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. Incoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999. Jason Alan Fries, Leon Weber, Natasha Seelam, Gabriel Altay, Debajyoti Datta, Samuele Garda, Myungsun Kang, Ruisi Su, Wojciech Kusa, Samuel Cahyawijaya, et al. 2022. Bigbio: A framework for data-centric biomedical natural language processing. arXiv preprint arXiv:2206.15076. Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation. Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, and Alexis Conneau. 2021a. Larger-scale transformers for multilingual masked language modeling. arXiv preprint arXiv:2105.00572. Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzm'an, and Angela Fan. 2021b. The flores-101 evaluation benchmark for low-resource and multilingual machine translation. Francisco Guzm'an, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. Two new evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english. Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Islam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, and Rifat Shahriyar. 2021. XLsum: Large-scale multilingual abstractive summarization for 44 languages. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 4693–4703, Online. Association for Computational Linguistics. Vincent J. Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, and David Bieber. 2020. Global relational models of source code. In *8th International* Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. 2021. Measuring coding challenge competence with apps. *NeurIPS*. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556. Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022. Unnatural instructions: Tuning language models with (almost) no human labor. *arXiv* preprint arXiv:2212.09689. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Dániel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, et al. 2022. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. arXiv preprint arXiv:2212.12017. Joongwon Kim, Mounica Maddela, Reno Kriz, Wei Xu, and Chris Callison-Burch. 2021. BiSECT: Learning to split and rephrase sentences with bitexts. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6193– 6209, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, and Neil Houlsby. 2022. Sparse upcycling: Training mixture-ofexperts from dense checkpoints. *arXiv preprint* arXiv:2212.05055. Matej Kosec, Sheng Fu, and Mario Michael Krell. 2021. Packing: Towards 2x nlp bert acceleration. *arXiv* preprint arXiv:2107.02027. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, et al. 2022. The bigscience roots corpus: A 1.6 tb composite multilingual dataset. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Vladimir I Levenshtein et al. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In *Soviet physics doklady*, volume 10, pages 707–710. Soviet Union. Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. Mlqa: Evaluating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475. Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. 2023. Starcoder: may the source be with you! *arXiv* preprint arXiv:2305.06161. Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, et al. 2021. Few-shot learning with multilingual language models. arXiv preprint arXiv:2112.10668. Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. *arXiv* preprint arXiv:2205.05638. Qian Liu, Fan Zhou, Zhengbao Jiang, Longxu Dou, and Min Lin. 2023. From zero to hero: Examining the power of symbolic tasks in instruction tuning. *arXiv* preprint arXiv:2304.07995. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Robert L Logan, Ivana Balaževic, Eric Wallace, Fabio ´ Petroni, Sameer Singh, and Sebastian Riedel. 2021. Cutting down on prompts and parameters: Simple few-shot learning with language models. *arXiv* preprint arXiv:2106.13353. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. 2023a. The flan collection: Designing data and methods for effective instruction tuning. *arXiv preprint arXiv:2301.13688*. Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, et al. 2023b. A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity. *arXiv preprint arXiv:2305.13169*. Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2021. Metaicl: Learning to learn in context. *arXiv preprint arXiv:2110.15943*. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021a. Cross-task generalization via natural language crowdsourcing instructions. arXiv preprint arXiv:2104.08773. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021b. Natural instructions: Benchmarking generalization to new tasks from natural language instructions. *CoRR*, abs/2104.08773. Niklas Muennighoff. 2022. Sgpt: Gpt sentence embeddings for semantic search. *arXiv preprint* arXiv:2202.08904. Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. 2023. Scaling data-constrained language models. Niklas Muennighoff, Nouamane Tazi, Loïc Magne, and Nils Reimers. 2022. Mteb: Massive text embedding benchmark. *arXiv preprint arXiv:2210.07316*. NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint 2207.04672. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Ajay Patel, Bryan Li, Mohammad Sadegh Rasooli, Noah Constant, Colin Raffel, and Chris CallisonBurch. 2022. Bidirectional language models are also few-shot learners. *arXiv preprint arXiv:2209.14500*. Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021a. True few-shot learning with language models. *Advances in Neural Information Processing Systems*, 34:11054–11070. Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021b. True few-shot learning with language models. *CoRR*, abs/2105.11447. Jason Phang, Iacer Calixto, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, and Samuel R Bowman. 2020. English intermediatetask training improves zero-shot cross-lingual transfer too. *arXiv preprint arXiv:2005.13013*. Edoardo M. Ponti, Goran Glavas, Olga Majewska, Qianchu Liu, Ivan Vuli'c, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal commonsense reasoning. *arXiv preprint*. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Alessandro Raganato, Tommaso Pasini, Jose CamachoCollados, and Mohammad Taher Pilehvar. 2020. Xlwic: A multilingual benchmark for evaluating semantic contextualization. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7193–7206. Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aakanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H. Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, and Andrea Gesmundo. 2022. Scaling up models and data with t5x and seqio. *arXiv* preprint arXiv:2203.17189. Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *2011 AAAI Spring Symposium Series*. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2022. Multitask prompted training enables zeroshot task generalization. In The Tenth International Conference on Learning Representations. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´ Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022a. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. Teven Le Scao and Alexander M Rush. 2021. How many data points is a prompt worth? *arXiv preprint* arXiv:2103.08493. Teven Le Scao, Thomas Wang, Daniel Hesslow, Lucile Saulnier, Stas Bekman, M Saiful Bari, Stella Bideman, Hady Elsahar, Niklas Muennighoff, Jason Phang, et al. 2022b. What language model to train if you have one million gpu hours? *arXiv preprint* arXiv:2210.15424. Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few shot text classification and natural language inference. *arXiv preprint* arXiv:2001.07676. Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few-shot text classification and natural language inference. *CoRR*, abs/2001.07676. Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. 2022. Continual-t0: Progressively instructing 50+ tasks to language models without forgetting. arXiv preprint arXiv:2205.12393. Sheng Shen, Le Hou, Yanqi Zhou, Nan Du, Shayne Longpre, Jason Wei, Hyung Won Chung, Barret Zoph, William Fedus, Xinyun Chen, et al. 2023. Flan-moe: Scaling instruction-finetuned language models with sparse mixture of experts. arXiv preprint arXiv:2305.14705. Oleh Shliazhko, Alena Fenogenova, Maria Tikhonova, Vladislav Mikhailov, Anastasia Kozlova, and Tatiana Shavrina. 2022. mgpt: Few-shot learners go multilingual. *arXiv preprint arXiv:2204.07580*. Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. 2022. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990. Saleh Soltan, Shankar Ananthakrishnan, Jack FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith Peris, Stephen Rawls, Andy Rosenbaum, Anna Rumshisky, Chandana Satya Prakash, Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, and Prem Natarajan. 2022. Alexatm 20b: Few-shot learning using a large-scale multilingual seq2seq model. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *arXiv preprint* arXiv:2206.04615. Hui Su, Xiao Zhou, Houjing Yu, Yuwen Chen, Zilin Zhu, Yang Yu, and Jie Zhou. 2022. Welm: A wellread pre-trained language model for chinese. *arXiv* preprint arXiv:2209.10372. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating prior knowledge for challenging chinese machine reading comprehension. Trans. Assoc. Comput. Linguistics, 8:141–155. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261. Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. 2022a. Unifying language learning paradigms. arXiv preprint arXiv:2205.05131. Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Dara Bahri, Tal Schuster, Steven Zheng, et al. 2022b. Ul2: Unifying language learning paradigms. In The Eleventh International Conference on Learning Representations. Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q Tran, David R So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, et al. 2022c. Transcending scaling laws with 0.1% extra compute. arXiv preprint arXiv:2210.11399. Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science. *arXiv* preprint arXiv:2211.09085. J"org Tiedemann. 2020. The Tatoeba Translation Challenge - Realistic data sets for low resource and multilingual MT. In Proceedings of the Fifth Conference on Machine Translation, pages 1174–1182. Association for Computational Linguistics. Alexey Tikhonov and Max Ryabinin. 2021. It's all in the heads: Using attention heads as a baseline for cross-lingual transfer in commonsense reasoning. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, and Noah Constant. 2022. Overcoming catastrophic forgetting in zero-shot cross-lingual generation. *arXiv preprint arXiv:2205.12647*. Ben Wang and Aran Komatsuzaki. 2021. Gpt-j-6b: A 6 billion parameter autoregressive language model. Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, and Colin Raffel. 2022a. What language model architecture and pretraining objective work best for zero-shot generalization? arXiv preprint arXiv:2204.05832. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022b. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, and Daniel Khashabi. 2022c. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. arXiv preprint arXiv:2204.07705. Zhenhailong Wang, Xiaoman Pan, Dian Yu, Dong Yu, Jianshu Chen, and Heng Ji. 2022d. Zemi: Learning zero-shot semi-parametric language models from multiple tasks. *arXiv preprint arXiv:2210.00185*. Albert Webson and Ellie Pavlick. 2021. Do promptbased models really understand the meaning of their prompts? Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics. Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, et al. 2020. Clue: A chinese language understanding evaluation benchmark. *arXiv preprint* arXiv:2004.05986. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934. Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification. In Proc. of EMNLP. Da Yin, Xiao Liu, Fan Yin, Ming Zhong, Hritik Bansal, Jiawei Han, and Kai-Wei Chang. 2023. Dynosaur: A dynamic growth paradigm for instruction-tuning data curation. *arXiv preprint arXiv:2305.14327*. Zheng-Xin Yong and Vassilina Nikoulina. 2022. Adapting bigscience multilingual model to unseen languages. *arXiv preprint arXiv:2204.04873*. Zheng-Xin Yong, Hailey Schoelkopf, Niklas Muennighoff, Alham Fikri Aji, David Ifeoluwa Adelani, Khalid Almubarak, M Saiful Bari, Lintang Sutawika, Jungo Kasai, Ahmed Baruwa, et al. 2022. Bloom+ 1: Adding language support to bloom for zero-shot prompting. *arXiv preprint arXiv:2212.09535*. Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked languagemodels. *arXiv preprint arXiv:2106.10199*. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Yuan Zhang, Jason Baldridge, and Luheng He. 2019. Paws: Paraphrase adversaries from word scrambling. arXiv preprint arXiv:1904.01130. Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. 2021. Meta-tuning language models to answer prompts better. *CoRR*, abs/2104.04670. Ming Zhu, Aneesh Jain, Karthik Suresh, Roshan Ravindran, Sindhu Tipirneni, and Chandan K. Reddy. 2022. Xlcost: A benchmark dataset for cross-lingual code intelligence. | Contents 1 Introduction | 1 | | | |---------------------------|--------------------------------------------|----|----| | 2 | Related work | 2 | | | 2.1 | Multitask learning | 2 | | | 2.2 | Multilingual models | 3 | | | 3 | Finetuning data and models | 4 | | | 3.1 | Finetuning data | | 4 | | 3.2 | Models | | 4 | | 4 | Results | 5 | | | 4.1 | Task generalization | | 5 | | 4.2 | Language generalization | | 6 | | 4.3 | Multilingual prompting | 6 | | | 4.4 | Scaling | | 7 | | 4.5 | Generation tasks | 7 | | | 4.6 | Effect of language proportions | | 8 | | 5 | Conclusion | 8 | | | 6 | Limitations | 9 | | | A | Contributions | 18 | | | B | Task generalization breakdown | 18 | | | C | Artifacts | 20 | | | D | ROOTS language contamination | 20 | | | E | Code generations | 21 | | | F | Qualitative examples | 21 | | | G | Increasing generation length | 24 | | | H | XNLI edit distances | 24 | | | I | Multilingual prompting in unseen languages | 25 | | | J | Ideas that did not work | 26 | | | K | Full results | 26 | | | L | Version control | 28 | | | M | Prompts used | 28 | | ## A Contributions This research was conducted under the BigScience project for open research, a year-long initiative targeting the study of large models and datasets. The goal of the project is to research language models in a public environment. The project has hundreds of researchers from more than 50 countries and over 250 institutions. The BigScience project was initiated by Thomas Wolf at Hugging Face, and this collaboration would not have been possible without his effort. In the following, we list contributions made to this work. Niklas Muennighoff evaluated all models, created xP3 and wrote most of the paper. Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts and Hailey Schoelkopf wrote the training and evaluation code. Niklas Muennighoff and Adam Roberts trained the models. Niklas Muennighoff, Teven Le Scao, Hailey Schoelkopf, Zheng-Xin Yong, Thomas Wang, Khalid Almubarak, Alham Fikri Aji, M Saiful Bari and Zaid Alyafeai contributed prompts or datasets. Lintang Sutawika, Stella Biderman, Zheng-Xin Yong, Khalid Almubarak, M Saiful Bari and Albert Webson initiated the project. Sheng Shen conducted the contamination analysis. Samuel Albanie wrote the prompt appendix. Thomas Wang and Zheng-Xin Yong converted checkpoints. Colin Raffel, Thomas Wang, Teven Le Scao, M Saiful Bari, Edward Raff and Dragomir Radev advised the project. Niklas Muennighoff, Lintang Sutawika, Teven Le Scao, Colin Raffel, Stella Biderman, Alham Fikri Aji, Adam Roberts, Samuel Albanie, Sheng Shen, M Saiful Bari, Albert Webson, Xiangru Tang, Dragomir Radev and Edward Raff contributed to the paper. ## B Task Generalization Breakdown In Figure 9, we compare performance on English held-out tasks. We find that (a) finetuning on xP3 outperforms P3 (b) multilingual mT0 is better than monolingual T0 on *English tasks*. We think both improvements come from xP3 having more prompts and datasets than P3 (Chung et al., 2022). ![17_image_0.png](17_image_0.png) In Figure 10, we visualize task generalization to multilingual datasets. The same data is aggregated in Figure 4. Performance by prompt varies substantially highlighting that prompt engineering may still be necessary after MTF. We also find that mT0 consistently outperforms BLOOMZ on Swahili (SW), possibly due to it being a larger part of its pretraining corpus (see Figure 2 and §4.6). ![18_image_0.png](18_image_0.png) ![18_image_1.png](18_image_1.png) ## C Artifacts | Artifact | Explanation | Public link | |-------------------|-----------------------------------------------------------------------|---------------------------------------------------------| | ROOTS | Multilingual pretraining corpus of BLOOM | https://huggingface.co/bigscience-data | | mC4 | Multilingual pretraining corpus used for mT5 | https://huggingface.co/datasets/mc4 | | P3 | Multitask finetuning dataset with English data & English prompts | https://huggingface.co/datasets/bigscience/P3 | | xP3 | Multitask finetuning dataset with multilingual data & English prompts | https://huggingface.co/datasets/bigscience/xP3 | | xP3all | Same as xP3 with held-out evaluation sets | https://huggingface.co/datasets/bigscience/xP3all | | xP3mt | Same as xP3 with English & multilingual machine-translated prompts | https://huggingface.co/datasets/bigscience/xP3mt | | xP3megds | Processed version of xP3 for easy usage with Megatron-DeepSpeed | https://huggingface.co/datasets/bigscience/xP3megds | | xP3x | Extension of xP3 to 277 languages | https://huggingface.co/datasets/Muennighoff/xP3x | | XGLM-7.5B | 7.5B parameter pretrained multilingual transformer | https://huggingface.co/facebook/xglm-7.5B | | T0-11B | 11B parameter model finetuned on P3 | https://huggingface.co/bigscience/t0 | | mTk-Instruct-3.7B | 3.7B parameter multitask finetuned multilingual transformer | https://huggingface.co/allenai/mtk-instruct-3b-def-pos | | mTk-Instruct-13B | 13B parameter multitask finetuned multilingual transformer | https://huggingface.co/allenai/mtk-instruct-11b-def-pos | | BLOOM-560M | 560M parameter model pretrained on ROOTS | https://huggingface.co/bigscience/bloom-560m | | BLOOM-1.1B | 1.1B parameter model pretrained on ROOTS | https://huggingface.co/bigscience/bloom-1b1 | | BLOOM-1.7B | 1.7B parameter model pretrained on ROOTS | https://huggingface.co/bigscience/bloom-1b7 | | BLOOM-3B | 3B parameter model pretrained on ROOTS | https://huggingface.co/bigscience/bloom-3b | | BLOOM-7.1B | 7.1B parameter model pretrained on ROOTS | https://huggingface.co/bigscience/bloom-7b1 | | BLOOM | 176B parameter model pretrained on ROOTS | https://huggingface.co/bigscience/bloom | | BLOOMZ-560M | 560M parameter model finetuned on xP3 | https://huggingface.co/bigscience/bloomz-560m | | BLOOMZ-1.1B | 1.1B parameter model finetuned on xP3 | https://huggingface.co/bigscience/bloomz-1b1 | | BLOOMZ-1.7B | 1.7B parameter model finetuned on xP3 | https://huggingface.co/bigscience/bloomz-1b7 | | BLOOMZ-3B | 3B parameter model finetuned on xP3 | https://huggingface.co/bigscience/bloomz-3b | | BLOOMZ-7.1B | 7.1B parameter model finetuned on xP3 | https://huggingface.co/bigscience/bloomz-7b1 | | BLOOMZ-7.1B-MT | 7.1B parameter model finetuned on xP3mt | https://huggingface.co/bigscience/bloomz-7b1-mt | | BLOOMZ-7.1B-P3 | 7.1B parameter model finetuned on P3 | https://huggingface.co/bigscience/bloomz-7b1-p3 | | BLOOMZ | 176B parameter model finetuned on xP3 | https://huggingface.co/bigscience/bloomz | | BLOOMZ-MT | 176B parameter model finetuned on xP3mt | https://huggingface.co/bigscience/bloomz-mt | | BLOOMZ-P3 | 176B parameter model finetuned on P3 | https://huggingface.co/bigscience/bloomz-p3 | | mT5-300M | 300M parameter model pretrained on a sampled version of mC4 | https://huggingface.co/google/mt5-small | | mT5-580M | 580M parameter model pretrained on a sampled version of mC4 | https://huggingface.co/google/mt5-base | | mT5-1.2B | 1.2B parameter model pretrained on a sampled version of mC4 | https://huggingface.co/google/mt5-large | | mT5-3.7B | 3.7B parameter model pretrained on a sampled version of mC4 | https://huggingface.co/google/mt5-xl | | mT5-13B | 13B parameter model pretrained on a sampled version of mC4 | https://huggingface.co/google/mt5-xxl | | mT0-300M | 300M parameter model finetuned on xP3 | https://huggingface.co/bigscience/mt0-small | | mT0-580M | 580M parameter model finetuned on xP3 | https://huggingface.co/bigscience/mt0-base | | mT0-1.2B | 1.2B parameter model finetuned on xP3 | https://huggingface.co/bigscience/mt0-large | | mT0-3.7B | 3.7B parameter model finetuned on xP3 | https://huggingface.co/bigscience/mt0-xl | | mT0-13B | 13B parameter model finetuned on xP3 | https://huggingface.co/bigscience/mt0-xxl | | mT0-13B-MT | 13B parameter model finetuned on xP3mt | https://huggingface.co/bigscience/mt0-xxl-mt | | mT0-13B-P3 | 13B parameter model finetuned on P3 | https://huggingface.co/bigscience/mt0-xxl-p3 | Table 3 lists all artifacts used or released in this work. We make all our work accessible under the most permissive licenses available to us. ## D Roots Language Contamination While the BLOOM ROOTS corpus (Laurençon et al., 2022) was collected from 46 natural languages and 13 programming languages, we find that sentences from the same document do not always belong to the collected (meta) language. Some sentences use languages like Russian or Japanese that were not the intentionally collected parts. This "language contamination" may stem from "code-mixing" or different languages being used in code comments. To investigate the extent of contamination, we randomly sample 1% of the documents from ROOTS for a total of 51M documents. For each document, we use cld32(Xue et al., 2020) to identify the languages used in each sentence and compare them with the meta language of the document. We summarize our results in Figure 11. It shows that ROOTS contains unintentionally collected languages, such as Burmese (my: 0.00003%), Thai (th: 0.006%), Turkish (tr: 0.03%), Greek (el: 0.03%), Russian (ru: 0.03%), Bulgarian (bg: 0.05%), Estonian (et: 0.06%), Haitian (ht: 0.12%), German (de: 0.21%), Italian (it: 0.28%) and Japanese (ja: 0.54%). These "unseen" languages only have 2https://github.com/google/cld3 ![20_image_0.png](20_image_0.png) Figure 11: Language composition of ROOTS-IDENTIFY-1%, ROOTS-1% and the mT5 corpus. All mT5 languages are depicted. ROOTS-1% is a random 1% sample of ROOTS with its assigned meta-languages. ROOTS-IDENTIFY1% are the actual languages in ROOTS-1% re-identified using cld3. small sentence proportions in our subsample compared to English (en: 46.23%), French (fr: 15.73%) and Spanish (es: 13.38%). Yet, they may help the language generalization of BLOOMZ models described in §4.2. Japanese is mostly mixed in the meta English documents (47%), meta Code documents (8%) and meta Chinese documents (5%). Meanwhile, Russian is mostly mixed in the meta English documents (52%), meta Code documents (19%) and meta French documents (11%). ## E Code Generations Table 4 provides statistics on code generations and code data. We find that BLOOM generates on average ![20_image_1.png](20_image_1.png) 70% more characters and 17x more comments than BLOOMZ for a given problem from HumanEval. Figure 12 compares an example solution from BLOOM and BLOOMZ. While both solutions are correct, BLOOMZ is biased towards short and concise answers. ![20_image_2.png](20_image_2.png) Figure 12: Code generations of BLOOM and BLOOMZ on HumanEval. The model is prompted to generate after the final """. The generation is stopped after an end-of-sequence token or a return statement followed by a newline. | Data (→) | HumanEval generations | Fine-tuning data | | |-----------------------------|-------------------------|--------------------|------| | BLOOM | BLOOMZ | in xP3 (code data) | | | Average characters | 247 | 144 | 531 | | Average Python comments (#) | 0.69 | 0.04 | 0.85 | Table 4: Number of characters and comments for generations and fine-tuning data. For finetuning data, the statistics are computed for the targets that the model is tasked to generate, not the input. ## F Qualitative Examples ![21_image_0.png](21_image_0.png) ![21_image_1.png](21_image_1.png) ![22_image_0.png](22_image_0.png) లో బేక్ పోర్షన్ర్ష అంటే **ఏమిటి**? ![22_image_1.png](22_image_1.png) ## G Increasing Generation Length In §4.5, we found performance on generative tasks to worsen in later stages of training. To investigate this problem further, we study a 7.1 billion parameter BLOOM model that is finetuned for 13 billion tokens, which results in a low BLEU score of 0 and very short generations as shown in Table 5 (Default). We can solve this problem with two high-level strategies: (a) Reducing short tasks during finetuning and (b) Forcing a minimum generation length. For (a), we do so by either early stopping, upweighting long tasks or adding new long tasks. As the majority of our finetuning data are single sentences, early stopping has the effect of finetuning on fewer short sentences. Upweighting long tasks is done by removing the loss normalization explained in §3.2. This has the effect of each token getting equal weight regardless of the task, which upweights long tasks, as they have more tokens. Finally, for adding long tasks, we add tasks that require multi-sentence generations, such as generating an entire news article given a title. These long tasks collectively make up 10% of finetuning data for this ablation. All three solutions result in longer average generations as shown in Table 5 and slightly better BLEU scores, albeit effects are still small. For (b), we force the model to generate a minimum number of tokens at inference. Our benchmarking task, MultiEURLEX (Chalkidis et al., 2021), requires multi-sentence generations with an average target length of 1965 characters (about 491 tokens). By forcing the model to generate at least 768 tokens, we ensure that the generation is at least as long as the target. This boosts the BLEU score significantly to 9.05. This approach is thus an effective strategy to maintain long generations of good quality. For our final models, we employ early stopping, adding of long tasks and recommend forcing a minimum generation length at inference for long generations. We do not upweight longer tasks, as it worsens accuracy on our NLU validation tasks by 10%. The number of tokens our final models are fine-tuned for are displayed in Table 6. | Model | Finetuning tokens | BLEU Score | Average generation length (characters) | |---------------------------------|---------------------|--------------|------------------------------------------| | Default | 13 billion | 0.00 | 122 | | Early stopping | 6 billion | 0.00 | 155 | | Upweight longer tasks | 13 billion | 0.06 | 364 | | Add more long tasks | 13 billion | 0.06 | 136 | | Forcing 768 tokens at inference | 13 billion | 9.05 | 3072 | Table 5: 7.1 billion parameter BLOOMZ models with various modifications benchmarked on MultiEURLEX English-French translation (Chalkidis et al., 2021). We benchmark three prompts on both English to French and French to English translation. We then take the median performance across the three prompts for each translation direction and average the two scores to arrive at the BLEU score reported. | Model | mT0-300M | mT0-560M | mT0-1.2B | mT0-3.7B | mT0-13B | | |---------|-------------|-------------|-------------|------------|-------------|--------| | Tokens | 4.62 | 4.62 | 4.62 | 1.85 | 1.29 | | | Model | BLOOMZ-560M | BLOOMZ-1.1B | BLOOMZ-1.7B | BLOOMZ-3B | BLOOMZ-7.1B | BLOOMZ | | Tokens | 3.67 | 0.502 | 8.39 | 8.39 | 4.19 | 2.09 | Table 6: Tokens in billions that final models are finetuned for. We early-stop models based on validation performance. For -MT and -P3 variants we take the checkpoint after the same number of steps as for their default versions. ## H Xnli Edit Distances As models are surprisingly capable of solving XNLI in languages they were never intentionally trained on (§4.2), we investigate whether XNLI can be solved without any language understanding. To do so, we compute edit distances using the Levenshtein methodology (Levenshtein et al., 1966) between premise | Premise | Hypothesis | Lev. distance | Label | |--------------------------------------------------|--------------------------------------------------|-----------------|---------------| | probably so probably so um-hum | probably yes so uh-huh | 13 | Entailment | | equivalent to increasing national saving to 19 . | National savings are 18 now . | 34 | Neutral | | The Inglethorps did not appear . | The Inglethorps were the first ones to turn up . | 26 | Contradiction | Table 7: Three samples from the English XNLI split. To solve XNLI models need to classify whether the premise entails, is neutral to or contradicts the hypothesis. Samples are cherry-picked. and hypothesis. Table 7 shows three samples from the English XNLI and their edit distances. Our hypothesis is that entailment pairs generally need to cover similar content, and thus have similar distance. Contradiction pairs still need to cover similar content but differ in at least one major way. Meanwhile for neutral pairs, hypothesis and premise may be about completely different topics, hence they should have the highest distance. In Table 8 we compute distances across all Thai, Turkish and Greek samples, three languages where we found language generalization to occur for BLOOMZ. Results confirm our hypothesis that distances are generally largest for neutral samples and smallest for entailment samples. However, the aggregate differences are very small with only a few edits difference. For example, Thai contradiction samples only have 2.5 edits more on average than entailment samples. Thus, comparing characters based on edit distance alone is likely not sufficient to fully explain the language generalization of models in §4.2. | Label (→) | Entailment | Neutral | Contradiction | |------------------------|--------------|-----------|-----------------| | Language (↓) Thai (th) | 79.08 | 82.64 | 81.52 | | Turkish (tr) | 76.93 | 80.59 | 80.24 | | Greek (el) | 90.90 | 95.10 | 93.93 | Table 8: Levenshtein distances between hypothesis and premise averaged across samples from different XNLI labels. Each label has 830 samples per language subset. ## I Multilingual Prompting In Unseen Languages Table 9 shows aggregate performances on languages not intentionally seen during pretraining nor finetuning for BLOOMZ and only seen during pretraining for mT0. For BLOOMZ, performance drops significantly when translating the prompts to the respective unseen languages. Unlike on translated prompts for seen languages (§4.3), BLOOMZ-MT performs worse than BLOOMZ for machine-translated prompts in unseen languages. This is likely because BLOOMZ-MT has not been finetuned on prompts in these languages. For mT0 differences are less significant. | Task | Prompt | Average accuracy | | | | |-------------|-----------|--------------------|------------|-------|-------| | BLOOMZ | BLOOMZ-MT | mT0-13B | mT0-13B-MT | | | | XNLI | EN | 45.65 | 43.2 | 48.52 | 51.33 | | MT | 36.48 | 35.67 | 41.86 | 39.78 | | | XCOPA | EN | 54.27 | 53.67 | 72.67 | 71.6 | | MT | 53.2 | 53.0 | 71.57 | 70.87 | | | XStoryCloze | EN | 61.59 | 61.36 | 79.31 | 80.13 | | MT | 60.5 | 59.91 | 80.21 | 80.28 | | | XWinograd | EN | 55.98 | 54.54 | 70.81 | 72.0 | | MT | 53.11 | 52.46 | 67.86 | 70.45 | | ## J Ideas That Did Not Work We list several experiments that did not improve over baseline results: Non-causal In a non-causal or prefix language model, the model attends bidirectionally over input tokens and only causally over target tokens. Given a pretrained causal decoder, other work found that multitask finetuning in a non-causal setup performed better than causal finetuning (Wang et al., 2022a; Tay et al., 2022c). However, in our experiments, non-causal finetuning did not improve over causal finetuning. Special tokens Instead of separating inputs and targets with a space, we experimented with special tokens. Using the end-of-sequence token as a separator or a completely new token that the model would learn during finetuning significantly worsened results. The models may need to train on more tokens, possibly even during pretraining, to learn these new special tokens (Zeng et al., 2022). Fixing prompts PromptSource has been written with encoder-decoder models in mind, where inputs and targets are fed into different models. As a consequence, human-written prompts in PromptSource often lack separators between input and target. For our decoder models, we decided to separate them with a space. We additionally experimented with leaving them as is or rewriting a significant amount of prompts, but neither improved significantly over space separation. BitFit Previous work has shown bias-only finetuning (Zaken et al., 2021) of large language models to be sufficient for strong downstream performance (Logan et al., 2021; Hu et al., 2021; Muennighoff, 2022; Liu et al., 2022; Ding et al., 2022; Muennighoff et al., 2022). We found multitask finetuning of only biases to perform 15 absolute percentage points worse on the average of held-out tasks for BLOOMZ-7.1B. ## K Full Results Table 10 shows all evaluation results on test datasets. Table 11 displays evaluation results on validation datasets which we use for checkpoint selection. | Pretrained | Pretrained + Multitask finetuned | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |----------------------------------------------|--------------------------------------|------------------------------------|------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|-------|-------------------|-------------------|-------------------|-------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----|----| | Task | Dataset | Config Split | Prompt Metric XGLM-7.5B BLOOM-560M BLOOM-1.1B BLOOM-1.7B BLOOM-3B BLOOM-7.1B BLOOM | T0-11B mTk-Instruct-3.7B mTk-Instruct-13B mT0-300M mT0-560M mT0-1.2B mT0-3.7B mT0-13B mT0-13B-MT mT0-13B-P3 BLOOMZ-560M BLOOMZ-1.1B BLOOMZ-1.7B BLOOMZ-3B BLOOMZ-7.1B BLOOMZ-7.1B-MT BLOOMZ-7.1B-P3 BLOOMZ BLOOMZ-MT BLOOMZ-P3 | | | | | | | | | | | | | | | | | | | | | | | | | | | | Coref. resolution winogrande | xl | validation EN Median acc. 49.25 | 49.88 | 50.99 | 49.57 | 49.96 | 49.41 | 48.62 60.46 50.99 | 52.33 | 49.57 | 51.62 | 50.51 | 52.01 | 62.27 | 62.51 | 56.91 | 49.80 | 51.07 | 50.75 | 51.78 | 55.41 | 55.88 | 51.78 | 58.41 | 58.64 | 55.64 | | | | | | Coref. resolution winogrande | xl | validation EN Max acc. | 50.12 | 50.99 | 51.62 | 50.91 | 51.46 | 50.91 | 49.64 63.61 51.14 | 54.54 | 50.51 | 53.28 | 51.78 | 52.49 | 63.38 | 62.67 | 58.56 | 52.41 | 52.33 | 51.14 | 53.67 | 55.80 | 56.51 | 54.06 | 59.27 | 59.98 | 57.06 | | | | | Coref. resolution xwinograd | en | test | EN Median acc. 50.88 | 50.62 | 51.10 | 50.67 | 50.97 | 50.15 | 50.28 62.75 52.22 | 52.77 | 50.11 | 51.01 | 52.30 | 57.94 | 79.91 | 81.33 | 59.87 | 50.24 | 50.15 | 52.09 | 54.84 | 60.09 | 59.31 | 52.26 | 67.87 | 64.73 | 59.74 | | | | | Coref. resolution xwinograd | en | test | EN Max acc. | 51.61 | 51.53 | 51.57 | 51.66 | 51.70 | 50.71 | 51.27 70.71 53.12 | 60.82 | 51.31 | 51.40 | 54.80 | 61.89 | 81.29 | 83.31 | 70.71 | 51.01 | 50.49 | 56.34 | 59.23 | 66.02 | 65.76 | 53.72 | 69.08 | 69.33 | 60.65 | | | | Coref. resolution xwinograd | fr | test | EN Median acc. 50.60 | 46.99 | 48.19 | 50.60 | 46.99 | 50.60 | 51.81 54.22 50.60 | 53.01 | 50.60 | 51.81 | 49.40 | 56.63 | 77.11 | 73.49 | 55.42 | 49.40 | 53.01 | 51.81 | 49.40 | 53.01 | 53.01 | 53.01 | 65.06 | 59.04 | 53.01 | | | | | Coref. resolution xwinograd | fr | test | EN Max acc. | 51.81 | 51.81 | 56.63 | 55.42 | 54.22 | 51.81 | 53.01 56.63 53.01 | 60.24 | 53.01 | 55.42 | 56.63 | 59.04 | 78.31 | 78.31 | 61.45 | 51.81 | 56.63 | 55.42 | 53.01 | 57.83 | 55.42 | 55.42 | 68.67 | 68.67 | 59.04 | | | | Coref. resolution xwinograd | fr | test | MT Median acc. 46.99 | 48.19 | 53.01 | 48.19 | 46.99 | 50.60 | 49.40 54.22 50.60 | 53.01 | 49.40 | 53.01 | 53.01 | 56.63 | 68.67 | 75.90 | 53.01 | 48.19 | 50.60 | 50.60 | 50.60 | 51.81 | 55.42 | 51.81 | 56.63 | 57.83 | 53.01 | | | | | Coref. resolution xwinograd | fr | test | MT Max acc. | 51.81 | 51.81 | 55.42 | 53.01 | 55.42 | 51.81 | 50.60 59.04 57.83 | 63.86 | 54.22 | 55.42 | 55.42 | 59.04 | 75.90 | 75.90 | 61.45 | 51.81 | 59.04 | 51.81 | 50.60 | 57.83 | 57.83 | 54.22 | 65.06 | 66.27 | 56.63 | | | | Coref. resolution xwinograd | jp | test | EN Median acc. 49.22 | 50.36 | 50.89 | 51.62 | 51.41 | 50.89 | 50.26 51.51 52.66 | 53.18 | 50.89 | 50.68 | 51.41 | 56.93 | 74.35 | 77.37 | 60.27 | 49.74 | 49.84 | 49.95 | 50.26 | 50.68 | 49.64 | 50.36 | 57.46 | 55.47 | 51.09 | | | | | Coref. resolution xwinograd | jp | test | EN Max acc. | 52.03 | 51.09 | 52.03 | 52.35 | 52.24 | 52.76 | 50.99 51.82 53.18 | 56.20 | 52.14 | 51.41 | 52.24 | 60.27 | 78.62 | 78.62 | 65.59 | 50.57 | 51.09 | 52.55 | 52.45 | 52.87 | 51.62 | 51.93 | 59.65 | 58.39 | 56.00 | | | | Coref. resolution xwinograd | jp | test | MT Median acc. 48.91 | 50.89 | 50.26 | 50.78 | 51.93 | 49.53 | 51.72 51.51 51.20 | 53.28 | 51.41 | 50.05 | 50.26 | 55.27 | 73.31 | 78.42 | 61.00 | 50.78 | 50.57 | 49.64 | 50.68 | 49.95 | 50.26 | 50.36 | 52.87 | 52.66 | 50.89 | | | | | Coref. resolution xwinograd | jp | test | MT Max acc. | 50.99 | 52.03 | 52.03 | 52.24 | 52.97 | 50.99 | 53.18 52.03 53.70 | 56.41 | 52.45 | 51.09 | 53.08 | 59.02 | 78.21 | 80.19 | 66.11 | 52.03 | 51.82 | 49.95 | 52.14 | 52.76 | 51.82 | 51.51 | 53.91 | 53.60 | 54.33 | | | | Coref. resolution xwinograd | pt | test | EN Median acc. 50.57 | 51.33 | 51.71 | 51.71 | 50.19 | 48.67 | 50.95 52.47 52.09 | 56.27 | 49.81 | 49.81 | 53.61 | 58.17 | 72.24 | 76.05 | 56.27 | 50.19 | 50.19 | 50.95 | 52.47 | 53.99 | 54.37 | 51.33 | 63.50 | 60.08 | 53.99 | | | | | Coref. resolution xwinograd | pt | test | EN Max acc. | 53.99 | 53.99 | 53.99 | 53.99 | 54.37 | 50.19 | 51.33 54.75 52.09 | 58.56 | 50.57 | 52.09 | 55.13 | 60.84 | 76.43 | 80.99 | 61.98 | 52.09 | 51.33 | 53.23 | 53.61 | 57.79 | 57.41 | 53.99 | 64.26 | 64.64 | 60.46 | | | | Coref. resolution xwinograd | pt | test | MT Median acc. 50.95 | 52.09 | 50.57 | 49.81 | 50.57 | 50.57 | 53.23 52.47 53.23 | 52.47 | 49.81 | 47.15 | 52.47 | 54.75 | 71.48 | 75.67 | 55.89 | 52.47 | 50.57 | 49.81 | 50.19 | 52.85 | 53.61 | 51.33 | 60.46 | 59.70 | 54.75 | | | | | Coref. resolution xwinograd | pt | test | MT Max acc. | 53.99 | 53.99 | 53.99 | 53.99 | 53.99 | 53.99 | 53.99 56.65 54.37 | 55.89 | 51.71 | 52.09 | 56.27 | 66.16 | 77.95 | 80.61 | 64.26 | 53.99 | 54.75 | 53.23 | 52.47 | 53.99 | 55.51 | 52.09 | 64.26 | 62.74 | 59.32 | | | | Coref. resolution xwinograd | ru | test | EN Median acc. 53.33 | 51.43 | 52.38 | 54.29 | 52.70 | 54.29 | 54.29 51.43 53.97 | 56.83 | 49.52 | 51.11 | 52.38 | 56.83 | 74.29 | 73.97 | 56.51 | 52.06 | 49.52 | 51.75 | 52.38 | 53.97 | 53.02 | 48.57 | 57.78 | 56.51 | 52.70 | | | | | Coref. resolution xwinograd | ru | test | EN Max acc. | 53.97 | 53.97 | 53.97 | 56.19 | 54.92 | 55.24 | 57.14 53.33 55.56 | 60.32 | 53.65 | 52.70 | 55.56 | 59.05 | 76.51 | 79.05 | 62.22 | 53.97 | 50.48 | 53.33 | 53.97 | 54.92 | 55.87 | 49.21 | 60.95 | 60.32 | 56.19 | | | | Coref. resolution xwinograd | ru | test | MT Median acc. 53.33 | 51.75 | 52.38 | 53.97 | 52.06 | 53.97 | 52.70 50.16 53.33 | 54.29 | 52.06 | 51.75 | 52.70 | 52.38 | 66.98 | 71.43 | 55.87 | 51.43 | 51.43 | 53.02 | 49.52 | 52.06 | 52.70 | 47.62 | 54.29 | 55.87 | 54.92 | | | | | Coref. resolution xwinograd | ru | test | MT Max acc. | 54.60 | 53.97 | 53.97 | 54.60 | 54.92 | 55.56 | 55.87 52.70 54.92 | 58.73 | 54.29 | 53.97 | 54.60 | 54.60 | 72.06 | 75.24 | 58.41 | 53.97 | 53.97 | 55.24 | 53.97 | 53.33 | 54.92 | 53.97 | 60.32 | 57.14 | 57.14 | | | | Coref. resolution xwinograd | zh | test | EN Median acc. 49.01 | 49.21 | 48.81 | 50.20 | 50.00 | 50.60 | 49.21 49.21 52.18 | 56.75 | 52.78 | 52.18 | 51.59 | 57.54 | 69.25 | 76.19 | 58.53 | 54.17 | 53.97 | 51.39 | 55.16 | 57.94 | 54.37 | 52.18 | 68.65 | 62.10 | 51.59 | | | | | Coref. resolution xwinograd | zh | test | EN Max acc. | 50.79 | 52.18 | 52.78 | 53.77 | 55.16 | 55.36 | 52.98 49.40 54.76 | 57.14 | 54.17 | 53.77 | 54.17 | 62.90 | 77.38 | 79.17 | 65.67 | 54.76 | 55.16 | 56.15 | 60.91 | 63.69 | 62.70 | 52.98 | 69.05 | 70.63 | 55.95 | | | | Coref. resolution xwinograd | zh | test | HT Median acc. - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 50.99 | - | 49.40 | - | - | - | | | Coref. resolution xwinograd | zh | test | HT Max acc. | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 59.72 | - | 52.18 | - | - | - | | Coref. resolution xwinograd | zh | test | MT Median acc. 48.02 | 49.01 | 49.01 | 49.40 | 49.60 | 50.79 | 49.60 49.21 53.17 | 53.17 | 51.19 | 51.79 | 50.60 | 56.35 | 67.86 | 72.42 | 57.74 | 50.79 | 51.19 | 51.79 | 52.98 | 52.38 | 57.94 | 50.40 | 62.70 | 67.46 | 57.14 | | | | | Coref. resolution xwinograd | zh | test | MT Max acc. | 49.21 | 55.56 | 53.17 | 56.15 | 53.57 | 56.94 | 57.74 49.21 54.56 | 57.74 | 53.37 | 53.97 | 54.37 | 62.10 | 72.82 | 82.34 | 64.09 | 51.98 | 54.17 | 54.17 | 55.16 | 60.71 | 62.50 | 52.38 | 70.24 | 76.39 | 60.71 | | | | NLI | anli | r1 | validation EN Median acc. 33.30 | 33.60 | 33.50 | 33.40 | 32.90 | 33.40 | 36.20 44.50 29.90 | 34.20 | 33.30 | 31.30 | 30.70 | 37.50 | 48.00 | 48.50 | 44.90 | 29.60 | 29.10 | 33.10 | 38.60 | 40.90 | 40.10 | 34.50 | 46.00 | 45.60 | 40.60 | | | | | NLI | anli | r1 | validation EN Max acc. | 33.50 | 34.40 | 33.70 | 33.80 | 33.40 | 33.70 | 37.60 45.00 34.80 | 35.40 | 34.70 | 33.30 | 33.30 | 38.20 | 49.50 | 49.50 | 47.30 | 33.40 | 33.30 | 34.00 | 40.10 | 42.10 | 42.60 | 35.10 | 48.60 | 49.70 | 41.70 | | | | NLI | anli | r2 | validation EN Median acc. 33.40 | 33.20 | 33.10 | 33.30 | 33.20 | 33.30 | 33.70 39.30 32.40 | 32.50 | 33.20 | 33.30 | 32.50 | 34.40 | 41.70 | 40.60 | 37.90 | 32.00 | 33.20 | 34.30 | 34.60 | 38.20 | 37.60 | 33.90 | 41.90 | 41.00 | 37.80 | | | | | NLI | anli | r2 | validation EN Max acc. | 35.00 | 33.70 | 33.50 | 36.00 | 34.90 | 33.40 | 34.80 39.60 33.20 | 34.20 | 34.00 | 33.50 | 34.70 | 34.80 | 43.00 | 42.00 | 40.20 | 33.40 | 33.50 | 36.10 | 36.80 | 39.50 | 39.40 | 35.40 | 44.10 | 45.00 | 39.30 | | | | NLI | anli | r3 | validation EN Median acc. 32.92 | 33.50 | 33.42 | 33.17 | 33.33 | 33.08 | 34.58 41.33 32.83 | 33.33 | 33.00 | 33.00 | 33.50 | 37.42 | 44.83 | 46.25 | 40.50 | 33.25 | 33.08 | 35.42 | 37.75 | 38.00 | 38.92 | 34.08 | 42.67 | 41.33 | 40.08 | | | | | NLI | anli | r3 | validation EN Max acc. | 34.25 | 35.58 | 33.50 | 33.67 | 33.58 | 33.58 | 36.33 43.75 33.00 | 34.83 | 33.83 | 33.33 | 34.75 | 39.00 | 46.08 | 48.17 | 44.17 | 33.50 | 34.50 | 37.08 | 40.00 | 41.00 | 42.00 | 37.58 | 45.50 | 45.58 | 42.83 | | | | NLI | super_glue | cb | validation EN Median acc. 39.29 | 42.86 | 42.86 | 28.57 | 32.14 | 44.64 | 33.93 76.79 44.64 | 26.79 | 44.64 | 46.43 | 50.00 | 69.64 | 82.14 | 87.50 | 67.86 | 51.79 | 53.57 | 58.93 | 67.86 | 57.14 | 71.43 | 53.57 | 76.79 | 76.79 | 75.00 | | | | | NLI | super_glue | cb | validation EN Max acc. | 41.07 | 60.71 | 48.21 | 42.86 | 51.79 | 57.14 | 42.86 78.57 51.79 | 57.14 | 50.00 | 50.00 | 51.79 | 85.71 | 85.71 | 87.50 | 76.79 | 53.57 | 58.93 | 71.43 | 75.00 | 80.36 | 83.93 | 62.50 | 82.14 | 87.50 | 85.71 | | | | NLI | super_glue | rte | validation EN Median acc. 52.71 | 53.07 | 47.65 | 49.46 | 54.15 | 52.35 | 50.18 83.39 56.68 | 51.26 | 58.84 | 65.70 | 62.09 | 76.90 | 83.03 | 83.75 | 80.14 | 64.26 | 53.07 | 73.29 | 72.56 | 79.06 | 79.06 | 67.15 | 81.95 | 81.23 | 73.65 | | | | | NLI | super_glue | rte | validation EN Max acc. | 53.07 | 54.15 | 52.71 | 53.79 | 57.40 | 55.96 | 54.15 84.84 61.37 | 55.23 | 61.01 | 66.43 | 64.26 | 78.70 | 85.56 | 84.84 | 83.03 | 67.15 | 65.70 | 76.17 | 76.17 | 84.12 | 82.67 | 78.70 | 85.56 | 85.92 | 85.20 | | | | NLI | xnli | ar | validation EN Median acc. 33.33 | 33.82 | 33.57 | 33.98 | 35.94 | 33.82 | 33.78 33.90 35.10 | 35.10 | 33.90 | 40.84 | 38.15 | 49.72 | 56.51 | 57.63 | 54.82 | 39.80 | 41.33 | 46.99 | 48.23 | 51.20 | 49.28 | 47.99 | 54.38 | 51.85 | 46.39 | | | | | NLI | xnli | ar | validation EN Max acc. | 34.98 | 36.95 | 34.78 | 35.90 | 36.59 | 37.99 | 34.46 34.22 39.72 | 38.31 | 37.43 | 41.85 | 42.61 | 51.85 | 57.91 | 58.03 | 56.06 | 44.46 | 46.59 | 50.04 | 53.29 | 53.25 | 55.58 | 50.64 | 60.68 | 58.03 | 55.22 | | | | NLI | xnli | ar | validation HT Median acc. 34.30 | 33.37 | 33.33 | 34.06 | 33.37 | 33.33 | 33.37 33.33 33.65 | 35.62 | 32.57 | 33.37 | 34.34 | 39.32 | 42.93 | 49.16 | 47.55 | 34.78 | 35.30 | 35.82 | 39.36 | 41.85 | 39.00 | 36.35 | 38.63 | 37.31 | 50.32 | | | | | NLI | xnli | ar | validation HT Max acc. | 37.31 | 33.45 | 33.41 | 35.38 | 35.26 | 37.79 | 34.18 33.65 35.50 | 37.79 | 36.02 | 34.14 | 35.14 | 49.56 | 50.04 | 55.22 | 54.82 | 36.47 | 38.27 | 45.06 | 47.39 | 48.07 | 46.10 | 43.21 | 42.29 | 43.01 | 56.71 | | | | NLI | xnli | ar | validation MT Median acc. 33.33 | 33.33 | 33.33 | 33.33 | 33.49 | 35.06 | 33.61 33.33 33.25 | 33.78 | 33.29 | 33.37 | 33.33 | 33.41 | 33.33 | 35.14 | 33.37 | 33.25 | 33.33 | 33.53 | 33.90 | 33.49 | 34.66 | 33.53 | 33.33 | 41.97 | 33.37 | | | | | NLI | xnli | ar | validation MT Max acc. | 34.22 | 33.45 | 35.42 | 33.69 | 34.54 | 36.67 | 36.95 33.45 34.10 | 36.27 | 33.33 | 33.53 | 34.22 | 33.94 | 34.18 | 42.85 | 39.48 | 36.18 | 40.32 | 35.94 | 41.89 | 49.12 | 48.55 | 42.89 | 35.50 | 45.42 | 47.51 | | | | NLI | xnli | bg | validation EN Median acc. 33.37 | 33.33 | 33.33 | 33.41 | 33.33 | 33.37 | 33.13 34.66 34.30 | 34.50 | 33.86 | 40.44 | 41.49 | 52.65 | 59.24 | 59.80 | 56.79 | 37.27 | 35.46 | 38.59 | 39.36 | 43.49 | 41.20 | 41.65 | 47.19 | 43.69 | 41.16 | | | | | NLI | xnli | bg | validation EN Max acc. | 37.23 | 33.45 | 34.66 | 34.78 | 34.62 | 34.66 | 33.90 35.66 39.92 | 36.59 | 37.55 | 42.33 | 43.94 | 54.18 | 59.88 | 59.92 | 58.23 | 39.76 | 40.40 | 42.17 | 43.82 | 43.61 | 44.90 | 43.98 | 48.43 | 46.75 | 46.63 | | | | NLI | xnli | bg | validation MT Median acc. 33.33 | 33.33 | 33.33 | 33.37 | 33.33 | 33.09 | 33.05 33.33 34.34 | 36.63 | 33.33 | 33.69 | 34.10 | 40.72 | 38.39 | 46.67 | 41.93 | 33.33 | 34.74 | 33.33 | 33.33 | 33.49 | 33.65 | 33.86 | 34.10 | 33.41 | 33.41 | | | | | NLI | xnli | bg | validation MT Max acc. | 33.33 | 33.73 | 33.33 | 33.45 | 34.62 | 34.70 | 33.49 33.33 37.19 | 39.44 | 33.65 | 35.54 | 38.31 | 48.55 | 54.70 | 52.53 | 48.23 | 33.33 | 39.80 | 34.10 | 36.27 | 35.02 | 34.66 | 34.58 | 39.20 | 37.43 | 43.98 | | | | NLI | xnli | de | validation EN Median acc. 33.37 | 33.37 | 34.14 | 33.29 | 33.45 | 33.33 | 33.09 43.94 36.10 | 35.14 | 33.94 | 41.37 | 42.25 | 52.93 | 60.12 | 59.84 | 57.03 | 37.71 | 36.27 | 41.12 | 40.76 | 45.86 | 44.22 | 41.85 | 53.13 | 45.18 | 46.43 | | | | | NLI | xnli | de | validation EN Max acc. | 36.10 | 34.02 | 35.02 | 33.65 | 35.18 | 34.26 | 33.65 45.90 40.52 | 35.50 | 35.78 | 42.41 | 44.18 | 54.78 | 60.64 | 60.16 | 58.59 | 39.36 | 40.12 | 42.73 | 45.26 | 46.83 | 48.92 | 47.03 | 54.38 | 53.69 | 50.16 | | | | NLI | xnli | de | validation MT Median acc. 33.13 | 33.29 | 33.37 | 33.37 | 33.33 | 33.41 | 33.41 33.94 34.30 | 39.32 | 33.33 | 33.33 | 33.41 | 37.87 | 37.51 | 35.26 | 33.41 | 33.41 | 34.78 | 33.61 | 34.26 | 33.69 | 33.86 | 35.70 | 39.56 | 36.79 | 39.32 | | | | | NLI | xnli | de | validation MT Max acc. | 36.06 | 33.45 | 33.45 | 33.65 | 34.54 | 34.58 | 36.39 40.08 35.74 | 41.45 | 33.86 | 33.41 | 34.54 | 47.47 | 51.37 | 45.82 | 50.56 | 35.50 | 35.58 | 38.27 | 37.63 | 36.22 | 37.99 | 36.47 | 42.13 | 38.96 | 44.70 | | | | NLI | xnli | el | validation EN Median acc. 33.29 | 33.33 | 33.45 | 33.37 | 33.33 | 33.33 | 33.49 33.57 34.50 | 34.82 | 34.62 | 39.76 | 41.57 | 52.57 | 58.80 | 58.88 | 56.10 | 37.95 | 35.50 | 38.43 | 40.36 | 41.08 | 40.92 | 40.68 | 45.94 | 42.29 | 39.12 | | | | | NLI | xnli | el | validation EN Max acc. | 36.83 | 33.90 | 34.02 | 34.58 | 35.54 | 34.42 | 33.73 34.10 40.08 | 37.31 | 37.43 | 40.92 | 43.94 | 53.78 | 59.00 | 59.20 | 57.35 | 40.96 | 39.32 | 41.81 | 42.61 | 41.53 | 42.89 | 41.89 | 47.43 | 46.55 | 43.05 | | | | NLI | xnli | el | validation MT Median acc. 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.37 | 33.33 33.33 33.53 | 34.70 | 33.21 | 33.33 | 33.33 | 33.29 | 45.06 | 34.78 | 33.49 | 33.33 | 33.33 | 33.41 | 33.33 | 33.78 | 33.33 | 33.37 | 34.82 | 34.74 | 33.61 | | | | | NLI | xnli | el | validation MT Max acc. | 34.22 | 33.41 | 34.62 | 33.33 | 33.37 | 34.90 | 33.37 34.70 39.08 | 36.22 | 33.73 | 33.33 | 39.28 | 34.98 | 51.24 | 42.37 | 35.58 | 33.49 | 33.37 | 35.50 | 33.37 | 34.82 | 34.38 | 33.94 | 37.19 | 37.43 | 35.70 | | | | NLI | xnli | en | validation EN Median acc. 33.49 | 34.38 | 33.61 | 33.61 | 33.57 | 33.29 | 33.49 59.12 36.79 | 35.66 | 34.10 | 43.13 | 40.60 | 55.18 | 61.24 | 61.93 | 60.36 | 44.62 | 39.04 | 49.84 | 52.21 | 55.38 | 54.38 | 46.35 | 60.92 | 57.47 | 55.02 | | | | | NLI | xnli | en | validation EN Max acc. | 36.79 | 35.34 | 34.74 | 35.22 | 37.83 | 33.45 | 36.02 60.16 41.00 | 36.83 | 38.47 | 43.78 | 44.26 | 56.83 | 62.01 | 62.25 | 61.00 | 46.43 | 47.11 | 55.02 | 57.31 | 59.68 | 58.92 | 55.90 | 67.47 | 61.81 | 59.72 | | | | NLI | xnli | es | validation EN Median acc. 33.33 | 33.37 | 33.33 | 33.49 | 33.90 | 33.86 | 34.10 45.70 34.26 | 35.58 | 34.26 | 40.36 | 42.49 | 51.97 | 60.32 | 60.72 | 58.03 | 43.90 | 41.24 | 46.91 | 50.32 | 52.53 | 46.10 | 45.34 | 58.51 | 43.98 | 52.09 | | | | | NLI | xnli | es | validation EN Max acc. | 33.41 | 34.38 | 33.98 | 33.86 | 35.98 | 35.94 | 39.48 48.67 38.96 | 36.27 | 36.75 | 41.93 | 45.34 | 54.78 | 60.80 | 60.92 | 59.40 | 44.98 | 47.55 | 52.97 | 56.14 | 55.10 | 57.35 | 53.73 | 61.24 | 59.12 | 59.32 | | | | NLI | xnli | es | validation HT Median acc. 33.37 | 33.33 | 33.33 | 33.45 | 33.33 | 33.33 | 33.33 33.33 34.66 | 36.75 | 33.33 | 33.41 | 33.37 | 35.66 | 38.76 | 54.74 | 33.33 | 33.37 | 34.10 | 33.33 | 33.45 | 33.33 | 37.63 | 33.33 | 33.37 | 46.55 | 33.37 | | | | | NLI | xnli | es | validation HT Max acc. | 33.49 | 34.86 | 33.33 | 34.86 | 34.94 | 33.37 | 34.94 43.05 35.22 | 42.05 | 33.90 | 33.78 | 36.02 | 39.04 | 58.15 | 60.76 | 51.65 | 37.23 | 38.96 | 48.03 | 53.09 | 48.76 | 53.13 | 52.97 | 56.83 | 56.99 | 58.76 | | | | NLI | xnli | es | validation MT Median acc. 33.45 | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.61 34.22 33.82 | 36.27 | 33.29 | 33.33 | 33.37 | 33.53 | 45.54 | 55.34 | 33.45 | 33.33 | 33.45 | 33.33 | 33.45 | 33.33 | 39.80 | 34.50 | 33.37 | 49.52 | 33.90 | | | | | NLI | xnli | es | validation MT Max acc. | 34.22 | 35.38 | 34.10 | 34.90 | 34.02 | 35.38 | 36.14 37.27 40.08 | 39.00 | 33.53 | 33.57 | 34.14 | 47.67 | 55.38 | 60.12 | 47.23 | 43.21 | 34.46 | 43.01 | 51.24 | 55.10 | 53.86 | 52.21 | 57.71 | 53.82 | 58.96 | | | | NLI | xnli | fr | validation EN Median acc. 33.33 | 33.82 | 33.90 | 33.45 | 34.50 | 34.46 | 33.45 45.38 35.78 | 35.26 | 34.94 | 40.36 | 40.36 | 52.37 | 59.52 | 59.56 | 58.15 | 44.22 | 42.65 | 48.59 | 50.52 | 52.41 | 51.00 | 47.39 | 57.71 | 53.69 | 51.81 | | | | | NLI | xnli | fr | validation EN Max acc. | 35.46 | 35.26 | 34.86 | 33.90 | 37.47 | 37.27 | 36.95 46.71 36.75 | 35.94 | 37.15 | 42.45 | 42.01 | 54.22 | 59.88 | 59.88 | 58.47 | 45.54 | 48.51 | 52.21 | 55.78 | 55.26 | 56.67 | 53.37 | 61.37 | 59.12 | 57.99 | | | | NLI | xnli | fr | validation HT Median acc. 33.49 | 33.82 | 33.53 | 34.82 | 34.22 | 33.98 | 33.90 33.61 35.26 | 36.06 | 33.41 | 33.98 | 34.58 | 35.90 | 39.76 | 55.86 | 33.94 | 33.33 | 35.14 | 37.59 | 34.58 | 46.55 | 48.92 | 42.73 | 49.52 | 51.53 | 48.59 | | | | | NLI | xnli | fr | validation HT Max acc. | 36.27 | 34.02 | 35.70 | 35.50 | 35.82 | 34.34 | 35.06 47.59 41.45 | 36.83 | 34.14 | 38.92 | 37.79 | 47.31 | 57.07 | 58.80 | 50.92 | 43.53 | 47.75 | 46.83 | 51.89 | 53.21 | 53.45 | 47.11 | 58.47 | 56.95 | 55.50 | | | | NLI | xnli | fr | validation MT Median acc. 34.10 | 33.33 | 33.33 | 33.90 | 32.69 | 33.45 | 33.49 33.61 37.07 | 35.58 | 33.53 | 34.78 | 34.38 | 35.10 | 37.79 | 55.66 | 34.98 | 33.73 | 34.26 | 37.83 | 36.67 | 41.41 | 47.23 | 41.85 | 46.95 | 52.01 | 49.00 | | | | | NLI | xnli | fr | validation MT Max acc. | 34.98 | 34.54 | 35.22 | 34.98 | 33.49 | 34.70 | 35.66 34.46 43.37 | 35.90 | 35.26 | 38.27 | 36.51 | 44.02 | 57.83 | 58.71 | 50.32 | 40.48 | 46.35 | 42.13 | 51.41 | 48.84 | 52.41 | 47.91 | 57.91 | 54.82 | 55.86 | | | | NLI | xnli | hi | validation EN Median acc. 33.33 | 33.41 | 33.49 | 34.54 | 33.78 | 33.33 | 33.53 33.33 34.18 | 34.02 | 34.18 | 38.88 | 38.19 | 47.87 | 56.14 | 57.31 | 54.94 | 39.08 | 37.59 | 44.54 | 46.18 | 48.35 | 44.70 | 41.41 | 53.53 | 44.38 | 45.78 | | | | | NLI | xnli | hi | validation EN Max acc. | 34.86 | 34.94 | 34.34 | 35.10 | 37.07 | 35.38 | 36.75 33.65 39.48 | 34.86 | 35.38 | 39.76 | 41.89 | 50.24 | 57.23 | 57.47 | 55.46 | 41.81 | 42.89 | 48.07 | 51.49 | 50.88 | 53.45 | 49.84 | 56.83 | 52.53 | 55.02 | | | | NLI | xnli | hi | validation HT Median acc. 33.41 | 33.33 | 33.29 | 33.33 | 33.33 | 33.33 | 34.18 33.33 34.22 | 34.22 | 33.29 | 36.79 | 37.59 | 39.24 | 47.99 | 44.78 | 34.62 | 37.27 | 35.50 | 36.59 | 34.50 | 40.56 | 37.31 | 43.90 | 44.54 | 49.64 | 47.15 | | | | | NLI | xnli | hi | validation HT Max acc. | 34.62 | 34.46 | 34.14 | 33.33 | 33.37 | 33.41 | 35.98 35.22 38.67 | 37.23 | 34.94 | 37.99 | 41.69 | 48.67 | 56.06 | 56.39 | 53.41 | 40.32 | 41.12 | 41.24 | 43.41 | 42.61 | 47.03 | 49.28 | 60.20 | 52.37 | 52.21 | | | | NLI | xnli | hi | validation MT Median acc. 33.29 | 33.37 | 33.33 | 33.33 | 33.45 | 33.37 | 34.22 33.33 35.22 | 33.33 | 33.37 | 34.86 | 34.26 | 41.61 | 47.59 | 36.39 | 33.45 | 34.54 | 37.39 | 36.71 | 33.33 | 33.94 | 34.50 | 33.90 | 38.51 | 44.14 | 38.07 | | | | | NLI | xnli | hi | validation MT Max acc. | 33.73 | 33.98 | 33.45 | 34.14 | 33.61 | 33.45 | 36.22 33.33 38.31 | 36.91 | 34.30 | 36.79 | 39.36 | 47.23 | 50.24 | 48.59 | 33.69 | 37.15 | 39.04 | 39.08 | 36.18 | 36.27 | 36.55 | 39.88 | 41.61 | 49.32 | 43.94 | | | | NLI | xnli | ru | validation EN Median acc. 33.33 | 33.33 | 33.37 | 33.33 | 33.61 | 33.33 | 33.29 38.15 33.86 | 35.30 | 33.73 | 41.16 | 41.37 | 49.84 | 57.95 | 57.67 | 55.10 | 36.99 | 35.78 | 41.45 | 43.73 | 46.67 | 44.62 | 43.90 | 52.33 | 46.75 | 44.74 | | | | | NLI | xnli | ru | validation EN Max acc. | 36.10 | 34.34 | 35.02 | 33.82 | 35.10 | 35.26 | 34.66 41.24 38.84 | 37.99 | 37.35 | 41.93 | 42.13 | 53.09 | 58.88 | 58.67 | 56.55 | 39.64 | 42.81 | 45.10 | 47.11 | 47.75 | 50.24 | 46.55 | 54.02 | 52.85 | 50.12 | | | | NLI | xnli | ru | validation MT Median acc. 33.41 | 33.33 | 33.33 | 33.37 | 33.41 | 34.14 | 33.37 33.37 34.10 | 36.10 | 35.26 | 33.57 | 33.29 | 39.44 | 39.72 | 33.94 | 34.14 | 33.33 | 33.49 | 33.73 | 33.29 | 33.78 | 34.46 | 33.94 | 42.89 | 38.59 | 41.49 | | | | | NLI | xnli | ru | validation MT Max acc. | 33.98 | 33.53 | 33.61 | 33.82 | 34.86 | 36.79 | 33.45 33.65 37.63 | 38.27 | 36.67 | 35.86 | 35.42 | 42.93 | 56.71 | 48.03 | 38.11 | 33.86 | 33.57 | 36.63 | 33.57 | 38.96 | 37.31 | 34.94 | 45.34 | 46.31 | 43.49 | | | | NLI | xnli | sw validation EN Median acc. 33.25 | 33.82 | 33.45 | 33.53 | 33.98 | 33.33 | 33.21 33.73 34.82 | 34.14 | 33.94 | 38.23 | 39.64 | 45.78 | 55.46 | 55.70 | 53.25 | 37.19 | 35.50 | 41.20 | 41.77 | 43.90 | 42.29 | 36.51 | 50.36 | 43.98 | 37.27 | | | | | | NLI | xnli | sw validation EN Max acc. | 34.82 | 34.94 | 35.46 | 34.46 | 36.75 | 36.55 | 33.73 33.78 37.79 | 35.46 | 35.18 | 39.68 | 40.08 | 49.60 | 55.66 | 56.79 | 53.73 | 38.35 | 41.29 | 44.34 | 47.83 | 46.63 | 48.27 | 43.49 | 52.09 | 50.36 | 50.04 | | | | | NLI | xnli | sw validation HT Median acc. 33.45 | 33.33 | 33.41 | 33.33 | 33.33 | 33.37 | 34.54 33.94 35.02 | 34.58 | 33.41 | 33.33 | 33.57 | 37.75 | 41.73 | 46.95 | 43.25 | 34.54 | 33.53 | 34.02 | 33.41 | 35.70 | 33.61 | 34.10 | 34.98 | 39.40 | 34.54 | | | | | | NLI | xnli | sw validation HT Max acc. | 35.54 | 34.50 | 33.45 | 33.41 | 33.37 | 35.02 | 35.94 34.58 35.42 | 37.19 | 34.02 | 35.46 | 36.59 | 46.31 | 52.37 | 49.60 | 49.68 | 35.14 | 33.98 | 34.94 | 36.10 | 35.94 | 35.58 | 37.19 | 37.71 | 42.85 | 35.02 | | | | | NLI | xnli | sw validation MT Median acc. 33.57 | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.41 33.33 33.25 | 35.18 | 33.33 | 33.33 | 33.33 | 35.98 | 33.33 | 33.37 | 34.58 | 33.33 | 33.53 | 33.57 | 32.97 | 33.41 | 33.37 | 33.33 | 35.82 | 35.10 | 33.37 | | | | | | NLI | xnli | sw validation MT Max acc. | 34.22 | 33.33 | 33.45 | 34.38 | 33.37 | 34.62 | 35.46 34.06 35.02 | 37.11 | 33.57 | 34.82 | 34.78 | 38.03 | 39.64 | 37.55 | 41.45 | 34.34 | 34.98 | 33.94 | 33.53 | 37.03 | 33.41 | 33.41 | 36.27 | 36.83 | 33.57 | | | | | NLI | xnli | th | validation EN Median acc. 33.37 | 33.73 | 33.37 | 33.41 | 33.33 | 33.41 | 33.33 33.69 34.74 | 34.58 | 33.41 | 40.92 | 39.68 | 45.34 | 56.31 | 57.19 | 54.98 | 34.14 | 33.61 | 38.88 | 38.19 | 39.00 | 39.48 | 41.65 | 41.45 | 41.33 | 37.31 | | | | | NLI | xnli | th | validation EN Max acc. | 35.22 | 33.86 | 33.90 | 34.46 | 34.02 | 33.82 | 36.31 34.70 39.88 | 35.46 | 37.55 | 41.97 | 40.80 | 52.13 | 57.43 | 58.03 | 56.02 | 35.50 | 42.93 | 40.36 | 42.93 | 40.12 | 41.08 | 43.17 | 43.78 | 43.98 | 42.29 | | | | NLI | xnli | th | validation MT Median acc. 33.53 | 33.33 | 33.37 | 33.33 | 33.33 | 33.33 | 34.58 33.33 34.06 | 35.54 | 33.25 | 35.34 | 33.57 | 35.78 | 38.55 | 39.20 | 40.32 | 33.57 | 33.33 | 33.29 | 33.65 | 32.53 | 33.78 | 33.94 | 33.29 | 34.10 | 34.02 | | | | | NLI | xnli | th | validation MT Max acc. | 35.46 | 33.69 | 33.69 | 33.41 | 35.18 | 36.39 | 40.44 33.33 35.98 | 36.06 | 33.57 | 35.82 | 33.69 | 40.84 | 52.45 | 57.95 | 49.08 | 34.94 | 34.30 | 34.22 | 33.98 | 34.78 | 34.90 | 35.38 | 36.27 | 37.63 | 36.99 | | | | NLI | xnli | tr | validation EN Median acc. 33.33 | 33.33 | 33.33 | 33.33 | 33.37 | 33.33 | 34.22 34.14 35.26 | 34.62 | 34.26 | 39.16 | 40.76 | 48.71 | 56.75 | 56.39 | 54.34 | 36.95 | 33.98 | 35.46 | 36.06 | 36.14 | 37.47 | 38.31 | 42.73 | 39.76 | 39.08 | | | | | NLI | xnli | tr | validation EN Max acc. | 36.51 | 33.73 | 33.57 | 33.37 | 33.41 | 35.18 | 34.66 34.90 40.24 | 36.79 | 36.51 | 40.28 | 41.29 | 50.56 | 57.59 | 57.67 | 55.38 | 37.31 | 37.51 | 37.15 | 37.23 | 37.55 | 38.71 | 40.44 | 45.70 | 43.78 | 43.78 | | | | NLI | xnli | tr | validation MT Median acc. 33.33 | 33.33 | 33.41 | 33.33 | 33.33 | 33.21 | 33.37 33.49 34.34 | 34.02 | 33.29 | 33.21 | 33.94 | 34.06 | 38.80 | 38.67 | 38.67 | 33.49 | 33.33 | 33.33 | 33.25 | 33.33 | 33.37 | 34.46 | 33.33 | 33.37 | 33.82 | | | | | NLI | xnli | tr | validation MT Max acc. | 33.45 | 33.41 | 34.62 | 34.34 | 33.98 | 33.49 | 34.02 34.02 34.86 | 38.80 | 34.26 | 35.94 | 34.46 | 37.63 | 48.35 | 54.98 | 46.99 | 36.67 | 33.61 | 35.34 | 34.46 | 33.73 | 33.98 | 36.18 | 37.27 | 37.27 | 40.88 | | | | NLI | xnli | ur | validation EN Median acc. 33.05 | 33.61 | 33.37 | 33.69 | 33.41 | 34.42 | 34.02 33.13 33.29 | 34.14 | 34.50 | 36.59 | 37.07 | 46.67 | 54.70 | 54.58 | 53.57 | 36.67 | 35.26 | 39.88 | 42.89 | 43.94 | 40.84 | 40.12 | 49.96 | 45.86 | 40.28 | | | | | NLI | xnli | ur | validation EN Max acc. | 34.10 | 33.69 | 34.58 | 34.78 | 34.30 | 35.82 | 34.26 33.33 38.43 | 34.62 | 35.78 | 38.71 | 39.80 | 47.91 | 55.42 | 55.98 | 54.02 | 38.96 | 41.37 | 44.38 | 49.04 | 46.51 | 49.48 | 45.18 | 50.80 | 51.24 | 51.81 | | | | NLI | xnli | ur | validation HT Median acc. 33.90 | 33.61 | 33.09 | 33.53 | 32.93 | 31.69 | 33.69 33.25 34.86 | 34.10 | 34.30 | 34.70 | 35.74 | 35.50 | 48.92 | 45.42 | 35.66 | 34.18 | 33.37 | 37.95 | 38.35 | 34.26 | 35.62 | 36.10 | 34.62 | 40.44 | 38.92 | | | | | NLI | xnli | ur | validation HT Max acc. | 34.98 | 34.06 | 33.37 | 33.78 | 33.53 | 33.13 | 35.06 33.53 35.78 | 35.86 | 35.14 | 37.23 | 39.88 | 41.12 | 52.17 | 53.82 | 46.87 | 36.79 | 33.86 | 39.92 | 41.49 | 41.77 | 40.00 | 37.67 | 41.77 | 46.39 | 46.95 | | | | NLI | xnli | ur | validation MT Median acc. 33.33 | 33.33 | 33.33 | 33.25 | 33.25 | 33.25 | 33.29 33.45 33.33 | 33.49 | 33.61 | 35.02 | 33.49 | 40.08 | 40.32 | 36.10 | 33.49 | 34.14 | 33.33 | 33.29 | 33.33 | 33.33 | 33.37 | 33.37 | 33.33 | 33.82 | 36.67 | | | | | NLI | xnli | ur | validation MT Max acc. | 34.02 | 33.45 | 33.45 | 33.53 | 33.29 | 33.33 | 34.66 33.57 34.06 | 34.78 | 34.22 | 35.46 | 38.88 | 42.69 | 53.78 | 51.81 | 49.80 | 36.02 | 33.33 | 33.33 | 33.45 | 35.66 | 33.78 | 37.51 | 35.50 | 35.86 | 37.87 | | | | NLI | xnli | vi | validation EN Median acc. 33.33 | 34.42 | 33.37 | 33.45 | 33.57 | 33.69 | 33.49 34.70 34.98 | 34.58 | 35.46 | 39.60 | 39.76 | 52.45 | 58.19 | 58.35 | 56.39 | 43.65 | 40.52 | 46.35 | 46.22 | 50.08 | 46.51 | 43.61 | 55.78 | 48.27 | 49.80 | | | | | NLI | xnli | vi | validation EN Max acc. | 37.79 | 35.26 | 34.54 | 34.82 | 37.91 | 37.19 | 33.73 35.58 38.27 | 35.14 | 36.95 | 40.20 | 41.81 | 53.21 | 58.51 | 58.92 | 57.11 | 44.74 | 47.19 | 51.08 | 53.98 | 52.93 | 54.50 | 51.97 | 61.00 | 55.82 | 57.27 | | | | NLI | xnli | vi | validation HT Median acc. 33.41 | 33.05 | 33.41 | 33.29 | 33.37 | 32.97 | 33.73 33.21 34.02 | 37.63 | 33.13 | 33.78 | 34.98 | 41.37 | 43.57 | 33.45 | 45.78 | 37.23 | 37.79 | 35.94 | 40.92 | 41.24 | 50.28 | 33.57 | 39.60 | 46.55 | 33.61 | | | | | NLI | xnli | vi | validation HT Max acc. | 34.86 | 33.53 | 33.61 | 33.78 | 34.14 | 33.53 | 34.46 33.25 37.51 | 38.63 | 33.61 | 37.99 | 39.56 | 46.31 | 56.14 | 56.75 | 49.24 | 39.20 | 42.21 | 44.14 | 43.29 | 47.59 | 52.65 | 39.76 | 46.99 | 54.82 | 48.03 | | | | NLI | xnli | vi | validation MT Median acc. 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 33.33 33.41 | 33.98 | 33.33 | 33.33 | 33.33 | 33.33 | 33.78 | 33.33 | 33.73 | 33.57 | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | | | | | NLI | xnli | vi | validation MT Max acc. | 34.62 | 33.41 | 33.82 | 34.66 | 34.14 | 34.02 | 35.02 34.06 33.69 | 34.70 | 33.37 | 33.94 | 34.34 | 33.53 | 36.79 | 33.94 | 35.82 | 37.03 | 34.66 | 34.42 | 33.57 | 33.94 | 38.88 | 34.26 | 39.20 | 33.33 | 33.90 | | | | NLI | xnli | zh | validation EN Median acc. 33.41 | 35.06 | 33.41 | 33.33 | 35.54 | 34.10 | 33.73 33.33 34.62 | 34.74 | 36.14 | 41.37 | 38.92 | 44.54 | 57.11 | 57.39 | 53.90 | 43.94 | 39.40 | 48.11 | 47.71 | 51.24 | 47.07 | 47.39 | 56.22 | 44.02 | 48.47 | | | | | NLI | xnli | zh | validation EN Max acc. | 35.14 | 36.31 | 34.22 | 36.95 | 36.67 | 37.27 | 34.86 33.82 41.85 | 35.10 | 37.07 | 42.49 | 40.84 | 50.64 | 59.12 | 58.71 | 55.22 | 44.66 | 47.63 | 51.12 | 54.18 | 53.61 | 54.30 | 52.29 | 56.91 | 55.50 | 56.95 | | | | NLI | xnli | zh | validation HT Median acc. 33.45 | 33.33 | 33.33 | 33.33 | 33.33 | 33.37 | 33.53 33.33 33.25 | 34.70 | 33.13 | 33.78 | 34.10 | 40.24 | 46.83 | 50.40 | 39.00 | 34.70 | 33.37 | 33.45 | 33.69 | 35.38 | 34.46 | 39.32 | 33.69 | 36.95 | 52.89 | | | | | NLI | xnli | zh | validation HT Max acc. | 33.86 | 33.37 | 33.45 | 33.57 | 34.74 | 35.02 | 36.91 33.33 37.63 | 37.51 | 34.02 | 35.22 | 35.66 | 41.89 | 55.98 | 56.99 | 51.49 | 37.43 | 38.96 | 34.06 | 42.09 | 44.10 | 40.64 | 45.10 | 41.37 | 49.24 | 53.69 | | | | NLI | xnli | zh | validation MT Median acc. 33.69 | 33.25 | 33.25 | 32.61 | 34.38 | 33.41 | 33.82 33.33 34.42 | 34.46 | 32.85 | 33.69 | 34.54 | 36.31 | 48.92 | 54.86 | 33.33 | 33.82 | 33.98 | 34.30 | 34.06 | 34.98 | 46.35 | 35.10 | 34.38 | 49.56 | 38.96 | | | | | NLI | xnli | zh | validation MT Max acc. | 35.62 | 33.49 | 33.45 | 33.86 | 34.54 | 34.02 | 34.70 33.33 36.63 | 35.30 | 34.10 | 34.82 | 35.90 | 39.12 | 51.49 | 56.87 | 39.28 | 35.58 | 36.95 | 36.63 | 39.16 | 42.41 | 48.92 | 39.60 | 50.60 | 52.25 | 44.14 | | | | Program synthesis openai_humaneval None test | EN Pass@1 | - | 0.82 | 2.48 | 4.03 | 6.48 | 7.73 | 15.52 - | - | - | - | - | - | 0.00 | - | - | - | 2.18 | 2.62 | 4.38 | 6.29 | 8.06 | 7.23 | 1.55 | 12.06 | 13.55 | 6.13 | | | | | Program synthesis openai_humaneval None test | EN Pass@10 | - | 3.02 | 5.93 | 7.45 | 11.35 | 17.38 | 32.20 - | - | - | - | - | - | 0.00 | - | - | - | 4.11 | 6.22 | 8.73 | 11.94 | 15.03 | 14.46 | 4.12 | 26.53 | 26.26 | 11.79 | | | | | Program synthesis openai_humaneval None test | EN Pass@100 - | 6.23 | 9.62 | 12.75 | 20.43 | 29.47 | 55.45 - | - | - | - | - | - | 0.00 | - | - | - | 9.00 | 11.68 | 16.09 | 19.06 | 27.49 | 25.86 | 9.60 | 48.44 | 47.01 | 18.73 | | | | | | Sent. completion story_cloze | 2016 validation EN Median acc. 51.68 | 50.08 | 47.25 | 48.48 | 47.51 | 49.44 | 50.99 94.71 46.71 | 48.21 | 52.38 | 57.14 | 58.69 | 77.61 | 95.40 | 93.85 | 96.31 | 58.52 | 59.01 | 79.64 | 85.20 | 89.10 | 88.51 | 84.66 | 95.67 | 95.83 | 94.01 | | | | | | | Sent. completion story_cloze | 2016 validation EN Max acc. | 66.27 | 59.43 | 62.05 | 64.30 | 66.44 | 70.92 | 76.22 94.92 52.27 | 57.08 | 54.36 | 57.83 | 59.49 | 79.10 | 96.04 | 94.66 | 96.63 | 60.29 | 62.75 | 82.90 | 87.33 | 90.43 | 89.58 | 87.07 | 96.26 | 96.69 | 94.66 | | | | | | Sent. completion super_glue | copa validation EN Median acc. 55.00 | 55.00 | 57.00 | 54.00 | 62.00 | 69.00 | 55.00 93.00 53.00 | 58.00 | 51.00 | 53.00 | 65.00 | 66.00 | 90.00 | 86.00 | 90.00 | 51.00 | 58.00 | 66.00 | 73.00 | 83.00 | 80.00 | 78.00 | 88.00 | 90.00 | 89.00 | | | | | | | Sent. completion super_glue | copa validation EN Max acc. | 67.00 | 67.00 | 62.00 | 65.00 | 66.00 | 78.00 | 75.00 94.00 54.00 | 66.00 | 57.00 | 55.00 | 65.00 | 72.00 | 93.00 | 88.00 | 91.00 | 52.00 | 63.00 | 69.00 | 76.00 | 86.00 | 84.00 | 81.00 | 91.00 | 91.00 | 91.00 | | | | | | Sent. completion xcopa | et | validation EN Median acc. 57.00 | 53.00 | 50.00 | 53.00 | 50.00 | 53.00 | 51.00 53.00 52.00 | 51.00 | 53.00 | 49.00 | 53.00 | 65.00 | 72.00 | 79.00 | 75.00 | 48.00 | 49.00 | 48.00 | 50.00 | 49.00 | 51.00 | 52.00 | 48.00 | 52.00 | 49.00 | | | | | | Sent. completion xcopa | et | validation EN Max acc. | 58.00 | 58.00 | 56.00 | 57.00 | 57.00 | 57.00 | 52.00 55.00 56.00 | 61.00 | 57.00 | 51.00 | 56.00 | 70.00 | 75.00 | 81.00 | 79.00 | 53.00 | 55.00 | 50.00 | 51.00 | 50.00 | 51.00 | 57.00 | 50.00 | 54.00 | 53.00 | | | | | Sent. completion xcopa | et | validation MT Median acc. 56.00 | 56.00 | 54.00 | 51.00 | 53.00 | 53.00 | 46.00 49.00 55.00 | 56.00 | 53.00 | 50.00 | 54.00 | 60.00 | 74.00 | 76.00 | 75.00 | 48.00 | 53.00 | 47.00 | 49.00 | 47.00 | 47.00 | 48.00 | 49.00 | 48.00 | 46.00 | | | | | | Sent. completion xcopa | et | validation MT Max acc. | 57.00 | 60.00 | 58.00 | 55.00 | 61.00 | 56.00 | 54.00 54.00 58.00 | 64.00 | 56.00 | 52.00 | 55.00 | 69.00 | 79.00 | 79.00 | 77.00 | 50.00 | 54.00 | 48.00 | 53.00 | 48.00 | 52.00 | 52.00 | 51.00 | 52.00 | 53.00 | | | | | Sent. completion xcopa | ht | validation EN Median acc. 52.00 | 48.00 | 48.00 | 52.00 | 53.00 | 52.00 | 56.00 47.00 53.00 | 47.00 | 55.00 | 55.00 | 59.00 | 60.00 | 76.00 | 76.00 | 75.00 | 42.00 | 46.00 | 48.00 | 52.00 | 49.00 | 51.00 | 51.00 | 55.00 | 51.00 | 51.00 | | | | | | Sent. completion xcopa | ht | validation EN Max acc. | 53.00 | 53.00 | 56.00 | 56.00 | 57.00 | 59.00 | 57.00 51.00 63.00 | 54.00 | 60.00 | 59.00 | 62.00 | 66.00 | 79.00 | 79.00 | 77.00 | 49.00 | 52.00 | 51.00 | 62.00 | 54.00 | 52.00 | 55.00 | 58.00 | 55.00 | 56.00 | | | | | Sent. completion xcopa | ht | validation MT Median acc. 53.00 | 52.00 | 54.00 | 50.00 | 55.00 | 53.00 | 57.00 49.00 58.00 | 45.00 | 54.00 | 55.00 | 54.00 | 60.00 | 72.00 | 75.00 | 73.00 | 45.00 | 44.00 | 47.00 | 49.00 | 50.00 | 51.00 | 54.00 | 53.00 | 50.00 | 56.00 | | | | | | Sent. completion xcopa | ht | validation MT Max acc. | 57.00 | 53.00 | 62.00 | 57.00 | 59.00 | 56.00 | 66.00 56.00 58.00 | 52.00 | 60.00 | 60.00 | 58.00 | 61.00 | 81.00 | 78.00 | 80.00 | 47.00 | 51.00 | 54.00 | 64.00 | 52.00 | 54.00 | 56.00 | 56.00 | 54.00 | 58.00 | | | | | Sent. completion xcopa | id | validation EN Median acc. 51.00 | 53.00 | 51.00 | 56.00 | 50.00 | 54.00 | 54.00 55.00 48.00 | 51.00 | 56.00 | 50.00 | 59.00 | 65.00 | 90.00 | 88.00 | 84.00 | 50.00 | 59.00 | 58.00 | 66.00 | 70.00 | 70.00 | 65.00 | 79.00 | 83.00 | 78.00 | | | | | | Sent. completion xcopa | id | validation EN Max acc. | 54.00 | 56.00 | 61.00 | 58.00 | 53.00 | 59.00 | 62.00 58.00 52.00 | 53.00 | 58.00 | 54.00 | 59.00 | 70.00 | 92.00 | 90.00 | 86.00 | 57.00 | 60.00 | 61.00 | 70.00 | 76.00 | 73.00 | 67.00 | 86.00 | 87.00 | 82.00 | | | | | Sent. completion xcopa | id | validation MT Median acc. 52.00 | 55.00 | 51.00 | 59.00 | 53.00 | 53.00 | 56.00 56.00 47.00 | 53.00 | 53.00 | 53.00 | 55.00 | 59.00 | 90.00 | 88.00 | 84.00 | 53.00 | 56.00 | 56.00 | 60.00 | 62.00 | 64.00 | 64.00 | 76.00 | 83.00 | 75.00 | | | | | | Sent. completion xcopa | id | validation MT Max acc. | 58.00 | 60.00 | 60.00 | 61.00 | 58.00 | 57.00 | 61.00 59.00 51.00 | 57.00 | 59.00 | 55.00 | 61.00 | 71.00 | 91.00 | 89.00 | 87.00 | 54.00 | 57.00 | 59.00 | 64.00 | 67.00 | 71.00 | 70.00 | 82.00 | 84.00 | 87.00 | | | | | Sent. completion xcopa | it | validation EN Median acc. 49.00 | 56.00 | 50.00 | 46.00 | 55.00 | 50.00 | 54.00 66.00 55.00 | 54.00 | 53.00 | 55.00 | 60.00 | 64.00 | 87.00 | 85.00 | 87.00 | 51.00 | 51.00 | 45.00 | 50.00 | 59.00 | 57.00 | 57.00 | 72.00 | 69.00 | 69.00 | | | | | | Sent. completion xcopa | it | validation EN Max acc. | 52.00 | 59.00 | 55.00 | 55.00 | 60.00 | 55.00 | 56.00 68.00 58.00 | 55.00 | 57.00 | 61.00 | 61.00 | 69.00 | 90.00 | 88.00 | 90.00 | 52.00 | 55.00 | 48.00 | 53.00 | 61.00 | 62.00 | 60.00 | 74.00 | 72.00 | 74.00 | | | | | Sent. completion xcopa | it | validation MT Median acc. 53.00 | 53.00 | 53.00 | 45.00 | 54.00 | 52.00 | 54.00 63.00 53.00 | 56.00 | 57.00 | 54.00 | 59.00 | 66.00 | 84.00 | 84.00 | 85.00 | 49.00 | 54.00 | 43.00 | 48.00 | 55.00 | 57.00 | 55.00 | 69.00 | 69.00 | 68.00 | | | | | | Sent. completion xcopa | it | validation MT Max acc. | 55.00 | 58.00 | 55.00 | 48.00 | 57.00 | 55.00 | 57.00 72.00 59.00 | 57.00 | 59.00 | 56.00 | 63.00 | 70.00 | 88.00 | 86.00 | 88.00 | 52.00 | 56.00 | 49.00 | 51.00 | 57.00 | 60.00 | 58.00 | 73.00 | 74.00 | 71.00 | | | | | Sent. completion xcopa | qu | validation EN Median acc. 59.00 | 54.00 | 48.00 | 56.00 | 59.00 | 61.00 | 56.00 52.00 48.00 | 48.00 | 47.00 | 52.00 | 54.00 | 53.00 | 58.00 | 54.00 | 48.00 | 54.00 | 44.00 | 52.00 | 51.00 | 45.00 | 48.00 | 50.00 | 49.00 | 51.00 | 51.00 | | | | | | Sent. completion xcopa | qu | validation EN Max acc. | 61.00 | 56.00 | 56.00 | 58.00 | 59.00 | 65.00 | 59.00 58.00 55.00 | 53.00 | 51.00 | 53.00 | 57.00 | 55.00 | 58.00 | 56.00 | 49.00 | 55.00 | 50.00 | 56.00 | 56.00 | 60.00 | 60.00 | 54.00 | 56.00 | 53.00 | 54.00 | | | | | Sent. completion xcopa | qu | validation MT Median acc. 60.00 | 49.00 | 50.00 | 55.00 | 52.00 | 60.00 | 51.00 57.00 47.00 | 53.00 | 49.00 | 53.00 | 54.00 | 54.00 | 56.00 | 54.00 | 53.00 | 53.00 | 47.00 | 50.00 | 45.00 | 51.00 | 46.00 | 50.00 | 49.00 | 50.00 | 51.00 | | | | | | Sent. completion xcopa | qu | validation MT Max acc. | 63.00 | 60.00 | 58.00 | 57.00 | 55.00 | 63.00 | 60.00 60.00 51.00 | 55.00 | 54.00 | 55.00 | 56.00 | 56.00 | 59.00 | 55.00 | 56.00 | 55.00 | 56.00 | 54.00 | 50.00 | 60.00 | 61.00 | 51.00 | 53.00 | 56.00 | 57.00 | | | | | Sent. completion xcopa | sw validation EN Median acc. 50.00 | 51.00 | 52.00 | 53.00 | 48.00 | 52.00 | 58.00 57.00 45.00 | 47.00 | 50.00 | 54.00 | 53.00 | 48.00 | 70.00 | 76.00 | 71.00 | 52.00 | 56.00 | 53.00 | 55.00 | 47.00 | 54.00 | 48.00 | 60.00 | 64.00 | 56.00 | | | | | | | Sent. completion xcopa | sw validation EN Max acc. | 56.00 | 59.00 | 61.00 | 61.00 | 58.00 | 59.00 | 65.00 58.00 58.00 | 50.00 | 53.00 | 59.00 | 54.00 | 53.00 | 73.00 | 81.00 | 74.00 | 55.00 | 64.00 | 55.00 | 66.00 | 63.00 | 60.00 | 58.00 | 64.00 | 66.00 | 58.00 | | | | | | Sent. completion xcopa | sw validation MT Median acc. 53.00 | 49.00 | 49.00 | 49.00 | 53.00 | 53.00 | 60.00 57.00 46.00 | 47.00 | 50.00 | 54.00 | 51.00 | 52.00 | 77.00 | 76.00 | 72.00 | 53.00 | 57.00 | 53.00 | 54.00 | 59.00 | 59.00 | 56.00 | 63.00 | 62.00 | 58.00 | | | | | | | Sent. completion xcopa | sw validation MT Max acc. | 57.00 | 60.00 | 62.00 | 59.00 | 57.00 | 55.00 | 62.00 59.00 54.00 | 52.00 | 55.00 | 58.00 | 54.00 | 53.00 | 79.00 | 78.00 | 75.00 | 56.00 | 61.00 | 57.00 | 62.00 | 60.00 | 61.00 | 62.00 | 64.00 | 67.00 | 61.00 | | | | | | Sent. completion xcopa | ta | validation EN Median acc. 52.00 | 48.00 | 54.00 | 53.00 | 58.00 | 54.00 | 53.00 55.00 57.00 | 50.00 | 57.00 | 60.00 | 59.00 | 60.00 | 84.00 | 78.00 | 79.00 | 48.00 | 54.00 | 52.00 | 55.00 | 55.00 | 59.00 | 69.00 | 67.00 | 66.00 | 66.00 | | | | | | Sent. completion xcopa | ta | validation EN Max acc. | 61.00 | 55.00 | 54.00 | 59.00 | 61.00 | 56.00 | 63.00 62.00 61.00 | 59.00 | 59.00 | 63.00 | 61.00 | 62.00 | 84.00 | 79.00 | 84.00 | 50.00 | 57.00 | 56.00 | 59.00 | 57.00 | 62.00 | 71.00 | 69.00 | 70.00 | 69.00 | | | | | Sent. completion xcopa | ta | validation MT Median acc. 54.00 | 44.00 | 55.00 | 53.00 | 57.00 | 56.00 | 59.00 55.00 50.00 | 52.00 | 57.00 | 60.00 | 61.00 | 55.00 | 77.00 | 74.00 | 71.00 | 46.00 | 52.00 | 50.00 | 54.00 | 61.00 | 56.00 | 63.00 | 62.00 | 63.00 | 63.00 | | | | | | Sent. completion xcopa | ta | validation MT Max acc. | 58.00 | 55.00 | 60.00 | 55.00 | 62.00 | 57.00 | 68.00 58.00 60.00 | 62.00 | 58.00 | 61.00 | 62.00 | 64.00 | 80.00 | 81.00 | 82.00 | 58.00 | 55.00 | 54.00 | 57.00 | 64.00 | 60.00 | 66.00 | 64.00 | 64.00 | 69.00 | | | | | Sent. completion xcopa | th | validation EN Median acc. 54.00 | 53.00 | 53.00 | 54.00 | 52.00 | 50.00 | 55.00 55.00 52.00 | 55.00 | 60.00 | 50.00 | 51.00 | 56.00 | 73.00 | 71.00 | 74.00 | 54.00 | 55.00 | 56.00 | 54.00 | 57.00 | 55.00 | 56.00 | 54.00 | 51.00 | 51.00 | | | | | | Sent. completion xcopa | th | validation EN Max acc. | 55.00 | 57.00 | 53.00 | 56.00 | 54.00 | 52.00 | 59.00 55.00 56.00 | 57.00 | 65.00 | 51.00 | 53.00 | 60.00 | 74.00 | 74.00 | 77.00 | 58.00 | 59.00 | 60.00 | 55.00 | 57.00 | 61.00 | 63.00 | 58.00 | 53.00 | 59.00 | | | | | Sent. completion xcopa | th | validation MT Median acc. 53.00 | 52.00 | 52.00 | 52.00 | 48.00 | 45.00 | 55.00 55.00 52.00 | 58.00 | 56.00 | 51.00 | 53.00 | 54.00 | 71.00 | 72.00 | 72.00 | 54.00 | 50.00 | 56.00 | 55.00 | 51.00 | 51.00 | 55.00 | 52.00 | 51.00 | 54.00 | | | | | | Sent. completion xcopa | th | validation MT Max acc. | 55.00 | 54.00 | 59.00 | 55.00 | 52.00 | 54.00 | 57.00 55.00 58.00 | 63.00 | 59.00 | 55.00 | 57.00 | 58.00 | 77.00 | 76.00 | 76.00 | 57.00 | 58.00 | 59.00 | 63.00 | 56.00 | 56.00 | 56.00 | 57.00 | 53.00 | 61.00 | | | | | Sent. completion xcopa | tr | validation EN Median acc. 48.00 | 56.00 | 54.00 | 51.00 | 52.00 | 49.00 | 49.00 49.00 48.00 | 49.00 | 51.00 | 55.00 | 47.00 | 55.00 | 73.00 | 73.00 | 74.00 | 55.00 | 47.00 | 55.00 | 53.00 | 49.00 | 55.00 | 53.00 | 54.00 | 51.00 | 54.00 | | | | | | Sent. completion xcopa | tr | validation EN Max acc. | 56.00 | 60.00 | 55.00 | 55.00 | 55.00 | 52.00 | 50.00 51.00 51.00 | 56.00 | 58.00 | 58.00 | 49.00 | 57.00 | 79.00 | 76.00 | 76.00 | 58.00 | 55.00 | 59.00 | 58.00 | 53.00 | 56.00 | 55.00 | 57.00 | 54.00 | 55.00 | | | | | Sent. completion xcopa | tr | validation MT Median acc. 53.00 | 55.00 | 50.00 | 50.00 | 48.00 | 42.00 | 51.00 49.00 50.00 | 49.00 | 52.00 | 56.00 | 50.00 | 55.00 | 77.00 | 73.00 | 73.00 | 55.00 | 52.00 | 58.00 | 54.00 | 44.00 | 48.00 | 54.00 | 51.00 | 50.00 | 55.00 | | | | | | Sent. completion xcopa | tr | validation MT Max acc. | 56.00 | 57.00 | 56.00 | 58.00 | 54.00 | 49.00 | 55.00 53.00 56.00 | 58.00 | 57.00 | 60.00 | 57.00 | 58.00 | 79.00 | 76.00 | 74.00 | 61.00 | 54.00 | 59.00 | 61.00 | 48.00 | 51.00 | 58.00 | 54.00 | 53.00 | 56.00 | | | | | Sent. completion xcopa | vi | validation EN Median acc. 51.00 | 60.00 | 50.00 | 59.00 | 54.00 | 59.00 | 56.00 49.00 51.00 | 54.00 | 43.00 | 51.00 | 57.00 | 62.00 | 85.00 | 83.00 | 82.00 | 56.00 | 58.00 | 68.00 | 73.00 | 78.00 | 71.00 | 65.00 | 84.00 | 84.00 | 74.00 | | | | | | Sent. completion xcopa | vi | validation EN Max acc. | 56.00 | 62.00 | 59.00 | 64.00 | 57.00 | 62.00 | 59.00 53.00 52.00 | 61.00 | 54.00 | 52.00 | 63.00 | 68.00 | 87.00 | 85.00 | 84.00 | 61.00 | 63.00 | 70.00 | 77.00 | 79.00 | 72.00 | 67.00 | 87.00 | 91.00 | 79.00 | | | | | Sent. completion xcopa | vi | validation MT Median acc. 52.00 | 61.00 | 52.00 | 57.00 | 59.00 | 62.00 | 57.00 61.00 50.00 | 57.00 | 48.00 | 50.00 | 57.00 | 60.00 | 87.00 | 84.00 | 81.00 | 54.00 | 57.00 | 63.00 | 64.00 | 68.00 | 72.00 | 61.00 | 82.00 | 85.00 | 76.00 | | | | | | Sent. completion xcopa | vi | validation MT Max acc. | 57.00 | 66.00 | 65.00 | 65.00 | 69.00 | 67.00 | 68.00 65.00 54.00 | 61.00 | 52.00 | 52.00 | 63.00 | 64.00 | 88.00 | 84.00 | 83.00 | 57.00 | 57.00 | 67.00 | 65.00 | 74.00 | 77.00 | 64.00 | 84.00 | 89.00 | 81.00 | | | | | Sent. completion xcopa | zh | validation EN Median acc. 55.00 | 51.00 | 49.00 | 57.00 | 51.00 | 60.00 | 56.00 55.00 51.00 | 53.00 | 53.00 | 53.00 | 56.00 | 63.00 | 85.00 | 83.00 | 79.00 | 55.00 | 55.00 | 62.00 | 72.00 | 76.00 | 77.00 | 72.00 | 86.00 | 84.00 | 80.00 | | | | | | Sent. completion xcopa | zh | validation EN Max acc. | 58.00 | 61.00 | 63.00 | 73.00 | 66.00 | 68.00 | 72.00 55.00 52.00 | 65.00 | 54.00 | 58.00 | 58.00 | 65.00 | 89.00 | 86.00 | 79.00 | 61.00 | 61.00 | 66.00 | 73.00 | 80.00 | 80.00 | 77.00 | 90.00 | 86.00 | 82.00 | | | | | Sent. completion xcopa | zh | validation HT Median acc. - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 76.00 | - | 75.00 | - | - | - | | | | Sent. completion xcopa | zh | validation HT Max acc. | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 78.00 | - | 79.00 | - | - | - | | | Sent. completion xcopa | zh | validation MT Median acc. 54.00 | 52.00 | 49.00 | 57.00 | 52.00 | 61.00 | 53.00 55.00 50.00 | 56.00 | 48.00 | 51.00 | 55.00 | 57.00 | 83.00 | 83.00 | 77.00 | 54.00 | 54.00 | 59.00 | 57.00 | 72.00 | 74.00 | 72.00 | 86.00 | 83.00 | 82.00 | | | | | | Sent. completion xcopa | zh | validation MT Max acc. | 63.00 | 62.00 | 58.00 | 67.00 | 66.00 | 67.00 | 73.00 55.00 52.00 | 58.00 | 56.00 | 52.00 | 58.00 | 59.00 | 88.00 | 87.00 | 79.00 | 59.00 | 55.00 | 67.00 | 61.00 | 81.00 | 80.00 | 76.00 | 90.00 | 86.00 | 83.00 | | | | | Sent. completion xstory_cloze | ar | validation EN Median acc. 51.69 | 49.44 | 49.57 | 49.57 | 50.23 | 50.36 | 52.42 48.11 47.98 | 49.97 | 48.11 | 53.61 | 54.53 | 65.92 | 85.37 | 87.23 | 88.29 | 52.75 | 52.08 | 69.49 | 78.23 | 81.47 | 82.13 | 75.18 | 91.26 | 91.59 | 91.86 | | | | | | Sent. completion xstory_cloze | ar | validation EN Max acc. | 52.95 | 51.36 | 52.08 | 54.20 | 56.25 | 59.17 | 65.52 49.64 49.04 | 51.62 | 48.71 | 54.53 | 56.59 | 67.50 | 90.93 | 90.60 | 88.95 | 53.67 | 53.54 | 73.33 | 80.61 | 83.26 | 83.79 | 77.50 | 92.65 | 92.92 | 92.12 | | | | | Sent. completion xstory_cloze | ar | validation MT Median acc. 51.62 | 48.91 | 49.11 | 49.24 | 51.49 | 49.37 | 52.61 51.69 48.71 | 51.36 | 47.58 | 53.08 | 54.86 | 66.71 | 90.54 | 90.14 | 88.29 | 52.55 | 52.28 | 62.48 | 67.24 | 80.01 | 82.46 | 75.12 | 91.13 | 91.73 | 91.79 | | | | | | Sent. completion xstory_cloze | ar | validation MT Max acc. | 53.01 | 51.75 | 52.48 | 54.20 | 55.72 | 58.37 | 65.12 53.47 49.90 | 53.01 | 48.78 | 54.20 | 55.06 | 70.09 | 91.07 | 91.00 | 89.15 | 54.40 | 53.08 | 70.48 | 78.03 | 83.06 | 83.85 | 78.69 | 92.79 | 94.04 | 92.46 | | | | | Sent. completion xstory_cloze | es | validation EN Median acc. 52.28 | 50.89 | 46.86 | 47.72 | 48.58 | 49.83 | 50.56 81.80 47.19 | 48.11 | 53.28 | 54.33 | 55.33 | 73.73 | 89.94 | 89.68 | 93.32 | 55.33 | 55.99 | 69.56 | 81.67 | 86.76 | 86.76 | 78.89 | 93.18 | 94.77 | 92.85 | | | | | | Sent. completion xstory_cloze | es | validation EN Max acc. | 59.36 | 55.20 | 58.77 | 60.95 | 63.07 | 65.25 | 72.34 83.32 49.97 | 52.35 | 55.20 | 54.93 | 55.72 | 74.39 | 92.52 | 93.32 | 93.58 | 55.86 | 58.04 | 77.96 | 85.90 | 88.88 | 88.62 | 82.93 | 94.31 | 95.23 | 93.91 | | | | | Sent. completion xstory_cloze | es | validation MT Median acc. 52.08 | 50.56 | 46.72 | 49.97 | 49.77 | 50.69 | 50.63 80.08 49.44 | 48.05 | 53.87 | 55.00 | 54.60 | 74.45 | 91.20 | 92.85 | 92.72 | 55.53 | 56.06 | 55.72 | 66.25 | 84.98 | 86.57 | 78.49 | 92.98 | 94.90 | 92.85 | | | | | | Sent. completion xstory_cloze | es | validation MT Max acc. | 60.56 | 54.93 | 57.84 | 60.89 | 62.81 | 65.78 | 72.47 82.13 50.63 | 53.47 | 54.86 | 55.39 | 55.33 | 77.17 | 92.52 | 93.38 | 93.98 | 56.45 | 58.37 | 72.53 | 84.51 | 88.95 | 88.82 | 81.60 | 94.37 | 95.50 | 94.44 | | | | | Sent. completion xstory_cloze | eu | validation EN Median acc. 51.56 | 49.31 | 48.25 | 50.30 | 50.36 | 48.38 | 50.96 49.11 45.14 | 49.31 | 51.82 | 51.82 | 50.50 | 64.13 | 86.04 | 84.38 | 90.87 | 49.44 | 46.72 | 57.05 | 67.44 | 70.81 | 70.02 | 67.17 | 85.24 | 85.04 | 85.04 | | | | | | Sent. completion xstory_cloze | eu | validation EN Max acc. | 54.86 | 52.95 | 54.14 | 54.00 | 55.00 | 56.52 | 62.61 51.36 50.83 | 53.01 | 52.95 | 52.88 | 52.55 | 66.64 | 89.48 | 89.68 | 91.33 | 50.56 | 52.22 | 60.49 | 70.95 | 73.33 | 72.67 | 70.42 | 86.90 | 85.90 | 86.70 | | | | | Sent. completion xstory_cloze | eu | validation MT Median acc. 49.83 | 50.69 | 48.64 | 50.50 | 49.31 | 48.44 | 50.89 50.83 46.46 | 50.30 | 51.42 | 52.55 | 51.29 | 65.92 | 89.54 | 87.16 | 91.20 | 47.39 | 45.14 | 49.83 | 50.17 | 59.89 | 65.59 | 66.38 | 81.40 | 82.59 | 82.73 | | | | | | Sent. completion xstory_cloze | eu | validation MT Max acc. | 55.13 | 52.22 | 52.95 | 54.20 | 54.60 | 56.32 | 62.67 52.15 50.96 | 53.47 | 52.28 | 53.67 | 52.61 | 69.03 | 90.60 | 91.13 | 91.66 | 47.52 | 52.35 | 57.91 | 64.13 | 70.81 | 73.26 | 68.63 | 86.76 | 86.83 | 84.25 | | | | | Sent. completion xstory_cloze | hi | validation EN Median acc. 50.96 | 48.11 | 47.65 | 50.17 | 51.42 | 49.44 | 52.42 49.70 46.33 | 52.22 | 51.09 | 52.55 | 50.89 | 67.84 | 89.81 | 87.76 | 88.68 | 53.34 | 51.82 | 68.03 | 75.98 | 75.31 | 74.85 | 68.17 | 87.03 | 87.43 | 87.09 | | | | | | Sent. completion xstory_cloze | hi | validation EN Max acc. | 55.33 | 52.95 | 54.53 | 56.78 | 56.39 | 59.70 | 63.80 50.23 52.02 | 53.61 | 51.62 | 53.87 | 52.15 | 70.68 | 92.32 | 90.73 | 89.41 | 53.74 | 55.20 | 72.87 | 78.89 | 79.48 | 78.82 | 72.20 | 87.89 | 88.68 | 88.35 | | | | | Sent. completion xstory_cloze | hi | validation MT Median acc. 49.64 | 46.99 | 47.72 | 50.23 | 52.35 | 50.56 | 51.09 48.78 47.12 | 52.08 | 51.89 | 54.60 | 50.17 | 69.89 | 91.79 | 88.95 | 87.82 | 54.80 | 50.89 | 55.46 | 65.06 | 73.33 | 75.84 | 68.83 | 86.70 | 87.36 | 86.23 | | | | | | Sent. completion xstory_cloze | hi | validation MT Max acc. | 56.59 | 53.87 | 54.40 | 56.78 | 57.31 | 60.23 | 65.39 50.76 53.14 | 54.86 | 53.01 | 55.00 | 51.16 | 71.08 | 92.19 | 90.14 | 89.15 | 55.79 | 55.92 | 70.75 | 75.25 | 80.61 | 80.41 | 71.61 | 88.42 | 89.15 | 88.09 | | | | | Sent. completion xstory_cloze | id | validation EN Median acc. 52.42 | 48.97 | 45.86 | 47.85 | 50.63 | 52.28 | 52.02 70.28 46.86 | 48.58 | 50.76 | 53.54 | 54.14 | 72.34 | 90.80 | 90.54 | 91.86 | 55.79 | 55.79 | 64.00 | 71.81 | 82.40 | 78.16 | 74.45 | 90.67 | 90.87 | 91.40 | | | | | | Sent. completion xstory_cloze | id | validation EN Max acc. | 59.36 | 55.00 | 57.51 | 58.77 | 60.29 | 63.53 | 69.03 73.06 49.90 | 51.03 | 52.35 | 54.40 | 54.67 | 73.86 | 93.25 | 93.05 | 92.46 | 57.25 | 57.97 | 74.92 | 82.99 | 84.25 | 83.32 | 77.10 | 92.12 | 92.06 | 92.59 | | | | | Sent. completion xstory_cloze | id | validation MT Median acc. 52.15 | 49.90 | 48.58 | 49.70 | 51.03 | 52.81 | 51.89 68.96 47.85 | 48.05 | 50.83 | 54.73 | 54.14 | 74.98 | 92.46 | 91.73 | 91.66 | 52.75 | 54.67 | 53.14 | 58.97 | 68.96 | 82.26 | 73.46 | 89.28 | 90.87 | 90.07 | | | | | | Sent. completion xstory_cloze | id | validation MT Max acc. | 59.63 | 55.33 | 57.97 | 60.29 | 60.89 | 63.60 | 68.76 70.15 49.97 | 51.49 | 53.08 | 57.38 | 54.27 | 75.71 | 93.51 | 92.19 | 92.65 | 57.84 | 56.98 | 69.29 | 78.89 | 83.32 | 84.58 | 75.12 | 91.46 | 92.79 | 91.40 | | | | | Sent. completion xstory_cloze | my validation EN Median acc. 51.42 | 51.49 | 47.32 | 52.02 | 49.64 | 52.48 | 52.68 52.75 46.00 | 48.91 | 50.89 | 51.16 | 51.16 | 63.20 | 82.79 | 84.78 | 86.96 | 46.46 | 46.00 | 48.31 | 46.99 | 49.70 | 49.17 | 50.63 | 49.97 | 51.89 | 50.63 | | | | | | | Sent. completion xstory_cloze | my validation EN Max acc. | 53.21 | 52.61 | 48.44 | 52.95 | 50.56 | 52.61 | 52.95 54.80 50.89 | 50.56 | 50.96 | 51.42 | 51.69 | 65.65 | 87.49 | 86.70 | 88.35 | 47.05 | 46.39 | 51.03 | 49.90 | 50.43 | 51.42 | 51.03 | 52.35 | 52.35 | 52.68 | | | | | | Sent. completion xstory_cloze | my validation MT Median acc. 49.83 | 50.03 | 46.86 | 50.50 | 49.31 | 51.62 | 52.28 48.18 45.47 | 47.39 | 49.57 | 52.28 | 50.03 | 63.07 | 83.45 | 81.07 | 84.32 | 46.06 | 46.06 | 47.39 | 47.58 | 51.42 | 50.56 | 49.90 | 49.83 | 50.23 | 51.22 | | | | | | | Sent. completion xstory_cloze | my validation MT Max acc. | 52.48 | 51.69 | 47.58 | 52.61 | 50.56 | 52.68 | 53.41 50.17 50.50 | 50.83 | 51.82 | 52.75 | 50.23 | 64.66 | 85.57 | 85.90 | 85.44 | 46.59 | 47.05 | 51.09 | 49.04 | 52.55 | 51.56 | 51.49 | 50.56 | 51.42 | 51.95 | | | | | | Sent. completion xstory_cloze | ru | validation EN Median acc. 50.23 | 49.70 | 50.63 | 52.88 | 50.89 | 50.10 | 50.36 65.12 46.19 | 49.04 | 48.38 | 51.82 | 52.75 | 69.49 | 87.49 | 86.30 | 83.65 | 51.29 | 48.44 | 54.00 | 57.78 | 63.27 | 62.08 | 64.06 | 79.02 | 77.56 | 79.42 | | | | | | Sent. completion xstory_cloze | ru | validation EN Max acc. | 60.09 | 50.69 | 51.22 | 54.14 | 52.08 | 52.35 | 56.06 67.50 51.42 | 56.39 | 49.24 | 53.87 | 53.08 | 71.14 | 90.80 | 90.87 | 84.51 | 53.14 | 50.23 | 56.39 | 61.42 | 65.32 | 64.26 | 66.45 | 81.73 | 79.09 | 79.62 | | | | | Sent. completion xstory_cloze | ru | validation MT Median acc. 49.50 | 50.23 | 51.09 | 52.02 | 52.35 | 50.50 | 49.83 61.95 47.52 | 49.24 | 48.97 | 50.63 | 53.01 | 71.28 | 89.54 | 90.01 | 82.86 | 51.03 | 49.11 | 50.17 | 50.10 | 58.31 | 58.77 | 61.02 | 78.42 | 74.19 | 75.98 | | | | | | Sent. completion xstory_cloze | ru | validation MT Max acc. | 60.42 | 50.30 | 51.42 | 53.14 | 53.41 | 52.15 | 55.46 63.93 51.82 | 55.39 | 49.70 | 53.01 | 53.74 | 74.85 | 91.40 | 91.66 | 84.91 | 52.22 | 50.30 | 55.13 | 56.65 | 63.40 | 62.41 | 64.13 | 79.09 | 78.16 | 76.57 | | | | | Sent. completion xstory_cloze | sw validation EN Median acc. 52.08 | 49.90 | 49.64 | 49.83 | 53.08 | 51.89 | 49.31 49.31 46.59 | 49.04 | 53.61 | 53.21 | 53.94 | 67.11 | 86.17 | 87.76 | 89.15 | 49.24 | 49.24 | 55.59 | 67.44 | 67.70 | 66.31 | 58.84 | 77.83 | 79.42 | 75.71 | | | | | | | Sent. completion xstory_cloze | sw validation EN Max acc. | 56.32 | 50.03 | 50.30 | 51.62 | 55.06 | 53.94 | 60.42 51.49 49.31 | 53.21 | 54.53 | 53.34 | 54.73 | 68.83 | 88.82 | 89.61 | 89.61 | 51.36 | 49.24 | 61.28 | 69.69 | 71.67 | 71.01 | 60.82 | 79.81 | 81.14 | 77.76 | | | | | | Sent. completion xstory_cloze | sw validation MT Median acc. 50.69 | 50.83 | 49.83 | 50.76 | 51.49 | 50.30 | 48.84 51.56 46.46 | 47.92 | 52.81 | 53.14 | 53.81 | 69.69 | 87.36 | 88.15 | 89.08 | 48.64 | 49.17 | 49.24 | 50.23 | 53.28 | 58.37 | 55.79 | 70.81 | 73.00 | 70.28 | | | | | | | Sent. completion xstory_cloze | sw validation MT Max acc. | 55.33 | 51.62 | 50.69 | 51.69 | 53.01 | 53.67 | 60.56 52.28 49.37 | 53.94 | 54.20 | 54.40 | 55.53 | 71.14 | 89.41 | 89.15 | 89.34 | 50.50 | 49.97 | 58.44 | 63.73 | 68.43 | 69.69 | 57.18 | 78.36 | 80.01 | 72.60 | | | | | | Sent. completion xstory_cloze | te | validation EN Median acc. 51.29 | 49.90 | 51.95 | 50.23 | 52.68 | 50.83 | 51.69 48.44 49.04 | 49.97 | 53.21 | 54.67 | 56.32 | 64.86 | 85.04 | 86.10 | 86.37 | 52.02 | 51.36 | 63.40 | 70.28 | 72.01 | 70.09 | 62.08 | 80.61 | 80.15 | 77.70 | | | | | | Sent. completion xstory_cloze | te | validation EN Max acc. | 57.38 | 55.33 | 55.92 | 55.79 | 57.58 | 57.91 | 61.61 49.17 54.00 | 53.34 | 53.34 | 55.72 | 57.18 | 68.70 | 89.54 | 90.40 | 87.29 | 53.61 | 55.26 | 66.25 | 73.66 | 74.72 | 73.06 | 63.14 | 81.20 | 82.40 | 79.88 | | | | | Sent. completion xstory_cloze | te | validation MT Median acc. 49.11 | 51.03 | 52.68 | 49.44 | 52.61 | 50.89 | 50.76 49.50 49.77 | 48.97 | 53.21 | 56.25 | 55.13 | 67.50 | 87.16 | 86.90 | 82.20 | 51.16 | 50.36 | 55.33 | 54.73 | 63.73 | 66.98 | 58.31 | 76.17 | 78.29 | 73.79 | | | | | | Sent. completion xstory_cloze | te | validation MT Max acc. | 57.05 | 55.79 | 56.12 | 56.32 | 58.11 | 57.64 | 62.41 49.83 53.61 | 54.00 | 53.67 | 56.92 | 56.85 | 68.89 | 90.54 | 89.41 | 85.51 | 54.86 | 55.86 | 61.88 | 66.38 | 73.59 | 71.54 | 59.30 | 80.28 | 80.54 | 77.37 | | | | | Sent. completion xstory_cloze | zh | validation EN Median acc. 52.08 | 49.83 | 47.39 | 47.85 | 49.17 | 50.30 | 51.36 49.70 47.65 | 53.21 | 56.25 | 54.00 | 56.32 | 68.17 | 91.20 | 91.79 | 92.92 | 54.53 | 57.18 | 76.84 | 82.59 | 84.91 | 83.85 | 76.70 | 91.99 | 92.32 | 91.13 | | | | | | Sent. completion xstory_cloze | zh | validation EN Max acc. | 55.59 | 54.47 | 56.45 | 58.04 | 59.89 | 61.22 | 65.65 50.89 49.04 | 53.94 | 56.78 | 54.60 | 58.84 | 71.74 | 92.72 | 93.05 | 93.18 | 56.52 | 58.17 | 78.69 | 84.32 | 85.04 | 85.84 | 79.62 | 93.12 | 92.85 | 92.26 | | | | | Sent. completion xstory_cloze | zh | validation HT Median acc. - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 81.67 | - | 77.10 | - | - | - | | | | Sent. completion xstory_cloze | zh | validation HT Max acc. | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 85.37 | - | 79.48 | - | - | - | | | Sent. completion xstory_cloze | zh | validation MT Median acc. 52.15 | 49.24 | 47.45 | 47.65 | 50.23 | 51.89 | 53.01 48.05 46.99 | 52.02 | 55.00 | 54.27 | 57.71 | 72.01 | 92.59 | 91.79 | 91.79 | 55.59 | 56.45 | 70.88 | 74.26 | 81.20 | 84.65 | 78.42 | 91.86 | 91.40 | 90.40 | | | | | | Sent. completion xstory_cloze | zh | validation MT Max acc. | 56.12 | 54.33 | 56.59 | 57.38 | 60.09 | 61.22 | 66.64 50.03 48.97 | 54.20 | 57.78 | 55.72 | 59.50 | 72.93 | 93.85 | 93.05 | 93.58 | 56.45 | 56.85 | 77.17 | 80.87 | 85.11 | 85.90 | 80.34 | 92.39 | 92.52 | 91.93 | | | | | Task | Dataset | Config | Split | Prompt Metric | mT0-300M mT0-560M mT0-1.2B mT0-3.7B mT0-13B BLOOMZ-560M BLOOMZ-1.1B BLOOMZ-1.7B BLOOMZ-3B BLOOMZ-7.1B BLOOMZ | | | | | | | | | | | | |-----------------------------------|----------------------------------------|-----------------------------|---------------|-----------------|----------------------------------------------------------------------------------------------------------------|-------|-------|-------|--------|-------|-------|--------|-------|-------|-------|-------| | Extractive QA | craigslist_bargains | bargains | validation EN | Median acc. | 30.49 | 23.95 | 22.61 | 39.61 | 25.96 | 38.94 | 47.99 | 28.14 | 22.86 | 46.48 | 26.47 | | | Extractive QA | craigslist_bargains | bargains | validation EN | Max acc. | 49.41 | 28.14 | 31.32 | 50.92 | 40.54 | 72.53 | 72.36 | 46.90 | 31.32 | 60.47 | 51.76 | | | Grammar Correction blimp_adjunct | island | validation EN | Median acc. | 50.40 | 51.60 | 51.80 | 53.80 | 55.10 | 51.60 | 52.30 | 50.60 | 49.20 | 49.90 | 49.80 | | | | Grammar Correction blimp_adjunct | island | validation EN | Max acc. | 50.90 | 57.00 | 58.00 | 59.10 | 56.80 | 77.10 | 60.90 | 62.30 | 59.90 | 57.60 | 51.60 | | | | Grammar Correction glue | cola | validation EN | Median acc. | 30.97 | 38.26 | 56.57 | 35.19 | 45.83 | 31.26 | 57.81 | 31.16 | 31.35 | 33.27 | 44.58 | | | | Grammar Correction glue | cola | validation EN | Max acc. | 64.33 | 51.01 | 62.80 | 47.17 | 58.29 | 41.71 | 67.98 | 46.40 | 65.39 | 56.86 | 63.37 | | | | Multiple-Choice QA aqua_rat | raw | validation EN | Median acc. | 27.95 | 25.20 | 24.80 | 20.47 | 16.14 | 19.29 | 22.83 | 22.05 | 22.44 | 24.41 | 27.56 | | | | Multiple-Choice QA aqua_rat | raw | validation EN | Max acc. | 29.53 | 26.38 | 25.59 | 21.65 | 18.90 | 20.08 | 24.80 | 22.83 | 22.83 | 25.20 | 28.35 | | | | Multiple-Choice QA codah | codah | train | EN | Median acc. | 25.25 | 25.43 | 26.48 | 55.04 | 75.58 | 24.93 | 24.35 | 57.17 | 64.12 | 73.60 | 80.66 | | | Multiple-Choice QA codah | codah | train | EN | Max acc. | 25.32 | 26.15 | 27.13 | 55.44 | 76.22 | 25.04 | 24.60 | 57.31 | 64.41 | 73.67 | 80.91 | | | Multiple-Choice QA commonsense_qa | qa | validation EN | Median acc. | 31.20 | 37.43 | 36.61 | 56.35 | 69.53 | 43.98 | 38.90 | 69.86 | 84.44 | 83.05 | 80.26 | | | | Multiple-Choice QA commonsense_qa | qa | validation EN | Max acc. | 31.53 | 37.51 | 39.72 | 60.03 | 69.94 | 44.47 | 42.42 | 72.40 | 84.60 | 84.36 | 83.05 | | | | Multiple-Choice QA head_qa | en | validation EN | Median acc. | 24.89 | 24.38 | 23.43 | 27.53 | 36.02 | 26.72 | 27.16 | 27.53 | 30.01 | 38.58 | 53.15 | | | | Multiple-Choice QA head_qa | en | validation EN | Max acc. | 25.55 | 25.62 | 26.87 | 31.55 | 36.16 | 27.75 | 27.67 | 33.31 | 35.21 | 40.92 | 53.95 | | | | Multiple-Choice QA head_qa | es | validation EN | Median acc. | 24.60 | 24.45 | 23.94 | 27.89 | 34.92 | 26.94 | 25.04 | 24.45 | 26.21 | 34.41 | 50.81 | | | | Multiple-Choice QA head_qa | es | validation EN | Max acc. | 26.21 | 26.21 | 24.74 | 29.50 | 37.04 | 28.26 | 26.28 | 29.87 | 33.02 | 39.75 | 51.76 | | | | Multiple-Choice QA math_qa | qa | test | EN | Median acc. | 21.11 | 20.00 | 22.18 | 23.25 | 23.69 | 19.66 | 21.21 | 20.97 | 21.81 | 21.14 | 21.84 | | | Multiple-Choice QA math_qa | qa | test | EN | Max acc. | 22.21 | 26.03 | 35.64 | 24.89 | 26.60 | 45.56 | 27.94 | 35.24 | 43.28 | 38.12 | 47.37 | | | Multiple-Choice QA mwsc | mwsc | validation EN | Median acc. | 50.00 | 52.44 | 54.88 | 60.98 | 74.39 | 53.66 | 52.44 | 56.10 | 58.54 | 62.20 | 71.95 | | | | Multiple-Choice QA mwsc | mwsc | validation EN | Max acc. | 52.44 | 53.66 | 57.32 | 65.85 | 79.27 | 58.54 | 57.32 | 58.54 | 63.41 | 69.51 | 80.49 | | | | Multiple-Choice QA pubmed_qa | labeled | train | EN | Median acc. | 45.55 | 54.50 | 55.75 | 58.35 | 65.35 | 55.75 | 58.90 | 66.75 | 66.80 | 67.15 | 71.80 | | | Multiple-Choice QA pubmed_qa | labeled | train | EN | Max acc. | 48.60 | 57.60 | 58.30 | 58.60 | 66.20 | 57.50 | 63.50 | 72.10 | 69.80 | 69.50 | 74.40 | | | Multiple-Choice QA riddle_sense | sense | validation EN | Median acc. | 24.39 | 22.04 | 23.41 | 29.63 | 43.14 | 22.87 | 24.53 | 30.02 | 35.11 | 39.47 | 50.64 | | | | Multiple-Choice QA riddle_sense | sense | validation EN | Max acc. | 34.48 | 33.30 | 33.01 | 39.18 | 47.50 | 37.41 | 39.86 | 43.58 | 47.60 | 48.09 | 59.26 | | | | Sentiment | amazon_reviews_multi | en | validation EN | Median acc. | 40.60 | 50.80 | 51.12 | 49.00 | 53.24 | 46.52 | 42.46 | 50.48 | 49.88 | 51.00 | 50.90 | | | Sentiment | amazon_reviews_multi | en | validation EN | Max acc. | 41.34 | 53.88 | 54.18 | 55.92 | 57.04 | 50.44 | 47.74 | 55.94 | 53.74 | 55.08 | 54.16 | | | Sentiment | amazon_reviews_multi | es | validation EN | Median acc. | 39.56 | 48.70 | 49.02 | 47.56 | 52.30 | 37.60 | 38.92 | 45.08 | 45.32 | 44.44 | 43.26 | | | Sentiment | amazon_reviews_multi | es | validation EN | Max acc. | 42.66 | 51.00 | 50.42 | 50.68 | 53.58 | 39.10 | 40.24 | 47.98 | 46.28 | 47.76 | 44.48 | | | Sentiment | amazon_reviews_multi | fr | validation EN | Median acc. | 38.74 | 48.44 | 48.32 | 46.12 | 51.12 | 38.78 | 38.38 | 44.36 | 45.84 | 44.92 | 43.92 | | | Sentiment | amazon_reviews_multi | fr | validation EN | Max acc. | 40.66 | 49.64 | 49.70 | 49.30 | 52.40 | 41.16 | 40.04 | 46.66 | 46.80 | 47.42 | 44.90 | | | Sentiment | amazon_reviews_multi | zh | validation EN | Median acc. | 34.74 | 42.38 | 42.58 | 39.66 | 45.30 | 37.54 | 34.44 | 41.10 | 38.78 | 44.78 | 40.48 | | | Sentiment | amazon_reviews_multi | zh | validation EN | Max acc. | 37.88 | 44.36 | 44.74 | 43.66 | 47.14 | 39.48 | 35.24 | 43.52 | 39.64 | 47.12 | 42.10 | | | Sentiment | financial_phrasebank | allagree | train | EN | Median acc. | 18.33 | 28.98 | 28.09 | 25.44 | 35.25 | 31.10 | 29.28 | 34.76 | 35.91 | 34.89 | 24.82 | | Sentiment | financial_phrasebank | allagree | train | EN | Max acc. | 22.22 | 57.51 | 52.25 | 68.15 | 37.77 | 44.79 | 34.81 | 54.37 | 59.23 | 37.15 | 37.23 | | Sentiment | glue | sst2 | validation EN | Median acc. | 79.70 | 83.49 | 83.37 | 82.80 | 93.58 | 87.96 | 83.72 | 92.09 | 94.50 | 94.04 | 93.92 | | | Sentiment | glue | sst2 | validation EN | Max acc. | 81.88 | 87.96 | 86.81 | 91.51 | 94.84 | 92.89 | 89.79 | 94.15 | 95.87 | 94.61 | 95.07 | | | Sentiment | lince | spaeng | validation EN | Median acc. | 43.63 | 43.09 | 49.11 | 41.69 | 54.81 | 58.04 | 53.85 | 52.82 | 50.19 | 58.15 | 59.60 | | | Sentiment | lince | spaeng | validation EN | Max acc. | 56.91 | 56.05 | 56.37 | 55.78 | 56.80 | 58.53 | 55.35 | 56.37 | 54.60 | 58.47 | 60.09 | | | Sentiment | movie_rationales | rationales | validation EN | Median acc. | 63.50 | 78.00 | 81.00 | 69.50 | 90.00 | 93.50 | 97.50 | 98.50 | 98.00 | 97.50 | 98.50 | | | Sentiment | movie_rationales | rationales | validation EN | Max acc. | 94.50 | 95.50 | 98.50 | 99.50 | 100.00 | 98.50 | 97.50 | 100.00 | 99.50 | 99.00 | 99.50 | | | Sentiment | poem_sentiment | sentiment | validation EN | Median acc. | 17.14 | 18.10 | 16.19 | 16.19 | 26.67 | 20.95 | 29.52 | 24.76 | 24.76 | 22.86 | 23.81 | | | Sentiment | poem_sentiment | sentiment | validation EN | Max acc. | 18.10 | 23.81 | 20.00 | 27.62 | 27.62 | 22.86 | 33.33 | 29.52 | 31.43 | 29.52 | 24.76 | | | Summarization | mlsum | es | validation EN | Median BLEU | 0.18 | 0.18 | 0.18 | 0.19 | 0.19 | 0.20 | 0.18 | 0.19 | 0.19 | 0.20 | 0.19 | | | Summarization | mlsum | es | validation EN | Max BLEU | 2.91 | 3.51 | 3.46 | 3.72 | 4.21 | 3.62 | 2.87 | 3.23 | 3.84 | 4.82 | 4.16 | | | Text Classification | art | art | validation EN | Median acc. | 50.85 | 50.85 | 50.46 | 53.33 | 68.99 | 51.50 | 50.07 | 52.68 | 54.57 | 58.42 | 66.58 | | | Text Classification | art | art | validation EN | Max acc. | 51.04 | 51.83 | 51.76 | 56.07 | 69.71 | 52.68 | 50.65 | 54.24 | 57.31 | 61.10 | 67.43 | | | Text Classification | climate_fever | fever | test | EN | Median acc. | 10.62 | 25.28 | 10.94 | 26.78 | 29.97 | 45.34 | 10.36 | 51.92 | 10.81 | 43.97 | 18.63 | | Text Classification | climate_fever | fever | test | EN | Max acc. | 42.41 | 43.78 | 20.98 | 43.32 | 51.01 | 63.97 | 30.94 | 65.54 | 32.12 | 47.69 | 36.61 | | Text Classification | conv_ai_3 | 3 | validation EN | Median acc. | 35.15 | 38.52 | 37.79 | 39.04 | 39.04 | 39.04 | 39.04 | 39.04 | 39.04 | 39.04 | 39.04 | | | Text Classification | conv_ai_3 | 3 | validation EN | Max acc. | 60.35 | 60.96 | 55.69 | 60.96 | 60.96 | 60.96 | 60.96 | 60.96 | 60.96 | 60.96 | 60.96 | | | Text Classification | emotion | emotion | test | EN | Median acc. | 20.75 | 23.83 | 42.20 | 32.38 | 31.35 | 34.72 | 35.57 | 29.93 | 39.77 | 33.05 | 36.70 | | Text Classification | emotion | emotion | test | EN | Max acc. | 32.40 | 24.65 | 46.25 | 33.05 | 34.65 | 46.70 | 42.40 | 49.20 | 49.35 | 50.25 | 45.20 | | Text Classification | health_fact | fact | validation EN | Median acc. | 31.59 | 27.27 | 31.10 | 43.67 | 54.78 | 42.04 | 45.63 | 44.00 | 32.41 | 31.51 | 47.92 | | | Text Classification | health_fact | fact | validation EN | Max acc. | 50.61 | 43.02 | 42.53 | 44.16 | 59.59 | 54.78 | 56.82 | 63.76 | 62.53 | 57.55 | 61.31 | | | Text Classification | hlgd | hlgd | validation EN | Median acc. | 50.65 | 59.45 | 52.88 | 78.15 | 80.72 | 72.89 | 68.63 | 64.14 | 65.39 | 70.57 | 67.57 | | | Text Classification | hlgd | hlgd | validation EN | Max acc. | 63.80 | 73.71 | 65.83 | 79.36 | 84.92 | 74.92 | 72.50 | 73.37 | 68.15 | 81.83 | 78.44 | | | Text Classification | hyperpartisan_news_detection byarticle | train | EN | Median acc. | 46.20 | 49.15 | 52.87 | 52.87 | 43.26 | 62.95 | 63.10 | 63.10 | 63.10 | 63.10 | 63.10 | | | Text Classification | hyperpartisan_news_detection byarticle | train | EN | Max acc. | 49.15 | 50.39 | 54.57 | 53.64 | 44.96 | 63.10 | 63.26 | 63.10 | 63.41 | 63.10 | 63.72 | | | Text Classification | liar | liar | validation EN | Median acc. | 19.47 | 18.07 | 20.40 | 17.68 | 17.91 | 17.60 | 19.31 | 19.39 | 15.19 | 20.79 | 20.87 | | | Text Classification | liar | liar | validation EN | Max acc. | 19.47 | 18.07 | 20.40 | 17.68 | 17.91 | 17.60 | 19.31 | 19.39 | 15.19 | 20.79 | 20.87 | | | Text Classification | onestop_english | english | trsin | EN | Median acc. | 48.32 | 48.15 | 33.33 | 58.20 | 48.32 | 43.39 | 33.51 | 35.80 | 45.33 | 54.67 | 41.80 | | Text Classification | onestop_english | english | trsin | EN | Max acc. | 56.26 | 58.73 | 46.74 | 65.61 | 56.44 | 55.56 | 34.57 | 41.80 | 63.32 | 64.02 | 53.09 | | Text Classification | scicite | scicite | validation EN | Median acc. | 13.97 | 24.56 | 23.14 | 33.08 | 39.63 | 33.08 | 17.90 | 21.62 | 30.57 | 34.28 | 54.91 | | | Text Classification | scicite | scicite | validation EN | Max acc. | 25.11 | 37.23 | 30.57 | 66.16 | 66.16 | 54.91 | 25.98 | 44.10 | 57.21 | 50.33 | 63.43 | | | Topic Classification | banking77 | banking77 | test | EN | Median acc. | 11.30 | 11.53 | 16.27 | 19.51 | 30.10 | 14.38 | 19.29 | 20.81 | 24.19 | 25.39 | 28.57 | | Topic Classification | banking77 | banking77 | test | EN | Max acc. | 15.10 | 12.99 | 19.94 | 23.83 | 30.94 | 16.10 | 20.45 | 26.04 | 28.90 | 26.36 | 29.06 | | Topic Classification | blbooksgenre_title_genre | classifiction validation EN | Median acc. | 26.21 | 35.43 | 35.83 | 49.14 | 32.03 | 41.47 | 25.17 | 30.47 | 27.13 | 74.94 | 77.07 | | | | Topic Classification | blbooksgenre_title_genre | classifiction validation EN | Max acc. | 33.93 | 43.78 | 73.10 | 74.88 | 85.43 | 74.31 | 74.94 | 73.62 | 71.72 | 84.56 | 86.41 | | | | Topic Classification | selqa | analysis | validation EN | Median acc. | 88.34 | 88.54 | 90.00 | 89.30 | 92.61 | 89.81 | 87.71 | 91.08 | 90.83 | 89.24 | 91.46 | | | Topic Classification | selqa | analysis | validation EN | Max acc. | 91.59 | 90.32 | 91.08 | 91.97 | 94.39 | 92.36 | 88.66 | 92.36 | 92.48 | 91.21 | 94.27 | | | Topic Classification | snips_built_in_intents | intents | train | EN | Median acc. | 35.37 | 45.73 | 34.15 | 62.20 | 82.62 | 27.13 | 39.63 | 11.89 | 25.91 | 39.94 | 70.12 | | Topic Classification | snips_built_in_intents | intents | train | EN | Max acc. | 39.02 | 54.27 | 42.07 | 64.63 | 92.68 | 46.34 | 53.96 | 17.68 | 33.84 | 58.23 | 78.66 | | Translation | wmt14_fr_en | en | validation EN | Median BLEU | 5.47 | 11.33 | 17.00 | 23.92 | 29.87 | 4.70 | 4.28 | 6.10 | 12.29 | 8.03 | 26.07 | | | Translation | wmt14_fr_en | en | validation EN | Max BLEU | 10.84 | 19.23 | 23.15 | 29.63 | 33.65 | 21.24 | 25.38 | 26.19 | 27.93 | 29.54 | 33.71 | | | Translation | wmt14_hi_en | en | validation EN | Median BLEU | 1.02 | 3.35 | 5.11 | 9.14 | 18.43 | 1.36 | 0.39 | 1.11 | 1.84 | 3.62 | 10.05 | | | Translation | wmt14_hi_en | en | validation EN | Max BLEU | 2.47 | 7.57 | 12.96 | 19.80 | 26.13 | 9.02 | 11.09 | 12.02 | 14.82 | 17.02 | 21.18 | | ## L Version Control V1 → V2: - Added evaluation results for the validation datasets used for checkpoint selection (Appendix §K) - Added a section on the effect on generation length (Appendix §G) and rewrote parts of §4.5 - Added a mention of xP3x, the extension of xP3 to 277 languages in Appendix §C - Added an example of XNLI to Appendix §H M Prompts used This section describes the prompts used for training and evaluation. In the following, dataset naming conventions follow those used in the Hugging Face datasets library. Since xP3 expands upon the P3 dataset employed by Sanh et al. (2022), we refer the reader to that work for example prompts from datasets that fall within P3. Here, we provide prompts curated for datasets that belong to xP3 but not to P3. The prompts provided are not exhaustive. Code will be released to provide a canonical reference. For each dataset considered, a dataset example is provided for additional context. Next, it is noted if the prompt does not match the original task formulation of the dataset. This is followed by a reference for the data, an input template and a target template. For prompts with predefined answer choices, these are also included. To provide examples of both human-translated and machine-translated prompts, samples of each kind are included for the xnli es dataset. ## Contents | 1 | Prompts 1.1 Simplification | |--------|---------------------------------------------------------| | 1.1.1 | GEM/BiSECT en | | 1.1.2 | GEM/BiSECT es | | 1.1.3 | GEM/BiSECT fr | | 1.2 | Summarization | | 1.2.1 | GEM/wiki_lingua en | | 1.2.2 | GEM/wiki_lingua es | | 1.2.3 | GEM/xlsum bengali | | 1.2.4 | GEM/xlsum english | | 1.3 | Translation | | 1.3.1 | Helsinki-NLP/tatoeba_mt ben-eng | | 1.3.2 | Helsinki-NLP/tatoeba_mt eng-fra | | 1.3.3 | facebook/flores ben_Beng-eng_Latn | | 1.3.4 | facebook/flores ben_Beng-fra_Latn | | 1.4 | Program Synthesis | | 1.4.1 | Muennighoff/mbpp sanitized | | 1.4.2 | codeparrot/apps all | | 1.4.3 | codeparrot/github-jupyter-text-code-pairs | | 1.4.4 | codeparrot/xlcost-text-to-code C++-program-level | | 1.4.5 | codeparrot/xlcost-text-to-code C-program-level | | 1.4.6 | codeparrot/xlcost-text-to-code Csharp-program-level | | 1.4.7 | codeparrot/xlcost-text-to-code Java-program-level | | 1.4.8 | codeparrot/xlcost-text-to-code Javascript-program-level | | 1.4.9 | codeparrot/xlcost-text-to-code PHP-program-level | | 1.4.10 | codeparrot/xlcost-text-to-code Python-program-level | | 1.4.11 | neural_code_search evaluation_dataset | | 1.4.12 | teven/code_contests | | 1.5 | Coreference Resolution | | 1.5.1 | Muennighoff/xwinograd en | |---------------------------------|-------------------------------------------------------------------------------| | 1.5.2 | Muennighoff/xwinograd fr | | 1.6 | Question Answering Multiple Choice | | 1.6.1 | clue c3 | | 1.7 | Question Answering Extractive | | 1.7.1 | clue cmrc2018 | | 1.7.2 | clue drcd | | 1.7.3 | mlqa mlqa.vi.vi | | 1.7.4 | mlqa mlqa.zh.zh | | 1.7.5 | xquad xquad.vi | | 1.7.6 | xquad xquad.zh | | 1.8 | Topic Classification | | 1.8.1 | clue csl | | 1.8.2 | clue tnews | | 1.9 | Code Misc. | | 1.9.1 | codeparrot/codecomplex codeparrot–codecomplex | | 1.9.2 | great_code | | 1.9.3 | teven/code_docstring_corpus top_level | | 1.10 Word Sense Disambiguation | | | 1.10.1 | pasinit/xlwic xlwic_en_zh | | 1.10.2 | pasinit/xlwic xlwic_fr_fr | | 1.11 Paraphrase Identification | | | 1.11.1 | paws-x en | | 1.11.2 | paws-x es | | 1.12 Sentence Completion | | | 1.12.1 | xcopa vi | | 1.12.2 | xcopa zh | | 1.13 Natural Language Inference | | | 1.13.1 | xnli en | | 1.13.2 | xnli es 1.13.2.1 Human-translated prompts 1.13.2.2 Machine-translated prompts | 1.1 Simplification 1.1.1 GEM/BiSECT en Dataset from Kim et al. (2021). Used in training. ## Data Example | Key | Value | |------------|----------------------------------------| | gem_id | BiSECT_en-train-1 | | source | To view any of the video clips belo... | | target | If you want to watch one of the vid... | | references | If you want to watch one of the vid... | Input Template: Split and simplify the following sentence while retaining its full meaning: {{source}} Simplified version: Target Template: Input Template: | {{target}} | |--------------| {{source}} The above sentence is very complicated. Please provide me a simplified synonymous version consisting of multiple sentences: Target Template: {{target}} Input Template: {{source}}. This sentence is hard to understand. A simpler version with equivalent meaning is the following: Target Template: Data Example | {{target}} | |--------------| | 1.1.2 | GEM/BiSECT es | |---------|-----------------| | Key | Value | |------------|----------------------------------------| | gem_id | BiSECT_es-train-1 | | source | Al final de la Santa Misa , mientra... | | target | Al finalizar la santa misa , mientr... | | references | Al finalizar la santa misa , mientr... | Input Template: {{source}}. Esta frase es difícil de entender. Una versión más simple con significado equivalente es la siguiente: Target Template: | {{target}} | |--------------| Input Template: Divida y simplifique la siguiente oración conservando su significado completo: {{source}} Versión simplificada: Target Template: | {{target}} | |--------------| Input Template: | {{source}} La frase anterior es muy complicada. Por favor, proporcione una versión sinónima simplificada que consta de varias oraciones: | |--------------------------------------------------------------------------------------------------------------------------------------------| Target Template: {{target}} 1.1.3 GEM/BiSECT fr Data Example Prompts Input Template: | Key | Value | |------------|----------------------------------------| | gem_id | BiSECT_fr-train-1 | | source | N'ayez pas peur de poser des questi... | | target | Il ne faut pas avoir peur de poser ... | | references | Il ne faut pas avoir peur de poser ... | Divisez et simplifiez la phrase suivante tout en conservant son sens complet : {{source}} Version simplifiée : Target Template: {{target}} Input Template: {{source}} La phrase ci-dessus est très compliquée. Veuillez me fournir une version synonyme simplifiée composée de plusieurs phrases : Target Template: | {{target}} | |--------------| Input Template: {{source}}. Cette phrase est difficile à comprendre. Une version plus simple avec une signification équivalente est la suivante : Target Template: | {{target}} | |--------------| 1.2 Summarization 1.2.1 GEM/wiki_lingua en Dataset from lad (2020). Used in training. Data Example Notes: xsum DOC_write_summary_of_above template | Key | Value | |-----------------|----------------------------------------| | gem_id | wikilingua_multilingual-train-42437... | | gem_parent_id | wikilingua_multilingual-train-42437... | | source_language | en | | target_language | en | | source | Go online and simply search "Decor ... | | target | Take a quiz online to find your sty... | | references | Take a quiz online to find your sty... | | Input Template: {{source}} === Write a summary of the text above in English : | |---------------------------------------------------------------------------------| Target Template: {{target}} Notes: xsum 'article_DOC_summary' template Input Template: Article in English: {{source}} Summary in English: Target Template: {{target}} Notes: xsum 'DOC_how_would_you_rephrase_few_words' template Input Template: {{source}} How would you rephrase that briefly in English? Target Template: {{target}} Notes: xsum 'DOC_tldr' template Input Template: {{source}} TL;DR in English: Target Template: {{target}} Notes: xsum 'read_below_DOC_write_abstract' template Input Template: First, read the English article below. {{source}} Now, please write a short abstract for it in English. Target Template: {{target}} Input Template: {{target}} Given the above abstract, write an English article for it. Target Template: {{source}} Input Template: {{target}} I'm interested in that, but I only have a few mins. Can you give me the first 500 characters of an article about that? Target Template: {{source[:500]}} 1.2.2 GEM/wiki_lingua es Data Example | Key | Value | |-----------------|----------------------------------------| | gem_id | wikilingua_multilingual-train-34808... | | gem_parent_id | wikilingua_multilingual-train-34808... | | source_language | es | | target_language | es | | source | Navega en la web y simplemente busc... | | target | Haz un cuestionario en línea para e... | | references | Haz un cuestionario en línea para e... | Notes: xsum templates Input Template: {{source}} === Write a summary of the text above in Spanish: Target Template: {{target}} Notes: xsum templates Input Template: First, read the Spanish article below. {{source}} Now, please write a short abstract for it in Spanish. Target Template: {{target}} Notes: xsum templates Input Template: | {{source}} TL;DR in Spanish: Target Template: | |-------------------------------------------------| {{target}} Notes: xsum templates | Input Template: Article in Spanish: {{source}} Summary in Spanish: | |----------------------------------------------------------------------| | Target Template: {{target}} Notes: xsum templates | |-----------------------------------------------------| Input Template: {{source}} How would you rephrase that briefly in Spanish? Target Template: {{target}} 1.2.3 GEM/xlsum bengali Dataset from Hasan et al. (2021). Used in training. Data Example | Key | Value | |------------|----------------------------------------| | gem_id | xlsum_bengali-train-2 | | url | https://www.bbc.com/bengali/news-50... | | title | রািশয়ায় ক্ষমতার ২০ বছর েযভােব েকট... | | target | ভ্লািদিমর পুিতন তাঁর ক্ষমতায় থাকার... | | references | ভ্লািদিমর পুিতন তাঁর ক্ষমতায় থাকার... | | text | গত ২০ বছের িতিন রািশয়ার েপৰ্িসেডন্... | Input Template: একিট িনবেন্ধর নীেচর িশেরানাম এবং সারাংশ েদওয়া, একিট েছাট িনবন্ধ ৈতির করুন বা তােদর সােথ েযেত একিট দীঘর্ িনবেন্ধর শুরু করুন। িশেরানাম: {{title}}সারাংশ: {{target}} Target Template: {{text[:500]}} | Input Template: িবষয়বস্তু: {{text[:7000]}} Target Template: {{target}} Input Template: ডক সংিক্ষপ্ত করার জনয্: {{text[:8500]}} Target Template: {{target}} Input Template: ...{{text[3000:3500]}} Target Template: {{text[5000:]}} Input Template: িশেরানাম: {{title}} Target Template: {{text[:7000]}} Input Template: {{text}} Target Template: {{title}} Input Template: | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | িশেরানাম: {{title}} | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Target Template: {{text[:7000]}} Input Template: {{title}}{{text[:5000]}} Target Template: {{target}} Input Template: {{text[:1000]}} Target Template: {{text[1000:5000]}} | 1.2.4 GEM/xlsum english Data Example | Key | Value | |------------|----------------------------------------| | gem_id | xlsum_english-train-2 | | url | https://www.bbc.com/news/uk-scotlan... | | title | Huge tidal turbine installed at Ork... | | target | The massive tidal turbine AK1000 ha... | | references | The massive tidal turbine AK1000 ha... | | text | Atlantis Resources unveiled the mar... | Input Template: Doc to summarize: {{text[:8500]}}\nSummary in the same language as the doc: | Target Template: {{target}} Input Template: | |-----------------------------------------------| Content: {{text[:7000]}}\nThe previous content can be summarized as follows: Target Template: {{target}} Input Template: {{title}}\n{{text[:5000]}}\n\ntl;dr: Target Template: {{target}} Input Template: {{text}} \n\nGive me a good title for the article above. Target Template: {{title}} Input Template: Given the below title and summary of an article, generate a short article or the beginning of a long article to go along with them. Title: {{title}}\nSummary: {{target}}\nArticle (Max 500 characters): Target Template: {{text[:500]}} Input Template: Title: {{title}}\nGiven the above title of an imaginary article, imagine the article.\n Target Template: {{text[:7000]}} Input Template: Title: {{title}}\nGiven the above title of an imaginary article, imagine the article.\n Target Template: {{text[:7000]}} Input Template: {{text[:1000]}}... Continue the article for another 4000 characters max: Target Template: {{text[1000:5000]}} Input Template: ...{{text[3000:3500]}}... Write the rest of the article: Target Template: {{text[5000:]}} 1.3 Translation 1.3.1 Helsinki-NLP/tatoeba_mt ben-eng Dataset from Tiedemann (2020). Used in training. Data Example | Key | Value | |--------------|---------------------------| | sourceLang | ben | | targetlang | eng | | sourceString | Tatoebaর অথর্ কী? | | targetString | What does "Tatoeba" mean? | Input Template: Translate the following text from English to Bengali {{ targetString }} Target Template: {{ sourceString }} Input Template: Translate the following text from Bengali to English {{ sourceString }} Target Template: {{ targetString }} 1.3.2 Helsinki-NLP/tatoeba_mt eng-fra Data Example | Key | Value | |--------------|--------------------------------| | sourceLang | eng | | targetlang | fra | | sourceString | Aah. Now I understand. | | targetString | Ah ! Maintenant, je comprends. | Input Template: Translate the following text from French to English {{ targetString }} Target Template: {{ sourceString }} Input Template: Translate the following text from English to French {{ sourceString }} Target Template: {{ targetString }} 1.3.3 facebook/flores ben_Beng-eng_Latn Dataset from NLLB (2022). Used in training. ## Data Example | Key | Value | |-------------------|----------------------------------------| | id | 2 | | URL | https://en.wikinews.org/wiki/Scient... | | domain | wikinews | | topic | health | | has_image | 0 | | has_hyperlink | 0 | | sentence_ben_Beng | শীষর্ গেবষকরা বলেছন, এিট িনম্ন-আেয়... | | sentence_eng_Latn | Lead researchers say this may bring... | Input Template: | {{sentence_ben_Beng}} | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Target Template: {{sentence_eng_Latn}} Input Template: A text in Bengali: {{sentence_ben_Beng}} Target Template: {{sentence_eng_Latn}} Input Template: {{sentence_ben_Beng}} Target Template: {{sentence_eng_Latn}} | 1.3.4 facebook/flores ben_Beng-fra_Latn Data Example | Key | Value | |-------------------|----------------------------------------| | id | 2 | | URL | https://en.wikinews.org/wiki/Scient... | | domain | wikinews | | topic | health | | has_image | 0 | | has_hyperlink | 0 | | sentence_ben_Beng | শীষর্ গেবষকরা বলেছন, এিট িনম্ন-আেয়... | | sentence_fra_Latn | Selon les chercheurs principaux, ce... | | Input Template: {{sentence_ben_Beng}} Target Template: {{sentence_fra_Latn}} Input Template: {{sentence_ben_Beng}} Target Template: {{sentence_fra_Latn}} Input Template: A text in Bengali: {{sentence_ben_Beng}} Target Template: {{sentence_fra_Latn}} | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| 1.4 Program Synthesis 1.4.1 Muennighoff/mbpp sanitized Dataset from Austin et al. (2021). Used in training. Data Example Input Template: Key Value {{ prompt }} Here is a solution in Python: | source_file | Benchmark Questions Verification V2... | |---------------|-----------------------------------------------------------------| | prompt | Write a python function to identify... def is_not_prime(n): ... | | test_list | assert is_not_prime(2) == False;ass... | Target Template: {{ code }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: {{ prompt }} This can be solved in Python with the following code: Target Template: {{ code }} 1.4.2 codeparrot/apps all Dataset from Hendrycks et al. (2021). Used in training. ## Data Example Prompts | Key | Value | |--------------|----------------------------------------| | problem_id | 1 | | question | Mikhail walks on a Cartesian plane | | solutions | ["q=int(input())\n\nfor e in range(... | | input_output | { "inputs": [ "3\n2 2 3\n4 3 ... | | difficulty | interview | | url | https://codeforces.com/problemset/p... | | starter_code | | Input Template: Solve in Python: {{ question }} | Target Template: {{ solution }} Input Template: {{ question }} Can you solve the above problem using Python? | |----------------------------------------------------------------------------------------------------------------| Target Template: {{ solution }} Input Template: I found an interesting problem on {{url}}: {{ question }} I tried it in Python, but could not do it. Can you solve it? Target Template: {{ solution }} 1.4.3 codeparrot/github-jupyter-text-code-pairs Data Example | Key | Value | |-----------|----------------------------------------| | markdown | Extract the dataset from the compre... | | code | num_classes = 10 np.random.seed(133... | | path | machine-learning/deep-learning/udac... | | repo_name | pk-ai/training | | license | mit | Input Template: "{{ markdown }}" Please write code following the instructions in jupyter notebook style. Target Template: {{ code }} Input Template: I am working on the file "{{ path }}". The first task is: {{ markdown }} Can you write Python code for it? Target Template: {{ code }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: {{ markdown }} Target Template: {{ code }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: {{ code }} Given the above code, generate some markdown instructions for it. Target Template: {{ markdown }} 1.4.4 codeparrot/xlcost-text-to-code C++-program-level Dataset from Zhu et al. (2022). Used in training. Data Example Key Value text Check if a number can be represente... code \#include <bits/stdc++.h> NEW_LINE u... Prompts Input Template: "{{ text }}" Solution in C++: Target Template: {{ code_clean }} Input Template: "{{ text }}" How can the above be solved in C++? Target Template: {{ code_clean }} 1.4.5 codeparrot/xlcost-text-to-code C-program-level Data Example Key Value text Logarithm tricks for Competitive Pr... code \#include <stdio.h> NEW_LINE \#includ... Prompts Input Template: "{{ text }}" Solution in C: Target Template: {{ code_clean }} Input Template: {{ text }} How can the above be solved in C? Target Template: {{ code_clean }} 1.4.6 codeparrot/xlcost-text-to-code Csharp-program-level Data Example text Check if a number can be represente... code using System ; class GFG { static b... Prompts Input Template: "{{ text }}" Solution in C\#: Target Template: {{ code_clean }} Input Template: "{{ text }}" How can the above be solved in C-Sharp? Target Template: {{ code_clean }} 1.4.7 codeparrot/xlcost-text-to-code Java-program-level Data Example Key Value Key Value text Check if a number can be represente... code import java . io . * ; class GFG { ... Prompts Input Template: "{{ text }}" Solution in Java: Target Template: {{ code_clean }} Input Template: "{{ text }}" How can the above be solved in Java? Target Template: {{ code_clean }} 1.4.8 codeparrot/xlcost-text-to-code Javascript-program-level Data Example Key Value text Check if a number can be represente... code function sumOfTwoCubes ( n ) { var ... Prompts Input Template: "{{ text }}" Solution in Javascript: Target Template: {{ code_clean }} Input Template: "{{ text }}" How can the above be solved in JS? Target Template: {{ code_clean }} 1.4.9 codeparrot/xlcost-text-to-code PHP-program-level Data Example text Rearrange the array to maximize the... code < ? php function solve ( $ a , $ n ... Prompts Input Template: "{{ text }}" Solution in php: Target Template: {{ code_clean }} Input Template: "{{ text }}" How can the above be solved in PHP? Target Template: {{ code_clean }} 1.4.10 codeparrot/xlcost-text-to-code Python-program-level Data Example Key Value Key Value text Check if a number can be represente... code import math NEW_LINE def sumOfTwoCu... Input Template: "{{ text }}" Solution in Python: Target Template: {{ code_clean }} Input Template: "{{ text }}" How can the above be solved in Python? Target Template: {{ code_clean }} 1.4.11 neural_code_search evaluation_dataset Dataset from hug (2018). Used in training. ## Data Example | Key | Value | |---------------------|----------------------------------------| | stackoverflow_id | 4616095 | | question | How to get the build/version number... | | question_url | https://stackoverflow.com/questions... | | question_author | Fahad Ali Shaikh | | question_author_url | https://stackoverflow.com/users/565... | | answer | try { PackageInfo pInfo = this.ge... | | answer_url | https://stackoverflow.com/a/6593822 | | answer_author | plus | | answer_author_url | https://stackoverflow.com/users/709... | | examples | 4130029;3398176;2320640 | | examples_url | https://github.com/altanizio/Concei... | Prompts Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: Description: {{ question }} Implementation: Target Template: {{ answer }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: Given the following code: {{ answer }} Describe it: Target Template: {{ question }} 1.4.12 teven/code_contests Data Example | Key | Value | |-------------|----------------------------------------| | name | 1575_A. Another Sorting Problem | | description | Andi and Budi were given an assignm... | | source | 2 | | difficulty | 7 | | solution | #include <bits/stdc++.h> using name... | | language | CPP | Input Template: {{description}} | Target Template: {{solution}} | |---------------------------------| Input Template: Can you solve the below in {{language}}? {{description}} | Target Template: {{solution}} Input Template: {{description}} The above is tricky. Write me a correct solution in {{language}}. Target Template: {{solution}} Input Template: {{description}} Solve the task in {{language}}. Target Template: {{solution}} Input Template: {{description}} Using {{language | lower}} can you solve the prior task? Target Template: {{solution}} Input Template: {{description}} {{solution[:5]}} | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Target Template: {{solution[5:]}} Input Template: {{language}} solution for "{{description}}": Target Template: {{solution}} 1.5 Coreference Resolution 1.5.1 Muennighoff/xwinograd en Dataset from Tikhonov and Ryabinin (2021). Used in evaluation. Data Example | Key | Value | |----------|----------------------------------------| | sentence | The city councilmen refused the dem... | | option1 | The city councilmen | | option2 | the demonstrators | | answer | 2 | Input Template: {{sentence}} Replace the _ in the above sentence with the correct option: - {{option1}} - {{option2}} Target Template: {% if answer == '1' %} {{option1}} {% else %} {{ option2 }} {% endif %} Answer Choices Template: {{option1}} ||| {{option2}} Input Template: Fill in the _ in the below sentence: {{sentence}} Choices: - {{ option1 }} - {{ option2 }} Answer: Target Template: {% if answer == '1' %} {{option1}} {% else %} {{ option2 }} {% endif %} Answer Choices Template: {{option1}} ||| {{option2}} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: The _ in the sentence below refers to {{option1}}. True or False? {{sentence}} Target Template: {{answer_choices[answer|int - 1]}} Answer Choices Template: True ||| False Input Template: {{ sentence }} In the previous sentence, does _ refer to {{ option1 }} or {{ option2 }}? Target Template: {% if answer == '1' %} {{option1}} {% else %} {{ option2 }} {% endif %} Answer Choices Template: {{ option1 }} ||| {{ option2 }} Input Template: {{sentence}} What does the _ in the above sentence refer to? {{ option1 }} or {{ option2 }}? Target Template: {% if answer == '1' %} {{option1}} {% else %} {{ option2 }} {% endif %} Answer Choices Template: {{option1}} ||| {{option2}} Input Template: In the sentence below, does the _ stand for {{answer_choices[0]}} or {{answer_choices[1]}}? {{sentence}} Target Template: {{answer_choices[answer | int - 1]}} Answer Choices Template: {{option1}} ||| {{option2}} 1.5.2 Muennighoff/xwinograd fr Data Example | Key | Value | |----------|----------------------------------------| | sentence | La coupe n'entre pas dans la valise... | | option1 | La coupe | | option2 | la valise | | answer | 2 | Prompts Input Template: {{ sentence }} Dans la phrase précédente, _ fait-il référence à {{ option1 }} ou {{ option2 }} ? Target Template: {% if answer == '1' %} {{option1}} {% else %} {{ option2 }} {% endif %} Answer Choices Template: {{ option1 }} ||| {{ option2 }} Input Template: Dans la phrase ci-dessous, le _ signifie-t-il {{answer_choices[0]}} ou {{answer_choices[1]}} ? {{sentence}} Target Template: {{answer_choices[answer | int - 1]}} Answer Choices Template: {{option1}} ||| {{option2}} Input Template: {{sentence}} Remplacez le _ dans la phrase ci-dessus par l'option correcte : - {{option1}} - {{option2}} Target Template: {% if answer == '1' %} {{option1}} {% else %} {{ option2 }} {% endif %} Answer Choices Template: {{option1}} ||| {{option2}} Input Template: {{sentence}} À quoi le _ dans la phrase ci-dessus fait-il référence ? {{ option1 }} ou {{ option2 }} ? Target Template: {% if answer == '1' %} {{option1}} {% else %} {{ option2 }} {% endif %} Answer Choices Template: {{option1}} ||| {{option2}} Input Template: Le _ dans la phrase ci-dessous fait référence à {{option1}}. Vrai ou faux? {{sentence}} Target Template: {{answer_choices[answer|int - 1]}} Answer Choices Template: Vrai ||| Faux 1.6 Question Answering Multiple Choice 1.6.1 clue c3 Dataset from Sun et al. (2020). Used in training. ## Data Example | Key | Value | |----------|-------------------------------------------------------------------------| | id | 1 | | context | 男:足球比赛是明天上午八点开始吧?;女:因为天气不好,比赛改到后天下午... | | question | 根据对话,可以知道什么? | | choice | 今天天气不好;比赛时间变了;校长忘了时间 | | answer | 比赛时间变了 | Input Template: {% for statement in context %} {{ statement }} {% endfor %} 鉴于上面的对话/段落,问题"{{question}}"的答案是什么 Target Template: {{ answer }} Input Template: 段落:{% for statement in context %} {{ statement }} {% endfor %} 什么样的问题会引起 {{ answer }} 的回答响应? Target Template: {{ question }} Input Template: {% for statement in context %} {{ statement }} {% endfor %} Given the dialogue / passage above, use the following options to answer the question "{{question}}". Options: - {{ answer_choices | join('\n- ') }} Target Template: {{ answer }} Answer Choices Template: {{ choice | join(" ||| ") }} Input Template: {% for statement in context %} ![59_image_0.png](59_image_0.png) {{ statement }} {% endfor %} 鉴于上面的对话/段落,使用以下选项回答问题"{{question}}"。 选项: - {{ answer_choices | join(' - ') }} Target Template: {{ answer }} Answer Choices Template: {{ choice | join(" ||| ") }} Input Template: Passage: {% for statement in context %} {{ statement }} {% endfor %} Question: "{{question}}" Answer choices: {{ answer_choices[:-1] | join(', ') }}, or {{ answer_choices[-1] }}? Target Template: {{ answer }} Answer Choices Template: {{ choice | join(" ||| ") }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: Passage: {% for statement in context %} {{ statement }} {% endfor %} What kind of question would elicit an answer response of {{ answer }}? Target Template: {{ question }} Input Template: 段落:{% for statement in context %} {{ statement }} {% endfor %} 问题:"{{question}}" 答案选择:{{ answer_choices[:-1] | join(', ') }} 还是 {{ answer_choices[-1] }}? Target Template: {{ answer }} Answer Choices Template: {{ choice | join(' ||| ') }} Input Template: {% for statement in context %} {{ statement }} {% endfor %} Given the dialogue / passage above, what is the answer for the question "{{question}}" Answer choices: {{ answer_choices[:-1] | join(', ') }}, or {{ answer_choices[-1] }}? Target Template: {{ answer }} Answer Choices Template: {{ choice | join(' ||| ') }} Input Template: {% for statement in context %} {{ statement }} {% endfor %} 鉴于上面的对话/段落,问题"{{question}}"的答案是什么 答案选择:{{ answer_choices[:-1] | join(', ') }} 还是 {{ answer_choices[-1] }}? Target Template: {{ answer }} Answer Choices Template: {{ choice | join(' ||| ') }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: {% for statement in context %} {{ statement }} {% endfor %} Given the dialogue / passage above, what is the answer for the question "{{question}}" Target Template: {{ answer }} 1.7 Question Answering Extractive 1.7.1 clue cmrc2018 Dataset from Cui et al. (2018). Used in training. Data Example | Key | Value | |----------|----------------------------------------------------------------------| | id | TRAIN_186_QUERY_1 | | context | 范廷颂枢机(,),圣名保禄·若瑟(),是越南罗马天主教枢机。1963年... | | question | 1990年,范廷颂担任什么职务? | | answers | {'text': ['1990年被擢升为天主教河内总教区宗座署理'],... | Input Template: 问:{{ question }}你能写一些上下文来回答这个问题吗? Target Template: {{ context }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: Given this context "{{ context }}", generate a question that would return the answer of "{{ answers['text'][0] }}". Target Template: {{ question }} Input Template: {{ context }} {{ question }} 的答案在上面的段落中。它是什么? Target Template: {{ answers['text'][0] }} Input Template: In an exam, you are asked {{ question }}, and you are tasked to find the answer from the following passage. {{ context }} What's the answer? Target Template: {{ answers['text'][0] }} Input Template: {{ context }} The answer to {{ question }} is in the passage above. What is it? Target Template: {{ answers['text'][0] }} Input Template: Answer the question using the given context. Question: {{ question }} Context: {{ context }} Answer: Target Template: {{ answers['text'][0] }} Input Template: Q: {{ question }} Can you write some context to answer the question? Target Template: {{ context }} Input Template: {{ context[:answers["answer_start"][0]-5] }}... How would you continue the prior text to answer "{{ question }}"? Target Template: {{ context[answers["answer_start"][0]-5:] }} Input Template: {{ context[:answers["answer_start"][0]-5] }}... 你将如何继续前面的文本来回答"{{ question }}"? Target Template: {{ context[answers["answer_start"][0]-5:] }} Input Template: 使用给定的上下文回答问题。 问题:{{ question }} 上下文:{{ context }} 答案: Target Template: {{ answers['text'][0] }} Input Template: 在考试中,你被问到 {{ question }},你的任务是从以下段落中找到答案。 {{ context }} 答案是什么? Target Template: {{ answers['text'][0] }} Input Template: 给定这个上下文"{{ context }}",生成一个返回"{{ answers['text'][0] }}"答案的问题。 Target Template: {{ question }} 1.7.2 clue drcd Dataset from Xu et al. (2020). Used in training. Data Example | Key | Value | |----------|-----------------------------------------------------------------------| | id | 1001-10-2 | | context | 2010年引進的廣州快速公交運輸系統,屬世界第二大快速公交系統,日常載... | | question | 從哪一天開始在廣州市�騎摩托車會被�收? | | answers | {'text': ['2007年1月16日'], 'answer_st... | Input Template: {{ context }} {{ question }} 的答案在上面的段落中。它是什么? Target Template: {{ answers['text'][0] }} | Input Template: Answer the question using the given context. Question: {{ question }} Context: {{ context }} Answer: | |------------------------------------------------------------------------------------------------------------------------| Target Template: {{ answers['text'][0] }} | Input Template: {{context[:answers["answer_start"]-5]}}... 你将如何继续前面的文本来回答"{{ question}}"? | |------------------------------------------------------------------------------------------------------------| Target Template: {{context[answers["answer_start"]-5:]}} Input Template: 在考试中,你被问到 {{ question }},你的任务是找到回答问题的段落。写这样一段话: Target Template: {{ context }} Input Template: {{ context }} The answer to {{ question }} is in the passage above. What is it? Target Template: {{ answers['text'][0] }} Input Template: 在考试中,你被问到 {{ question }},你的任务是从以下段落中找到答案。 {{ context }} 答案是什么? Target Template: {{ answers['text'][0] }} Input Template: 给定这个上下文"{{ context }}",生成一个返回"{{ answers['text'][0] }}"答案的问题。 Target Template: {{ question }} Input Template: {{context[:answers["answer_start"]-5]}}... How would you continue the prior text to answer "{{ question }}"? Target Template: {{context[answers["answer_start"]-5:]}} Input Template: 使用给定的上下文回答问题。 问题:{{ question }} 上下文:{{ context }} 答案: Target Template: {{ answers['text'][0] }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: Given this context "{{ context }}", generate a question that would return the answer of "{{ answers['text'][0] }}". Target Template: {{ question }} Input Template: In an exam, you are asked {{ question }}, and you are tasked to find the answer from the following passage. {{ context }} What's the answer? Target Template: {{ answers['text'][0] }} Input Template: In an exam, you are asked {{ question }}, and you are tasked to find a passage answering the question. Write such a passage: Target Template: {{ context }} 1.7.3 mlqa mlqa.vi.vi Dataset from Lewis et al. (2019). Used in training. Data Example | Key | Value | |----------|-------------------------------------------| | context | Thành phố Miêu Lật tiếng Trung:苗栗市,... | | question | Miaoli có tỷ lệ cao loại người nào? | | answers | {'answer_start': [311], 'text': ['K... | | id | 2f0d6ff162619164bb113c0cadbcca06a50... | Input Template: | {{ context[:answers.answer_start[0]-5]}} ... Tiếp tục ở trên, sao cho nó trả lời "{{question}}": Target Template: {{ context[answers.answer_start[0]-5:]}} Input Template: {{context}} Với sự tham chiếu đến ngữ cảnh trên, {{question}} Target Template: {{answers.text[0]}} | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Input Template: {{context}} H: {{question}} Đề cập đến đoạn văn trên, câu trả lời đúng cho câu hỏi đã cho trong ngôn ngữ của đoạn văn là Target Template: {{answers["text"][0]}} Input Template: Câu hỏi: {{question}} Ngữ cảnh: {{context}} Câu trả lời từ ngữ cảnh: | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Target Template: {{answers.text[0]}} Input Template: Tham khảo đoạn văn dưới đây và sau đó trả lời câu hỏi sau đó bằng ngôn ngữ tương tự như đoạn văn: Đoạn: {{context}} Câu hỏi: {{question}} Target Template: {{answers["text"][0]}} Input Template: Tôi đã tìm thấy một văn bản trả lời "{{question}}" bằng {{answers.text[0]}}. Nó bắt đầu bằng "{{ context[:10] }}". Bạn có thể tiếp tục nó không? Target Template: {{ context[10:] }} Input Template: Đọc đoạn văn sau và sau đó trả lời câu hỏi tiếp theo bằng cách trích một phần đúng trong đoạn văn: {{context}} {{question}} Target Template: {{answers.text[0]}} Input Template: D: {{context}} H: {{question}} Target Template: A: {{answers["text"][0]}} 1.7.4 mlqa mlqa.zh.zh Data Example | Key | Value | |----------|---------------------------------------------------------------------------| | context | 楚河州包括有整个楚河河谷及邻近的山脉与峡谷。河谷的黑土非常肥沃,而且被... | | question | 哪水体有助土地如此多产? | | answers | {'answer_start': [36], 'text': ['楚河... | | id | 1aee17dd937cc1043e3ff47c38396541fc3... | Input Template: 阅读下面的短文,然后从短文中选出正确的部分来回答下面的问题: {{context}} {{question}} Target Template: {{answers.text[0]}} Input Template: {{ context[:answers.answer_start[0]-5]}}... 继续上述操作,使其回答"{{question}}": Target Template: {{ context[answers.answer_start[0]-5:]}} Input Template: D:{{context}} 问:{{question}} 答: Target Template: {{answers["text"][0]}} Input Template: 我找到了一个用 {{answers.text[0]}} 回答"{{answers.text[0]}}"的文本。它以"{{ context[:10] }}"开头。可以继续吗? Target Template: {{ context[10:] }} Input Template: 问题:{{question}} 上下文:{{context}} 从上下文中回答: Target Template: {{answers.text[0]}} Input Template: 参考下面的段落,然后用与段落相同的语言回答问题: 段落:{{context}} 问题:{{question}} Target Template: {{answers["text"][0]}} Input Template: {{context}} 参考上述上下文,{{question}} Target Template: {{answers.text[0]}} Input Template: {{context}} 问:{{question}} 参考上面的段落,用该段落的语言对给定问题的正确答案是 Target Template: {{answers["text"][0]}} 1.7.5 xquad xquad.vi Dataset from Artetxe et al. (2019). Used in training. Data Example | Key | Value | |----------|----------------------------------------| | id | 56beb4343aeaaa14008c925c | | context | Đội thủ của Panthers chỉ thua 308 đ... | | question | Jared Allen có bao nhiêu lần vật ng... | | answers | {'text': ['136'], 'answer_start': [... | Input Template: {{context}} Với sự tham chiếu đến ngữ cảnh trên, {{question}} Target Template: {{answers.text[0]}} Input Template: Đưa ra câu trả lời {{answers.text[0]}} cho {{question}}, hãy viết một văn bản giải thích điều này. Câu trả lời phải bắt đầu ở số ký tự {{answers.answer_start[0]}}. Văn bản: | Target Template: {{context}} | |--------------------------------| | Input Template: | |-------------------| {{question}} Rõ ràng là {{answers.text[0]}}. Bạn có thể cung cấp cho tôi một số bối cảnh? Target Template: {{context}} Input Template: {{context}} H: {{question}} Đề cập đến đoạn văn trên, câu trả lời chính xác cho câu hỏi được đưa ra là Target Template: {{answers["text"][0]}} Input Template: {{context}} H: {{question}} A: Target Template: {{answers["text"][0]}} Input Template: Đọc đoạn văn sau và trả lời câu hỏi sau: {{context}} {{question}} Target Template: {{answers.text[0]}} Input Template: Tham khảo đoạn văn dưới đây và trả lời câu hỏi sau: Đoạn: {{context}} Câu hỏi: {{question}} Target Template: {{answers["text"][0]}} Input Template: | Input Template: {{context}} Từ đoạn văn trên, một câu hỏi hợp lý với "{{answers["text"][0]}}" như câu trả lời sẽ là: | |------------------------------------------------------------------------------------------------------------------------| Target Template: {{question}} Input Template: | {{context}} | |---------------| Tạo câu hỏi từ đoạn văn trên: Target Template: {{question}} 1.7.6 xquad xquad.zh Data Example | Key | Value | |----------|-------------------------------------------------------------------| | id | 56beb4343aeaaa14008c925c | | context | 黑豹队的防守只丢了 308分,在联赛中排名第六,同时也以 24 次拦截... | | question | 贾里德在职业生涯中有多少次擒杀? | | answers | {'text': ['136 次'], 'answer_start':... | Input Template: 阅读下面的短文,回答下面的问题: {{context}} {{question}} Target Template: {{answers.text[0]}} Input Template: {{context}} 问:{{question}} 参考上面的段落,给定问题的正确答案是 Target Template: {{answers["text"][0]}} Input Template: 参考下面的短文,回答下列问题: 段落:{{context}} 问题:{{question}} Target Template: {{answers["text"][0]}} Input Template: {{context}} | 从上面的段落中,一个以"{{answers["text"][0]}}" 为答案的合理问题将是: | |-------------------------------------------------------------------------| Target Template: {{question}} Input Template: {{context}} 从上面的段落中产生一个问题: | Target Template: {{question}} Input Template: {{context}} 参考上述上下文,{{question}} Target Template: {{answers.text[0]}} Input Template: {{context}} 问:{{question}} 答: Target Template: {{answers["text"][0]}} | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| 1.8 Topic Classification 1.8.1 clue csl Data Example | Key | Value | |-----------|---------------------------------------------------------------------| | idx | 1 | | corpus_id | 2565 | | abst | 针对核函数参数选择的重要性,提出了粒子群(PSO)模式搜索算法来搜索最... | | label | -1 | | keyword | 模式搜索;支持向量机;核参数选取 | Prompts Input Template: After John wrote the abstract "{{abst}}", he wrote these keywords "{{ keyword | join(', ') }}". Do you think his choice of keywords was correct? Answer {{ answer_choices[1]}} or {{ answer_choices[0]}}. Target Template: {{ answer_choices[label] }} Answer Choices Template: no ||| yes Input Template: Do these keywords "{{ keyword | join(', ') }}" represent key concepts in the abstract "{{ abst }}"? Target Template: {{ answer_choices[label] }} Answer Choices Template: no ||| yes Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: Given the abstract {{abst}}, list out {{ keyword | length }} keywords for it. Target Template: {% if label == 1 %} {{ keyword | join(', ') }} {% endif %} Input Template: 一位学者使用"{{ keyword | join(', ') }}"作为搜索词。你认为搜索引擎会返回摘要 "{{abst}}"吗?回答 {{ answer_choices[1] }} 或 {{ answer_choices[0] }}。 Target Template: {{ answer_choices[label] }} Answer Choices Template: 不 ||| 是的 Input Template: 给定抽象{{abst}},列出{{ keyword | length }} 关键字。 Target Template: {% if label == 1 %} {{ keyword | join(', ') }} {% endif %} Input Template: 写一篇关于"{{ keyword | join(', ') }}"的摘要: Target Template: {% if label == 1 %} {{abst}} {% endif %} Answer Choices Template: 不 ||| 是的 Input Template: 在约翰写完摘要"{{abst}}"之后,他写了这些关键字"{{ keyword | join(', ') }}"。你认为 他选择的关键词是正确的吗?回答 {{ answer_choices[1]}} 或 {{ answer_choices[0]}}。 Target Template: {{ answer_choices[label] }} Answer Choices Template: 不 ||| 是的 Input Template: 这些关键字"{{ keyword | join(', ') }}"是否代表抽象"{{ abst }}"中的关键概念? Target Template: {{ answer_choices[label] }} Answer Choices Template: 不 ||| 是的 Input Template: A scholar used "{{ keyword | join(', ') }}" as search terms. Do you think the search engine would return the abstract "{{abst}}"? Answer {{ answer_choices[1] }} or {{ answer_choices[0] }}. Target Template: {{ answer_choices[label] }} Answer Choices Template: no ||| yes Input Template: Write an abstract about "{{ keyword | join(', ') }}": Target Template: {% if label == 1 %} {{abst}} {% endif %} Answer Choices Template: no ||| yes 1.8.2 clue tnews Data Example Input Template: Prompts | Key | Value | |----------|-----------------------------------------------| | sentence | 买套房不香吗?为什么会有人愿花600万买部手机? | | label | -1 | | idx | 1 | 将标题"{{ sentence }}"分为以下主题: - {{ answer_choices | join('\n- ') }} 主题: Target Template: {{ answer_choices[label] }} Answer Choices Template: 故事 ||| 文化 ||| 娱乐 ||| 运动的 ||| 金融 ||| 房地产 ||| 车 ||| 教育 ||| 技术 ||| 军队 ||| 旅行 ||| 世界新闻 ||| 股票 ||| 农业 ||| 游戏 Input Template: Classify the title "{{ sentence }}" into the following topics: - {{ answer_choices | join('\n- ') }} Topic: Target Template: {{ answer_choices[label] }} Answer Choices Template: story ||| culture ||| entertainment ||| sports ||| finance ||| real estate ||| car ||| education ||| tech ||| military ||| travel ||| world news ||| stock ||| agriculture ||| game Input Template: Given the topics of {{answer_choices[:-1] | join(', ') }}, and {{ answer_choices[-1] }}, specify which of them best represents the following sentence: {{ sentence }} Best: Target Template: {{ answer_choices[label] }} Answer Choices Template: story ||| culture ||| entertainment ||| sports ||| finance ||| real estate ||| car ||| education ||| tech ||| military ||| travel ||| world news ||| stock ||| agriculture ||| game Input Template: 以下新闻标题"{{ sentence }}"属于什么主题? {{ answer_choices[0] | capitalize }}, {{ answer_choices[1:-1] | join(', ') }} 还是 {{ answer_choices[-1] }}? Target Template: {{ answer_choices[label] }} Answer Choices Template: 故事 ||| 文化 ||| 娱乐 ||| 运动的 ||| 金融 ||| 房地产 ||| 车 ||| 教育 ||| 技术 ||| 军队 ||| 旅行 ||| 世界新闻 ||| 股票 ||| 农业 ||| 游戏 Input Template: 鉴于 {{answer_choices[:-1] | join(', ') }} 和 {{ answer_choices[-1] }},指定它们中 的哪一个最能代表以下句子: {{ sentence }} 最佳: Target Template: {{ answer_choices[label] }} Answer Choices Template: 故事 ||| 文化 ||| 娱乐 ||| 运动的 ||| 金融 ||| 房地产 ||| 车 ||| 教育 ||| 技术 ||| 军队 ||| 旅行 ||| 世界新闻 ||| 股票 ||| 农业 ||| 游戏 Input Template: What topic does the following news title "{{ sentence }}" belong to? {{ answer_choices[0] | capitalize }}, {{ answer_choices[1:-1] | join(', ') }}, or {{ answer_choices[-1] }}? Target Template: {{ answer_choices[label] }} Answer Choices Template: story ||| culture ||| entertainment ||| sports ||| finance ||| real estate ||| car ||| education ||| tech ||| military ||| travel ||| world news ||| stock ||| agriculture ||| game 1.9 Code Misc. 1.9.1 codeparrot/codecomplex codeparrot–codecomplex Data Example | Key | Value | |------------|--------------------------------------| | src | import java.util.Scanner; public ... | | complexity | linear | | problem | 1197_B. Pillars | | from | CODEFORCES | Input Template: {{ code }} What is the time complexity of the previous code? Target Template: {{ complexity }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: Identify the time complexity of the following code as constant, linear, quadratic, cubic, log(n), nlog(n) or NP-hard. {{ code }} Complexity: Target Template: {{ complexity }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: {{ code }} Which one is the correct time complexity of the code snippet: constant, linear, quadratic, cubic, log(n), nlog(n) or NP-hard? Target Template: {{ complexity }} 1.9.2 great_code Dataset from Hellendoorn et al. (2020). Used in training. Data Example | Key | Value | |-------------------|----------------------------------------| | id | 1 | | source_tokens | #NEWLINE#;def test_get_params(;self... | | has_bug | True | | error_location | 76 | | repair_candidates | 2;76;4;11;18;22;30;40;48;58;66;80;8... | | bug_kind | 1 | | bug_kind_name | VARIABLE_MISUSE | | repair_targets | 4;11;18;22;30;40;48;58;66;80;88;103... | | edges | [{'before_index': 1, 'after_index':... | | provenances | {'datasetProvenance': {'datasetName... | Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: ![83_image_0.png](83_image_0.png) ", "(") | replace(" )", ")") | replace("[ ", "[") | replace(" ]", "]")}} What is the function name? ![83_image_1.png](83_image_1.png) Target Template: {{ ns.target }} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: {% set result = "" %} {% set indent = ' ' %} ![84_image_0.png](84_image_0.png) (ns.result_lines[ns.buggy_line][:ns.bug_location] + fixed_token + ns.result_lines[ns.buggy_line][ns.bug_location + ns.bug_len:]) | trim | replace(" . ", ".") | replace(" , ", ", ") | replace("( ", "(") | replace(" )", ")") | replace("[ ", "[") | replace(" ]", "]")%} Fix the buggy line: {{buggy_line_content}} {{fixed_buggy_line_content}} {% endif %} ![85_image_0.png](85_image_0.png) Target Template: {{source_tokens[repair_targets[0]]}} {% endif %} Note: the prompt does not correspond to the original task intended by the dataset authors. Input Template: ![85_image_1.png](85_image_1.png) {% endif %} {% set ns.result = ns.result + [token | replace('\\n', '\n'), " "] %} {% endif %} {% endfor %} {{ns.result | join("") | replace(" . ", ".") | replace(" , ", ", ") | replace("( ", "(") | replace(" )", ")") | replace("[ ", "[") | replace(" ]", "]")}} Is there a bug in the code above? Target Template: {{ {True: "Yes", False: "No"}[has_bug] }} Answer Choices Template: Yes ||| No ![86_image_0.png](86_image_0.png) Target Template: {{source_tokens[repair_targets[0]]}} {% endif %} Answer Choices Template: {% if has_bug %} {% set nss = namespace(choices=[]) %} {% for i in repair_candidates %} {% set nss.choices = nss.choices + [source_tokens[(i | int)]] %} {% endfor %} {{nss.choices | unique | join(" ||| ")}} {% endif %} 1.9.3 teven/code_docstring_corpus top_level Data Example | Key | Value | |--------|---------------------------------| | desc | 'XXX22: This has to be present' | | decl | def XXX11(): | | bodies | pass | Input Template: Complete the below {{decl}} '''{{desc | replace(' ', ' ')}}''' Target Template: {{bodies}} Input Template: I wrote the below code {{bodies}} What's a good function header? Target Template: {{decl}} | Input Template: | |-------------------| | {{decl}} | |------------| Target Template: """{{desc | replace(' ', ' ') | replace("'", '')}}""" {{bodies}} 1.10 Word Sense Disambiguation 1.10.1 pasinit/xlwic xlwic_en_zh Dataset from Raganato et al. (2020). Used in training. | Key | Value | |------------------------|----------------------------------------| | id | EN_1 | | context_1 | We like to summer in the Mediterran... | | context_2 | We summered in Kashmir. | | target_word | summer | | pos | V | | target_word_location_1 | {'char_start': 11, 'char_end': 17} | | target_word_location_2 | {'char_start': 3, 'char_end': 11} | | language | EN | | label | 1 | ## Data Example Prompts Input Template: 第 1 句:{{context_1}} 句子 2:{{context_2}} 确定单词"{{target_word}}"在两个句子中的使用是否相同。是还是不是? Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: | 不 ||| 是的 | |---------------| Input Template: 家庭作业 判断"{{target_word}}"这个词在以下两个句子中的含义是否相同。回答是或否。 {{context_1}} {{context_2}} Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: 不 ||| 是的 Input Template: {{context_1}} {{context_2}} {{target_word}} 的类似意义? Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: 不 ||| 是的 Input Template: "{{target_word}}"这个词有多种含义。第 1 句和第 2 句的意思相同吗?是还是不是? 第 1 句:{{context_1}} 句子 2:{{context_2}} Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: 不 ||| 是的 Input Template: 确定以下两个句子中是否以相同的方式使用了单词"{{target_word}}"。 {{context_1}} {{context_2}} Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: 不 ||| 是的 Input Template: {{context_1}} {{context_2}} 问题:"{{target_word}}"这个词在上面两个句子中的含义是否相同? Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: 不 ||| 是的 Input Template: "{{target_word}}"这个词在这两个句子中是否具有相同的含义?是的,不是吗? {{context_1}} {{context_2}} Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: 不 ||| 是的 Input Template: "{{target_word}}"这个词在这两个句子中是否具有相同的含义? {{context_1}} {{context_2}} Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: 不 ||| 是的 Input Template: 句子 A:{{context_1}} 句子 B:{{context_2}} "{{target_word}}"在句子 A 和 B 中具有相似的含义。对还是错? Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: 错误的 ||| 真的 Input Template: {{context_1}} {{context_2}} 问题:"{{target_word}}"这个词在上面两个句子中的含义是否相同?是的,不是吗? Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: 不 ||| 是的 1.10.2 pasinit/xlwic xlwic_fr_fr Data Example | Key | Value | |----------|---------| | id | FR_1 | | pos | N | | language | FR | | label | 1 | id FR_1 context_1 L'éclaircie est généralement une co... context_2 Améliorations utiles. target_word amélioration pos N target_word_location_1 {'char_start': 41, 'char_end': 53} target_word_location_2 {'char_start': 0, 'char_end': 13} language FR label 1 Input Template: {{context_1}} {{context_2}} Sens similaire de {{target_word}} ? Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: Non ||| Oui Input Template: Devoirs Décidez si le mot "{{target_word}}" est utilisé avec le même sens dans les deux phrases suivantes. Répondez par oui ou non. {{context_1}} {{context_2}} Target Template: | {% if label != -1%} {{answer_choices[label]}} {% endif %} | |-------------------------------------------------------------| | Answer Choices Template: | |----------------------------| | Non ||| Oui | |---------------| | Input Template: Le mot "{{target_word}}" a-t-il le même sens dans ces deux phrases ? Oui Non? {{context_1}} {{context_2}} | |-----------------------------------------------------------------------------------------------------------------------------| Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: | Non ||| Oui | |---------------| Input Template: | {{context_1}} {{context_2}} Question : Le mot '{{target_word}}' est-il utilisé dans le même sens dans les deux phrases ci-dessus ? Oui Non? | |-----------------------------------------------------------------------------------------------------------------------------------------------| Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: | Non ||| Oui | |---------------| Input Template: Le mot "{{target_word}}" a plusieurs significations. A-t-il le même sens dans les phrases 1 et 2 ? Oui ou non? Phrase 1 : {{context_1}} Phrase 2 : {{context_2}} Target Template: | {% if label != -1%} {{answer_choices[label]}} {% endif %} | |-------------------------------------------------------------| Answer Choices Template: Non ||| Oui | Input Template: Phrase 1 : {{context_1}} Phrase 2 : {{context_2}} Déterminez si le mot "{{target_word}}" est utilisé dans le même sens dans les deux phrases. Oui ou non? | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: Non ||| Oui | Input Template: Le mot "{{target_word}}" a-t-il le même sens dans ces deux phrases ? {{context_1}} {{context_2}} | |--------------------------------------------------------------------------------------------------------------------| Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: | Non ||| Oui | |---------------| | Input Template: | |-------------------| Phrase A : {{context_1}} Phrase B : {{context_2}} "{{target_word}}" a une signification similaire dans les phrases A et B. Vrai ou faux ? Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: Faux ||| Vrai Input Template: Déterminez si le mot '{{target_word}}' est utilisé de la même manière dans les deux phrases ci-dessous. {{context_1}} {{context_2}} Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: Non ||| Oui Input Template: {{context_1}} {{context_2}} Question : Le mot '{{target_word}}' est-il utilisé dans le même sens dans les deux phrases ci-dessus ? Target Template: {% if label != -1%} {{answer_choices[label]}} {% endif %} Answer Choices Template: Non ||| Oui 1.11 Paraphrase Identification 1.11.1 paws-x en Dataset from Yang et al. (2019). Used in training. Data Example | Key | Value | |-----------|----------------------------------------| | id | 2 | | sentence1 | The NBA season of 1975 -- 76 was th... | | sentence2 | The 1975 -- 76 season of the Nation... | | label | 1 | Notes: Generalized prompt format, task_description-input. Input Template: Determine if the following two sentences paraphrase each other or not. Sent 1: {{sentence1}} Sent 2: {{sentence2}} Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Yes Notes: Natural question. Input Template: Sentence 1: {{sentence1}} Sentence 2: {{sentence2}} Question: Do Sentence 1 and Sentence 2 express the same meaning? Yes or No? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Yes Notes: Generalized prompt format, context-question without any label. Input Template: {{sentence1}} Is that a paraphrase of the following sentence? {{sentence2}}? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Yes Notes: Natural Question without label. Input Template: Sentence 1: {{sentence1}} Sentence 2: {{sentence2}} Question: Can we rewrite Sentence 1 to Sentence 2? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Yes Notes: Generalized prompt format, context-question. Input Template: {{sentence1}} Is that a paraphrase of the following sentence? {{sentence2}}? Yes or No. Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Yes Notes: Concatenation of sentence 1 and sentence 2. Input Template: Sentence 1: {{sentence1}} Sentence 2: {{sentence2}} Question: Does Sentence 1 paraphrase Sentence 2? Yes or No? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Yes Note: the prompt does not correspond to the original task intended by the dataset authors. Notes: Create a generative paraphrase task. Input Template: {% if label == 1 %} Paraphrase the sentence: {{sentence1}} Target Template: {{sentence2}} {% endif %} Notes: Concatenation of sentence 1 and sentence 2 without any label. Input Template: Sentence 1: {{sentence1}} Sentence 2: {{sentence2}} Question: Does Sentence 1 paraphrase Sentence 2? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Yes Notes: Natural question without label. Input Template: Sentence 1: {{sentence1}} Sentence 2: {{sentence2}} Question: Do Sentence 1 and Sentence 2 express the same meaning? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Yes Prompt from Brown et al. (2020) **Notes:** ANLI prompt format from Table G7 in the GPT3 paper Brown et al. (2020) Input Template: {{sentence1}} Question: {{sentence2}} True or False? Target Template: {{answer_choices[label]}} Answer Choices Template: False ||| True Notes: Natural Question. Input Template: Sentence 1: {{sentence1}} Sentence 2: {{sentence2}} Question: Can we rewrite Sentence 1 to Sentence 2? Yes or No? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Yes Prompt from Brown et al. (2020) **Notes:** ANLI prompt format from Table G7 in the GPT3 paper Brown et al. (2020). Additionally added task information without any label. Input Template: {{sentence1}} Question: {{sentence2}} Paraphrase or not? Target Template: {{answer_choices[label]}} | Answer Choices Template: No ||| Yes | |---------------------------------------| 1.11.2 paws-x es Data Example | Key | Value | |-----------|----------------------------------------| | id | 2 | | sentence1 | La temporada de la NBA de 1975: 76 ... | | sentence2 | La temporada 1975 - 76 de la Asocia... | | label | 1 | Prompts Input Template: Oración 1: {{sentence1}} Oración 2: {{sentence2}} Pregunta: ¿La oración 1 parafrasea la oración 2? ¿Si o no? Target Template: {{answer_choices[label]}} | Answer Choices Template: No ||| Sí Input Template: {{sentence1}} Pregunta: {{sentence2}} ¿Parafrasear o no? | |---------------------------------------------------------------------------------------------------------------| Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Sí Input Template: {{sentence1}} ¿Es una paráfrasis de la siguiente oración? {{sentence2}}? Si o no. Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Sí Input Template: Oración 1: {{sentence1}} Oración 2: {{sentence2}} Pregunta: ¿La Oración 1 y la Oración 2 expresan el mismo significado? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Sí Input Template: {% if label == 1 %} Parafrasea la oración: {{sentence1}} Target Template: {{sentence2}} {% endif %} Input Template: {{sentence1}} Pregunta: {{sentence2}} ¿Verdadero o falso? Target Template: {{answer_choices[label]}} Answer Choices Template: Falso ||| Verdadero Input Template: Oración 1: {{sentence1}} Oración 2: {{sentence2}} Pregunta: ¿La oración 1 parafrasea la oración 2? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Sí Input Template: Determina si las siguientes dos oraciones se parafrasean entre sí o no. Enviado 1: {{sentence1}} Enviado 2: {{sentence2}} Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Sí Input Template: Oración 1: {{sentence1}} Oración 2: {{sentence2}} Pregunta: ¿La Oración 1 y la Oración 2 expresan el mismo significado? ¿Si o no? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Sí Input Template: Oración 1: {{sentence1}} Oración 2: {{sentence2}} Pregunta: ¿Podemos reescribir la Oración 1 a la Oración 2? ¿Si o no? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Sí Input Template: {{sentence1}} ¿Es una paráfrasis de la siguiente oración? {{sentence2}}? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Sí Input Template: Oración 1: {{sentence1}} Oración 2: {{sentence2}} Pregunta: ¿Podemos reescribir la Oración 1 a la Oración 2? Target Template: {{answer_choices[label]}} Answer Choices Template: No ||| Sí 1.12 Sentence Completion 1.12.1 xcopa vi Dataset from Ponti et al. (2020). Used in evaluation. Data Example | Key | Value | |----------|----------------------------------------| | premise | Cô gái tìm thấy con bọ trong ngũ cố... | | choice1 | Cô đổ sữa vào bát. | | choice2 | Cô mất cảm giác ngon miệng. | | question | effect | | label | 1 | | idx | 1 | | changed | False | Input Template: {{ premise }} Tôi đang lưỡng lự giữa hai lựa chọn. Giúp tôi chọn nguyên nhân {% if question == "cause" %} có khả năng xảy ra cao hơn: {% else %} effect: {% endif %} - {{choice1}} - {{choice2}} Target Template: {% if label != -1 %} {{ answer_choices[label] }} {% endif %} Answer Choices Template: {{choice1}} ||| {{choice2}} | Input Template: | |-------------------| {{ premise }} Lựa chọn tốt nhất là gì? - {{choice1}} - {{choice2}} Chúng tôi đang tìm kiếm {% if question == "cause" %} một nguyên nhân {% else %} một ảnh hưởng {% endif %} Target Template: {% if label != -1 %} {{answer_choices[label]}} {% endif %} Answer Choices Template: {{choice1}} ||| {{choice2}} Input Template: {{ premise }} {% if question == "cause" %} Điều này xảy ra vì ... {% else %} Do đó ... {% endif %} Giúp tôi chọn tùy chọn hợp lý hơn: - {{choice1}} - {{choice2}} Target Template: {% if label != -1 %} {{ answer_choices[label] }} {% endif %} Answer Choices Template: {{choice1}} ||| {{choice2}} Input Template: "{{ answer_choices[0] }}" hay "{{ answer_choices[1] }}"? {{ premise }} {% if question == "cause" %} bởi vì {% else %} nên {% endif %} Target Template: {% if label != -1 %} {{ answer_choices[label] }} {% endif %} Answer Choices Template: {{choice1 }} ||| {{choice2}} Input Template: {{ premise }} Chọn nguyên nhân {% if question == "cause" %} hợp lý nhất: {% else %} effect: {% endif %} - {{choice1}} - {{choice2}} Target Template: {% if label != -1 %} {{ answer_choices[label] }} {% endif %} Answer Choices Template: {{choice1}} ||| {{choice2}} 1.12.2 xcopa zh Data Example | Key | Value | |----------|------------------------------------| | premise | 这个女孩在麦片粥中发现了一个虫子。 | | choice1 | 她向碗里倒了牛奶。 | | choice2 | 她没了食欲。 | | question | effect | | label | 1 | | idx | 1 | | changed | False | Input Template: {{ premise }} {% if question == "cause" %} 这是因为... {% else %} 结果... {% endif %} 帮助我选择更合理的选项: - {{choice1}} - {{choice2}} Target Template: {% if label != -1 %}{{ answer_choices[label] }}{%endif%} Answer Choices Template: | {{choice1}} ||| {{choice2}} | |-------------------------------| | Input Template: | |-------------------| {{ premise }} 选择最合理的 {% if question == "cause" %} 原因:{% else %} 效果:{% endif %} - {{choice1}} - {{choice2}} Target Template: {% if label != -1 %}{{ answer_choices[label] }}{%endif%} Answer Choices Template: {{choice1}} ||| {{choice2}} Input Template: "{{ answer_choices[0] }}"还是"{{ answer_choices[1] }}"? {{ premise }} {% if question == "cause" %} 因为 {% else %} 所以 {% endif %} Target Template: {% if label != -1 %}{{ answer_choices[label] }}{% endif %} Answer Choices Template: {{choice1 }} ||| {{choice2}} Input Template: {{ premise }} 最好的选择是什么? - {{choice1}} - {{choice2}} 我们正在寻找 {% if question == "cause" %} 一个原因 {% else %} 一个结果 {% endif %} Target Template: {% if label != -1 %}{{answer_choices[label]}}{%endif%} Answer Choices Template: {{choice1}} ||| {{choice2}} Input Template: {{ premise }} 我在两个选项之间犹豫不决。帮我选择更有可能的 {% if question == "cause" %} 原因:{% else %} 效果:{% endif %} - {{choice1}} - {{choice2}} Target Template: {% if label != -1 %}{{ answer_choices[label] }}{%endif%} Answer Choices Template: {{choice1}} ||| {{choice2}} Input Template: {{ premise }} 我正在考虑两个选项。请帮我最有可能的{% if question == "cause" %}导因:{% else %}后果: {% endif %} - {{choice1}} - {{choice2}} Target Template: {% if label != -1 %}{{ answer_choices[label] }}{%endif%} Answer Choices Template: {{choice1}} ||| {{choice2}} Input Template: {{ premise }} {% if question == "cause" %}这个会发生是因为... {% else %}结果是... {% endif %} 帮我挑选合适的选项: - {{choice1}} - {{choice2}} Target Template: {% if label != -1 %}{{ answer_choices[label] }}{%endif%} Answer Choices Template: {{choice1}} ||| {{choice2}} Notes: Adapted from Perez et al. (2021) and Schick and Schütze (2020). Input Template: "{{ answer_choices[0] }}" 还是"{{ answer_choices[1] }}"? {{ premise }} {% if question == "cause" %}因为{% else %}所以{% endif %} Target Template: {% if label != -1 %}{{ answer_choices[label] }}{% endif %} Answer Choices Template: {{choice1}} ||| {{choice2}} Input Template: {{ premise }} 哪个是最好的答案? - {{choice1}} - {{choice2}} 我们正在考虑{% if question == "cause" %}起因{% else %}后果 {% endif %} Target Template: {% if label != -1 %}{{answer_choices[label]}}{%endif%} Answer Choices Template: {{choice1}} ||| {{choice2}} Input Template: {{ premise }} 请选择最贴切的答案: {% if question == "cause" %}导因:{% else %}结果: {% endif %} - {{choice1}} - {{choice2}} Target Template: {% if label != -1 %}{{ answer_choices[label] }}{%endif%} Answer Choices Template: {{choice1}} ||| {{choice2}} 1.13 Natural Language Inference 1.13.1 xnli en Dataset from Conneau et al. (2018). Used in evaluation. Data Example | Key | Value | |------------|----------------------------------------| | premise | you know during the season and i gu... | | hypothesis | You lose the things to the followin... | | label | 0 | Notes: Sanh et al. (2022) Input Template: Take the following as truth: {{premise}} Then the following statement: "{{hypothesis}}" is {{"true"}}, {{"false"}}, or {{"inconclusive"}}? Target Template: {{ answer_choices[label] }} Answer Choices Template: True ||| Inconclusive ||| False Notes: Sanh et al. (2022) Input Template: {{premise}} Question: Does this imply that "{{hypothesis}}"? Yes, no, or maybe? Target Template: {{answer_choices[label]}} Answer Choices Template: ## Yes ||| Maybe ||| No Notes: Same as reported in Figure G7 of Brown et al. (2020), except that there is no task identifying tokens like "anli R1: ". Input Template: {{premise}} Question: {{hypothesis}} True, False, or Neither? Target Template: {{ answer_choices[label] }} Answer Choices Template: True ||| Neither ||| False Notes: Sanh et al. (2022) Input Template: Given that {{premise}} Does it follow that {{hypothesis}} Yes, no, or maybe? Target Template: {{ answer_choices[label] }} Answer Choices Template: Yes ||| Maybe ||| No Notes: Adapted from the BoolQ prompts in Schick and Schütze (2020). Input Template: {{premise}} Based on the previous passage, is it true that "{{hypothesis}}"? Yes, no, or maybe? Target Template: {{ answer_choices[label] }} Answer Choices Template: Yes ||| Maybe ||| No Notes: Webson and Pavlick (2021) Input Template: Given {{premise}} Is it guaranteed true that "{{hypothesis}}"? Yes, no, or maybe? Target Template: {{ answer_choices[label] }} Answer Choices Template: Yes ||| Maybe ||| No Notes: Webson and Pavlick (2021) Input Template: Given {{premise}} Should we assume that "{{hypothesis}}" is true? Yes, no, or maybe? Target Template: {{ answer_choices[label] }} Answer Choices Template: Yes ||| Maybe ||| No Notes: Sanh et al. (2022) Input Template: Given that {{premise}} Therefore, it must be true that "{{hypothesis}}"? Yes, no, or maybe? Target Template: {{ answer_choices[label] }} Answer Choices Template: Yes ||| Maybe ||| No Notes: Webson and Pavlick (2021) Input Template: Suppose {{premise}} Can we infer that "{{hypothesis}}"? Yes, no, or maybe? Target Template: {{ answer_choices[label] }} Answer Choices Template: Yes ||| Maybe ||| No Notes: Webson and Pavlick (2021) Input Template: {{premise}} Are we justified in saying that "{{hypothesis}}"? Yes, no, or maybe? Target Template: {{ answer_choices[label] }} Answer Choices Template: Yes ||| Maybe ||| No Notes: Sanh et al. (2022) Input Template: {{premise}} Based on that information, is the claim: "{{hypothesis}}" {{"true"}}, {{"false"}}, or {{"inconclusive"}}? Target Template: {{ answer_choices[label] }} Answer Choices Template: True ||| Inconclusive ||| False Notes: Sanh et al. (2022) Input Template: {{premise}} Keeping in mind the above text, consider: {{hypothesis}} Is this {{"always"}}, {{"sometimes"}}, or {{"never"}} correct? Target Template: {{ answer_choices[label] }} Answer Choices Template: Always ||| Sometimes ||| Never Notes: Sanh et al. (2022) Input Template: Suppose it's true that {{premise}} Then, is "{{hypothesis}}" {{"always"}}, {{"sometimes"}}, or {{"never"}} true? Target Template: {{ answer_choices[label] }} Answer Choices Template: Always ||| Sometimes ||| Never Notes: Sanh et al. (2022) Input Template: Assume it is true that {{premise}} Therefore, "{{hypothesis}}" is {{"guaranteed"}}, {{"possible"}}, or {{"impossible"}}? Target Template: {{ answer_choices[label] }} Answer Choices Template: Guaranteed ||| Possible ||| Impossible Notes: Adapted from Williams et al. (2018) instructions to crowdsourcing workers. Input Template: {{premise}} Using only the above description and what you know about the world, "{{hypothesis}}" is definitely correct, incorrect, or inconclusive? Target Template: {{ answer_choices[label] }} Answer Choices Template: Correct ||| Inconclusive ||| Incorrect 1.13.2 xnli es Data Example | Key | Value | |------------|----------------------------------------| | premise | Usted sabe durante la temporada y s... | | hypothesis | Pierdes las cosas al siguiente nive... | | label | 0 | 1.13.2.1 **Human-translated prompts** Notes: Same as reported in Figure G7 of Brown et al. (2020), except that there is no task identifying tokens like "anli R1: ". Input Template: {{premise}} Pregunta: {{hypothesis}} Verdadero, Falso, o Ninguno? Target Template: {{ answer_choices[label] }} Answer Choices Template: Verdadero ||| Ninguno ||| Falso Notes: Webson and Pavlick (2021) Input Template: Supongamos {{premise}} Podemos inferir que "{{hypothesis}}"? Si, no, o tal vez? Target Template: {{ answer_choices[label] }} Answer Choices Template: Sí ||| Tal vez ||| No Notes: Webson and Pavlick (2021) Input Template: {{premise}} Estamos justificados en decir que "{{hypothesis}}"? Si, no, o tal vez? Target Template: {{ answer_choices[label] }} Answer Choices Template: Sí ||| Tal vez ||| No Notes: Sanh et al. (2022) Input Template: Supongamos que es cierto que {{premise}} por lo tanto, "{{hypothesis}}" es {{"garantizado"}}, {{"posible"}}, o {{"imposible"}}? Target Template: {{ answer_choices[label] }} Answer Choices Template: Garantizado ||| Posible ||| Imposible Notes: Adapted from Williams et al. (2018) instructions to crowdsourcing workers. Input Template: {{premise}} Usando solo la descripción anterior y lo que sabe sobre el mundo, "{{hypothesis}}" es definitivamente correcto, incorrecto o no concluyente? Target Template: {{ answer_choices[label] }} Answer Choices Template: Correcto ||| No concluyente ||| Incorrecto ## 1.13.2.2 Machine-Translated Prompts Input Template: {{premise}} ¿Estamos justificados al decir que "{{hypothesis}}"? ¿Sí, no o tal vez? Target Template: {{ answer_choices[label] }} Answer Choices Template: Sí ||| Quizás ||| No Input Template: {{premise}} Pregunta: {{hypothesis}} ¿Verdadero, falso o ninguno? Target Template: {{ answer_choices[label] }} Answer Choices Template: Verdadero ||| Ninguno de los dos ||| Falso Input Template: {{premise}} Usando solo la descripción anterior y lo que sabe sobre el mundo, "{{hypothesis}}" es definitivamente correcta, incorrecta o no concluyente. Target Template: {{ answer_choices[label] }} Answer Choices Template: Correcto ||| Poco concluyente ||| Incorrecto Input Template: Supongamos {{premise}} ¿Podemos inferir que "{{hypothesis}}"? ¿Sí, no o tal vez? Target Template: {{ answer_choices[label] }} | Answer Choices Template: Sí ||| Quizás ||| No | |-------------------------------------------------| | Input Template: Supongamos que es cierto que {{premise}} Por lo tanto, "{{hypothesis}}" es {{"guaranteed"}}, {{"possible"}} o {{"impossible"}}. | |-------------------------------------------------------------------------------------------------------------------------------------------------------------| Target Template: {{ answer_choices[label] }} | Answer Choices Template: garantizado ||| Posible ||| Imposible | |------------------------------------------------------------------| ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Abstract & Section 1. ✗ A2. Did you discuss any potential risks of your work? We did not find our work to have major potential risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract & Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 & Appendix D ✓ B1. Did you cite the creators of artifacts you used? Section 3 & Appendix D ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix D ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The artifacts used did not detail their intended use except for license compliance, which we adhere to. The intended use of our artifacts is clear from their accompanying licenses. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? As we reuse existing datasets (ROOTS, XCOPA, etc), we refer to the original works for such information. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We explain the languages covered by our artifacts in Section 3. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Dataset sizes are reported in Figures 1 and 2, as well as the accompanying text. ## C ✓ **Did You Run Computational Experiments?** Sections 3 & 4. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Model parameters and infrastructure details are reported in Section 3. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Hyperparamters are reported in Section 3. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Results are reported in Figures 4 - 8 and Tables 1 - 2 as well as in the Appendix. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We report parameter settings in Section 3 and will provide code for reproduction of all our experiments. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
shou-lin-2023-evaluate
Evaluate {AMR} Graph Similarity via Self-supervised Learning
https://aclanthology.org/2023.acl-long.892
In work on AMR (Abstract Meaning Representation), similarity metrics are crucial as they are used to evaluate AMR systems such as AMR parsers. Current AMR metrics are all based on nodes or triples matching without considering the entire structures of AMR graphs. To address this problem, and inspired by learned similarity evaluation on plain text, we propose AMRSim, an automatic AMR graph similarity evaluation metric. To overcome the high cost of collecting human-annotated data, AMRSim automatically generates silver AMR graphs and utilizes self-supervised learning methods. We evaluated AMRSim on various datasets and found that AMRSim significantly improves the correlations with human semantic scores and remains robust under diverse challenges. We also discuss how AMRSim can be extended to multilingual cases.
# Evaluate Amr Graph Similarity Via Self-Supervised Learning Ziyi Shou and **Fangzhen Lin** Department of Computer Science and Engineering HKUST-Xiaoi Joint Laboratory The Hong Kong University of Science and Technology {zshou, flin}@cse.ust.hk ## Abstract In work on AMR (Abstract Meaning Representation), similarity metrics are crucial as they are used to evaluate AMR systems such as AMR parsers. Current AMR metrics are all based on nodes or triples matching without considering the entire structures of AMR graphs. To address this problem, and inspired by learned similarity evaluation on plain text, we propose AMRSim, an automatic AMR graph similarity evaluation metric. To overcome the high cost of collecting human-annotated data, AMRSim automatically generates silver AMR graphs and utilizes self-supervised learning methods. We evaluated AMRSim on various datasets and found that AMRSim significantly improves the correlations with human semantic scores and remains robust under diverse challenges. We also discuss how AMRSim can be extended to multilingual cases.1 ## 1 Introduction An Abstract Meaning Representation (AMR; Banarescu et al., 2013) is a rooted, directed graph where nodes represent concepts and edges correspond to relations of concepts. The goal of an AMR metric is to evaluate the similarities of pairs of AMR graphs so that it can be used to evaluate the outputs of AMR generators such as parsers. Therefore, a good AMR metric is crucial for the design and evaluation of AMR parsers. However, the research on AMR metrics has so far lagged far behind the work on AMR parsing. Current AMR metrics either transfer AMR graphs to triples and consider the one-to-one matching of variables (Cai and Knight, 2013) or linearize AMR graphs as sequences and calculate n-gram matching (Song and Gildea, 2019). These metrics fail to consider the entire AMR structure and lack flexibility, resulting in poor correlation with human annotations. 1The code and datasets can be found at https://github. com/zzshou/AMRSim. Inspired by plain-text automatic similarity assessment methods that encode sentences into latent semantic representations to measure a similarity of the two representations (Reimers and Gurevych, 2019), we propose to learn the automatic assessment of AMR graph similarity through a similar pipeline. Our proposed metric, called AMRSim, adopts the pre-trained language model BERT as the backbone and incorporates GNN adapters to capture the structural information of AMR graphs. To overcome the high cost of collecting training data, we utilize self-supervised learning methods. The training objective is to maximize the dot product between positive embeddings and to minimize the dot product between different encodings. In contrast to one-to-one matching metrics, our AMRSim is alignment-free by computing the cosine of contextualized token embeddings. The prediction process can be considerably accelerated by leveraging GPUs. We experiment with AMRSim on the transformed STSB (Baudiš et al., 2016) and SICK (Marelli et al., 2014) datasets, which contain pairs of AMR graphs and corresponding similarity scores. Our experiments demonstrate that AMRSim achieves prominent improvements in correlation with human annotations. In further analysis, AMRSim retains the highest performance under various challenges compared to previous metrics. We also explore the potential of extending our metric to multilingual cases, taking into account the generic nature of the transformer structure and the fact that the pipeline of AMRSim is not constrained to any particular language. The remaining paper is structured as follows. Section 2 gives an overview of existing AMR metrics; Section 3 introduces our proposed metric, AMRSim; Section 4 includes experimental settings and main results; We analyze the robustness and the efficiency of our metrics in Section 5 and conclude in Section 6. ## 2 Existing Metrics AMR similarity metrics play a vital role in evaluating the performance of AMR parsers. However, computing the degree of similarity between AMR graphs is not trivial. In this section, we summarise the existing AMR metrics. S**MATCH** SMATCH (Cai and Knight, 2013) evaluates the overlap of structures as the similarity score. Each AMR graph is viewed as a conjunction of triples. SMATCH tries to find a one-to-one matching of variables that maximizes the numbers of exact matches of triples through the hill climbing method in a greedy style. The alignment process, which has been proved as NP-hard, limits the efficiency of SMATCH, especially as the sizes of AMR graphs increase. Another weakness is that the structural matching is insufficient for meaning similarity assessment and fragile in concept synonym replacement and structure deformation (Blloshmi et al., 2020; Opitz et al., 2021). S 2**MATCH** To yield a better variable alignment, Opitz et al. (2020) propose S 2MATCH by allowing soft semantic match instead of matching only identical triples in SMATCH. Accounting for cosine similarity of concepts helps to assess the similarity of graphs, however, computational limitations and structure deformation confusion in SMATCH have not been addressed. SEMA SMATCH adds a TOP relation to the structure, but SEMA (Anchiêta et al., 2019) argues that this addition can potentially distort the evaluation. Therefore, they ignored triples identifying the graph top. Furthermore, instead of computing the maximum score like SMATCH, SEMA works as a breadth-first search and produces a deterministic result, making it faster than one-to-one matching of variable employed by SMATCH. SEMBLEU BLEU (Papineni et al., 2002), which assesses text quality by comparing n-grams, is frequently adopted in machine translation evaluation. To extend BLEU for matching AMR graphs, SEMBLEU (Song and Gildea, 2019) linearizes AMR graphs through breadth-first traversal and extracts n-gram for comparison. The metric is alignmentfree and thus computationally efficient. Experimental results show that SEMBLEU achieved slightly higher consistency with human judgment than SMATCH. WWLK Besides treating AMR graphs as triples and linearized grams, Weisfeiler-Leman AMR similarity metrics (Opitz et al., 2021) consider AMR graphs as high-dimensional objects. They first propagate node embeddings iteratively by incorporating contextualization and then employ Wasserstein Weisfeiler Leman kernel (WWLK) to calculate the minimum cost of transforming one graph to another. WWLK only considers node embeddings (GloVe embeddings) in AMR graphs, while edge labels have no corresponding embeddings. Therefore, WWLKθ is extended to learn AMR edge labels, which requires additional training data. Considering embeddings of AMR edges improves the performance of metrics. However, supervised learning methods require additional efforts to collect human-labeled data. In contrast, our proposed learned AMR similarity metric adopts selfsupervised learning and achieves higher correlation performance. ## 3 The Proposed Approach In this section, we introduce our proposed approach for learning AMR graphs similarity evaluation. ## 3.1 Problem Formulations AMR graph similarity metrics compute the graded similarities of AMR graphs. Given a reference AMR graph G = {*V, E, R*} where V , E and R denote the node set, edge set, and relation set of AMR graph G respectively, and a candidate AMR graph Gˆ = {V , ˆ E, ˆ Rˆ}, the primary motivation is to automatically learn the similarity sim(G, Gˆ) between the two AMR representations. To do that, we propose to use network structures to derive the contextual embeddings of the two AMR graphs and then compute their similarity score as the cosine similarity of the embeddings. More precisely, we use sim(G, Gˆ) = sim(e, eˆ) = e⊤eˆ ∥e∥∥eˆ∥ , where e and eˆ are the contextual embeddings of G and Gˆ, respectively. Hence the key of our approach is learning contextual embeddings. ## 3.2 Amrsim The use of self-supervised methods is a wellestablished approach in semantic text similarity tasks (STS, Carlsson et al., 2021; Gao et al., 2021). To assess AMR graph similarity efficiently and well-correlated with human judgments, we propose migrating the self-supervised training process in STS to AMR similarity evaluation. ![2_image_0.png](2_image_0.png) ## 3.2.1 Self-Supervised Training In plain text applications, e.g., STS tasks and text generation tasks, many learned metrics are trained to optimize correlation with human annotations (Lowe et al., 2017; Reimers and Gurevych, 2019). However, AMR graph similarity data collection is more time-consuming because AMR evaluation has a learning cost of understanding the semantics of graphs, which are not as straightforward as plain text. Thus, self-supervised learning methods are an alternative solution. We adopt an efficient self-supervised approach Contrastive Tension (CT; Carlsson et al., 2021) in AMRSim metrics. The basic assumption is that AMR graphs with adjacent distributions have similar meanings. In CT, two independent encoders are initialized identically. The training objective is to maximize the dot product between positive embeddings and to minimize the dot product between different encodings. CT constructs positive and negative pairs in each batch. For each randomly selected AMR graph G, copy G into an identical graph pair to construct the positive instance, and sample other K graphs to construct negative instances by pairing G. The K + 1 instances are included in the same batch. The training contrastive loss L is binary cross-entropy between the generated similarity scores and labels. $${\mathcal{L}}(G,{\hat{G}})=\begin{cases}-\log\sigma(e\cdot{\hat{e}}),&{\mathrm{if}}\ G={\hat{G}}\\ -\log\sigma(1-e\cdot{\hat{e}}),&{\mathrm{if}}\ G\neq{\hat{G}}\end{cases}$$ where σ refers to the Logistic function. Figure 1 demonstrates the pipeline of AMRSim. An instance containing two AMR graphs is input to graph encoders to generate contextual embeddings, then the dot product of the embeddings is used to compute the loss. ## 3.2.2 Network Structures Compared to plain text, AMR graphs contain more structural information. We propose to incorporate graph neural networks into transformers to adapt to AMR graph structures and derive more descriptive contextual embeddings. Transformers Transformer based neural networks have demonstrated exemplary performance in the fields of natural language processing but they only accept inputs as sequence data. Therefore, we first convert AMR graphs to sequences. In AMR graph G, the labeled edge (*u, r, v*) ∈ E, where u, v ∈ V and r ∈ R is a relation type, means that there is an edge labeled as r from node u to node v. Similar to Ribeiro et al. (2022), we convert each AMR graph G into an unlabeled graph by replacing each labeled edge (*u, r, v*) with two unlabeled edges {(u, r),(*r, v*)}. Unlabeled G′ = {V′, E′} where V′include original nodes as well as additional nodes converted from relations. This preprocessing method facilitates BERT to learn word embeddings for edges. The unlabeled AMR graph is then linearized. Position embeddings are crucial for modeling sequence order in transformers. The widely used position embedding takes the advantage of absolute positions of the input sequence. However, we argue that the linearization order should not affect AMR graph encoding, because of the inherent nature of AMR graphs that the relationships between nodes are mainly defined by the underlying semantic connections rather than their linear positioning. Therefore, instead of absolute position encoding, the shortest path lengths between all nodes and the root node are encoded as relative position embeddings. In pre-trained language models like BERT, tokens may be split into smaller subwords due to the fixed vocabulary. We refuse to split AMR relations and instead add them to the vocabulary as new words. However, concept nodes are tokenized as usual. The main reason is that relation words are artificial, while nodes are always concepts, closer to linguistic tokens in the vocabulary. For unlabeled edge (*u, r*), if node u is split into subword u1*, . . . , u*k in tokenization, we consider each subword as a new node and connect each subword to r, thus (*u, r*) is replaced by {(u1, r), . . . ,(uk, r)}. Graph Neural Networks Adapter To generalize transformers to capture the structural information of AMR graphs, we incorporate graph neural networks as adapter into transformer layers. In particular, we refer to the idea of the k-dimensional Graph Neural Networks (k-GNN; Morris et al., 2019), which are based on the k-dimensional Weisfeiler and Leman algorithm (k-WL). For a given k, a k-set in [G] k, s = {Vs, Es} is a k-element subgraph over G, then the neighborhood of s is N(s) = {t ∈ [G] k||s ∩ t| = k − 1}. The GNN at layer t computes new features: h t k (s) = σ(h t−1 k(s) · Wt 1 +Pu∈N(s) h t−1 k(u) · Wt 2 ), where σ is the activation function, Wt 1 and Wt 2 are layer parameters. Different from the graph adapter in Ribeiro et al. (2022), we employ an adapter module after the feed-forward sub-layer of the last layer of BERT. The experimental results comparison can be found in section 5.6. The normalization layer before the GNN layer and the projection after the GNN layer is kept, see figure 1. So given the hidden states hv for node v, the GNN adapter layer computes: zv = W · GNN_layer(LN(hv)) + hv, where W is the adapter parameters, LN(·) denotes layer normalization and GNN_layer represents the calculation process of GNN layers. ## 4 Experiments 4.1 Data Construction A major advantage of the self-supervised method is that no human-annotated data is required for training. Following data preparation in AMR-DA (Shou et al., 2022), AMRSim utilized SPRING (Bevilacqua et al., 2021) to parse one-million sentences randomly sampled from English Wikipedia2 to AMR graphs. Generated silver AMR graphs were linearized by a depth-first traversal algorithm 2The one-million sentences sampled from Wikipedia is taken from datasets for SimCSE (Gao et al., 2021), which can be downloaded from https://huggingface.co/datasets/ princeton-nlp/datasets-for-simcse/tree/main. (the choice of linearization method does not have impact on embeddings, refer to section 5.1), meanwhile, all edges were recorded as pairs (*u, r*). Computing relative positions of tokens was implemented by Dijkstra's Method 3. ## 4.2 Experimental Setups We implemented AMRSim with sentence transformers (Reimers and Gurevych, 2019). During training, we set the positive ratio to be 4/16. In a batch of 16, there were 4 positive graph pairs and 12 negative pairs. This indicates that we sampled 4 graphs and created one positive pair and three negative pairs for each graph. The transformer parameters were initialized from uncased BERT base model (Devlin et al., 2019), and parameters for graph adapters were initialized randomly. We carried out a search of sequence length ∈ {64, 128, 256} and determined the length of linearized AMR graphs as 128. Other hyperparameters were set as follows: learning rate as 1e-5, dropout rate as 0.1, and graph adapter size as 128. GNN embeddings from k layers were concatenated and input to a projection function to generate final embeddings. Our experiments were done using GeForce RTX 2080 Ti GPU. We trained our models for one epoch, which took approximately two and a half hours and reported in the table the average performance of our models over five repeated experiments with different seeds. ## 4.3 Main Results We compared AMRSim with other AMR similarity metrics on evaluation datasets modified from semantic textual similarity datasets, STSB (Baudiš et al., 2016) and SICK (Marelli et al., 2014). Original datasets contain a set of pairs of sentences with a human-labeled similarity score. Opitz et al. (2021) utilized a strong parser to construct AMR graph pairs from sentence pairs and normalized similarity scores to the range [0, 1] to facilitate standardized evaluation. There are 1379 and 4927 test instances in the two datasets, respectively. Over 95% randomly selected data from generated AMR graphs were assessed as gold or with minor errors. The best performance of baseline metrics on the test dataset was included in the comparison. For example, k = 2 achieved superior performance in STS and SICK than the default value k = 3 3The implementation code comes from networkx: https: //networkx.org. in SEMBLEU, so we included SEMBLEUk=2 as SEMBLEU. | Metrics | STSB | SICK | | |-----------|------------|------------|------------| | SMATCH | 58.45 | 59.72 | | | S 2MATCH | 58.82 | 60.42 | | | SEMA | 55.90 | 55.32 | | | SEMBLEU | 60.62 | 59.86 | | | WWLK | 63.15 | 65.58 | | | WWLKθ | 66.94 | 67.64 | | | Baselines | AMRSim1 | 69.61±0.31 | 72.17±0.49 | | AMRSim2 | 70.10±0.31 | 71.95±0.79 | | | AMRSim3 | 70.59±0.64 | 72.82±0.46 | | | AMRSim4 | 70.88±0.61 | 73.10±0.42 | | | AMRSim5 | 70.94±0.74 | 72.64±0.44 | | | Ours | | | | Table 1: Comparison of different AMR metrics. Results are Pearson correlation (x100) on STSB and SICK test set. We repeated our experiments five times and reported the average score with the standard variance. AMRSimk indicates that the graph encoder adopts k-GNN as the adapter. Table 1 shows the comparison of the evaluation results of various metrics on two test datasets. Baseline results are from Opitz et al.'s (2021) work. We conducted five experiments for each different kGNN setups and reported the mean and standard deviation of Pearson correlation scores. Our proposed AMRSim significantly outperformed previous AMR metrics and achieved the highest score on correlation with human annotation. Specifically, AMRSim5 with 5-GNN adapter improved the previous best Pearson score from 66.94% to 70.94% on STSB, and AMRSim4 with 4-GNN adapter improved the score from 67.64% to 73.10% on SICK dataset. When k increased from one to four, the average performance of AMRSim in both datasets increased, however, when increasing from four to five, the average performance of the model decreased. More parameters to optimize made the model harder to train without more improvement. Considering the average Pearson score and standard deviation, we took AMRSim4 as our final metric. It is worth mentioning that WWLKθ learned edge encodings through supervised learning methods. By contrast, our proposed AMRSim adopted self-supervised learning, and no human-labeled data was required. ## 5 Analysis AMRSim shows a high correlation with human annotations. In this section, we further analyze the robustness and efficiency of AMRSim. ## 5.1 Robustness Analysis At first, we explore in depth how AMRSim performs under various challenges. Reification Challenges Reification challenges import AMR rephrasing, which means changing the structure of graphs, but not its meaning (Banarescu et al., 2013; Goodman, 2019; Opitz et al., 2021). The reification rule is that edge (*u, r, v*) in the original graph induced (u, r1*, ins*r) ∧ (insr, r2, v), where insr is r's reification and rns are new generated edges. For example, in the following AMR graph: (xv0 / brush-01 ## 6 Conclusion In this paper we have presented a new method for computing the $\alpha$-function of the \(\alpha Edge label :part has its reification :have-part-91, so for the above AMR graph, (*girl,* :part*, hair*) can be replaced by (*girl,* :ARG-of, *have-part-91*) ∧ (*have-part-91*, :ARG2*, hair*). The modified AMR graph keeps the meaning displayed as: (xv0 / brush-01 :ARG1 (xv2 / hair)) * [10] A.Arag (xv1 / girl : ARG1-of (nn / have-part-91 : ARG2 xv2)) Metrics STSB ∆ SICK ∆ SMATCH 57.98 -0.47 61.81 2.09 S2MATCH 58.08 -0.74 62.25 1.83 SEMA 55.51 -0.39 56.16 0.84 SEMBLEU 54.84 -5.78 57.70 -2.16 WWLK 59.78 -3.37 65.53 -0.05 WWLKθ 64.34 -2.60 65.49 -2.15 AMRSim **70.54** -0.34 **73.42** 0.32 Table 2: Comparison of AMR metrics for reification challenges. Results are Pearson correlation (x100) on reified STSB and SICK test dataset. ∆ means the difference in score before and after reification. Reification challenges require that metrics have the capacity to handle structure variants. Table 2 shows the comparison of AMR metrics under reification challenges. AMRSim ranked first on both reified datasets. On reified STSB test set, AMRSim achieved the highest score of 70.54% and the lowest loss of 0.43% compared with performance on STSB dataset. Similar results were presented on SICK dataset. AMRSim achieved the highest score of 73.42% and the second lowest loss of 0.31%. Encoding the entire AMR graph and comparing contextual embeddings is a contributing factor that improves robustness under reification challenges. Synonym Challenges Another challenge is synonym challenge, which is conceptual synonym substitution while preserving meaning. AMR nodes are iterated over and replaced by their (near-) synonyms in PropBank, or their synset in WordNet (Opitz et al., 2021). The synonym transformation is not trivial for the reason that the single node may be replaced by a graph substructure. For example, instance(*x, tofu*) can be extended as (x, :mod, y) ∧ instance(*x, curd*) ∧ instance(*y, bean*) , so the AMR graph (xv0 / cut-01 :ARG0 (xv2 / woman) :ARG1 (xv1 / tofu)) is transformed as: (xv0 / cut-01 :ARG0 (xv2 / womanhood) :ARG1 (xv1 / curd :mod (nn52 / bean))) | Metrics | STSB | ∆ | SICK | ∆ | |-----------|--------|-------|--------|-------| | SMATCH | 56.14 | -2.31 | 57.39 | -2.33 | | S2MATCH | 56.70 | -2.12 | 57.92 | -2.50 | | SEMA | 50.16 | -5.74 | 48.87 | -6.45 | | SEMBLEU | 52.82 | -7.80 | 53.47 | -6.39 | | WWLK | 59.40 | -3.75 | 59.98 | -5.60 | | WWLKθ | 60.11 | -4.23 | 62.29 | -5.35 | | AMRSim | 66.50 | -4.38 | 67.73 | -5.37 | Table 3: Comparison of AMR metrics for synonym challenges. Results are Pearson correlation (x100) on transformed STSB and SICK test dataset. ∆ means the score difference before and after concept node transformation. Table 3 shows the performance of various AMR metrics on synonym challenges. From the results, synonym challenges are more complex than reification challenges. Nevertheless, AMRSim still ranked first on both test datasets, with 66.50% and 67.73% respectively. SEMBLEU that matches ngram of linearized AMR graphs is the most vulnerable under this challenge. Besides, WWLKθ and AMRSim suffer more loss than reification challenges. One possible reason is that both WWLKθ and AMRSim are learned metrics, and they have a common generalization problem to some extent. However, our proposed AMRSim outperformed WWLKθ by a large margin, but with a close score loss. Linearization Challenges The basic assumption of AMRSim is that the similarity score is concerned with the entire AMR graph and will not be affected regardless of the linearization of the AMR graph. As the transformer model treats each token as independent, position embedding is added to retain the order information of tokens. To test AMRSim's ability to handle the linearization challenge, we compared model outputs for different inputs: (i) A pair of AMR graphs that are isomorphic but have different linearization sequences; (ii) a pair of different AMR graphs with the same linearized sequence after adopting the original position embedding strategy in BERT; (iii) a pair of different AMR graphs with the same linearized sequence after adopting relative position embedding strategy in AMRSim. For example, four AMR graphs are listed below: AMR A: (xv0 / play-01 :ARG0 (xv2 / group :consist-of (xv4 / boy)) :ARG1 (xv1 / soccer) :location (xv3 / beach)) AMR B: (xv0 / play-01 :location (xv3 / beach) :ARG0 (xv2 / group :consist-of (xv4 / boy)) :ARG1 (xv1 / soccer)) AMR C: (xv0 / play-01 :ARG0 (xv2 / group :consist-of (xv4 / boy :ARG1 (xv1 / soccer :location (xv3 / beach))))) AMR D: (xv0 / play-01 :ARG0 (xv2 / group) :ARG1 (xv1 / soccer) :location (xv3 / beach :consist-of (xv4 / boy))) AMR graphs A and B are isomorphic but the position of :location *beach* differs in the linearized sequences. Graphs A and C are different, however, token positions ranked by the ascending algorithm in BERT are the same. Graphs A and D are also inconsistent, but the relative positions of all tokens are consistent. For example, the distance between :consist-of and the root *play*-01 is 4 on both graphs. To demonstrate the effectiveness of GNN adapters in AMRSim, we compare AMRSim with the original BERT and BERT with relative position embeddings (AMRSim without GNN). Table 4 con- | Metrics | sim(A, B) | sim(A, C) | sim(A, D) | |--------------|-------------|-------------|-------------| | Expected | =1 | ̸=1 | ̸=1 | | BERT | 0.98 | 1 | 0.99 | | BERTrelative | 1 | 0.97 | 1 | | AMRSim | 1 | 0.97 | 0.98 | cluded similarity scores from BERT, BERT with relative position embedding, and AMRSim. BERT was confused by various linearizations of the same graph whereas BERT*relative* has a shortage of capturing structure information. With GNN adapters, AMRSim can handle the linearization challenge. In terms of the other metrics, only SemBleu requires linearization, and it exhibits a high level of sensitivity towards the chosen linearization. ## 5.2 Scatter Plot Analysis To visually analyze the distribution patterns and the relationship between human annotations and scores obtained from different metrics, we utilize scatter plots to show instances distributions and best fit lines to depict the trend. All scores are normalized to the range [0,1]. In figure 2, the y-axis of each subplots represents human annotated scores, while the x-axis represents the scores calculated using various metrics. By plotting individual examples on scatter plots, we observed that WWLK-related metrics and our AMRSim outperform other metrics significantly, with higher concentration around the fit and fewer outlier. However, they exhibit biases towards instances with distinct human scores. WWLK-related metrics tend to underrate examples with higher human scores while AMRSim tends to ![6_image_0.png](6_image_0.png) overrate examples with lower human scores. ## 5.3 Case Study Disagreements in preference for AMR graphs can affect the ranking of AMR parsers. Thus we employ a case study to study metrics' preference for parsed results. Here is an example from Opitz et al.'s (2020) paper. CAMR and JAMR are two AMR parsers. ::snt: Legally, there are two remedies. Gold AMR: (t / thing :quant 2 :ARG2-of (r / remedy-01) :mod (l / law) CAMR Parsed Result: (x6 / remedy-01 :quant 2) JAMR Parsed Result: (l / legally ## :Manner-Of (R / Remedy-01 :Quant 2)) The scores obtained from different metrics are as follows: SMATCH (0.2, 0.167); S2MATCH (0.2, 0.252); SEMA (0, 0); SEMBLEU (0.215, 0.215); WWLK (0.335, 0.352); WWLKθ(0.329, 0.345); AMRSim (0.81, 0.92). Among these metrics, SMATCH gives higher scores to CAMR parsed results. Conversely, SEMBLEU and SEMA assign the identical score to both parsed results. SEMA employs a more strict evaluation method as it assesses not only the individual node but also its dependencies. However, S2MATCH, WWLK, WWLKθand AMRSim grant higher scores to the second parsed results. In this particular example, *legally* is indispensable information of semantics. Our preference leans towards the second parsed results. ## 5.4 Computational Time In this section, we emphasize the importance of computational time as a crucial factor to consider for practical use. While the time complexity of AMRSim is O(n), it is necessary to consider that the processing with transformers possibly impact the overall time required. We provide an analysis of various metrics' reference time based on SICK test dataset consisting of 4927 samples. The reference time is listed as follows: SMATCH: 5.24s, S2MATCH: 37.51s, SEMA: 1.132s, SEMBLEU: 0.67s, WWLK: 46.60s, WWLKθ: 48.13s. Our AMRSim differs slightly from other metrics as we employ neural networks to predict graph similarity, allowing us to utilize both CPU and GPU for the task. When using CPU, the reference time is approximately 67.66s. However, this is not the most optimal choice, as matrix computations can be accelerated by GPUs. By utilizing a single GPU (RTX2080ti in our experiments), for instance, with a batch size of 16, the reference time reduces to around 14.20s. Scaling up to a batch size of 256 further reduces the reference time to 8.04 seconds. Further increasing the batch size has limited benefits as the processing time for input and output becomes the dominant factor. In term of O(n) time complexity, the advantages of our AMRSim become more pronounced as the graph sizes increase. ## 5.5 Multilingual Metrics The lack of multilingual AMR evaluation metrics becomes an obstacle to developing cross-lingual AMR parsers. It is common practice to translate other languages into English or use multilingual embeddings for matching, which does not take into account that the source language may affect the structure of the AMR (Wein and Schneider, 2022). AMRSim has the potential to be extended to multilingual cases given the generic nature of our pipeline: encoding AMR graphs and computing the cosine similarity of graph embeddings, not limited to specific languages. However, annotating the similarity of a multilingual dataset is more challenging and subject due to varying levels of language proficiency. So we performed a preliminary test by collecting a small-scale test dataset comprising 20 Chinese-English AMR pairs, then adopting a well trained multilingual model4to encode the corresponding text and calculating cosine similarity of text embeddings as the graph similarity score. To train our multilingual AMRSim, we employed the multilingual version of BERT base model and added a Chinese AMR dataset containing about 2,500 Chinese AMR graphs from Li et al. (2016) to English training data in Section 4.1. Re- | SMATCH | S2MATCH | SEMBLEU | WWLK | AMRSim | | |----------|-----------|-----------|--------|----------|-------| | Pearson | 42.02 | 52.72 | 40.08 | 30.25 | 63.87 | Table 5: Pearson correlation (x100) on 20 ChineseEnglish AMR Pairs. sults of metrics are concluded in table 5. SEMA lacks support for multilingual cases, resulting in it returning 0 for any multilingual inputs. Additionally, due to the absence of a multilingual dataset for WWLKθsupervised training, we exclude them from our comparison. A specific example is illustrated in Figure 3. The predicted similarity score for these two graphs needs to be close to 1 because they correspond to the same sentence in the Chinese and English versions of The Little Prince. We | SMATCH | S2MATCH | SEMBLEU | WWLK | AMRSim | | |----------|-----------|-----------|--------|----------|-------| | Score | 0.174 | 0.252 | 0.101 | -0.055 | 0.576 | Table 6: Similarity scores for AMR graphs in Figure 3. used different metrics to evaluate the similarity of these two graphs and concluded the score in table 6. While AMRSim surpasses other metrics, its performance is poor compared to the English version of AMRSim. The possible explanation is the lim-4The multilingual encoding model is downloaded from https://huggingface.co/sentence-transformers/ paraphrase-multilingual-MiniLM-L12-v2. R62 (s / surprise-01 : polarity - AR60 (t / that) :AR61 (t / t) :degree (m / much) :AR61-of (r / real-04))) Figure 3: AMR graphs for the same sentence from The Little Princes in Chinese and English. ited training data hinders effective self-supervised learning. A comprehensive study on multilingual applications is considered to be a future work. ## 5.6 Ablation Studies We investigated the impact of GNN adapters' position. Comparison of models' performance is included in table 7. GNN represents that the model only contains GNN layers; GNN-Each means that one GNN adapter is included in each transformer layer, same as Ribeiro et al. (2022); GNN-Before denotes that GNN adapter is before transformer layers and AMRSim indicates that GNN adapters are incorporated after transformer layers. Models other than GNN-Each come with 4-GNN. GNN | GNN | GNN-Each | GNN-Before | AMRSim | | |-------|------------|--------------|----------|-------| | STSB | 33.26 | 62.46 | 69.63 | 70.88 | Table 7: Comparison of different neural networks. Results are Pearson correlation (x100) on STSB test set. achieved the worst correlation score because it is notoriously hard for GNN to train (Li et al., 2019). However, setting transformers as the backbone and incorporating GNN as adapters are more acceptable in our experimental settings. GNN-Each underperformed in the experiments, possibly because too many GNN parameters appear in each layer to be optimized for self-supervised training. AMRSim achieved superior performance, which is intuitive because BERT encoded sequence information, and then GNN captured structural information. ## 6 Conclusion We have proposed a learning based AMR graph similarity metric called AMRSim. Its underlying network structure incorporates GNNs into a transformer network and employs self-supervised learning. Our proposed similarity metric addresses two common pitfalls of existing metrics: firstly, a low correlation with human annotations due to lack of considering the entire structures of AMR graphs. AMRSim effectively encodes AMR graphs, exhibiting the highest correlations with human evaluations and maintaining robust against various challenges. The second is computational inefficiency associated with matching or alignment of AMR graphs. AMRSim is alignment-free, allowing for a significant reduction in inference time when utilizing GPUs. We also show the potential of extending AMRSim to multilingual AMR similarity metrics. ## Limitations AMRSim has high prediction efficiency but the training process is time-consuming. In our experiments, one epoch training on one GeForce RTX2080Ti took about two and a half hours. Selfsupervised learning requires a large amount of training data. Parsing wiki sentences into graphs requires time, but the advantage is that it can be processed offline. Another issue is the length limit. Transformers can only handle limited sequence lengths due to the computational and memory complexity of attention calculation. Therefore, encoding large AMR graphs is challenging. Possible solutions include applying sliding window algorithm to split a large AMR graph into several subgraphs and merge the scores. ## References Rafael T Anchiêta, Marco AS Cabezudo, and Thiago AS Pardo. 2019. Sema: an extended semantic evaluation metric for amr. *arXiv preprint arXiv:1905.12069*. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In *Proceedings of the 7th linguistic annotation workshop and interoperability with* discourse, pages 178–186. Petr Baudiš, Jan Pichl, Tomáš Vyskocil, and Jan Še- ˇ divy. 2016. Sentence pair scoring: Towards unified ` framework for text comprehension. arXiv preprint arXiv:1603.06127. Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline. In *Proceedings of AAAI*. Rexhina Blloshmi, Rocco Tripodi, and Roberto Navigli. 2020. Xl-amr: Enabling cross-lingual amr parsing with transfer learning techniques. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2487– 2500. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752. Fredrik Carlsson, Amaru Cuba Gyllensten, Evangelia Gogoulou, Erik Ylipää Hellqvist, and Magnus Sahlgren. 2021. Semantic re-tuning with contrastive tension. In *9th International Conference on Learning* Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910. Michael Wayne Goodman. 2019. Amr normalization for fairer evaluation. arXiv preprint arXiv:1909.01568. Bin Li, Yuan Wen, Weiguang Qu, Lijun Bu, and Nianwen Xue. 2016. Annotating the little prince with chinese amrs. In Proceedings of the 10th Linguistic Annotation Workshop held in Conjunction with ACL 2016 (LAW-X 2016), pages 7–15. Guohao Li, Matthias Muller, Ali Thabet, and Bernard Ghanem. 2019. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE/CVF international conference on computer vision, pages 9267–9276. Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In *Proceedings of the 55th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers), pages 1116–1126. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A sick cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216– 223. Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. 2019. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, pages 4602–4609. Juri Opitz, Angel Daza, and Anette Frank. 2021. Weisfeiler-leman in the bamboo: Novel amr graph metrics and a benchmark for amr graph similarity. Transactions of the Association for Computational Linguistics, 9:1425–1441. Juri Opitz, Letitia Parcalabescu, and Anette Frank. 2020. Amr similarity metrics from principles. *Transactions of the Association for Computational Linguistics*, 8:522–538. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992. Leonardo F. R. Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, and Mohit Bansal. 2022. FactGraph: Evaluating factuality in summarization with semantic graph representations. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3238–3253, Seattle, United States. Association for Computational Linguistics. Ziyi Shou, Yuxin Jiang, and Fangzhen Lin. 2022. AMRDA: Data augmentation by Abstract Meaning Representation. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3082–3098, Dublin, Ireland. Association for Computational Linguistics. Linfeng Song and Daniel Gildea. 2019. SemBleu: A robust metric for AMR parsing evaluation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4547– 4552, Florence, Italy. Association for Computational Linguistics. Shira Wein and Nathan Schneider. 2022. Accounting for language effect in the evaluation of crosslingual amr parsers. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 3824–3834. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Experiments ✓ B1. Did you cite the creators of artifacts you used? Experiments ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Experiments ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Experiments ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Experiments ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Experiments ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Experiments ## C ✓ **Did You Run Computational Experiments?** Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Experiments The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Experiments ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Experiments ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Experiments D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
dar-etal-2023-analyzing
Analyzing Transformers in Embedding Space
https://aclanthology.org/2023.acl-long.893
Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, where parameters are interpreted directly without a forward/backward pass is feasible for some Transformer parameters, and for two-layer attention networks. In this work, we present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on. We derive a simple theoretical framework to support our arguments and provide ample evidence for its validity. First, an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier without training by {``}translating{''} the parameters of a fine-tuned classifier to parameters of a different model that was only pretrained. Overall, our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only.
# Analyzing Transformers In Embedding Space Guy Dar1 Mor Geva2 Ankit Gupta1 **Jonathan Berant**1 1The Blavatnik School of Computer Science, Tel-Aviv University 2Allen Institute for Artificial Intelligence {guy.dar,joberant}@cs.tau.ac.il, morp@allenai.org, ankitgupta.iitkanpur@gmail.com ## Abstract Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that an inputindependent approach, where parameters are interpreted directly without a forward/backward pass is feasible for *some* Transformer parameters, and for two-layer attention networks. In this work, we present a conceptual framework where all parameters of a trained Transformer are interpreted by projecting them into the *embedding space*, that is, the space of vocabulary items they operate on. Focusing mostly on GPT-2 for this paper, we provide diverse evidence to support our argument. First, an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier *without training* by "translating" the parameters of a fine-tuned classifier to parameters of a different model that was only pretrained. Overall, our findings show that at least in part, we can abstract away model specifics and understand Transformers in the embedding space. ## 1 **Introduction** Transformer-based models [Vaswani et al., 2017] currently dominate Natural Language Processing [Devlin et al., 2018; Radford et al., 2019; Zhang et al., 2022] as well as many other fields of machine learning [Dosovitskiy et al., 2020; Chen et al., 2020; Baevski et al., 2020]. Consequently, understanding their inner workings has been a topic of great interest. Typically, work on interpreting Transformers relies on feeding inputs to the model and analyzing the resulting activations [Adi et al., 2016; Shi et al., 2016; Clark et al., 2019]. Thus, interpretation involves an expensive forward, and sometimes also a backward pass, over multiple inputs. Moreover, such interpretation methods are conditioned on the input and are not guaranteed to generalize to all inputs. In the evolving literature on static interpretation, i.e., without forward or backward passes, [Geva et al., 2022b] showed that the value vectors of the Transformer feed-forward module (the second layer of the feed-forward network) can be interpreted by projecting them into the embedding space, i.e., multiplying them by the embedding matrix to obtain a representation over vocabulary items.1[Elhage et al., 2021] have shown that in a 2-layer attention network, weight matrices can be interpreted in the embedding space as well. Unfortunately, their innovative technique could not be extended any further. In this work, we extend and unify the theory and findings of [Elhage et al., 2021] and [Geva et al., 2022b]. We present a zero-pass, input-independent framework to understand the behavior of Transformers. Concretely, we interpret all weights of a pretrained language model (LM) in embedding space, including both keys and values of the feed-forward module ([Geva et al., 2020, 2022b] considered just FF values) as well as all attention parameters ([Elhage et al., 2021] analyzed simplified architectures up to two layers of attention with no MLPs). Our framework relies on a simple observation. Since [Geva et al., 2022b] have shown that one can project hidden states to the embedding space via the embedding matrix, we intuit this can be extended to other parts of the model by projecting to the embedding space and then *projecting back* by multiplying with a right-inverse of the embedding matrix. Thus, we can recast inner products in the model as inner products in embedding space. Viewing inner products this way, we can interpret such products as interactions between pairs of vocabulary items. This applies to (a) interactions between attention queries and keys as well as to (b) interactions between attention value vectors and the parameters that project them at the output of the attention module. Taking this perspective to the extreme, one can view Transformers as operating implicitly in the embedding space. This entails the existence of a single linear space that depends only on the tokenizer, 1We refer to the unique items of the vocabulary as *vocabulary items*, and to the (possibly duplicate) elements of a tokenized input as *tokens*. When clear, we might use the term token for *vocabulary item*. 16124 ![1_image_0.png](1_image_0.png) in which parameters of different Transformers can be compared. Thus, one can use the embedding space to compare and transfer information across different models that share a tokenizer. We provide extensive empirical evidence for the validity of our framework, focusing mainly on GPT-2 medium [Radford et al., 2019]. We use GPT-2 for two reasons. First, we do this for concreteness, as this paper is mainly focused on introducing the new framework and not on analyzing its predictions. Second, and more crucially, unlike many other architectures (such as BERT [Devlin et al., 2018], RoBERTa [Liu et al., 2019], and T5 [Raffel et al., 2019]), the GPT family has a *linear* language modeling head (LM head) - which is simply the output embedding matrix. All the other architectures' LM heads are two layer networks that contain *non-linearities* before the output embedding matrix. Our framework requires a linear language modeling head to work. That being said, we believe in practice this will not be a major obstacle, and we indeed see in the experiments that model alignment works well for BERT in spite of the theoretical difficulties. We leave the non-linearities in the LM head for future work. On the interpretation front (Fig. 1, Left), we provide qualitative and quantitative evidence that Transformer parameters can be interpreted in embedding space. We also show that when fine-tuning GPT-2 on a sentiment analysis task (over movie reviews), projecting *changes* in parameters into embedding space yields words that characterize sentiment towards movies. Second (Fig. 1, Center), we show that given two distinct instances of BERT pretrained from different random seeds [Sellam et al., 2022], we can align layers of the two instances by casting their weights into the embedding space. We find that indeed layer i of the first instance aligns well to layer i of the second instance, showing the different BERT instances converge to a semantically similar solution. Last (Fig. 1, Right), we take a model fine-tuned on a sentiment analysis task and "transfer" the learned weights to a different model that was only pretrained by going through the embedding spaces of the two models. We show that in 30% of the cases, this procedure, termed *stitching*, results in a classifier that reaches an impressive accuracy of 70% on the IMDB benchmark [Maas et al., 2011] without any training. Overall, our findings suggest that analyzing Transformers in embedding space is valuable both as an interpretability tool and as a way to relate different models that share a vocabulary and that it opens the door to interpretation methods that operate in embedding space only. Our code is available at https://github.com/guyd1995/embedding-space. ## 2 **Background** We now present the main components of the Transformer [Vaswani et al., 2017] relevant to our analysis. We discuss the residual stream view of Transformers, and recapitulate a view of the attention layer parameters as *interaction matrices* WVO and WQK [Elhage et al., 2021]. Similar to them, we exclude biases and layer normalization from our analysis. ## 2.1 **Transformer Architecture** The Transformer consists of a stack of layers, each includes an attention module followed by a FeedForward (FF) module. All inputs and outputs are sequences of N vectors of dimensionality d. Attention Module takes as input a sequence of representations X ∈ R N×d, and each layer L is parameterized by four matrices W (L) Q , W(L) K , W(L) V, W(L) O ∈ R d×d(we henceforth omit the layer superscript for brevity). The input X is projected to produce queries, keys, and values: Qatt = XWQ, Katt = XWK, Vatt = XWV . Each one of Qatt, Katt, Vatt is split along the columns to H different *heads* of dimensionality R N× dH , denoted by Qiatt, Ki att, V i att respectively. We then compute H *attention maps*: $$A^{i}=\mathrm{softmax}\left({\frac{Q_{\mathrm{att}}^{i}K_{\mathrm{att}}^{i\mathrm{T}}}{\sqrt{d/H}}}+M\right)\in\mathbb{R}^{N\times N},$$ where M ∈ R N×N is the attention mask. Each attention map is applied to the corresponding value head as AiV i att, results are concatenated along columns and projected via WO. The input to the module is added via a residual connection, and thus the attention module's output is: $$X+\text{Concat}\left[A^{1}V^{1}_{\text{att}},\ldots,A^{i}V^{i}_{\text{att}},\ldots,A^{H}V^{H}_{\text{att}}\right]W_{O}.\tag{1}$$ FF Module is a two-layer neural network, applied to each position independently. Following past terminology [Sukhbaatar et al., 2019; Geva et al., 2020], weights of the first layer are called *FF keys* and weights of the second layer *FF values*. This is an analogy to attention, as the FF module too can be expressed as: f(QKT)V , where f is the activation function, Q ∈ R N×dis the output of the attention module and the input to the FF module, and *K, V* ∈ R dff×dare the weights of the first and second layers of the FF module. Unlike attention, keys and values are learnable parameters. The output of the FF module is added to the output of the attention module to form the output of the layer via a residual connection. The output of the i-th layer is called the i-th *hidden state*. Embedding Matrix To process sequences of discrete tokens, Transformers use an embedding matrix E ∈ R d×ethat provides a d-dimensional representation to vocabulary items before entering the *first* Transformer layer. In different architectures, including GPT2, the same embedding matrix E is often used [Press and Wolf, 2016] to take the output of the *last* Transformer layer and project it back to the vocabulary dimension, i.e., into the *embedding space*. In this work, we show how to interpret all the components of the Transformer model in the embedding space. ## 2.2 **The Residual Stream** We rely on a useful view of the Transformer through its residual connections popularized by [Elhage et al., 2021].2 Specifically, each layer takes a hidden state as 2Originally introduced in [nostalgebraist, 2020]. input and adds information to the hidden state through its residual connection. Under this view, the hidden state is a *residual stream* passed along the layers, from which information is read, and to which information is written at each layer. [Elhage et al., 2021] and [Geva et al., 2022b] observed that the residual stream is often barely updated in the last layers, and thus the final prediction is determined in early layers and the hidden state is mostly passed through the later layers. An exciting consequence of the residual stream view is that we can project hidden states in *every* layer into embedding space by multiplying the hidden state with the embedding matrix E, treating the hidden state as if it were the output of the last layer. [Geva et al., 2022a] used this approach to interpret the prediction of Transformer-based language models, and we follow a similar approach. ## 2.3 Wqk And Wvo Following [Elhage et al., 2021], we describe the attention module in terms of *interaction matrices* WQK and WVO which will be later used in our mathematical derivation. The computation of the attention module (§2.1) can be re-interpreted as follows. The attention projection matrices WQ, WK, WV can be split along the *column* axis to H equal parts denoted by WiQ , WiK , WiV ∈ R d× dH for 1 ≤ i ≤ H. Similarly, the attention output matrix WO can be split along the row axis into H heads, WiO ∈ R d H ×d. We define the interaction matrices as $$\begin{array}{r}{{W_{\mathrm{QK}}^{i}:=W_{\mathrm{Q}}^{i}W_{\mathrm{K}}^{i\mathrm{T}}\in\mathbb{R}^{d\times d},}}\\ {{W_{\mathrm{Vo}}^{i}:=W_{\mathrm{V}}^{i}W_{\mathrm{0}}^{i}\in\mathbb{R}^{d\times d}.}}\end{array}$$ Importantly, WiQK, WiVO are *input-independent*. Intuitively, WQK encodes the amount of attention between pairs of tokens. Similarly, in WiVO, the matrices WV and WO can be viewed as a transition matrix that determines how attending to certain tokens affects the subsequent hidden state. We can restate the attention equations in terms of the interaction matrices. Recall (Eq. 1) that the output of the i'th head of the attention module is AiV i att and the final output of the attention module is (without the residual connection): $\textbf{Concat}\begin{bmatrix}A^1V^1_\text{att},...,A^iV^i_\text{att},...,A^H V^H_\text{att}\end{bmatrix}W_\text{O}=\quad(2)\\ \\ \sum_{i=1}^H A^i(XW^i_\text{V})W^i_\text{O}=\sum_{i=1}^H A^i XW^i_\text{VO}.$ milick: the attention map, Ai at the i'th hard in terms. Similarly, the attention map Aiat the i'th head in terms of WQK is (softmax is done row-wise): $$A^{i}=\text{softmax}\left(\frac{(X W_{\mathbf{Q}}^{i})(X W_{\mathbf{K}}^{i})^{\text{T}}}{\sqrt{d/H}}+M\right)\tag{3}$$ $$=\text{softmax}\left(\frac{X(W_{\mathbf{QK}}^{i})X^{\text{T}}}{\sqrt{d/H}}+M\right).$$ 16126 ## 3 **Parameter Projection** In this section, we propose that Transformer parameters can be projected into embedding space for interpretation purposes. We empirically support our framework's predictions in §4-§5. Given a matrix A ∈ R N×d, we can project it into embedding space by multiplying by the embedding matrix E as Aˆ = AE ∈ R N×e. Let E′ be a right-inverse of E, that is, EE′ = I ∈ R d×d. 3 We can reconstruct the original matrix with E′as A = A(EE′) = AEˆ ′. We will use this simple identity to reinterpret the model's operation in embedding space. To simplify our analysis we ignore LayerNorm and biases. This has been justified in prior work [Elhage et al., 2021]. Briefly, LayerNorm can be ignored because normalization changes only magnitudes and not the direction of the update. At the end of this section, we discuss why in practice we choose to use E′ = ETinstead of a seemingly more appropriate right inverse, such as the pseudo-inverse [Moore, 1920; Bjerhammar, 1951; Penrose, 1955]. In this section, we derive our framework and summarize its predictions in Table 1. Attention Module Recall that WiVO := WiVWiO ∈ R d×dis the interaction matrix between attention values and the output projection matrix for attention head i. By definition, the output of each head is: AiXWiVO = AiXEˆ ′WiVO. Since the output of the attention module is added to the residual stream, we can assume according to the residual stream view that it is meaningful to project it to the embedding space, similar to FF values. Thus, we expect the sequence of N e-dimensional vectors (AiXWiVO)E = AiXˆ(E′WiVOE) to be interpretable. Importantly, the role of Aiis just to mix the representations of the updated N input vectors. This is similar to the FF module, where FF values (the parameters of the second layer) are projected into embedding space, and FF keys (parameters of the first layer) determine the *coefficients* for mixing them. Hence, we can assume that the interpretable components are in the term Xˆ(E′WiVOE). Zooming in on this operation, we see that it takes the previous hidden state in the embedding space (Xˆ) and produces an output in the embedding space which will be incorporated into the next hidden state through the residual stream. Thus, E′WiVOE is a *transition matrix* that takes a representation of the embedding space and outputs a new representation in the same space. Similarly, the matrix WiQK can be viewed as a bilinear map (Eq. 2.3). To interpret it in embedding space, we perform the following operation with E′: $$\begin{array}{c}{{X W_{\mathrm{{QK}}}^{i}X^{\mathrm{{T}}}=(X E E^{\prime})W_{\mathrm{{QK}}}^{i}(X E E^{\prime})^{\mathrm{{T}}}=}}\\ {{(X E)E^{\prime}W_{\mathrm{{QK}}}^{i}E^{\prime\mathrm{{T}}}(X E)^{\mathrm{{T}}}=\hat{X}(E^{\prime}W_{\mathrm{{QK}}}^{i}E^{\prime\mathrm{{T}}})\hat{X}^{\mathrm{{T}}}.}}\end{array}$$ Therefore, the interaction between tokens at different positions is determined by an e×e matrix that expresses 3E ′exists if d ≤ e and E is full-rank. the interaction between pairs of vocabulary items. FF Module [Geva et al., 2022b] showed that FF value vectors V ∈ R dff×dare meaningful when projected into embedding space, i.e., for a FF value vector v ∈ R d, vE ∈ R eis interpretable (see §2.1). In vectorized form, the rows of V E ∈ R dff×eare interpretable. On the other hand, the keys K of the FF layer are multiplied on the left by the output of the attention module, which are the queries of the FF layer. Denoting the output of the attention module by Q, we can write this product as QKT = QEˆ ′KT = Qˆ(KE′T) T. Because Q is a hidden state, we assume according to the residual stream view that Qˆ is interpretable in embedding space. When multiplying Qˆ by KE′T, we are capturing the interaction in embedding space between each query and key, and thus expect KE′Tto be interpretable in embedding space as well. Overall, FF keys and values are intimately connected - the i-th key controls the coefficient of the i-th value, so we expect their interpretation to be related. While not central to this work, we empirically show that keyvalue pairs in the FF module are similar in embedding space in Appendix B.1. Subheads Another way to interpret the matrices WiVO and WiQK is through the *subhead view*. We use the following identity: AB =Pb j=1 A:,jBj,:, which holds for arbitrary matrices A ∈ R a×b, B ∈ R b×c, where A:,j ∈ R a×1are the *columns* of the matrix A and Bj,: ∈ R 1×care the *rows* of the matrix B. Thus, we can decompose WiVO and WiQK into a sum of dH rank-1 matrices: $$W_{\mathrm{VO}}^{i}=\sum_{j=1}^{\frac{d}{H}}W_{\mathrm{V}}^{i,j}W_{\mathrm{O}}^{i,j},\quad W_{\mathrm{QK}}^{i}=\sum_{j=1}^{\frac{d}{H}}W_{\mathrm{Q}}^{i,j}W_{\mathrm{K}}^{i,j}{}^{\mathrm{T}}.$$ where W i,j Q , Wi,j K , Wi,j V ∈ R d×1are columns of WiQ , WiK , WiV respectively, and W i,j O ∈ R 1×dare the rows of WiO . We call these vectors *subheads*. This view is useful since it allows us to interpret subheads directly by multiplying them with the embedding matrix E. Moreover, it shows a parallel between interaction matrices in the attention module and the FF module. Just like the FF module includes key-value pairs as described above, for a given head, its interaction matrices are a sum of interactions between pairs of subheads (indexed by j), which are likely to be related in embedding space. We show this is indeed empirically the case for pairs of subheads in Appendix B.1. Choosing E′ = ETIn practice, we do not use an exact right inverse (e.g. the pseudo-inverse). We use the transpose of the embedding matrix E′ = ETinstead. The reason pseudo-inverse doesn't work is that for interpretation we apply a top-k operation after projecting to embedding space (since it is impractical for humans to read through a sorted list of 50K tokens). So, we only keep the list of the vocabulary items that have the k largest logits, for manageable values of k. | Symbol | Projection | Approximate Projection | | | |---------------------------|--------------|--------------------------|-----------|------| | FF values | v | vE | vE | | | FF keys | k | kE′T | kE | | | Attention query-key | Wi QK | E ′Wi QKE ′T | E TWi QKE | | | Attention value-output | Wi VO | E ′Wi VOE | ETWi VOE | | | Attention value subheads | Wi,j V | Wi,j | ′T | Wi,j | | V E | V E | | | | | Attention output subheads | Wi,j O | Wi,j O E | Wi,j O E | | | Attention query subheads | Wi,j Q | Wi,j Q E ′T | Wi,j Q E | | | Attention key subheads | Wi,j K | Wi,j | ′T | Wi,j | | K E | K E | | | | In Appendix A, we explore the exact requirements for E′to interact well with top-k. We show that the top k entries of a vector projected with the pseudo-inverse do not represent the entire vector well in embedding space. We define keep-k *robust invertibility* to quantify this. It turns out that empirically ETis a decent *keep-k* robust inverse for E in the case of GPT-2 medium (and similar models) for plausible values of k. We refer the reader to Appendix A for details. To give intuition as to why ET works in practice, we switch to a different perspective, useful in its own right. Consider the FF keys for example - they are multiplied on the left by the hidden states. In this section, we suggested to re-cast this as h T K = (h T E)(E′K). Our justification was that the hidden state is interpretable in the embedding space. A related perspective (dominant in previous works too; e.g. [Mickus et al., 2022]) is thinking of the hidden state as an aggregation of interpretable updates to the residual stream. That is, schematically, h =Pk i=1 αiri, where αi are scalars and ri are vectors corresponding to specific concepts in the embedding space (we roughly think of a concept as a list of tokens related to a single topic). Inner product is often used as a similarity metric between two vectors. If the similarity between a column Ki and h is large, the corresponding i-th output coordinate will be large. Then we can think of K as a *detector* of concepts where each neuron (column in K) lights up if a certain concept is "present" (or a superposition of concepts) in the inner state. To understand which concepts each detector column encodes we see which tokens it responds to. Doing this for all (input) token embeddings and packaging the inner products into a vector of scores is equivalent to simply multiplying by ET on the left (where E is the input embedding in this case, but for GPT-2 they are the same). A similar argument can be made for the interaction matrices as well. For example for WVO, to understand if a token embedding ei maps to a ej under a certain head, we apply the matrix to ei, getting e T i WVO and use the inner product as a similarity metric and get the score e T i WVOej . ## 4 **Interpretability Experiments** In this section, we provide empirical evidence for the viability of our approach as a tool for interpreting Transformer parameters. For our experiments, we use Huggingface Transformers ([Wolf et al., 2020]; License: Apache-2.0). ## 4.1 **Parameter Interpretation Examples** Attention Module We take GPT-2 medium (345M parameters; [Radford et al., 2019]) and manually analyze its parameters. GPT-2 medium has a total of 384 attention heads (24 layers and 16 heads per layer). We take the embedded transition matrices E′WiVOE for all heads and examine the top-k pairs of vocabulary items. As there are only 384 heads, we manually choose a few heads and present the top-k pairs in Appendix C.1 (k = 50). We observe that different heads capture different types of relations between pairs of vocabulary items including word parts, heads that focus on gender, geography, orthography, particular part-of-speech tags, and various semantic topics. In Appendix C.2 we perform a similar analysis for WQK. We supplement this analysis with a few examples from GPT-2 base and large (117M, 762M parameters - respectively) as proof of concept, similarly presenting interpretable patterns. A technical note: WVO operates on row vectors, which means it operates in a "transposed" way to standard intuition - which places inputs on the left side and outputs on the right side. It does not affect the theory, but when visualizing the top-k tuples, we take the transpose of the projection (E′WiVOE) Tto get the "natural" format (input token, output token). Without the transpose, we would get the *same* tuples, but in the format (output token, input token). Equivalently, in the terminology of linear algebra, it can be seen as a linear transformation that we represent in the basis of row vectors and we transform to the basis of column vectors, which is the standard one. FF Module Appendix C.3 provides examples of key-value pairs from the FF modules of GPT-2 medium. We show random pairs (*k, v*) from the set of those pairs such that when looking at the top-100 vocabulary items for k and v, at least 15% overlap. Such pairs account for approximately 5% of all key-value pairs. The examples show how key-value pairs often revolve around similar topics such as media, months, organs, etc. We again include additional examples from GPT-2 base and large. Knowledge Lookup Last, we show we can use embeddings to locate FF values (or keys) related to a par- ![5_image_0.png](5_image_0.png) ticular topic. We take a few vocabulary items related to a certain topic, e.g., ['cm', 'kg', 'inches'], average their embeddings,4and rank all FF values (or keys) based on their dot-product with the average. Appendix C.4 shows a few examples of FF values found with this method that are related to programming, measurements, and animals. 4.2 **Hidden State and Parameters** One merit of zero-pass interpretation is that it does not require running inputs through the model. Feeding inputs might be expensive and non-exhaustive. In this section and *in this section only*, we run a forward pass over inputs and examine if the embedding space representations of dynamically computed hidden states are "similar" to the representations of the activated static parameter vectors. Due to the small number of examples we run over, the overall GPU usage is still negligible. A technical side note: we use GPT-2, which applies LayerNorm to the Transformer output before projecting it to the embedding space with E. Thus, conservatively, LayerNorm should be considered as part of the projection operation. Empirically, however, we observe that projecting parameters directly without LayerNorm works well, which simplifies our analysis in §3. Unlike parameters, we apply LayerNorm to hidden states before projection to embedding space to improve interpretability. This nuance was also present in the 4We subtract the average embedding µ from E before averaging, which improves interpretability. Experimental Design We use GPT-2 medium and run it over 60 examples from IMDB (25,000 train, 25,000 test examples; [Maas et al., 2011]).5 This provides us with a dynamically-computed hidden state h for every token and at the output of every layer. For the projection hˆ ∈ R e of each such hidden state, we take the projections of the m most active parameter vectors {xˆi} m i=1 in the layer that computed h and check if they cover the dominant vocabulary items of hˆ in embedding space. Specifically, let top-k(wE) be the k vocabulary items with the largest logits in embedding space for a vector w ∈ R d. We compute: $$R_{k}({\hat{x}}_{1},...,{\hat{x}}_{m},{\hat{h}})={\frac{|\operatorname{top-k}({\hat{h}})\cap\bigcup_{i=1}^{m}\operatorname{top-k}({\hat{x}}_{i})|}{k}}$$ to capture if activated parameter vectors cover the main vocabulary items corresponding to the hidden state. We find the m most active parameter vectors separately for FF keys (K), FF values (V ), attention value subheads (WV) (see §3), and attention output subheads (WO), where the activation of each parameter vector is determined by the vector's "coefficient" as follows. For a FF key-value pair (*k, v*) the coefficient is σ(q Tk), where q ∈ R dis an input to the FF module, and σ is the FF non-linearity. For attention, value-output subhead pairs (*v, o*) the coefficient is x Tv, where x is the 5Note that IMDB was designed for sentiment analysis and we use it here as a general-purpose corpus. input to this component (for attention head i, the input is one of the rows of AiX, see Eq. 3). Results and Discussion Figure 2 presents the Rk score averaged across tokens per layer. As a baseline, we compare Rk of the activated vectors {xˆi} m i=1 of the correctly-aligned hidden state hˆ at the output of the relevant layer (blue bars) against the Rk when *randomly* sampling hˆrand from all the hidden states (orange bars). We conclude that representations in embedding space induced by activated parameter vector mirror, at least to some extent, the representations of the hidden states themselves. Appendix §B.2 shows a variant of this experiment, where we compare activated parameters throughout GPT-2 medium's layers to the *last* hidden state, which produces the logits used for prediction. ## 4.3 **Interpretation Of Fine-Tuned Models** We now show that we can interpret the *changes* a model goes through during fine-tuning through the lens of embedding space. We fine-tune the top-3 layers of the 12layer GPT-2 base (117M parameters) with a sequence classification head on IMDB sentiment analysis (binary classification) and compute the difference between the original parameters and the fine-tuned model. We then project the difference of parameter vectors into embedding space and test if the change is interpretable w.r.t. sentiment analysis. Appendix D shows examples of projected differences randomly sampled from the fine-tuned layers. Frequently, the difference or its negation is projected to nouns, adjectives, and adverbs that express sentiment for a movie, such as 'amazing', 'masterpiece', *'incompetence'*, etc. This shows that the differences are indeed projected into vocabulary items that characterize movie reviews' sentiments. This behavior is present across WQ, WK, WV, K, but not V and WO, which curiously are the parameters added to the residual stream and not the ones that react to the input directly. ## 5 **Aligning Models In Embedding Space** The assumption Transformers operate in embedding space leads to an exciting possibility - we can relate different models to one another so long as they share the vocabulary and tokenizer. In §5.1, we show that we can align the layers of BERT models trained with different random seeds. In §5.2, we show the embedding space can be leveraged to "stitch" the parameters of a fine-tuned model to a model that was not fine-tuned. ## 5.1 **Layer Alignment** Experimental Design Taking our approach to the extreme, the embedding space is a universal space, which depends only on the tokenizer, in which Transformer parameters and hidden states reside. Thus, we can align parameter vectors from different models in this space and compare them even if they come from different models, as long as they share a vocabulary. To demonstrate this, we use MultiBERTs ([Sellam et al., 2022]; License: Apache-2.0), which contains 25 different instantiations of BERT-base (110M parameters) initialized from different random seeds.6 We take parameters from two MultiBERT seeds and compute the correlation between their projections to embedding space. For example, let VA, VB be the FF values of models A and B. We can project the values into embedding space: VAEA, VBEB, where EA, EB are the respective embedding matrices, and compute Pearson correlation between projected values. This produces a similarity matrix S ∈ ˜ R|VA|×|VB|, where each entry is the correlation coefficient between projected values from the two models. We bin S˜ by layer pairs and average the absolute value of the scores in each bin (different models might encode the same information in different directions, so we use absolute value) to produce a matrix S ∈ R L×L, where L is the number of layers – that is, the average (absolute) correlation between vectors that come from layer ℓA in model A and layer ℓB in Model B is registered in entry (ℓA, ℓB) of S. Last, to obtain a one-to-one layer alignment, we use the Hungarian algorithm [Kuhn, 1955], which assigns exactly one layer from the first model to a layer from the second model. The algorithm's objective is to maximize, given a similarity matrix S, the sum of scores of the chosen pairs, such that each index in one model is matched with exactly one index in the other. We repeat this for all parameter groups (WQ, WK, WV, WO, K). Results and Discussion Figure 3 (left) shows the resulting alignment. Clearly, parameters from a certain layer in model A tend to align to the same layer in model B across all parameter groups. This suggests that different layers from different models that were trained separately (but with the same training objective and data) serve a similar function. As further evidence, we show that if not projected, the matching appears absolutely random in Figure §3 (right). We show the same results for other seed pairs as well in Appendix B.3. ## 5.2 **Zero-Shot Stitching** Model stitching [Lenc and Vedaldi, 2015; Csiszarik ´ et al., 2021; Bansal et al., 2021] is a relatively underexplored feature of neural networks, particularly in NLP. The idea is that different models, even with different architectures, can learn representations that can be aligned through a *linear* transformation, termed *stitching*. Representations correspond to hidden states, and thus one can learn a transformation matrix from one model's hidden states to an equivalent hidden state in the other model. Here, we show that going through embedding space one can align the hidden states of two models, i.e., stitch, *without training*. Given two models, we want to find a linear stitching transformation to align their representation spaces. 6Estimated compute costs: around 1728 TPU-hours for each pre-training run, and around 208 GPU-hours plus 8 TPU-hours for associated fine-tuning experiments. ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) ![7_image_2.png](7_image_2.png) According to our theory, given a hidden state v ∈ R d1 from model A, we can project it to the embedding space as vEA, where EA is its embedding matrix. Then, we can re-project to the feature space of model B, with E + B ∈ R e×d2, where E + B is the Penrose-Moore pseudoinverse of the embedding matrix EB. 7 This transformation can be expressed as multiplication with the kernel KAB := EAE + B ∈ R d1×d2. We employ the above approach to take representations of a fine-tuned classifier, A, and stitch them on top of a model B that was only pretrained, to obtain a new classifier based on B. Experimental Design We use the 24-layer GPT-2 medium as model A and 12-layer GPT-2 base model trained in §4.3 as model B. We fine-tune the last three layers of model B on IMDB, as explained in §4.3. Stitching is simple and is performed as follows. Given the sequence of N hidden states HℓA ∈ R N×d1 at the output of layer ℓ of model A (ℓ is a hyperparameter), we apply the *stitching layer*, which multiplies the hidden states with the kernel, computing HℓAKAB. This results in hidden states HB ∈ R N×d2, used as input to the three fine-tuned layers from B. Results and Discussion Stitching produces models with accuracies that are higher than random on IMDB evaluation set, but not consistently. Figure 4 shows the accuracy of stitched models against the layer index from model A over which stitching is performed. 7Since we are not interested in interpretation we use an exact right-inverse and not the transpose. Out of 11 random seeds, three models obtained accuracy that is significantly higher than the baseline 50% accuracy, reaching an accuracy of roughly 70%, when stitching is done over the top layers. ## 6 **Related Work** Interpreting Transformers is a broad area of research that has attracted much attention in recent years. A large body of work has focused on analyzing hidden representations, mostly through probing [Adi et al., 2016; Shi et al., 2016; Tenney et al., 2019; Rogers et al., 2020]. [Voita et al., 2019a] used statistical tools to analyze the evolution of hidden representations throughout layers. Recently, [Mickus et al., 2022] proposed to decompose the hidden representations into the contributions of different Transformer components. Unlike these works, we interpret parameters rather than the hidden representations. Another substantial effort has been to interpret specific network components. Previous work analyzed single neurons [Dalvi et al., 2018; Durrani et al., 2020], attention heads [Clark et al., 2019; Voita et al., 2019b], and feedforward values [Geva et al., 2020; Dai et al., 2021; Elhage et al., 2022]. While these works mostly rely on input-dependent neuron activations, we inspect "static" model parameters, and provide a comprehensive view of all Transformer components. Our work is most related to efforts to interpret specific groups of Transformer parameters. [Cammarata et al., 2020] made observations about the interpretability of weights of neural networks. [Elhage et al., 2021] analyzed 2-layer attention networks. We extend their analysis to multi-layer pre-trained Transformer models. [Geva et al., 2020, 2022a,b] interpreted feedforward values in embedding space. We coalesce these lines of work and offer a unified interpretation framework for Transformers in embedding space. ## 7 **Discussion** While our work has limitations (see §8), we think the benefits of our work overshadow its limitations. We provide a simple approach and a new set of tools to interpret Transformer models and compare them. The realm of input-independent interpretation methods is still nascent and it might provide a fresh perspective on the internals of the Transformer, one that allows to glance intrinsic properties of specific parameters, disentangling their dependence on the input. Moreover, many models are prohibitively large for practitioners to run. Our method requires only a fraction of the compute and memory requirements, and allows interpreting a single parameter in isolation. Importantly, our framework allows us to view parameters from different models as residents of a canonical embedding space, where they can be compared in model-agnostic fashion. This has interesting implications. We demonstrate two consequences of this observation (model alignment and stitching) and argue future work can yield many more use cases. ## 8 **Limitations** Our work has a few limitations that we care to highlight. First, it focuses on interpreting models through the vocabulary lens. While we have shown evidence for this, it does not preclude other factors from being involved. Second, we used E′ = ET, but future research may find variants of E that improve performance. Additionally, most of the work focused on GPT-2. This is due to shortcomings in the current state of our framework, as well as for clear presentation. We believe nonlinearities in language modeling are resolvable, as is indicated in the experiment with BERT. In terms of potential bias in the framework, some parameters might consider terms related to each due to stereotypes learned from the corpus. ## References Y. Adi, E. Kermany, Y. Belinkov, O. Lavi, and Y. Goldberg. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks, 2016. URL https://arxiv.org/abs/1608.04207. A. Baevski, H. Zhou, A. Mohamed, and M. Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations, 2020. URL https: //arxiv.org/abs/2006.11477. Y. Bansal, P. Nakkiran, and B. Barak. Revisiting model stitching to compare neural representations. In *NeurIPS*, 2021. A. Bjerhammar. Application of calculus of matrices to method of least squares : with special reference to geodetic calculations. In Trans. Roy. Inst. Tech. Stockholm, 1951. N. Cammarata, S. Carter, G. Goh, C. Olah, M. Petrov, L. Schubert, C. Voss, B. Egan, and S. K. Lim. Thread: Circuits. *Distill*, 2020. doi: 10.23915/ distill.00024. https://distill.pub/2020/circuits. M. Chen, A. Radford, R. Child, J. Wu, H. Jun, D. Luan, and I. Sutskever. Generative pretraining from pixels. In H. D. III and A. Singh, editors, Proceed- ings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pages 1691–1703. PMLR, 13–18 Jul 2020. URL https://proceedings. mlr.press/v119/chen20s.html. K. Clark, U. Khandelwal, O. Levy, and C. D. Manning. What does BERT look at? an analysis of bert's attention. *CoRR*, abs/1906.04341, 2019. URL http://arxiv.org/abs/1906.04341. A. Csiszarik, P. Kor ´ osi-Szab ¨ o,´ A. K. Matszangosz, ´ G. Papp, and D. Varga. Similarity and matching of neural network representations. In *NeurIPS*, 2021. D. Dai, L. Dong, Y. Hao, Z. Sui, B. Chang, and F. Wei. Knowledge neurons in pretrained transformers, 2021. URL https://arxiv.org/abs/ 2104.08696. F. Dalvi, N. Durrani, H. Sajjad, Y. Belinkov, A. Bau, and J. Glass. What is one grain of sand in the desert? analyzing individual neurons in deep nlp models, 2018. URL https://arxiv.org/ abs/1812.09355. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2018. URL https: //arxiv.org/abs/1810.04805. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020. URL https://arxiv.org/abs/2010.11929. N. Durrani, H. Sajjad, F. Dalvi, and Y. Belinkov. Analyzing individual neurons in pre-trained language models. *CoRR*, abs/2010.02695, 2020. URL https://arxiv.org/abs/2010.02695. N. Elhage, N. Nanda, C. Olsson, T. Henighan, N. Joseph, B. Mann, A. Askell, Y. Bai, A. Chen, T. Conerly, N. DasSarma, D. Drain, D. Ganguli, Z. Hatfield-Dodds, D. Hernandez, A. Jones, J. Kernion, L. Lovitt, K. Ndousse, D. Amodei, T. Brown, J. Clark, J. Kaplan, S. McCandlish, and C. Olah. A mathematical framework for transformer circuits, 2021. URL https://transformer-circuits.pub/ 2021/framework/index.html. N. Elhage, T. Hume, C. Olsson, N. Nanda, T. Henighan, S. Johnston, S. ElShowk, N. Joseph, N. DasSarma, B. Mann, D. Hernandez, A. Askell, K. Ndousse, A. Jones, D. Drain, A. Chen, Y. Bai, D. Ganguli, L. Lovitt, Z. Hatfield-Dodds, J. Kernion, T. Conerly, S. Kravec, S. Fort, S. Kadavath, J. Jacobson, E. Tran-Johnson, J. Kaplan, J. Clark, T. Brown, S. McCandlish, D. Amodei, and C. Olah. Softmax linear units. *Transformer Circuits Thread*, 2022. https://transformercircuits.pub/2022/solu/index.html. K. Ethayarajh. How contextual are contextualized word representations? comparing the geometry of bert, elmo, and gpt-2 embeddings, 2019. URL https://arxiv.org/abs/1909.00512. J. Gao, D. He, X. Tan, T. Qin, L. Wang, and T. Liu. Representation degeneration problem in training natural language generation models. In International Conference on Learning Representations, 2019. URL https://openreview. net/forum?id=SkEYojRqtm. M. Geva, R. Schuster, J. Berant, and O. Levy. Transformer feed-forward layers are key-value memories, 2020. URL https://arxiv.org/abs/ 2012.14913. M. Geva, A. Caciularu, G. Dar, P. Roit, S. Sadde, M. Shlain, B. Tamir, and Y. Goldberg. Lmdebugger: An interactive tool for inspection and intervention in transformer-based language models. arXiv preprint arXiv:2204.12130, 2022a. M. Geva, A. Caciularu, K. R. Wang, and Y. Goldberg. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space, 2022b. URL https://arxiv.org/ abs/2203.14680. P. Jaccard. The distribution of the flora in the alpine zone. *The New Phytologist*, 11(2):37–50, 1912. ISSN 0028646X, 14698137. URL http://www. jstor.org/stable/2427226. H. W. Kuhn. The hungarian method for the assignment problem. *Naval research logistics quarterly*, 2(1-2): 83–97, 1955. K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. *2015 IEEE Conference on Computer* Vision and Pattern Recognition (CVPR), pages 991– 999, 2015. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. Roberta: A robustly optimized bert pretraining approach, 2019. URL https://arxiv.org/ abs/1907.11692. A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In *Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies*, pages 142– 150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http:// www.aclweb.org/anthology/P11-1015. T. Mickus, D. Paperno, and M. Constant. How to dissect a muppet: The structure of transformer embedding spaces. *arXiv preprint arXiv:2206.03529*, 2022. E. H. Moore. On the reciprocal of the general algebraic matrix. *Bull. Am. Math. Soc.*, 26:394–395, 1920. nostalgebraist. interpreting gpt: the logit lens, 2020. URL https://www.lesswrong. com/posts/AcKRB8wDpdaN6v6ru/ interpreting-gpt-the-logit-lens. https://www.lesswrong.com/ posts/AcKRB8wDpdaN6v6ru/ interpreting-gpt-the-logit-lens. R. Penrose. A generalized inverse for matrices. In Mathematical proceedings of the Cambridge philosophical society, volume 51, pages 406–413. Cambridge University Press, 1955. O. Press and L. Wolf. Using the output embedding to improve language models, 2016. URL https:// arxiv.org/abs/1608.05859. A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. In *OpenAI blog*, 2019. C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-totext transformer, 2019. URL https://arxiv. org/abs/1910.10683. A. Rogers, O. Kovaleva, and A. Rumshisky. A primer in bertology: What we know about how bert works, 2020. URL https://arxiv.org/abs/ 2002.12327. W. Rudman, N. Gillman, T. Rayne, and C. Eickhoff. Isoscore: Measuring the uniformity of vector space utilization. *CoRR*, abs/2108.07344, 2021. URL https://arxiv.org/abs/2108.07344. T. Sellam, S. Yadlowsky, I. Tenney, J. Wei, N. Saphra, A. D'Amour, T. Linzen, J. Bastings, I. R. Turc, J. Eisenstein, D. Das, and E. Pavlick. The multiBERTs: BERT reproductions for robustness analysis. In *International Conference on Learning Representations*, 2022. URL https://openreview. net/forum?id=K0E_F0gFDgA. X. Shi, I. Padhi, and K. Knight. Does string-based neural MT learn source syntax? In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526–1534, Austin, Texas, Nov. 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1159. URL https://aclanthology.org/D16-1159. S. Sukhbaatar, E. Grave, G. Lample, H. Jegou, and A. Joulin. Augmenting self-attention with persistent memory. *arXiv preprint arXiv:1907.01470*, 2019. I. Tenney, D. Das, and E. Pavlick. BERT rediscovers the classical NLP pipeline. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 4593–4601, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1452. URL https://aclanthology.org/P19-1452. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need, 2017. URL https://arxiv.org/abs/1706.03762. E. Voita, R. Sennrich, and I. Titov. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives, 2019a. URL https://arxiv. org/abs/1909.01380. E. Voita, D. Talbot, F. Moiseev, R. Sennrich, and I. Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In *Proceedings of the 57th Annual* Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy, July 2019b. Association for Computational Linguistics. doi: 10.18653/v1/P19-1580. URL https:// aclanthology.org/P19-1580. L. Wang, J. Huang, K. Huang, Z. Hu, G. Wang, and Q. Gu. Improving neural language generation with spectrum control. In International Conference on Learning Representations, 2020. URL https:// openreview.net/forum?id=ByxY8CNtvr. T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing: System Demonstrations, pages 38–45. Association for Computational Linguistics, October 2020. URL https://www.aclweb.org/ anthology/2020.emnlp-demos.6. S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V. Lin, T. Mihaylov, M. Ott, S. Shleifer, K. Shuster, D. Simig, P. S. Koura, A. Sridhar, T. Wang, and L. Zettlemoyer. Opt: Open pre-trained transformer language models, 2022. URL https://arxiv.org/abs/ 2205.01068. ## A **Rethinking Interpretation** ![11_image_0.png](11_image_0.png) The process of interpreting a vector v in [Geva et al., 2022b] proceeds in two steps: first the *projection* of the vector to the embedding space (vE); then, we use the list of the tokens that were assigned the largest values in the projected vector, i.e.: top-k(vE), as the *interpretation* of the projected vector. This is reasonable since (a) the most activated coordinates contribute the most when added to the residual stream, and (b) this matches how we eventually decode: we project to the embedding space and consider the top-1 token (or one of the few top tokens, when using beam search). In this work, we interpret inner products and matrix multiplications in the embedding space: given two vectors x, y ∈ R d, their inner product x Ty can be considered in the embedding space by multiplying with E and then by one of its right inverses (e.g., its pseudo-inverse E+ [Moore, 1920; Bjerhammar, 1951; Penrose, 1955]): x Ty = x TEE+y = (x TE)(E+y). Assume xE is interpretable in the embedding space, crudely meaning that it represents logits over vocabulary items. We expect y, which interacts with x, to also be interpretable in the embedding space. Consequently, we would like to take E+y to be the projection of y. However, this projection does not take into account the subsequent interpretation using top-k. The projected vector E+y might be harder to interpret in terms of its most activated tokens. To alleviate this problem, we need a different "inverse" matrix E′that works well when considering the top-k operation. Formally, we want an E′ with the following "robustness" guarantee: keep-k(x TE)keep-k(E′y) ≈ x Ty, where keep-k(v) is equal to v for coordinates whose absolute value is in the top-k, and zero elsewhere. This is a stronger notion of inverse - not only is EE′ ≈ I, but even when truncating the vector in the embedding space we can still reconstruct it with E′. We claim that ETis a decent instantiation of E′and provide some empirical evidence. While a substantive line of work [Ethayarajh, 2019; Gao et al., 2019; Wang et al., 2020; Rudman et al., 2021] has shown that embedding matrices are not isotropic (an isotropic matrix E has to satisfy EET = αI for some scalar α), we show that it is isotropic enough to make ETa legitimate compromise. We randomly sample 300 vectors drawn from the normal distribution N (0, 1), and compute for every pair *x, y* the cosine similarity between x Ty and keep-k(x TE)keep-k(E′y) for k = 1000, and then average over all pairs. We repeat this for E′ ∈ {E+, ET} and obtain a score of 0.10 for E+, and 0.83 for ET, showing the ETis better under when using top-k. More globally, we compare E′ ∈ {E+, ET} for k ∈ {10, 50, 100, 200, 300, 500} with three distributions: - *x, y* drawn from the normal N (0, 1) distribution - *x, y* chosen randomly from the FF values - *x, y* drawn from hidden states along Transformer computations. In Figure 5 we show the results, where dashed lines represent E+ and solid lines represent ET. The middle row shows the plots for GPT-2 medium, which is the main concern of this paper. For small values of k (which are more appropriate for interpretation), ETis superior to E+ across all distributions. Interestingly, the hidden state distribution is the only distribution where E+ has similar performance to ET. Curiously, when looking at higher values of k the trend is reversed (k = {512, 1024, 2048, 4096, 10000, 15000, 20000, 30000}) - see Figure 5 (Right). This settles the deviation from findings showing embedding matrices are not isotropic, as we see that indeed as k grows, ET becomes an increasingly bad approximate right-inverse of the embedding matrix. The only distribution that keeps high performance with ETis the hidden state distribution, which is an interesting direction for future investigation. For completeness, we provide the same analysis for GPT-2 base and large in Figure 5. We can see that GPT-2 base gives similar conclusions. GPT-2 large, however, seems to show a violent zigzag movement for E+ but for most values it seems to be superior to ET. It is however probably best to use ETsince it is more predictable. This zigzag behavior is very counter-intuitive and we leave it for future work to decipher. ## B **Additional Material** B.1 **Corresponding Parameter Pairs Are Related** We define the following metric applying on vectors *after projecting* them into the embedding space: $\text{Sim}_{k}(\hat{x},\hat{y})=\frac{|\text{top}-\text{k}(\hat{x})\cap\text{top}-\text{k}(\hat{y})|}{|\text{top}-\text{k}(\hat{x})\cup\text{top}-\text{k}(\hat{y})|}$. where top-k(v) is the set of k top activated indices in the vector v (which correspond to tokens in the embedding space). This metric is the Jaccard index [Jaccard, 1912] applied to the top-k tokens from each vector. In Figure 6, Left, we demonstrate that FF key vectors and their corresponding value vectors are more similar (in embedding space) than two random key and value vectors. In Figure 6, Right, we show a similar result for attention value and output vectors. In Figure 6, Bottom, the same analysis is done for attention query and key vectors. This shows that there is a much higher-than-chance relation between corresponding FF keys and values (and the same for attention values and outputs). ## B.2 **Final Prediction And Parameters** We show that the final prediction of the model is correlated in embedding space with the most activated parameters from each layer. This implies that these objects are germane to the analysis of the final prediction in the embedding space, which in turn suggests that the embedding space is a viable choice for interpreting these vectors. Figure 7 shows that just like §4.2, correspondence is better when hidden states are not randomized, suggesting their parameter interpretations have an impact on the final prediction. ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) B.3 **Parameter Alignment Plots for Additional Model Pairs** Alignment in embedding space of layers of pairs of BERT models trained with different random seeds for additional model pairs. Seed 1 VS Seed 2 ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) ![14_image_3.png](14_image_3.png) ![14_image_2.png](14_image_2.png) ![14_image_4.png](14_image_4.png) ![14_image_5.png](14_image_5.png) ![14_image_6.png](14_image_6.png) ![14_image_7.png](14_image_7.png) ## C **Example Cases** C.1 Wvo **Matrices** Below we show output-value pairs from different heads of GPT-2 medium. For each head, we show the 50 pairs with the largest values in the e × e transition matrix. There are 384 attention heads in GPT-2 medium from which we manually choose a subset. Throughout the section some lists are marked with asterisks indicating the way this particular list was created: * - pairs of the form (*x, x*) were excluded from the list ** - pairs where both items are present in the corpus (we use IMDB training set). Along with GPT-2 medium, we also provide a few examples from GPT-2 base and GPT-2 large. ## C.1.1 **Low-Level Language Modeling** GPT-2 Medium - Layer 21 Head 7* ('NF', 'FN'), ('Ram', ' Ramos'), ('Hug', ' Hughes'), ('gran', 'GR'), ('FN', 'NF'), ('CLA', 'CL'), ('McC', ' McCain'), ('Marsh', ' Marshall'), (' Hughes', 'Hug'), ('Tan', ' Tanner'), ('nih', 'NH'), ('NRS', 'NR'), (' Bowman', 'Bow'), (' Marshall', 'Marsh'), ('Jac', ' Jacobs'), ('Hay', ' Hayes'), (' Hayes', 'Hay'), ('McC', ' McCorm'), ('NI', 'NR'), (' sidx', ' Dawson'), (' Tanner', 'Tan'), ('gra', 'GR'), ('JA', 'jac'), ('zos', 'zo'), ('NI', 'NF'), ('McC', ' McCull'), (' Jacobs', 'Jac'), (' Beetle', ' Beet'), ('GF', 'FG'), ('jas', 'ja'), ('Wil', ' Wilkinson'), (' Ramos', 'Ram'), ('GRE', 'GR'), (' NF', 'FN'), (' McCorm', 'McC'), ('Scar', ' Scarborough'), (' Baal', 'Ba'), ('FP', 'FG'), ('FH', 'FN'), (' Garfield', 'Gar'), ('jas', 'jac'), ('nuts', 'nut'), ('WI', ' Wis'), (' Vaughn', ' Vaughan'), ('FP', 'PF'), ('RNA', 'RN'), (' Jacobs', 'jac'), ('FM', 'FN'), (' Knox', 'Kn'), ('NI', 'nic') GPT-2 Medium - Layer 19 Head 13 (first letter/consonant of the word and last token of the word) (' R', 'senal'), \# arsenal ('senal', 'R'), (' G', 'vernment'), \# government (' Madness', ' M'), (' M', ' Mayhem'), (' W', 'nesday'), \# wednesday ('vernment', 'G'), ('M', ' Madness'), (' N', 'lace'), \# necklace ('nesday', 'W'), ('Rs', 'senal'), (' g', 'vernment'), (' N', 'farious'), \# nefarious ('eneg', ' C'), (' r', 'senal'), (' F', 'ruary'), \# february ('senal', 'RIC'), (' R', 'ondo'), (' N', ' Mandela'), \# nelson (' Mayhem', 'M'), (' RD', 'senal'), (' C', 'estine'), ('Gs', 'vernment'), ('RF', 'senal'), (' N', 'esis'), (' N', 'Reviewed'), (' C', 'arette'), \# cigarette ('rome', ' N'), (' N', 'theless'), \# nonetheless ('lace', 'N'), (' H', 'DEN'), (' V', ' versa'), (' P', 'bably'), \# probably ('vernment', 'GF'), ('g', 'vernment'), ('GP', 'vernment'), (' C', 'ornia'), \# california ('ilipp', ' F'), (' N', 'umbered'), (' C', 'arettes'), ('RS', 'senal'), (' N', 'onsense'), ('RD', 'senal'), ('RAL', 'senal'), (' F', 'uci'), ('R', 'ondo'), (' RI', 'senal'), (' H', 'iday'), \# holiday ('senal', ' Rx'), (' F', 'odor') GPT-2 Medium - Layer 20 Head 9 ('On', ' behalf'), (' On', ' behalf'), (' on', ' behalf'), ('during', ' periods'), ('within', ' bounds'), (' inside', ' envelope'), ('outside', 'door'), ('inside', ' envelope'), (' Under', ' regime'), | (' during', ' periods'), (' LIKE', 'lihood'), (' on', ' occasions'), ('Under', ' regime'), ('inside', 'door'), ('during', 'period'), ('Like', 'lihood'), (' During', ' periods'), ('Inside', ' envelope'), ('for', ' sake'), (' inside', ' doors'), (' under', ' regime'), (' ON', ' behalf'), ('for', ' purposes'), ('On', ' occasions'), ('inside', ' doors'), (' on', ' basis'), (' Under', ' regimes'), ('outside', 'doors'), ('inside', ' Osc'), ('During', ' periods'), (' inside', 'door'), (' UNDER', ' regime'), (' under', ' regimes'), ('Under', ' regimes'), ('inside', 'doors'), ('inside', 'zx'), ('during', ' period'), ('inside', 'ascript'), ('Inside', 'door'), (' On', ' occasions'), ('BuyableInstoreAndOnline', 'ysc'), (' Inside', ' envelope'), ('during', ' pauses'), ('under', ' regime'), (' on', ' occasion'), ('outside', ' doors'), (' UNDER', ' banner'), ('within', ' envelope'), (' here', 'abouts'), ('during', ' duration') GPT-2 Base - Layer 10 Head 11** (' sources', 'ources') (' repertoire', ' reperto') (' tales', ' stories') (' stories', ' tales') (' journals', ' magazines') ('stories', ' tales') (' journal', ' journals') (' magazines', 'Magazine') (' magazines', ' newspapers') (' reperto', ' repertoire') (' cameras', ' Camer') (' source', ' sources') (' newspapers', ' magazines') (' position', ' positions') (' tale', ' tales') (' positions', ' position') (' obstacles', ' hurdles') (' chores', ' tasks') (' journals', ' papers') (' role', ' roles') (' hurdles', ' obstacles') (' journals', ' journal') (' windows', ' doors') (' ceiling', ' ceilings') (' loophole', ' loopholes') (' Sources', 'ources') ('source', ' sources') | (' documentaries', ' films') (' microphone', ' microphones') (' cameras', ' camera') ('Journal', ' journals') (' restrooms', ' bathrooms') (' tasks', ' chores') (' perspectives', ' viewpoints') (' shelf', ' shelves') (' rooms', ' bedrooms') (' hurdle', ' hurdles') (' barriers', ' fences') (' magazines', ' journals') (' journals', 'Magazine') (' sources', ' source') (' manuals', ' textbooks') (' story', ' stories') (' labs', ' laboratories') (' tales', ' Stories') (' chores', ' duties') (' roles', ' role') (' ceilings', ' walls') (' microphones', ' microphone') (' pathway', ' pathways') GPT-2 Large - Layer 27 Head 6 (' where', 'upon'), ('where', 'upon'), ('with', ' regard'), ('with', ' regards'), (' with', ' regards'), (' Where', 'upon'), (' Like', 'lihood'), ('of', ' course'), (' with', ' regard'), (' LIKE', 'lihood'), ('Where', 'upon'), ('from', ' afar'), ('with', 'stood'), (' FROM', ' afar'), (' like', 'lihood'), (' WHERE', 'upon'), ('Like', 'lihood'), (' with', 'stood'), (' of', ' course'), ('of', 'course'), ('Of', ' course'), (' from', ' afar'), (' WITH', ' regard'), (' where', 'abouts'), ('with', ' impunity'), (' WITH', ' regards'), ('With', 'stood'), ('for', ' purposes'), ('with', ' respect'), (' With', 'stood'), ('like', 'lihood'), (' Of', ' course'), ('With', ' regard'), (' With', ' regard'), ('where', 'abouts'), (' WITH', 'stood'), ('With', ' regards'), (' OF', ' course'), (' From', ' afar'), (' with', ' impunity'), (' With', ' regards'), (' with', ' respect'), ('From', ' afar'), ('with', 'standing'), (' on', ' behalf'), | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 16140 | | | (' by', 'products'), (' for', ' purposes'), (' or', 'acle'), ('for', ' sake'), (' with', 'standing') | (' herself', 'Maria'), (' her', 'Maria'), (' herself', ' Anne'), ('She', 'Maria'), (' hers', ' Louise'), (' herself', ' Louise'), (' hers', ' Anne'), (' hers', 'pher'), ('she', 'Maria'), (' actress', ' actresses'), (' herself', ' Isabel'), (' herself', 'pher'), (' she', 'Maria'), (' SHE', ' Marie'), (' herself', ' Gloria'), (' herself', ' Amanda'), (' Ivanka', ' Ivanka'), (' her', ' Louise'), (' herself', ' Kate'), (' her', 'pher'), (' her', ' Anne'), (' she', 'pher'), ('she', ' Louise'), (' herself', 'Kate'), (' she', ' Louise'), (' she', ' Anne'), (' She', ' Marie'), ('she', ' Gloria'), ('She', ' Louise'), (' hers', ' Gloria'), (' herself', ' Diana'), ('She', ' Gloria'), ('she', ' Anne'), ('she', 'pher'), ('Her', ' Marie'), (' she', ' Gloria'), (' Paleo', ' Paleo'), (' hers', ' Diana') GPT-2 Base - Layer 9 Head 7** (' her', ' herself') ('She', ' herself') (' she', ' herself') ('she', ' herself') ('Her', ' herself') (' She', ' herself') (' SHE', ' herself') ('their', ' themselves') (' hers', ' herself') ('Their', ' themselves') (' Her', ' herself') (' Their', ' themselves') (' THEIR', ' themselves') (' HER', ' herself') (' their', ' themselves') ('They', ' themselves') ('His', ' himself') (' herself', 'erest') ('they', ' themselves') ('his', ' himself') ('Their', 'selves') (' They', ' themselves') (' herself', ' Louise') ('their', 'selves') ('her', ' herself') (' his', ' himself') (' herself', ' Marie') ('He', ' himself') ('She', ' Louise') (' they', ' themselves') | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | C.1.2 | Gender | | GPT-2 Medium - Layer 18 Head 1 ('women', ' Marie'), (' actresses', ' Marie'), ('women', ' Anne'), ('Women', ' Anne'), ('woman', ' Marie'), ('Women', ' Marie'), ('woman', ' Anne'), ('Woman', ' Marie'), (' actresses', ' Anne'), (' heroine', ' Marie'), ('Women', 'Jane'), (' heroine', ' Anne'), ('women', 'Jane'), ('Women', ' actresses'), ('Woman', ' Anne'), ('Women', ' Esther'), ('women', ' Esther'), ('girls', ' Marie'), ('Mrs', ' Anne'), (' actress', ' Marie'), ('women', ' actresses'), ('Woman', 'Jane'), (' girls', ' Marie'), (' actresses', 'Jane'), ('Woman', 'Anne'), ('Girls', ' Marie'), ('women', 'Anne'), ('Girls', ' Anne'), ('Woman', ' actresses'), (' Women', ' Marie'), (' Women', ' Anne'), (' girls', ' Anne'), ('girl', ' Anne'), ('Women', 'Anne'), ('Woman', 'Women'), ('girls', ' Anne'), (' actresses', 'Anne'), ('women', ' Michelle'), (' Actress', ' Marie'), ('girl', ' Marie'), (' Feminist', ' Anne'), (' women', ' Marie'), ('Women', ' Devi'), ('Women', ' Elizabeth'), (' actress', ' Anne'), ('Mrs', 'Anne'), ('answered', 'Answer'), ('woman', 'Anne'), ('Woman', 'maid'), ('women', 'Marie') GPT-2 Large - Layer 27 Head 12 (' herself', ' Marie'), (' hers', ' Marie'), ('she', ' Marie'), (' she', ' Marie'), (' her', ' Marie'), ('She', ' Marie'), (' hers', 'Maria'), (' actresses', ' actresses'), | | ('their', 'chairs') (' herself', ' dow') (' herself', 'eva') (' THEY', ' themselves') (' herself', ' Mae') (' His', ' himself') ('clinton', 'enegger') ('She', 'erest') (' her', ' Louise') (' herself', ' Devi') (' Their', 'selves') ('Their', 'chairs') (' Himself', 'enegger') (' she', ' Louise') (' herself', ' Anne') ('Its', ' itself') (' her', 'erest') (' herself', ' Christina') ('she', 'erest') ('their', ' selves') | (' Halifax', ' Scotia') ('Saudi', ' Arabia') (' Nova', ' Scotia') (' Tamil', ' Nadu') (' Finnish', 'onen') (' Saudi', ' Arabia') ('Pitt', 'sburgh') ('Dutch', 'ijk') (' Schwartz', 'enegger') (' Afghans', ' Kabul') (' Icelandic', 'sson') (' Finland', 'onen') ('Pitt', 'enegger') (' Czech', 'oslov') (' Manitoba', ' Winnipeg') (' Malaysian', ' Lumpur') (' Swedish', 'borg') (' Saskatchewan', ' Sask') (' Chennai', ' Nadu') (' Argentine', ' Aires') (' Iceland', ' Icelandic') (' Swedish', 'sson') (' Tasman', ' Nadu') ('Houston', ' Astros') ('Colorado', ' Springs') (' Kuala', ' Lumpur') ('Tai', 'pport') ('Houston', ' Dynamo') (' Manitoba', 'Marginal') (' Afghan', ' Kabul') (' Buenos', ' Aires') (' Alberta', ' Calgary') (' Stockholm', 'sson') (' Sweden', 'borg') ('Brazil', ' Paulo') (' Iceland', 'sson') (' Winnipeg', ' Manitoba') (' Sweden', 'sson') (' Carolina', ' Hurricanes') (' Dutch', 'ijk') (' Swed', 'borg') (' Aki', 'pport') (' Winnipeg', 'Marginal') (' Argentine', ' pes') (' Halifax', 'imore') (' Brisbane', 'enegger') | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| (' Melbourne', ' Nadu') (' Adelaide', ' Nadu') (' Cambod', ' Nguyen') (' Vietnamese', ' Nguyen') ## Gpt-2 Medium - Layer 16 Head 6* Gpt-2 Medium - Layer 16 Head 2* | (' Chennai', ' Mumbai'), ('India', ' Mumbai'), (' Mumbai', ' Chennai'), (' Queensland', ' Tasmania'), ('India', ' Rahul'), ('India', ' Gujar'), (' Chennai', ' Bangalore'), ('England', 'Scotland'), (' Chennai', ' Kerala'), (' Delhi', ' Mumbai'), ('Britain', 'Scotland'), (' Bangalore', ' Mumbai'), ('Pakistan', 'India'), ('Scotland', 'Ireland'), (' Mumbai', ' Bangalore'), (' Bangalore', ' Chennai'), (' Aadhaar', ' Gujar'), (' Mumbai', ' Maharashtra'), (' Maharashtra', ' Gujarat'), (' Gujarat', ' Gujar'), ('Australian', 'Australia'), ('India', ' Gujarat'), (' Rahul', ' Gujar'), (' Maharashtra', ' Mumbai'), ('Britain', 'England'), ('India', ' Chennai'), (' Mumbai', ' Bombay'), (' Tamil', ' Kerala'), (' Hindi', ' Mumbai'), (' Tasmania', ' Tasman'), (' Mumbai', 'India'), (' Hindi', ' Gujar'), (' Maharashtra', ' Gujar'), (' Australians', 'Austral'), (' Maharashtra', ' Kerala'), ('India', ' Bangalore'), ('India', ' Kerala'), ('India', ' Bombay'), ('Australia', 'Austral'), (' Aadhaar', 'India'), (' Sharma', ' Mumbai'), ('Australian', 'Austral'), (' Mumbai', ' Kerala'), ('Scotland', 'England'), (' Mumbai', ' Gujar'), (' Rahul', ' Mumbai'), (' Queensland', ' Tasman'), (' Tamil', ' Chennai'), (' Gujarat', ' Maharashtra'), ('India', ' Modi') | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ('Austral', ' Australians'), ('Australia', 'Austral'), (' Canberra', 'Austral'), ('Austral', ' Canberra'), (' Winnipeg', ' Edmonton'), ('Australian', 'Austral'), (' Alberta', ' Edmonton'), ('Australia', ' Australians'), (' Australians', 'Austral'), ('Ukraine', 'ovych'), | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | (' Quebec', ' Canad'), ('Australian', ' Australians'), (' Winnipeg', ' Manitoba'), (' Manitoba', ' Winnipeg'), ('Canadian', 'Canada'), ('Moscow', ' Bulgar'), (' Manitoba', ' Edmonton'), ('berra', 'Austral'), ('Austral', 'Australian'), (' Ukrainians', 'ovych'), ('Canada', ' Canadians'), (' Canberra', ' Australians'), ('Canada', 'Canadian'), (' Yanukovych', 'ovych'), ('Canada', ' Trudeau'), (' Dmitry', ' Bulgar'), (' Australia', 'Austral'), (' Mulcair', ' Canad'), ('berra', ' Canberra'), ('Turkish', 'oglu'), ('udeau', 'Canada'), (' Edmonton', ' Oilers'), ('Australia', ' Canberra'), ('Canada', ' Edmonton'), (' Edmonton', ' Calgary'), (' Alberta', ' Calgary'), ('udeau', ' Trudeau'), (' Calgary', ' Edmonton'), ('Canadian', ' Trudeau'), ('Australian', ' Canberra'), (' Vancouver', ' Canucks'), ('Australia', 'Australian'), (' Vancouver', ' Fraser'), ('Canadian', ' Edmonton'), ('Austral', 'elaide'), ('Tex', ' Braz'), ('Canada', ' RCMP'), ('Moscow', 'sov'), ('Russia', ' Bulgar'), (' Canadians', 'Canada') GPT-2 Medium - Layer 21 Head 12* (' Indonesian', ' Indones'), (' Vietnamese', ' Nguyen'), (' Indonesian', ' Jakarta'), (' Indonesian', ' Indonesia'), ('Turkish', 'oglu'), (' Indonesia', ' Indones'), (' Jakarta', ' Indones'), (' Korean', ' Koreans'), (' Turkish', 'oglu'), (' Taiwan', ' Taiwanese'), (' Thai', ' Nguyen'), (' Brazilian', 'Brazil'), (' Indones', ' Indonesia'), ('Tai', ' Taiwanese'), (' Istanbul', 'oglu'), (' Indones', ' Indonesian'), (' Indones', ' Jakarta'), (' Laos', ' Nguyen'), (' Slovenia', ' Sloven'), (' Koreans', ' Korean'), (' Cambod', ' Nguyen'), ('Italy', 'zzi'), (' Taiwanese', 'Tai'), (' Indonesia', ' Jakarta'), (' Indonesia', ' Indonesian'), (' Bulgarian', ' Bulgaria'), (' Iceland', ' Icelandic'), (' Korea', ' Koreans'), | ('Brazil', ' Brazilian'), (' Bulgarian', ' Bulgar'), (' Malaysian', ' Malays'), (' Ankara', 'oglu'), (' Bulgaria', ' Bulgarian'), (' Malays', ' Indones'), (' Taiwanese', ' Tai'), ('Turkey', 'oglu'), ('Brazil', ' Janeiro'), ('Italian', 'zzi'), (' Kuala', ' Malays'), ('Japanese', ' Fuk'), (' Jakarta', ' Indonesian'), (' Taiwanese', ' Taiwan'), (' Erdogan', 'oglu'), (' Viet', ' Nguyen'), (' Philippine', ' Filipino'), (' Jakarta', ' Indonesia'), (' Koreans', ' Jong'), (' Filipino', ' Duterte'), (' Azerbaijan', ' Azerbai'), (' Bulgar', ' Bulgarian') GPT-2 Large - Layer 23 Head 5 ('Canada', ' Trudeau'), (' Canadians', ' Trudeau'), ('Canadian', ' Trudeau'), (' Queensland', ' Tasman'), (' Tasman', ' Tasman'), (' Canada', ' Trudeau'), (' Canberra', ' Canberra'), (' Winnipeg', ' Winnipeg'), (' Canberra', ' Tasman'), ('Canadian', 'Canada'), (' Canadian', ' Trudeau'), (' Brisbane', ' Brisbane'), (' Quebec', ' Trudeau'), ('Canadian', ' Canadian'), (' Brisbane', ' Tasman'), (' Tasmania', ' Tasman'), ('Canadian', ' Canadians'), (' RCMP', ' Trudeau'), (' Manitoba', ' Trudeau'), (' Queensland', ' Brisbane'), (' Queensland', ' Canberra'), ('Canada', ' Saskatchewan'), ('Canadian', ' Saskatchewan'), ('Canada', ' Canadian'), (' RCMP', ' Saskatchewan'), (' Canberra', ' Brisbane'), (' Canadians', 'Canada'), (' Winnipeg', ' Trudeau'), ('Canadian', ' Canada'), ('Canada', ' Canadians'), ('Australian', ' Canberra'), (' Melbourne', ' Canberra'), (' RCMP', ' Canad'), (' Canadians', ' Canadians'), ('CBC', ' Trudeau'), (' Canadian', ' Canadian'), ('Canadian', ' Winnipeg'), (' Australians', ' Canberra'), (' Quebec', 'Canada'), (' Canadian', 'Canada'), (' NSW', ' Canberra'), ('Toronto', ' Canad'), ('Canada', 'Canada'), (' NSW', ' Tasman'), (' RCMP', ' RCMP'), (' Canadian', ' Canadians'), | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| (' Saskatchewan', ' Saskatchewan'), (' Canadians', ' Saskatchewan'), ('Canadian', ' Canad'), (' Ottawa', ' Winnipeg') | (' realise', ' Whilst'), (' Whilst', ' Whilst'), (' realised', ' Whilst'), (' organise', ' Whilst'), (' recognise', ' Whilst'), (' civilisation', ' Whilst'), (' organisation', ' Whilst'), (' whilst', ' Whilst'), (' organising', ' Whilst'), (' organised', ' Whilst'), (' organis', ' Whilst'), (' util', ' Whilst'), (' apologise', ' Whilst'), (' emphas', ' Whilst'), (' analyse', ' Whilst'), (' organisations', ' Whilst'), (' recognised', ' Whilst'), (' flavours', ' Whilst'), (' colour', ' Whilst'), ('colour', ' Whilst'), (' Nasa', ' Whilst'), (' Nato', ' Whilst'), (' analys', ' Whilst'), (' flavour', ' Whilst'), (' colourful', ' Whilst'), (' colours', ' Whilst'), (' realise', ' organising'), (' behavioural', ' Whilst'), (' coloured', ' Whilst'), (' learnt', ' Whilst'), (' favourable', ' Whilst'), ('isation', ' Whilst'), (' programmes', ' Whilst'), (' realise', ' organis'), (' authorised', ' Whilst'), (' practise', ' Whilst'), (' criticised', ' Whilst'), (' organisers', ' Whilst'), (' organise', ' organising'), (' analysed', ' Whilst'), (' programme', ' Whilst'), (' behaviours', ' Whilst'), (' humour', ' Whilst'), ('isations', ' Whilst'), (' tyres', ' Whilst'), (' aluminium', ' Whilst'), (' realise', ' organised'), (' favour', ' Whilst'), (' ageing', ' Whilst'), (' organise', ' organis') | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ## C.1.5 **Related Words** Gpt-2 Medium - Layer 13 Head 8* (' miraculous', ' mirac'), (' miracle', ' mirac'), (' nuance', ' nuanced'), (' smarter', 'Better'), (' healthier', ' equitable'), (' liberated', ' liberating'), (' untouched', ' unaffected'), | (' unbiased', ' equitable'), ('failed', ' inconsistent'), (' liberated', ' emanc'), (' humane', ' equitable'), (' liberating', ' liberated'), ('failed', ' incompatible'), (' miracles', ' mirac'), (' peacefully', ' consensual'), (' unconditional', ' uncond'), (' unexpectedly', ' unexpected'), (' untouched', ' unconditional'), (' healthier', 'Better'), (' unexpected', ' unexpectedly'), (' peacefully', ' graceful'), (' emancipation', ' emanc'), (' seamlessly', ' effortlessly'), (' peacefully', ' honorable'), (' uncond', ' unconditional'), (' excuses', ' rubbish'), (' liberating', ' emanc'), (' peacefully', ' equitable'), (' gracious', ' Feather'), (' liberated', ' emancipation'), (' nuances', ' nuanced'), (' avoids', 'icable'), (' freeing', ' liberated'), (' freeing', ' liberating'), (' lousy', ' inconsistent'), ('failed', ' lousy'), (' unaffected', ' unconditional'), ('ivable', ' equitable'), ('Honest', ' equitable'), (' principled', 'erning'), ('surv', ' survival'), (' lackluster', 'ocre'), (' liberating', ' equitable'), ('Instead', 'Bah'), (' inappropriate', ' incompatible'), (' emanc', ' emancipation'), (' unaffected', ' unchanged'), (' peaceful', ' peacefully'), (' safer', ' equitable'), (' uninterrupted', ' unconditional') GPT-2 Medium - Layer 12 Head 14* (' died', ' perished'), (' dies', ' perished'), (' testifying', ' testify'), (' interven', ' intervened'), (' advising', ' advises'), (' disband', ' disbanded'), (' perished', 'lost'), (' perished', ' died'), (' applaud', ' applauded'), (' dictate', ' dictates'), (' prevailed', ' prev'), (' advising', ' advise'), ('thood', 'shed'), ('orsi', 'Reviewed'), (' perished', ' dies'), (' publishes', 'published'), (' prevail', ' prevailed'), (' dies', ' died'), (' testifying', ' testified'), (' testify', ' testifying'), (' governs', ' dictates'), (' complicity', ' complicit'), (' dictate', ' dictated'), ('CHO', 'enough'), ('independence', ' skelet'), | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | (' oversee', ' overseeing'), ('shed', ' skelet'), ('chart', 'EY'), (' overseeing', ' presiding'), ('pees', ' fundament'), ('appro', ' sanction'), (' prevailed', ' prevail'), (' regulates', ' governs'), ('shed', 'tails'), ('chart', ' Period'), ('hower', 'lihood'), (' prevail', ' prev'), ('helps', ' aids'), (' dict', ' dictated'), (' dictates', ' dictated'), ('itta', ' Dise'), ('CHO', 'REC'), ('ORTS', 'exclusive'), ('helps', ' Helpful'), ('ciples', 'bart') GPT-2 Medium - Layer 14 Head 1* (' incorrectly', ' misunderstand'), (' properly', ' Proper'), (' incorrectly', ' inaccur'), (' wrongly', ' misunderstand'), (' incorrectly', ' misinterpret'), (' incorrectly', ' incorrect'), (' incorrectly', ' mistakes'), (' incorrectly', ' misunderstanding'), (' properly', ' proper'), (' incorrectly', 'fail'), (' incorrectly', ' faulty'), (' incorrectly', ' misrepresent'), (' fails', ' failing'), (' incorrectly', ' inaccurate'), (' incorrectly', ' errors'), (' Worse', ' harmful'), (' wrong', ' misunderstand'), (' improperly', ' misunderstand'), (' incorrectly', 'wrong'), (' incorrectly', ' harmful'), (' incorrectly', ' mistake'), (' incorrectly', ' mis'), (' fails', 'fail'), (' Worse', ' detrimental'), (' properly', ' rightful'), (' inappropriately', ' misunderstand'), (' unnecessarily', ' harmful'), (' unnecessarily', ' neglect'), (' properly', ' correctly'), (' Worse', ' Worst'), (' fails', ' failure'), (' adequately', ' satisfactory'), (' incorrectly', ' defective'), (' mistakenly', ' misunderstand'), (' Worse', ' harming'), (' incorrectly', ' mishand'), (' adequately', 'adequ'), (' incorrectly', ' misuse'), (' fails', 'Failure'), (' Worse', ' hurts'), ('wrong', ' misunderstand'), (' incorrectly', ' mistakenly'), (' fails', ' failures'), | (' incorrectly', ' mistaken'), (' adversely', ' harming') GPT-2 Large - Layer 24 Head 9 (' interviewer', ' interviewer'), (' lectures', ' lectures'), (' lecture', ' lecture'), (' interview', 'Interview'), (' interview', ' interview'), (' interview', ' interviewer'), (' interviewing', ' interviewing'), (' magazine', ' magazine'), (' Reviews', ' Reviews'), (' reviewer', ' reviewer'), (' reviewers', ' reviewers'), (' lectures', ' lecture'), (' testers', ' testers'), (' editors', ' editors'), (' interviewer', ' interview'), (' Interview', 'Interview'), (' interviewer', 'Interview'), ('Interview', 'Interview'), (' lecture', ' lectures'), (' interviewing', ' interviewer'), (' journal', ' journal'), (' interviewer', ' interviewing'), (' blogs', ' blogs'), (' editorial', ' editorial'), (' tests', ' tests'), (' presentations', ' presentations'), (' Editorial', ' Editorial'), (' interview', ' Interview'), (' reviewer', ' reviewers'), (' interviews', 'Interview'), (' interview', ' interviewing'), (' interviewer', ' Interview'), (' interviews', ' interview'), (' Interview', ' Interview'), (' interviewing', 'Interview'), ('Interview', ' interviewer'), (' testifying', ' testifying'), (' reviewers', ' reviewer'), (' blogging', ' blogging'), (' broadcast', ' broadcast'), (' Interview', ' interviewer'), (' magazine', ' magazines'), (' editorial', ' Editorial'), (' interview', ' interviews'), (' interviewing', ' interview'), (' Interview', ' interview'), (' interviews', ' interviews'), (' tests', 'tests'), (' interviews', ' interviewing'), ('Interview', ' interview') GPT-2 Medium - Layer 14 Head 13* (' editorial', ' editors'), (' broadcasting', ' broadcasters'), (' broadcasts', ' broadcasting'), (' broadcasts', ' broadcast'), (' broadcasters', ' Broadcasting'), (' Editorial', ' editors'), (' broadcast', ' broadcasters'), (' broadcast', ' Broadcasting'), (' lecture', ' lectures'), | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | (' broadcasting', ' Broadcast'), (' broadcaster', ' broadcasters'), (' broadcasts', ' broadcasters'), (' publishing', ' Publishers'), (' broadcast', ' broadcasting'), (' Broadcasting', ' broadcasters'), (' Publishing', ' Publishers'), (' lectures', ' lecture'), (' editorial', ' Editors'), (' broadcasting', ' broadcast'), (' broadcasts', ' Broadcasting'), (' broadcasters', ' broadcasting'), (' journalistic', ' journalism'), ('Journal', 'reports'), (' Broadcasting', ' Broadcast'), ('Publisher', ' Publishers'), (' Broadcasting', 'azeera'), ('Journal', 'Reporting'), (' journalism', ' journalistic'), (' broadcaster', ' Broadcasting'), (' broadcaster', ' broadcasting'), (' broadcasting', ' broadcaster'), (' publication', ' editors'), ('journal', ' journalism'), ('Journal', ' Journalists'), (' documentaries', ' documentary'), (' filmed', ' filming'), (' publishing', ' publishers'), ('Journal', ' journalism'), (' broadcasts', ' Broadcast'), (' broadcasters', ' broadcast'), ('Journal', ' articles'), ('reports', ' reporting'), (' manuscript', ' manuscripts'), (' publishing', ' publish'), (' broadcasters', 'azeera'), (' publication', ' Publishers'), (' publications', ' Publishers'), (' Newsp', ' newspapers'), (' broadcasters', ' Broadcast'), ('Journal', ' Readers') | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ## C.2 **Query-Key Matrices** Gpt-2 Large - Layer 19 Head 7** | (' tonight', 'Friday'), (' Copyright', 'Returns'), ('TM', 'review'), (' Weekend', 'Preview'), (' tonight', 'Thursday'), (' recently', 'Closure'), (' Copyright', 'Contents'), (' Copyright', 'Wisconsin'), (' Copyright', 'Methods'), (' tonight', 'Sunday'), (' tomorrow', ' postpone'), (' tomorrow', ' tonight'), (' recently', 'acerb'), (' Copyright', 'Rated'), (' myself', ' my'), (' Copyright', 'Cop'), (' Wednesday', 'Closure'), (' Billion', ' 1935'), (' tonight', 'Saturday'), (' tonight', ' celebr'), (' tomorrow', ' postponed'), (' Copyright', 'Show'), (' Wednesday', 'Friday'), (' Copyright', 'Earn'), | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ## Gpt-2 Medium - Layer 22 Head 1 | (' Billion', ' 1934'), (' Eric', 'Larry'), (' 2015', 'Released'), (' Copyright', 'Rat'), (' tomorrow', ' postp'), (' 2017', 'Latest'), (' previous', 'obin'), (' controversial', 'Priv'), (' recently', ' nightly'), ('Base', ' LV'), (' recently', 'Project'), (' historically', ' globalization'), (' recently', ' vulner'), (' tonight', 'Wednesday'), (' Copyright', 'Abstract'), (' Tuesday', 'Friday'), (' Anthony', 'Born'), (' Budget', 'Premium'), (' tonight', 'Welcome'), ('yle', 'lite'), (' Wednesday', 'Latest'), (' Latest', 'show'), (' B', ' pione'), (' Copyright', 'cop'), (' Pablo', ' Dia'), (' recent', 'Latest') | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | (' usual', ' usual'), (' occasional', ' occasional'), (' aforementioned', ' aforementioned'), (' general', ' usual'), (' usual', ' slightest'), ('agn', 'ealous'), (' traditional', ' usual'), (' free', 'amina'), (' major', ' major'), (' frequent', ' occasional'), (' generous', ' generous'), (' free', 'lam'), (' regular', ' usual'), (' standard', ' usual'), (' main', ' usual'), (' complete', ' Finished'), (' main', 'liest'), (' traditional', ' traditional'), (' latest', ' aforementioned'), (' current', ' aforementioned'), (' normal', ' usual'), (' dominant', ' dominant'), (' free', 'ministic'), (' brief', ' brief'), (' biggest', 'liest'), ('usual', ' usual'), (' rash', ' rash'), (' regular', ' occasional'), (' specialized', ' specialized'), (' free', 'iosis'), (' free', 'hero'), (' specialty', ' specialty'), (' general', 'iosis'), (' nearby', ' nearby'), (' best', 'liest'), (' officially', ' formal'), (' immediate', 'mediate'), (' special', ' ultimate'), (' free', 'otropic'), (' rigorous', ' comparative'), (' actual', ' slightest'), | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | (' complete', ' comparative'), (' typical', ' usual'), (' modern', ' modern'), (' best', ' smartest'), (' free', ' free'), (' highest', ' widest'), (' specialist', ' specialist'), (' appropriate', ' slightest'), (' usual', 'liest') GPT-2 Large - Layer 20 Head 13 ** (' outdoors', ' outdoors'), (' outdoor', ' outdoors'), (' Gre', 'burg'), (' healing', ' healing'), (' indoor', ' outdoors'), (' Hemp', 'burg'), (' Ticket', ' Ticket'), (' accommodations', ' accommodations'), ('eco', 'aco'), ('prem', 'otti'), (' Candy', 'cott'), (' decorative', ' ornament'), ('yan', 'ava'), (' deadlines', ' schedule'), (' Lor', 'ian'), (' architectural', ' ornament'), (' Ratings', ' Ratings'), (' Bod', 'za'), (' exotic', ' exotic'), (' food', ' baths'), (' Marketplace', ' Marketplace'), (' heal', ' healing'), (' Ex', 'ilus'), (' indoors', ' outdoors'), (' therm', ' therm'), (' bleach', ' coated'), (' Sod', 'opol'), (' District', ' Metropolitan'), (' Anonymous', ' Rebell'), (' Corn', 'burg'), (' indoor', ' indoors'), (' R', 'vale'), ('rom', 'otti'), (' ratings', ' Ratings'), (' attendance', ' attendance'), (' destinations', ' destinations'), (' VIDEOS', ' VIDEOS'), ('yan', 'opol'), (' Suffolk', 'ville'), (' retali', ' against'), ('mos', 'oli'), (' pacing', ' pacing'), (' Spectrum', ' QC'), (' Il', 'ian'), (' archived', ' archived'), (' Pledge', ' Pledge'), ('alg', 'otti'), (' Freedom', 'USA'), ('anto', 'ero'), (' decorative', ' decoration') | ('54', '88'), ('156', '39'), ('212', '79'), ('59', '28'), ('57', '27'), ('212', '57'), ('156', '29'), ('36', '27'), ('217', '79'), ('59', '38'), ('63', '27'), ('72', '39'), ('57', '26'), ('57', '34'), ('59', '34'), ('156', '27'), ('91', '27'), ('156', '38'), ('63', '26'), ('59', '25'), ('138', '27'), ('217', '38'), ('72', '27'), ('54', '27'), ('36', '29'), ('72', '26'), ('307', '39'), ('37', '26'), ('217', '57'), ('37', '29'), ('54', '38'), ('59', '29'), ('37', '28'), ('307', '38'), ('57', '29'), ('63', '29'), ('71', '27'), ('138', '78'), ('59', '88'), ('89', '27'), ('561', '79'), ('212', '29'), ('183', '27'), ('54', '29') GPT-2 Medium - Layer 17 Head 6* (' legally', ' legal'), (' legal', ' sentencing'), (' legal', ' arbitration'), (' boycot', ' boycott'), (' legal', ' criminal'), (' legal', ' Judicial'), (' legal', ' rulings'), (' judicial', ' sentencing'), (' marketing', ' advertising'), (' legal', ' confidential'), (' protesting', ' protest'), (' recruited', ' recruit'), (' recruited', ' recruits'), (' judicial', ' criminal'), (' legal', ' exemptions'), (' demographics', ' demographic'), (' boycott', ' boycot'), (' sentencing', ' criminal'), (' recruitment', ' recruits'), (' recruitment', ' recruit'), (' Constitutional', ' sentencing'), (' Legal', ' sentencing'), (' constitutional', ' sentencing'), (' legal', ' subpoena'), | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | GPT-2 Medium - Layer 0 Head 9 ('59', '27'), ('212', '39'), ('212', '38'), ('217', '39'), ('37', '27'), ('59', '26'), | | (' injury', ' injuries'), (' FOIA', ' confidential'), (' legal', ' licenses'), (' donation', ' donations'), (' disclosure', ' confidential'), (' negotiation', ' negotiating'), (' Judicial', ' legal'), (' legally', ' criminal'), (' legally', ' confidential'), (' legal', ' jur'), (' legal', ' enforcement'), (' legal', ' lawyers'), (' legally', ' enforcement'), (' recruitment', ' recruiting'), (' recruiting', ' recruit'), (' criminal', ' sentencing'), (' legal', ' attorneys'), (' negotiations', ' negotiating'), (' legally', ' arbitration'), (' recruited', ' recruiting'), (' legally', ' exemptions'), (' legal', ' judicial'), (' voting', ' Vote'), (' negotiated', ' negotiating'), (' legislative', ' veto'), (' funding', ' funded') ## Gpt-2 Medium - Layer 17 Head 7 ('tar', 'idia'), (' [...]', '..."'), (' lecture', ' lectures'), (' Congress', ' senate'), (' staff', ' staffers'), (' Scholarship', ' collegiate'), (' executive', ' overseeing'), (' Scholarship', ' academic'), (' academ', ' academic'), ('."', '..."'), (' [', '..."'), ('";', '..."'), (' Memorial', 'priv'), (' festival', 'conference'), ('crew', ' supervisors'), (' certification', ' grading'), (' scholarship', ' academic'), (' rumored', ' Academic'), (' Congress', ' delegated'), (' staff', ' technicians'), ('Plex', ' CONS'), (' congress', ' senate'), (' university', ' tenure'), (' Congress', ' appointed'), (' Congress', ' duly'), (' investigative', ' investig'), (' legislative', ' senate'), ('ademic', ' academic'), ('bench', ' academic'), (' scholarship', ' tenure'), (' campus', ' campuses'), (' staff', ' Facilities'), (' Editorial', 'mn'), (' clinic', ' laboratory'), (' crew', ' crews'), (' Scholarship', ' academ'), (' staff', ' staffer'), ('icken', 'oles'), ('?"', '..."'), (' Executive', ' overseeing'), (' academic', ' academ'), (' Congress', 'atra'), ('aroo', 'anny'), (' academic', ' academia'), (' Congress', ' Amendments'), (' academic', ' academics'), ('student', ' academic'), (' committee', ' convened'), ('",', '..."'), ('ove', 'idia') ## Gpt-2 Medium - Layer 16 Head 13 (' sugg', ' hindsight'), (' sugg', ' anecdotal'), (' unsuccessfully', ' hindsight'), ('didn', ' hindsight'), ('orously', 'staking'), ('illions', 'uries'), ('until', 'era'), (' lobbied', ' hindsight'), (' incorrectly', ' incorrect'), (' hesitate', ' hindsight'), ('ECA', ' hindsight'), (' regret', ' regrets'), ('inventoryQuantity', 'imore'), ('consider', ' anecdotal'), (' errone', ' incorrect'), (' someday', ' eventual'), ('illions', 'Murray'), (' recently', 'recent'), (' Learned', ' hindsight'), ('before', ' hindsight'), (' lately', 'ealous'), ('upon', 'rity'), ('ja', ' hindsight'), (' regretted', ' regrets'), (' unsuccessfully', 'udging'), (' lately', 'dated'), (' sugg', ' anecd'), (' inform', 'imore'), (' lately', 'recent'), (' anecd', ' anecdotal'), ('orously', ' hindsight'), (' postwar', ' Era'), (' lately', ' recent'), (' skept', ' cynicism'), (' sugg', 'informed'), (' unsuccessfully', 'ealous'), ('ebin', ' hindsight'), (' underest', ' overest'), (' Jinn', ' hindsight'), (' someday', '2019'), (' recently', 'turned'), (' sugg', ' retrospect'), (' unsuccessfully', 'didn'), (' unsuccessfully', 'gged'), (' mistakenly', ' incorrect'), ('assment', ')</'), ('ja', 'didn'), ('illions', ' hindsight'), (' sugg', ' testimony'), ('jri', ' hindsight') GPT-2 Medium - Layer 12 Head 9 (' PST', ' usual'), ('etimes', ' foreseeable'), ('uld', 'uld'), (' Der', ' Mankind'), (' statewide', ' yearly'), (' guarantees', ' guarantees'), (' Flynn', ' Logged'), ('borne', ' foreseeable'), (' contiguous', ' contiguous'), (' exceptions', ' exceptions'), (' redist', ' costly'), (' downstream', ' day'), (' ours', ' modern'), (' foreseeable', ' foreseeable'), (' Posted', ' Posted'), (' anecdotal', ' anecdotal'), (' moot', ' costly'), (' successor', ' successor'), (' any', ' ANY'), (' generational', ' modern'), (' temporarily', ' costly'), (' overall', ' overall'), (' effective', ' incentiv'), (' future', ' tomorrow'), (' ANY', ' lifetime'), (' dispatch', ' dispatch'), (' legally', ' WARRANT'), (' guarantees', ' incentiv'), (' listed', ' deductible'), (' CST', ' foreseeable'), (' anywhere', ' any'), (' guaranteed', ' incentiv'), (' successors', ' successor'), (' weekends', ' day'), ('iquid', ' expensive'), (' Trib', ' foreseeable'), (' phased', ' modern'), (' constitutionally', ' foreseeable'), (' any', ' anybody'), (' anywhere', ' ANY'), (' veto', ' precedent'), (' veto', ' recourse'), (' hopefully', ' hopefully'), (' potentially', ' potentially'), (' ANY', ' ANY'), (' substantive', ' noteworthy'), ('morrow', ' day'), ('ancial', ' expensive'), ('listed', ' breastfeeding'), (' holiday', ' holidays') GPT-2 Medium - Layer 11 Head 10 (' Journalism', ' acron'), (' democracies', ' governments'), ('/-', 'verty'), (' legislatures', ' governments'), ('ocracy', ' hegemony'), ('osi', ' RAND'), (' Organizations', ' organisations'), ('ellectual', ' institutional'), (' Journalists', ' acron'), ('eworks', ' sponsors'), (' Inqu', ' reviewer'), ('ocracy', ' diversity'), (' careers', ' Contributions'), ('gency', '\\-'), ('ellectual', ' exceptions'), (' Profession', ' specializing'), ('online', ' Online'), (' Publications', ' authorised'), ('Online', ' Online'), (' sidx', ' Lazarus'), ('eworks', ' Networks'), (' Groups', ' organisations'), (' Governments', ' governments'), (' democracies', ' nowadays'), (' psychiat', ' Mechdragon'), (' educ', ' Contributions'), (' Ratings', ' organisations'), ('vernment', 'spons'), ('..."', '),"'), (' Caucas', ' commodity'), (' dictators', ' governments'), ('istration', ' sponsor'), ('iquette', ' acron'), (' Announce', ' answ'), (' Journalism', ' empowering'), ('Media', ' bureaucr'), (' Discrimination', ' organizations'), (' Journalism', 'Online'), ('FAQ', 'sites'), (' antitrust', ' Governments'), ('..."', '..."'), ('Questions', ' acron'), ('rities', ' organisations'), (' Editorial', ' institutional'), (' tabl', ' acron'), (' antitrust', ' governments'), (' Journalism', ' Everyday'), ('icter', ' Lieberman'), (' defect', 'SPONSORED'), (' Journalists', ' organisations') GPT-2 Medium - Layer 22 Head 5 (names and parts of names seem to attend to each other here) (' Smith', 'ovich'), (' Jones', 'ovich'), (' Jones', 'Jones'), (' Smith', 'Williams'), (' Rogers', 'opoulos'), ('Jones', 'ovich'), (' Jones', 'inez'), ('ug', ' Ezek'), (' Moore', 'ovich'), ('orn', 'roit'), ('van', 'actionDate'), (' Jones', 'inelli'), (' Edwards', 'opoulos'), (' Jones', ' Lyons'), ('Williams', 'opoulos'), ('Moore', 'ovich'), (' Rodriguez', 'hoff'), (' North', ' suburbs'), (' Smith', 'chio'), ('Smith', 'ovich'), (' Smith', 'opoulos'), ('Mc', 'opoulos'), ('Johnson', 'utt'), (' Jones', 'opoulos'), ('Ross', 'Downloadha'), ('pet', 'ilage'), (' Everett', ' Prairie'), (' Cass', 'isma'), (' Jones', 'zynski'), ('Jones', 'Jones'), (' McCl', 'elman'), (' Smith', 'Jones'), (' Simmons', 'opoulos'), (' Smith', 'brown'), (' Mc', 'opoulos'), (' Jones', 'utt'), (' Richards', 'Davis'), (' Johnson', 'utt'), (' Ross', 'bred'), (' McG', 'opoulos'), (' Stevens', 'stadt'), ('ra', 'abouts'), (' Johnson', 'hoff'), | (' North', ' Peninsula'), (' Smith', 'Smith'), ('Jones', 'inez'), (' Hernandez', 'hoff'), (' Lucas', 'Nor'), (' Agu', 'hoff'), ('Jones', 'utt') GPT-2 Medium - Layer 19 Head 12 (' 2015', 'ADVERTISEMENT'), (' 2014', '2014'), (' 2015', '2014'), (' 2015', 'Present'), (' 2013', '2014'), (' 2017', 'ADVERTISEMENT'), (' 2016', 'ADVERTISEMENT'), ('itor', ' Banner'), ('2015', ' Bulletin'), ('2012', ' Bulletin'), ('2014', ' Bulletin'), (' Airl', 'Stream'), ('2016', ' Bulletin'), (' 2016', '2014'), ('2017', ' Bulletin'), (' 2013', ' 2014'), (' 2012', '2014'), (' stadiums', 'ventions'), (' 2015', ' Bulletin'), ('2013', ' Bulletin'), (' 2017', '2014'), (' 2011', ' 2011'), (' 2014', ' 2014'), (' 2011', ' 2009'), (' mile', 'eming'), (' 2013', 'ADVERTISEMENT'), (' 2014', '2015'), (' 2014', 'Present'), (' 2011', '2014'), (' 2011', '2009'), (' 2015', ' 2014'), (' 2013', ' Bulletin'), (' 2015', '2015'), (' 2011', ' 2003'), (' 2011', ' 2010'), (' 2017', 'Documents'), ('2017', 'iaries'), (' 2013', '2015'), ('2017', 'Trend'), (' 2011', '2011'), (' 2016', 'Present'), (' 2011', ' 2014'), (' years', 'years'), ('Plug', 'Stream'), (' 2014', 'ADVERTISEMENT'), ('2015', 'Present'), (' 2018', 'thora'), (' 2017', 'thora'), (' 2012', ' 2011'), (' 2012', ' 2014') | #annels | #Els | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------|--------| | #netflix | #osi | | | telev | #mpeg | | | #tv | #vous | | | #avi | #iane | | | #flix | transmitter | | | Television | Sinclair | | | #outube | Streaming | | | #channel | #channel | | | Vid | mosqu | | | #Channel | broadcaster | | | documentaries | airs | | | #videos | Broadcasting | | | Hulu | broadcasts | | | channels | streams | | | #levision | channels | | | DVDs | broadcasters | | | broadcasts | broadcasting | | | #azeera | #RAFT | | | MPEG | #oded | | | televised | htt | | | aired | transmissions | | | broadcasters | playback | | | Streaming | Instruction | | | viewership | nic | | | #TV | Sirius | | | Kodi | viewership | | | ITV | radio | | | #ovies | #achers | | | channel | channel | | | GPT-2 Medium - Layer 3 Dim 2711 purposes purposes sake sake purpose reasons reasons purpose convenience ages reason reason Seasons #ummies #Plex #going Reasons foreseeable #ummies Reasons #asons #reason #lation #pur #alsh Developers #agos #akers #ACY transl STATS Reason #itas consideration ages #purpose #purpose beginners #=[ awhile #gencies Pur Millennium #benefit Brewers #atel Festival #tun EVENT pur #payment Ages #=- preservation #printf Metatron beginners um Expo #KEN GPT-2 Medium - Layer 4 Dim 621 #ovie headlined newspapers pestic television dime editorial describ #journal Afric broadcasters broadcasts | | | | C.3 | Feedforward Keys and Values | | | Key-value pairs, (ki, vi), where at least 15% of the top-k vocabulary items overlap, with k = 100. We follow our forerunner's convention of calling the index of the value in the layer "dimension" (Dim). Here again we use two asterisks (**) to represent lists where we discarded tokens outside the corpus vocabulary. GPT-2 Medium - Layer 0 Dim 116 | | | | #Journal | #(' | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------| | publication | #umbnails | | Newsweek | #adish | | Zeit | #uggest | | columnist | splash | | Editorial | #ZX | | newsletter | objectionable | | cartoon | #article | | #eport | Bucc | | telev | #London | | radio | reprint | | headlined | #azine | | #ribune | Giov | | BBC | #ender | | reprint | headline | | sitcom | #oops | | reprinted | #articles | | broadcast | snipp | | tabloid | Ajax | | documentaries | marqu | | journalist | #(" | | TV | #otos | | headline | mast | | news | #idem | | GPT-2 Medium - Layer 7 Dim 72 sessions session dinners sessions #cation #cation session #iesta dinner Booth #eteria screenings Dinner booked #Session #rogram rehears vacation baths baths Lunch #pleasant #hops meetings visits #Session Session greet #session #athon meetings Sessions chatting boarding lunch rituals chats booking festivities Grape boarding #miah #workshop #session #rooms Pars #tests simulated seated Dispatch visit Extras appointments toile #vu Evening #rations showers #luaj abroad GPT-2 Medium - Layer 10 Dim 8 Miy Tai #imaru #jin Gong Jin Jinn Makoto Xia #etsu Makoto Shin Kuro Hai Shin Fuj #Tai Dai Yamato Miy Tai #iku Ichigo Yun | | | #Shin | Ryu | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------| | #atsu | Shu | | Haku | Hua | | Chun | Suzuki | | #ku | Yang | | Qing | Xia | | Tsuk | #Shin | | Hua | #iru | | Jiang | Yu | | Nanto | #yu | | manga | Chang | | Yosh | Nan | | yen | Qian | | Osaka | #hao | | Qian | Fuk | | #uku | Chun | | #iku | Yong | | Yue | #Tai | | GPT-2 Medium - Layer 11 Dim 2 progressing toward #Progress towards #progress Pace #osponsors progression #oppable #inness advancement onward progress canon Progress #progress #senal pace #venge #peed queue advancement #pun advancing progression progressing #wagon ladder advancing path #cknowled honoring #Goal ranks momentum standings #zag goal #hop #grand pursuits momentum #encing #ometer #Improve timetable STEP nearing #chini quest standings spiral #eway trajectory #chie progress #ibling accelerating Esports escal GPT-2 Medium - Layer 15 Dim 4057 EDITION copies versions Version copies #edition version #Version Version version edition #download editions download reprint versions #edition #Download EDIT copy Edition #release reproduce #version originals release #edited #copy VERS VERS #Versions #pub #Publisher Download reprodu #released | | | #uploads | editions | | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-------| | playthrough | edition | | | Printed | reprint | | | reproduction | Release | | | #Reviewed | #Available | | | copy | #published | | | #Version | #Published | | | paperback | EDITION | | | preview | print | | | surv | #Quantity | | | #Download | #available | | | circulate | RELEASE | | | GPT-2 Medium - Layer 16 Dim 41 #duino alarm #Battery alarms Morse signal alarms circuit GPIO GPIO LEDs timers batteries voltage #toggle signals signal circuitry circuitry electrical #PsyNetMessage circuits alarm LEDs autop standby signalling signalling #volt signaling volt lights signals Idle voltage triggers LED batteries electrom Morse timers LED malfunction #LED amplifier button radios Signal wiring timer #Alert wiring signaling buzz #Clock disconnect arming Arduino Arduino triggered GPT-2 Medium - Layer 17 Dim 23 responsibility responsibility Responsibility respons responsibilities responsibilities #ipolar Responsibility #responsible oversee duties #respons #respons duties superv supervision supervision superv #abwe stewards Adin chore respons oversight oversee oversees entrusted responsible overseeing #responsible helicop handling presided handles overseen overseeing #dyl chores responsible manage #ADRA managing reins duty #accompan Respons chores charge | oversees | reins | | supervised | handle | | | blame | oversaw | | | oversaw | CONTROL | | | #archment | RESP | | | RESP | tasks | | | GPT-2 Medium - Layer 19 Dim 29 subconscious thoughts thoughts thought #brain Thoughts #Brain minds memories mind OCD thinking flashbacks #thought brainstorm imagination Anxiety Thinking #mind Thought fantas imagin amygdala thinker impuls #thinking Thinking #mind #Memory memories Thoughts #think dreams imagining #ocamp impulses #Psych fantasies #mares think mentally urges #mental desires mind dreams #thinking delusions #Mind subconscious #dream emotions psyche imag prefrontal #dream PTSD conscience Memories visions GPT-2 Medium - Layer 20 Dim 65 exercises volleyball #Sport tennis #athlon sports Exercise sport #ournaments #basketball volleyball Tennis Recre soccer Mahjong golf #basketball playground exercise Golf bowling athletics skating #athlon spar athletic skiing rugby gymn amusement #sports gymn drills sled #Training #Sport tournaments cricket sled Soccer Volunte amuse skate Activities golf recreational #Pract Ski dunk activities #hower basketball athletics #games sport skating Solitaire hockey #BALL #sports | | | | 16152 | | | | GPT-2 Medium - Layer 21 Dim 86 | #kat | k | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|-----| | #kus | #kt | | | #KS | K | | | #ked | #kr | | | #kr | #kl | | | #kB | #kish | | | #kan | #kos | | | #kw | #king | | | #ket | #ked | | | #king | #kie | | | #kb | #KB | | | #kos | #kk | | | #kHz | #kowski | | | #kk | #KR | | | #kick | #KING | | | #kers | #KT | | | #kowski | #KK | | | #KB | #KC | | | #krit | #kw | | | #KING | #kb | | | #kt | #Ka | | | #ksh | #krit | | | #kie | #KN | | | #ky | #kar | | | #KY | #kh | | | #ku | #ket | | | GPT-2 Medium - Layer 23 Dim 907 hands hand hand #Hand #hands Hand #hand #hand fingers hands #feet Hands fingertips fist claws #hands paw finger paws handed metab thumb palms fingers fingert foot #Hand #handed fists paw wrists handing levers #finger thumbs #hander tentacles fingertips feet claw limb fingert slider #Foot #handed Stick #dimension arm jaws #Accessory skelet #fing lapt Foot ankles index weap toe foot #auntlet GPT-2 Large - Layer 25 Dim 2685** #manager engineering #Engineers Marketing chemist #engineering humanities Communications sciences #communications anthropology anthropology lingu Engineering #engineering lingu psychologist psychology Coordinator neurolog | | | | IDs | number | | | identifiers | #number | | | surname | #Number | | | surn | Number | | | identifier | NUM | | | initials | numbers | | | #Registered | Numbers | | | NAME | #Numbers | | | #names | address | | | pseudonym | #address | | | #codes | #Num | | | nomine | #NUM | | | names | addresses | | | username | Address | | | #IDs | identifier | | | ID | #Address | | | registration | #num | | | #76561 | ID | | | #soDeliveryDate | numbering | | | #ADRA | IDs | | | CLSID | #ID | | | numbering | identifiers | | | #ername | identification | | | #address | numer | | | addresses | digits | | | codes | #numbered | | | #Names | numerical | | | regist | Ident | | | name | numeric | | | Names | Identification | | | GPT-2 Medium - Layer 21 Dim 400 #July Oct July Feb #February Sept #January Dec #Feb Jan November Nov #October Aug January #Oct Feb May October #Nov #September Apr September March #June April #Sept #Sept February June #November #Aug #April October April #Feb June July #December December August Sep #March November Sept #Jan December #May Aug August March Jul #August Jun #Aug September #wcs January Apr February GPT-2 Medium - Layer 23 Dim 166 #k #k #ks #K #kish #ks #K #KS | 16153 | | | Analyst | Economics | | | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|----------|-------------| | #iologist | designer | | | | accountant | sociology | | | | strategist | communications | | | | #ographer | marketing | | | | curator | pharmac | | | | Engineers | sciences | | | | archae | economics | | | | Designer | Accounting | | | | Editing | #econom | | | | biologist | chemist | | | | #ologist | merch | | | | psychologists | pharm | | | | theolog | economist | | | | Marketing | architect | | | | #Manager | engineer | | | | Architects | Architect | | | | sociology | #technical | | | | engineer | architects | | | | physicist | logistics | #ometers | measurement | | surve | gauge | | | | #Stats | estimation | | | | #Statistics | monitoring | | | | calculate | #stats | | | | Measure | #tracking | | | | quant | track | | | | #asuring | measuring | | | | Calculator | Monitoring | | | | #ometer | #Detailed | | | | calculator | #ometer | | | | Monitoring | estim | | | | #Maps | stats | | | | pione | charts | | | | timet | timet | | | | GPT-2 Base - Layer 9 Dim 1776 radios cable antennas modem radio wireless modem WiFi voltage wired transformer broadband Ethernet Ethernet telev radios #Radio power electricity radio loudspe Cable kW Wireless #radio telephone broadband network volt signal microphones Networks telecommunications networks cable electricity Telephone wifi amplifier #levision wifi coax broadcasting transmit transistor transmitter Radio TV wireless Network LTE television watts transmission microwave router telephone cables amps amplifier GPT-2 Base - Layer 9 Dim 2771 arous increase freeing increasing incent accelerating stimulate allev induce exped discourage enhanced inducing aggrav mitigating enhance stimulating inhib emanc improving alleviate infl empowering #oint preventing alien #ufact alter #HCR enabling influencing incre handc indu disadvant #Impro #roying intens arresting improve allev easing | | | | | GPT-2 Large - Layer 21 Dim 3419** #overty impoverished #wana poverty poverty poorest #Saharan poorer poorest Yemen Poverty families malnutrition Poverty Senegal marginalized impoverished refugees #poor subsistence Gujar displaced homelessness hardship Homeless refugee #heid households Ramadan migrant #Palest disadvantaged poorer Sudan Rahman oppressed #amily socioeconomic illiter peasant Mahmoud homeless Haitian poor #advertisement Ethiopian #hya Kaf #African Rw wealthier #poor Africans Af caste rural homeless #fam Hait needy GPT-2 Large - Layer 25 Dim 2442** Tracker tracking gau Tracker charts tracker tracker Tracking #Measure quant measurement #Stats measuring gau #Tracker GPS gauge Track tracking estimating Tracking tally #Monitor #ometers #chart tracked Meter calculate #HUD calculating | 16154 | | | | weaken | elevate | | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|------| | depri | encouraging | | | dissu | accelerate | | | impede | enlarg | | | convol | energ | | | encouraging | accent | | | #xiety | acceler | | | #akening | depri | | | lowering | elong | | | GPT-2 Base - Layer 1 Dim 2931 evening week #shows evening night night #sets morning #lav afternoon afternoon month #/+ #'s Night #naissance Loll #genre Kinnikuman semester Weekend #ched morning #ague #enna weekend Saturday latest Sunday #cher week #EST Blossom #icter #Night happens #atto day #vertising happened #spr #essim #Sunday Masquerade #morning #ished #Thursday sounded Week #ching Panc pesky Evening #chy #allery trope #ADVERTISEMENT #feature #Street #fy GPT-2 Base - Layer 0 Dim 1194 Pay receipts #Pay depos refund Deposit police deduct #pay #milo #paying #igree #Tax #eln debit levied PayPal deposit ATM #enforcement cops endot tax #soType ID paperwork #payment deposits payment loopholes checkout waivers #police receipt agents waive DMV loophole application arresting card commissioner applications Forms office transporter arrested Dupl #paid confisc pay Clapper #tax #ventures | RCMP | #Tax | | PAY | whistleblowers | | | APPLIC | #ADRA | | | GPT-2 Base - Layer 9 Dim 2771 flaws flaws lurking weaknesses failings dangers vulnerabilities scams inaccur shortcomings scams pitfalls shortcomings injust flawed faults glitches flawed pitfalls abuses inconsistencies imperfect rigged lurking biases wrongdoing deficiencies corruption weaknesses inaccur discrepancies inadequ hypocrisy fraud rigging inequ deceptive weakness misinformation scam #urities hazards lur problematic imperfect hoax regress danger #abase failings #errors problems #lived injustice abuses plagiar misinterpret plag suspic deceptive C.4 Knowledge Lookup Given a few seed embeddings of vocabulary items we find related FF values by taking a product of the average embeddings with FF values. Seed vectors: ["python", "java", "javascript"] Layer 14 Dim 1215 (ranked 3rd) filesystem debugging Windows HTTP configure Python debug config Linux Java configuration cache Unix lib runtime kernel plugins virtual FreeBSD hash plugin header file server PHP | | | | 16155 | | | GNU headers Apache initialization Mozilla Seed vectors: ["cm", "kg", "inches"] Layer 20 Dim 2917 (ranked 1st) percent years hours minutes million seconds inches months miles weeks pounds #% kilometers ounces kilograms grams kilometres metres centimeters thousand days km yards Years meters #million acres kg #years inch Seed vectors: ["horse", "dog", "lion"] Layer 21 Dim 3262 (ranked 2nd) animal animals Animal dogs horse wildlife Animals birds horses dog mammal bird mammals predator beasts Wildlife species #Animal #animal Dogs fish rabbits deer elephants wolves pets veterinary canine beast | GNU headers Apache initialization Mozilla Seed vectors: ["cm", "kg", "inches"] Layer 20 Dim 2917 (ranked 1st) percent years hours minutes million seconds inches months miles weeks pounds #% kilometers ounces kilograms grams kilometres metres centimeters thousand days km yards Years meters #million acres kg #years inch Seed vectors: ["horse", "dog", "lion"] Layer 21 Dim 3262 (ranked 2nd) animal | predators reptiles rodent primates hunting livestock creature rabbit rept elephant creatures human hunters hunter shark Rept cattle wolf Humane tiger lizard | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------| ## D **Sentiment Analysis Fine-Tuning Vector Examples** This section contains abusive language Classification Head Parameters Below we show the finetuning vector of the classifier weight. "POSITIVE" designates the vector corresponding to the label "POSITIVE", and similarly for "NEGATIVE". POSITIVE **NEGATIVE** ----------- ------------ #yssey bullshit #knit lame #etts crap passions incompetent #etooth inco #iscover bland pioneers incompetence #emaker idiots Pione crappy #raft shitty #uala idiot prosper pointless #izons retarded #encers worse #joy garbage cherish CGI loves FUCK #accompan Nope strengthens useless #nect shit comr mediocre honoured poorly insepar stupid embraces inept battled lousy #Together fuck intrig sloppy #jong Worse friendships Worst #anta meaningless In the following sub-sections, we sample 4 difference vectors per each parameter group (FF keys, FF values; attention query, key, value, and output subheads), and each one of the fine-tuned layers (layers 9-11). We present the ones that seemed to contain relevant patterns upon manual inspection. We also report the number of "good" vectors among the four sampled vectors for each layer and parameter group. ## Ff Keys Layer 9 4 out of 4 | diff | -diff | | | |-----------------|-----------------|------|-------| | ----------- | --------------- | | | | amazing | seiz | | | | movies | coerc | | | | wonderful | Citiz | | | | love | #cffff | | | | movie | #GBT | | | | cinematic | targ | | | | enjoyable | looph | | | | wonderfully | Procedures | | | | beautifully | #iannopoulos | | | | enjoy | #Leaks | | | | films | #ilon | | | | comedy | grievance | | | | fantastic | #merce | | | | awesome | Payments | | | | #Enjoy | #RNA | | | | cinem | Registrar | | | | film | Regulatory | | | | loving | immobil | | | | enjoyment | #bestos | | | | masterpiece | #SpaceEngineers | diff | -diff | | --------------- | ------------ | | | | reperto | wrong | | | | congratulations | unreasonable | | | | Citation | horribly | | | | thanks | inept | | | | Recording | worst | | | | rejo | egregious | | | | Profile | #wrong | | | | Tradition | unfair | | | | canopy | worse | | | | #ilion | atro | | | | extracts | stupid | | | | descendant | egreg | | | | #cele | bad | | | | enthusiasts | terribly | | | | :-) | ineffective | | | | #photo | nonsensical | | | | awaits | awful | | | | believer | #worst | | | | #IDA | incompetence | | | | welcomes | #icably | | | | diff | -diff | | | | ------- | ---------- | | | | movie | seiz | | | | fucking | Strongh | | | | really | #etooth | | | | movies | #20439 | | | | damn | #Secure | | | | funny | Regulation | | | | shit | Quarterly | | | | kinda | concess | | | | REALLY | Recep | | | | Movie | #aligned | | | | stupid | targ | | | | #movie | mosqu | | | | goddamn | #verning | | | | crap | FreeBSD | | | | shitty | PsyNet | | | | film | Facilities | | | | crappy | #Lago | | | | damned | #Register | | | | #Movie | #"}]," | | | | cheesy | Regist | diff | -diff | | ------------ | ------------ | | | | incompetence | #knit | | | | bullshit | #Together | | | | crap | Together | | | | useless | versatile | | | | pointless | #Discover | | | | incompetent | richness | | | | idiots | #iscover | | | | incompet | forefront | | | | garbage | inspiring | | | | meaningless | pioneering | | | | stupid | #accompan | | | | crappy | unparalleled | | | | shitty | #Explore | | | | nonexistent | powerfully | | | | worthless | #"},{" | | | | Worse | #love | | | | lame | admired | | | | worse | #uala | | | | inco | innovative | | | | ineffective | enjoyed | | | Layer 10 4 out of 4 | diff | -diff | | | |--------------------|---------------|------|-------| | --------------- | ------------- | | | | quotas | wonderfully | | | | #RNA | wonderful | | | | cessation | beautifully | | | | subsidy | amazing | | | | #SpaceEngineers | fantastic | | | | placebo | incredible | | | | exemptions | amazingly | | | | treadmill | great | | | | Labs | unforgettable | | | | receipt | beautiful | | | | moratorium | brilliantly | | | | designation | hilarious | | | | ineligible | love | | | | reimbursement | marvelous | | | | roundup | vividly | | | | Articles | terrific | | | | PubMed | memorable | | | | waivers | #Enjoy | | | | Citiz | loving | | | | landfill | fascinating | diff | -diff | | ------------------ | ------------- | | | | isEnabled | wonderfully | | | | guiActiveUnfocu... | beautifully | | | | #igate | cinem | | | | waivers | cinematic | | | | expires | wonderful | | | | expire | amazing | | | | reimb | Absolutely | | | | expired | storytelling | | | | #rollment | fantastic | | | | #Desktop | Definitely | | | | prepaid | unforgettable | | | | #verning | comedy | | | | #andum | movie | | | | reimbursement | comedic | | | | Advisory | hilarious | | | | permitted | #movie | | | | #pta | #Amazing | | | | issuance | scenes | | | | Priebus | Amazing | | | | #iannopoulos | enjoyable | | | | diff | -diff | | | | ------------ | ---------- | | | | horror | #deals | | | | whim | #iband | | | | subconscious | [& | | | | unrealistic | #heid | | | | imagination | #APD | | | | viewers | withdrew | | | | enjoyment | #Shares | | | | nostalgia | mathemat | | | | absolute | [+] | | | | sentimental | #Tracker | | | | unreal | #zb | | | | Kubrick | testified | | | | awe | #ymes | | | | inspiration | mosqu | | | | subtle | #Commerce | | | | cinematic | administr | | | | perfection | feder | | | | comedic | repaired | | | | fantasy | #pac | | | | mindless | #Community | diff | -diff | | ------------- | ------------- | | | | #Leaks | loving | | | | quotas | love | | | | #RNA | loved | | | | subsidy | lovers | | | | #?'" | wonderful | | | | Penalty | lover | | | | #iannopoulos | nostalgic | | | | #>] | alot | | | | discredited | beautiful | | | | #conduct | amazing | | | | #pta | great | | | | waivers | passionate | | | | Authorization | admire | | | | #admin | passion | | | | HHS | lovely | | | | arbitrarily | loves | | | | #arantine | unforgettable | | | | #ERC | proud | | | | memorandum | inspiration | | | | #Federal | #love | | | Layer 11 4 out of 4 | diff | -diff | |-------------|--------------| | ----------- | ----------- | | inco | cherish | | pointless | #knit | | Nope | #terday | | bullshit | #accompan | | crap | prosper | | useless | versatile | | nonsense | friendships | | futile | #uala | | anyways | Lithuan | | anyway | cherished | | meaningless | redes | | clueless | inspires | | lame | Proud | | wasting | friendship | | bogus | exceptional | | vomit | #beaut | | nonsensical | #ngth | | retarded | pioneering | | idiots | pioneers | | shit | nurt | | diff | -diff | | ----------- | ------------ | | #accompan | bad | | Pione | crap | | celebrate | inefficient | | #Discover | stupid | | #knit | worse | | pioneering | mistake | | recogn | incompetence | | reunited | mistakes | | comr | incompetent | | thriving | miser | | #iscover | garbage | | commemorate | retarded | | Remem | #bad | | ecstatic | poor | | forefront | ineffective | | enthusi | retard | | renewed | Poor | | colle | bullshit | | Inspired | inept | | #uala | errors | diff **-diff** --------------- ----------- #SpaceEngineers love nuisance definitely #erous always #aband wonderful Brist loved racket wonderfully Penalty cherish bystand loves #iannopoulos truly Citiz enjoy Codec really courier #olkien #>] beautifully #termination #love incapac great #interstitial LOVE fugitive never breaching adore targ loving thug amazing diff **-diff** ----------- ------------ #accompan bad Pione crap celebrate inefficient #Discover stupid #knit worse pioneering mistake recogn incompetence reunited mistakes comr incompetent thriving miser #iscover garbage commemorate retarded Remem #bad ecstatic poor forefront ineffective enthusi retard renewed Poor colle bullshit Inspired inept #uala errors | diff | -diff | |-----------------|--------------| | --------------- | ----------- | | #SpaceEngineers | love | | nuisance | definitely | | #erous | always | | #aband | wonderful | | Brist | loved | | racket | wonderfully | | Penalty | cherish | | bystand | loves | | #iannopoulos | truly | | Citiz | enjoy | | Codec | really | | courier | #olkien | | #>] | beautifully | | #termination | #love | | incapac | great | | #interstitial | LOVE | | fugitive | never | | breaching | adore | | targ | loving | | thug | amazing | | diff | -diff | | ------------ | ------------ | | #knit | bullshit | | passions | crap | | #accompan | idiots | | #ossom | goddamn | | #Explore | stupid | | welcomes | shitty | | pioneering | shit | | forefront | garbage | | embraces | fuck | | pioneers | incompetence | | intertw | crappy | | #izons | bogus | | #iscover | useless | | unparalleled | idiot | | evolving | #shit | | Together | pointless | | vibrant | stupidity | | prosper | fucking | | strengthens | nonsense | | #Together | FUCK | ## Ff Values Layer 9 0 out of 4 Layer 10 0 out of 4 Layer 11 0 out of 4 WQ **Subheads** Layer 9 3 out of 4 diff -diff ------------ ----------- bullshit strengthens bogus Also faux #helps spurious adjusts nonsense #ignt nonsensical evolves inept helps crap grew junk grows shitty #cliffe fake recognizes incompetence #assadors crappy regulates phony flourished sloppy improves dummy welcomes mediocre embraces lame gathers outrage greets inco prepares diff -diff ---------- ------------ alot Provision kinda coerc amazing Marketable definitely contingency pretty #Dispatch tho seiz hilarious #verning VERY #iannopoulos really #Reporting lol #unicip wonderful Fiscal thats issuance dont provision pics #Mobil doesnt #etooth underrated policymakers funny credential REALLY Penalty #love #activation alright #Officials | diff | -diff | |--------------|--------------| | ------------ | ------------ | | #ARGET | kinda | | #idal | alot | | #--+ | amazing | | Prev | interesting | | #enger | wonderful | | #iannopoulos | definitely | | #report | unbelievable | | #RELATED | really | | issuance | amazingly | | #earcher | pretty | | Previous | nice | | Legislation | absolutely | | #astical | VERY | | #iper | wonderfully | | #>[ | incredible | | #</ | hilarious | | Vendor | funny | | #"> | fantastic | | #phrine | quite | | #wcsstore | defin | | diff | -diff | | ---------- | ------------ | | alot | Provision | | kinda | coerc | | amazing | Marketable | | definitely | contingency | | pretty | #Dispatch | | tho | seiz | | hilarious | #verning | | VERY | #iannopoulos | | really | #Reporting | | lol | #unicip | | wonderful | Fiscal | | thats | issuance | | dont | provision | | pics | #Mobil | | doesnt | #etooth | | underrated | policymakers | | funny | credential | | REALLY | Penalty | | #love | #activation | | alright | #Officials | Layer 10 4 out of 4 | diff | -diff | |--------------------|--------------| | -------- | --------- | | crap | #Register | | shit | Browse | | bullshit | #etooth | | stupid | #ounces | | shitty | #verning | | horrible | #raft | | awful | #egu | | fucking | #Lago | | comedic | Payments | | crappy | #orsi | | cheesy | Coinbase | | comedy | #ourse | | fuck | #iann | | mediocre | #"}]," | | terrible | #onductor | | movie | #obil | | bad | #rollment | | gimmick | #ivot | | filler | #Secure | | inept | #ETF | | diff | -diff | | ------------------ | ------------ | | #knit | crap | | #"},{" | bullshit | | #"}]," | stupid | | #estones | inept | | #Learn | shit | | #ounces | idiots | | #egu | shitty | | #Growing | crappy | | #ributes | incompetence | | #externalAction... | fuck | | #encers | pointless | | Browse | nonsense | | jointly | nonsensical | | Growing | stupidity | | #ossom | gimmick | | honoured | inco | | #accompan | lame | | #agos | incompetent | | #raft | mediocre | | #iership | bland | diff -diff ------------- ------------ love Worse unforgettable Nope beautiful #Instead loved Instead #love #Unless loving incompetence amazing incapable #joy Unless inspiring #failed passion incompet adventure incompetent loves ineffective excitement #Fuck joy #Wr LOVE inept together spurious memories #Failure wonderful worthless enjoyment obfusc themes inadequate diff **-diff** ------------------ ------------ #knit crap #"},{" bullshit #"}]," stupid #estones inept #Learn shit #ounces idiots #egu shitty #Growing crappy #ributes incompetence #externalAction... fuck #encers pointless Browse nonsense jointly nonsensical Growing stupidity #ossom gimmick honoured inco #accompan lame #agos incompetent #raft mediocre #iership bland | diff | -diff | |---------------|--------------| | ------------- | ------------ | | love | Worse | | unforgettable | Nope | | beautiful | #Instead | | loved | Instead | | #love | #Unless | | loving | incompetence | | amazing | incapable | | #joy | Unless | | inspiring | #failed | | passion | incompet | | adventure | incompetent | | loves | ineffective | | excitement | #Fuck | | joy | #Wr | | LOVE | inept | | together | spurious | | memories | #Failure | | wonderful | worthless | | enjoyment | obfusc | | themes | inadequate | | diff | -diff | | --------- | ----------- | | crap | #egu | | bullshit | #etooth | | shit | #verning | | :( | #ounces | | lol | #accompan | | stupid | coh | | filler | #assadors | | shitty | #pherd | | fucking | #acio | | pointless | #uchs | | idiots | strengthens | | anyways | #reprene | | nonsense | Scotia | | anyway | #rocal | | crappy | reciprocal | | stupidity | Newly | | fuck | fost | | #shit | #ospons | | anymore | #onductor | | Nope | governs | ## Layer 11 3 out of 4 diff **-diff** ------------- ------------------ #also meaningless #knit incompetence helps inco strengthens pointless :) incompetent broaden Worse #ossom inept incorporates nonsensical #Learn coward incorporate unint #"},{" obfusc enjoy excuses enjoyed panicked complementary useless #etts bullshit enhances stupid integrates incompet #ospons incomprehensibl... differs stupidity #arger lifeless diff -diff ------------- --------------- amazing #iannopoulos beautifully expired love ABE wonderful Yiannopoulos wonderfully liability unforgettable #SpaceEngineers beautiful #isance loving Politico #love waivers #beaut #utterstock enjoyable excise #Beaut #Stack inspiring phantom fantastic PubMed defin #ilk incredible impunity memorable ineligible greatness Coulter amazingly issuance timeless IDs | diff | -diff | |---------------|-----------------| | ------------- | ------------ | | #utterstock | amazing | | #ARGET | movie | | #cffff | alot | | #etooth | scenes | | #Federal | comedy | | POLITICO | movies | | #Register | cinematic | | #Registration | greatness | | #rollment | wonderful | | #ETF | storytelling | | #ulia | film | | Payments | tho | | #IRC | masterpiece | | Regulatory | films | | Alternatively | Kubrick | | #RN | realism | | #pta | comedic | | Regulation | cinem | | #GBT | #movie | | #":""},{" | genre | | diff | -diff | | ------------- | --------------- | | amazing | #iannopoulos | | beautifully | expired | | love | ABE | | wonderful | Yiannopoulos | | wonderfully | liability | | unforgettable | #SpaceEngineers | | beautiful | #isance | | loving | Politico | | #love | waivers | | #beaut | #utterstock | | enjoyable | excise | | #Beaut | #Stack | | inspiring | phantom | | fantastic | PubMed | | defin | #ilk | | incredible | impunity | | memorable | ineligible | | greatness | Coulter | | amazingly | issuance | | timeless | IDs | ## Wk **Subheads** Layer 9 3 Out Of 4 | diff | -diff | | | |---------------|-------------|------|-------| | ------- | ---------- | | | | enclave | horrible | | | | #. | pretty | | | | #; | alot | | | | #omial | MUCH | | | | apiece | VERY | | | | #assian | nothing | | | | #.</ | #much | | | | #ulent | terrible | | | | #,[ | crappy | | | | #eria | strange | | | | #ourse | everything | | | | exerc | very | | | | #\/ | shitty | | | | #Wire | nice | | | | #arium | many | | | | #icle | wonderful | | | | #.[ | genuinely | | | | #/$ | beautiful | | | | #API | much | | | | #ium | really | diff | -diff | | ------------- | ----------- | | | | Then | any | | | | Instead | #ady | | | | Unfortunately | #imate | | | | Why | #cussion | | | | Sometimes | #ze | | | | Secondly | appreci | | | | #Then | #raq | | | | But | currently | | | | Luckily | #kers | | | | Anyway | #apixel | | | | And | active | | | | Suddenly | significant | | | | Thankfully | #ade | | | | Eventually | #imal | | | | Somehow | specific | | | | Fortunately | #ability | | | | Meanwhile | anyone | | | | What | #ker | | | | Obviously | #unction | | | | Because | reap | | | | diff | -diff | | | | ----------- | --------- | | | | bullshit | #avorite | | | | anyway | #ilyn | | | | crap | #xtap | | | | anyways | #insula | | | | unless | #cedented | | | | nonsense | #aternal | | | | #falls | #lyak | | | | fuck | #rieve | | | | #. | #uana | | | | fallacy | #accompan | | | | #tics | #ashtra | | | | #punk | #icer | | | | damned | #andum | | | | #fuck | Mehran | | | | stupidity | #andise | | | | shit | #racuse | | | | commercials | #assadors | | | | because | #Chel | | | | despite | rall | | | | movies | #abella | | | Layer 10 2 out of 4 diff **-diff** ------------ ------------ #, Nope work Instead #icle Thankfully #. Surely outdoors #Instead inspiring Fortunately exped Worse ahead Luckily together #Thankfully touches Unless out Apparently personalized Perhaps #joy #Unless #unction #Fortunately warm Sorry exceptional Secondly experience #Luckily lasting #Rather integ Hence #astic Neither diff -diff -------- --------- #sup #etting Amazing #liness #airs #ktop awesome #ulkan Bless #enthal Loving #enance my #yre #OTHER #eeds #BW omission #perfect #reys #-) #lihood amazing #esian #adult #holes perfect syndrome welcome grievance Rated offenders #Amazing #wig #anch #hole FANT #creen #anche #pmwiki Layer 11 2 out of 4 diff -diff -------------- ----------- #ly #say storytelling actionGroup sounding prefers spectacle #ittees #ness #reon #hearted presumably cinematic waivers #est #aucuses portrayal #Phase quality #racuse paced #arge combination #hers juxtap #sup representation #later mixture expired #!!!!! stricter filmmaking #onds enough #RELATED thing #rollment rendition #orders WV **Subheads** ## Layer 9 4 out of 4 | diff | -diff | |------------|--------------| | ---------- | ------------ | | shots | #Kind | | shit | suscept | | bullshit | Fathers | | stuff | #Footnote | | tits | concess | | crap | #accompan | | boobs | Strait | | creepy | #orig | | noises | #ESE | | spectacle | #ufact | | boring | Founder | | things | #iere | | everything | #HC | | noise | #Prev | | #anim | #alias | | ugly | participated | | garbage | #Have | | stupidity | #coe | | visuals | #Father | | selfies | strugg | | diff | -diff | |--------------|----------------| | ----------- | ---------- | | #":""},{" | honestly | | #etooth | definitely | | #ogenesis | hilarious | | #verning | alot | | broker | amazing | | #ounces | funn | | threatens | cinem | | #astical | Cinem | | foothold | comedic | | intruder | Absolutely | | #vernment | comedy | | #activation | absolutely | | #Oracle | amazingly | | fugitive | satire | | visitor | underrated | | #assian | really | | barrier | fantastic | | #":[ | enjoyable | | #vier | REALLY | | #oak | wonderful | | diff | -diff | | ------------ | -------------- | | crap | Pione | | bullshit | pioneers | | shit | complementary | | vomit | pioneering | | nonsense | #knit | | stupid | #raits | | idiots | Browse | | fucking | #iscover | | #shit | strengthened | | idiot | #rocal | | fuck | prosper | | gimmick | Communities | | stupidity | neighbourhoods | | goddamn | #Learn | | shitty | strengthens | | incompetence | #iscovery | | lame | #ributes | | FUCK | strengthen | | inco | #izons | | blah | Mutual | diff -diff -------- ------------- crap jointly shit #verning bullshit #pora fucking #rocal idiots #raft fuck #etooth goddamn #estead stupid #ilitation FUCK #ourse #fuck migr shitty #ourses damn #iership #shit Pione lol #iscover fuckin pioneering nonsense #egu crappy #ivities kinda neighbourhood Fuck pioneer idiot nurt diff -diff ------------ -------------- crap Pione bullshit pioneers shit complementary vomit pioneering nonsense #knit stupid #raits idiots Browse fucking #iscover #shit strengthened idiot #rocal fuck prosper gimmick Communities stupidity neighbourhoods goddamn #Learn shitty strengthens incompetence #iscovery lame #ributes FUCK strengthen inco #izons blah Mutual Layer 10 4 out of 4 | diff | -diff | |-----------|----------------| | -------- | ------------- | | crap | jointly | | shit | #verning | | bullshit | #pora | | fucking | #rocal | | idiots | #raft | | fuck | #etooth | | goddamn | #estead | | stupid | #ilitation | | FUCK | #ourse | | #fuck | migr | | shitty | #ourses | | damn | #iership | | #shit | Pione | | lol | #iscover | | fuckin | pioneering | | nonsense | #egu | | crappy | #ivities | | kinda | neighbourhood | | Fuck | pioneer | | idiot | nurt | | diff | -diff | | --------- | -------------- | | anime | #rade | | kinda | #jamin | | stuff | #ounces | | shit | #pherd | | lol | Unable | | tho | #pta | | realism | Roche | | damn | Payments | | :) | Gupta | | fucking | #odan | | alot | #uez | | movie | #adr | | funny | #ideon | | anyways | #Secure | | enjoyable | #raught | | crap | Bei | | comedy | sovere | | genre | unsuccessfully | | anyway | #moil | | fun | #Register | | diff | -diff | |---------------|---------------| | ------------- | ------------ | | #knit | crap | | welcomes | bullshit | | Together | idiots | | Growing | stupid | | #Explore | shitty | | pioneering | incompetence | | complementary | pointless | | milestone | goddamn | | pioneer | retarded | | #Together | lame | | strengthens | Worse | | #ossom | crappy | | pioneers | incompet | | #Learn | shit | | jointly | stupidity | | #Growing | fucking | | embraces | Nope | | #"},{" | FUCK | | sharing | incompetent | | #Discover | pathetic | | diff | -diff | | ------------ | ------------- | | bullshit | inspiring | | incompetence | unforgettable | | Worse | #knit | | idiots | #love | | crap | passions | | dummy | cherish | | incompetent | richness | | Nope | timeless | | stupid | loves | | retarded | passionate | | lame | beautifully | | nonexistent | overcoming | | wasting | unique | | #Fuck | highs | | bogus | nurture | | worse | unparalleled | | nonsense | vibrant | | ineligible | #beaut | | pointless | intertw | | inco | insepar | diff **-diff** ----------- --------- #"}]," crap #verning stupid #etooth shit #"},{" fucking Browse fuck #Register shitty #Lago bullshit #raft crappy #egu idiots jointly horrible #iership stupidity strengthens kinda Scotia goddamn #ounces awful #uania mediocre #iann pathetic workspace #fuck seiz damn Payments FUCK #Learn damned diff -diff ------------ ------------- bullshit inspiring incompetence unforgettable Worse #knit idiots #love crap passions dummy cherish incompetent richness Nope timeless stupid loves retarded passionate lame beautifully nonexistent overcoming wasting unique #Fuck highs bogus nurture worse unparalleled nonsense vibrant ineligible #beaut pointless intertw inco insepar | diff | -diff | |--------------|---------------| | ----------- | --------- | | #"}]," | crap | | #verning | stupid | | #etooth | shit | | #"},{" | fucking | | Browse | fuck | | #Register | shitty | | #Lago | bullshit | | #raft | crappy | | #egu | idiots | | jointly | horrible | | #iership | stupidity | | strengthens | kinda | | Scotia | goddamn | | #ounces | awful | | #uania | mediocre | | #iann | pathetic | | workspace | #fuck | | seiz | damn | | Payments | FUCK | | #Learn | damned | | diff | -diff | | ------------ | ------------- | | bullshit | Pione | | crap | pioneers | | stupid | pioneering | | nonsense | complementary | | incompetence | #knit | | idiots | #Learn | | shit | #accompan | | stupidity | pioneer | | pointless | invaluable | | inco | #ossom | | retarded | #Together | | idiot | Browse | | vomit | versatile | | lame | welcomes | | meaningless | #"},{" | | goddamn | admired | | nonsensical | jointly | | garbage | Sharing | | #shit | Together | | useless | #Discover | Layer 11 4 out of 4 | diff | -diff | |----------------|--------------| | ------------ | ------------ | | Provision | alot | | issuance | amazing | | Securities | kinda | | #ogenesis | fucking | | Holdings | awesome | | Regulatory | funny | | indefinitely | damn | | Advisory | REALLY | | designation | hilarious | | unilaterally | tho | | Province | unbelievable | | Regulation | fuckin | | #Lago | wonderful | | issued | doesnt | | Recep | definitely | | Advis | thats | | #verning | yeah | | broker | fantastic | | #Mobil | badass | | Policy | dont | | diff | -diff | | -------------- | ------------ | | pioneers | bullshit | | pioneering | crap | | Browse | shit | | Pione | idiots | | complementary | stupid | | #knit | vomit | | prosper | incompetence | | #raits | nonsense | | #Trend | gimmick | | #ributes | stupidity | | #Learn | idiot | | strengthen | shitty | | strengthened | fucking | | #ossom | lame | | pioneer | crappy | | #iscover | goddamn | | #Growing | pointless | | prosperity | inco | | neighbourhoods | #shit | | #owship | Nope | diff -diff ------------ --------- crap #rocal fucking #verning bullshit #etooth fuck #uania goddamn caches shit Browse #fuck #"},{" stupidity #imentary pathetic exerc spoiler #Lago stupid #"}]," inept #cium blah #enges FUCK #ysis awful quarterly shitty #iscover trope Scotia Godd #resso inco #appings incompetence jointly diff -diff -------------- ------------ pioneers bullshit pioneering crap Browse shit Pione idiots complementary stupid #knit vomit prosper incompetence #raits nonsense #Trend gimmick #ributes stupidity #Learn idiot strengthen shitty strengthened fucking #ossom lame pioneer crappy #iscover goddamn #Growing pointless prosperity inco neighbourhoods #shit #owship Nope ## Wo **Subheads** Layer 9 0 out of 4 Layer 10 0 out of 4 Layer 11 0 out of 4 diff -diff ------------ ------------- Worse #knit bullshit pioneers Nope pioneering crap inspiring incompetence #iscover idiots complementary incompetent pioneer stupid #ossom incompet passionate pointless passions inco journeys Stupid unique meaningless embraces nonsense admired lame forefront idiot richness worse invaluable #Fuck prosper whining vibrant nonsensical enriched ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 ✓ A2. Did you discuss any potential risks of your work? 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4,5 ✓ B1. Did you cite the creators of artifacts you used? 4,5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 4,5 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? IMDB is a well studied dataset and has been discussed many times before ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? IMDB is a well studied dataset and has been discussed many times before ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 ## C ✓ **Did You Run Computational Experiments?** 4,5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4,5 - wherever budget is known The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4,5 - no hyperparameters were searched ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4,5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
li-etal-2023-shot-data
Few-Shot Data-to-Text Generation via Unified Representation and Multi-Source Learning
https://aclanthology.org/2023.acl-long.894
In this paper, we present a novel approach for data-to-text generation that addresses the limitations of current methods that primarily focus on specific types of structured data. Our proposed method aims to improve performance in multi-task training, zero-shot and few-shot scenarios by providing a unified representation that can handle various forms of structured data such as tables, knowledge graph triples, and meaning representations. We demonstrate that our proposed approach can effectively adapt to new structured forms, and can improve performance in comparison to current methods. For example, our method resulted in a 66{\%} improvement in zero-shot BLEU scores when transferring models trained on table inputs to a knowledge graph dataset. Our proposed method is an important step towards a more general data-to-text generation framework.
# Few-Shot Data-To-Text Generation Via Unified Representation And Multi-Source Learning Alexander Hanbo Li, Mingyue Shang, Evangelia Spiliopoulou, Jie Ma Patrick Ng, Zhiguo Wang, Bonan Min, William Wang Kathleen McKeown, Vittorio Castelli, Dan Roth, Bing Xiang AWS AI Labs {hanboli, myshang, spilieva, jieman, patricng, zhiguow, bonanmin, wyw}@amazon.com {mckeownk, vittorca, drot, bxiang}@amazon.com ## Abstract We present a novel approach for structured datato-text generation that addresses the limitations of existing methods that primarily focus on specific types of structured data. Our proposed method aims to improve performance in multitask training, zero-shot and few-shot scenarios by providing a unified representation that can handle various forms of structured data such as tables, knowledge graph triples, and meaning representations. We demonstrate that our proposed approach can effectively adapt to new structured forms, and can improve performance in comparison to current methods. For example, our method resulted in a 66% improvement in zero-shot BLEU scores when transferring models trained on table inputs to a knowledge graph dataset. Our proposed method is an important step towards a more general data-to-text generation framework. 1 ## 1 Introduction Data-to-text generation is the task of converting structured data into natural language text that can be easily understood by humans. Previous methods for data-to-text generation have been limited to specific structured forms. For example, graph neural networks (GNNs) have been used to encode knowledge graph input (Rik Koncel-Kedziorski and Hajishirzi, 2019; Ribeiro et al., 2020; Guo et al., 2020; Li et al., 2021), while table-specific encoders have been proposed for tables (Liu et al., 2017; Bao et al., 2018; Nema et al., 2018; Jain et al., 2018; Wang et al., 2022). However, these methods are not easily transferable to other structured forms, creating a barrier for scientific development and preventing models from learning across tasks. Recent work has attempted to address the problem of limited structured form applicability by using pretrained language models (PLMs) as a single text-to-text framework for all data structures, by 1Our code will be open-sourced at anonymous-link. linearizing the data as text sequences. As shown by Kale and Rastogi (2020); Xie et al. (2022), these methods achieve state-of-the-art performance on a wide range of data-to-text tasks. Despite the advancements made in the field, there are still unresolved questions regarding the relationship between various structured forms, particularly in the context of zero-shot or few-shot settings, where models are required to rapidly adapt to new structured forms. This is particularly pertinent in cases of data scarcity, when structured forms vary across different domains and there is a limited amount of data available for a specific structured form, but a single model is needed to operate on all of them. Such an example is to adapt a knowledge-graph-to-text model to a new domain with data in table format. Even when there is an abundance of data, developing a universal model that can handle all structured forms remains a challenging task. As seen in Xie et al. (2022), a multi-task trained model may perform worse than a single-task model on table inputs. One important reason for such performance drop is because previous research has not fully examined the impact of various linearization methods on these tasks and their effect on cross-task generalization. Despite the use of text-to-text transformers, linearization methods for various structured forms remain diverse, and even within one structured form, linearization can vary across studies. For example, the linearization of KG triples differs in Nan et al. (2021) and Xie et al. (2022), highlighting the need for further research on the relationship between data formats and data-to-text tasks. In this paper, we address the unresolved questions surrounding the relationship between various structured forms by introducing a *unified representation* for knowledge graphs, tables, and meaning representations. We demonstrate that our method allows for the conversion of knowledge graph triples and meaning representations into virtual ta16171 bles, which can then be linearized in a consistent manner. Through evaluating our approach on five representative data-to-text tasks across the aforementioned formats, we show that our method not only achieves competitive performance compared to other data-specific linearizations for individual tasks, but also leads to significant improvements in transfer learning scenarios across structured forms, particularly in zero-shot or few-shot settings. For example, using the unified representation improves the zero-shot BLEU score by relatively 66% when transferring from ToTTo (Parikh et al., 2020) to DART (Nan et al., 2021). Additionally, our approach results in improved performance when used in multi-task settings compared to models trained with varied linearizations. These results provide a clear indication of the effectiveness of our proposed unified representation in enhancing cross-task generalization. ## 2 Related Work Data-Type Specific Knowledge Encoding Research has been conducted to encode structured knowledge using various models and approaches, including Graph Neural Networks (GNNs) (Rik Koncel-Kedziorski and Hajishirzi, 2019; Ribeiro et al., 2020; Guo et al., 2020; Li et al., 2021; Song et al., 2018; Ribeiro et al., 2019; Cai and Lam, 2020; Zhang et al., 2020; Ribeiro et al., 2021b; Schmitt et al., 2021) and neural encoder-decoder models based on Gated Recurrent Units (GRUs) and Transformers (Gehrmann et al., 2018; Ferreira et al., 2019). These models have been used to assist in encoding knowledge graph inputs and meaning representations. Additionally, several models have been proposed for table-to-text generation, including approaches that combine content selection or entity memory in a Long Short-Term Memory (LSTM) model (Puduppully et al., 2018, 2019), and others that focus on table-specific encoders (Liu et al., 2017; Bao et al., 2018; Nema et al., 2018;Jain et al., 2018). More recent studies have utilized the capabilities of pre-trained language models in their designs, but have also incorporated specialized encoder structures or attention mechanisms specifically for table inputs. These include encoder-only models (Arik and Pfister, 2019; Yin et al., 2020; Herzig et al., 2020; Huang et al., 2020; Wang et al., 2021; Iida et al., 2021; Eisenschlos et al., 2021; Yang et al., 2022), as well as encoder-decoder models (Cao, 2020; Andrejczuk et al., 2022; Wang et al., 2022). However, it should be noted that the encoder structures of these works are specifically tailored for table input and cannot be directly applied to other types of data. Structured Data Linearization Recent developments in pretrained language models (Devlin et al., 2019; Radford et al., 2019; Lewis et al., 2020; Raffel et al., 2020) have made it possible to use a single text-to-text framework for various types of data by linearizing them as text sequences. Studies have been conducted on finetuning PLMs on table input (Parikh et al., 2020) and knowledge graph input (Kasner and Dušek, 2020; Ribeiro et al., 2021a), single-task and multi-task training on a collection of structured data grounding tasks (Xie et al., 2022), and the effectiveness of pretraining and fine-tuning strategies for data-to-text tasks (Kale and Rastogi, 2020) and table-based question answering tasks (Shi et al., 2022). These studies have consistently found that linearizing structured data as a sequence of tokens without modifying the model structure, is a simple yet effective strategy that outperforms pipelined neural architectures specifically tailored to particular data types. Zero/Few-Shot Data-to-Text Generation The studies such as Chen et al. (2020b) and Ke et al. (2021) have evaluated the zero and few-shot performance of PLMs on knowledge graph input, highlighting the benefits of a joint pretraining strategy on knowledge graphs and texts for learning better KG representations. Keymanesh et al. (2022) studied the prompt-tuning method for KG-to-text generation and found it to be effective in a fewshot setting. Chen et al. (2020d) combines PLM with a table content selector using a switch policy. Other researchers have also explored methods such as data augmentation (Chang et al., 2021) and retrieval-based input augmentation (Su et al., 2021) to aid in few-shot data-to-text generation. Kasner and Dusek (2022) proposes a pipeline approach involving a sequence of operations, such as ordering and aggregation, and only finetunes the PLMs of these modules to make the pipeline more domainindependent. ## 3 Unified Representation In this section, we demonstrate that structured data, such as tables, highlighted cells, knowledge graph triples, and meaning representations, can be linearized in a consistent manner. We begin by show- ![2_image_0.png](2_image_0.png) ing in Section 3.1 how knowledge graph triples and meaning representations can be mapped to a virtual table and subsequently linearized in the same way as tables. Next, in Section 3.2, we demonstrate the process of linearizing a table or highlighted cells. The entire method is illustrated in Figure 1. ## 3.1 Virtual Table KG Triple The method for converting triples from a connected sub-graph into a virtual table involves using the tail node of each triple as a cell value and the relation as the column header. Nodes that do not appear as tail nodes are not assigned a column header. An example is provided in Figure 1. "William Wasmund" does not have a column header assigned since it never appears as a tail node. If a set of knowledge graph triples contains multiple connected components, each component is converted into a separate table. Meaning Representation We focus on textual MRs that appear as a list of comma-separated attribute-value pairs (Dušek et al., 2020). These MRs can be treated as virtual tables by associating each Attribute[Value] with a cell value, represented by the "Value", and the "Attribute" as its corresponding column header. An example of this can be seen in Figure 1. ## 3.2 Linearization Of Tables After converting both KGs and MRs into virtual tables, we end up with only table inputs that need to be linearized. In this section, we discuss one choice of such a linearization method, motivated by ToTTo linearization (Parikh et al., 2020). Additionally, we will provide a specific example of how to linearize Table 1 in the following sections. | Table Title: Alma Jodorowsky Section Title: Filmography Year Title | Role | | |----------------------------------------------------------------------|--------------------------|---------| | 2014 | La Vie devant elles [fr] | Solana | | 2016 | Kids in Love | Evelyn | | 2017 | The Starry Sky Above Me | Justyna | Table 1: An example table to showcase our linearization. symbol, <xx>, and an end symbol, </xx>. | Start Symbol | Meaning | |----------------|-----------------------------------------| | <table> | contents in a table | | <column> | contents in a column | | <row> | contents in a row | | <cell> | content in a cell | | <col_header> | column header name | | <row_header> | row header name | | <title> | main title /domain / topic of the input | | <sub_title> | sub-title /domain /topic of the input | Linearization of Highlighted Cells To linearize the highlighted cells, we proceed in a left-to-right, top-to-bottom order. For instance, in Table 1, the linearization of the highlighted cells (in yellow background) appears as follows: 2 1 <title > Alma Jodorowsky </ title > 2 <sub_title > Filmography </ sub_title > 3 <table > 4 <cell > 2016 5 <col_header > Year </ col_header > 6 </ cell > 7 <cell > Kids in Love 8 <col_header > Title </ col_header > 9 </ cell > 10 <cell > Evelyn 11 <col_header > Role </ col_header > 12 </ cell > 13 </ table > 2Indentation is used for clarity in this example, but it is not present in the actual input. Basic Units The basic units for linearization are presented in Table 2. Each unit is defined by a start Linearization of (Sub)Table A row-wise linearization of the entire Table 1 is: ![3_image_0.png](3_image_0.png) to column-wise. An example is provided in the Appendix B. ## 4 Experiments Datasets We test our method on five data-totext datasets: The **ToTTo** dataset (Parikh et al., 2020) poses the challenge of generating a onesentence description, given highlighted cells from a Wikipedia table. Our models are evaluated on the validation set, as the annotations for the test set are not publicly available. The **DART** corpus (Nan et al., 2021) is an open-domain structured data-to-text resource, consisting of entity-relation triples. The **LogicNLG** dataset (Chen et al., 2020a) investigates the ability to generate logical inferences from table contents to implicit insights, as the target sentences. The **WebNLG** dataset (Gardent et al., 2017) includes triples from 15 DBpedia categories, which are mapped to their verbalization. Results are reported on the Seen (S), Unseen (U), and All (A) subsets of the data. The **E2E clean** dataset (Dušek et al., 2019) consists of meaning representations (MRs) from the restaurant domain. The task is to generate a sentence that verbalizes the useful information from the MR. Dataset statistics are summarized in Table 7 in the appendix. Evaluation Metrics We evaluate the quality of generated texts using several widely accepted metrics. *BLEU* (Papineni et al., 2002) measures the similarity between generated text and references in terms of n-gram overlap. *METEOR* (Banerjee and Lavie, 2005) assesses the quality of generated text by comparing unigram matches between the text and references, including exact, stem, synonym, and paraphrase matches. TER (Snover et al., 2006) is a measure of the number of edits required to change the generated text into one of the references. *PARENT* (Dhingra et al., 2019) takes into account the table input when evaluating generated text. *NIST* (Doddington, 2002) is similar to BLEU, but also considers the informativeness of each ngram. *CIDEr* (Vedantam et al., 2015) uses TF-IDF to lower the weights of common n-grams that appear in all references when calculating uni-gram to 4-gram overlaps between generated and reference sentences. We also use the *NLI score* (Chen et al., 2020a) on the LogicNLG dataset to evaluate the logical fidelity, which is a model-based evaluation using the BERT model trained on the TabFact (Chen et al., 2020c) dataset. Comparing Linearizations We compare our proposed *unified representation* to other linearization methods from previous papers. Specifically, on DART, WebNLG, and E2E datasets, we compare our method to the linearization used in UnifiedSKG (Xie et al., 2022).3 On ToTTo and LogicNLG datasets, we use the linearization from their original papers (Parikh et al., 2020; Chen et al., 2020a) for comparison. Examples of their linearization methods can be found in the appendix. ## 4.1 Zero And Few-Shot Experiments Our hypothesis is that a model trained on one structured form will transfer better to other forms under zero or few-shot settings when using our unified method of representation. We test this by focusing on transferring from ToTTo data (table input) to other types and from WebNLG (KGs) to ToTTo in this section. Results for other transfers can be found in the appendix. | Setting | Src representation | Tgt representation | |---------------------|----------------------|----------------------| | Only on tgt | - | Others | | Src to tgt, unified | Unified | Unified | | Src to tgt, varied | Others | Others | As shown in Table 3, for each experiment, we compare **three settings**: (i) *Only on tgt* - In fewshot experiments, we only train the model on the target task using the linearization from other papers. In zero-shot experiments, we use the foundational 3The E2E dataset is not studied in the paper, but the linearization is included in their official repository. model without any training. (ii) *Src to tgt, unified* – First, train the model on the source task and then fine-tune it on k-shot4target-task data, using our unified representation for both. (iii) *Src to tgt, varied* - Similar to (ii), but we use the linearization from other papers for each task, as described in 4. We refer to this as the varied setting because the source and target-task linearizations are different. During inference, we apply the same linearization method utilized during training to each target task. More implementation details are presented in the appendix. ## 4.1.1 Zero-Shot Performance The zero-shot results are summarized in Table 4. We compare our results to recent works GPT2- XL (Keymanesh et al., 2022), KGPT (Chen et al., 2020b), JointGT (Ke et al., 2021) and HTLM (Aghajanyan et al., 2022). Both KGPT and JointGT models are pretrained on large amounts of aligned knowledge graph and text data. HTLM is a hypertext language model pre-trained on a large-scale web crawl. It allows for structured prompting in the HTML format. From the results, we make several observations. (1) The *Only on tgt* performance is very low as expected, as the T5-base model has not been trained on any data. However, surprisingly the NLI score on LogicNLG is the highest under this setting. We observe that this NLI score is very unstable and might not be a good metric for judging the entailment of generated text. (2) The performance of Src to tgt, unified consistently and significantly surpasses that of *Src to tgt, varied*, even though both models are trained using the same source-task data, but with different representations. This demonstrates that representing source and target tasks in the same format is crucial for successful zero-shot transfer, as a common representation facilitates the transfer of knowledge learned on the source data to other structured forms and tasks. (3) The zero-shot performance of the "unified" model is even better than few-shot results of the baseline models. On DART, the "unified" model's BLEU score is 43% higher than that of HTLM. The improvement on WebNLG is particularly noteworthy for unseen categories. Utilizing a unified representation results in a zero-shot BLEU score of 39.82, surpassing the few-shot results of 37.18 by Ke et al. (2021) and 18.5 by Aghajanyan et al. (2022). ## 4.1.2 Few-Shot Results Figure 2 shows the few-shot results for sample sizes 8, 16, 32, 64, and 128. We repeat the experiments 5 times for each sample size and report the mean and 95% confidence intervals. Table −→ **KG Triples** From Figure 2a, 2b and 2c, we have identified three key observations: (1) Both the models *Src to tgt, unified* and *Src to tgt,* varied, which were initially trained on ToTTo, perform significantly better than the model *Only on* tgt, which was only trained on target tasks. This indicates that these two structured forms share common knowledge and that training the model on tabular input can greatly enhance its understanding of KG triples. (2) Furthermore, *Src to tgt, unified* (represented by the red curve) outperforms Src to tgt, varied (represented by the blue curve) by a substantial margin. This observation aligns with our previous findings in the zero-shot setting (as seen in Table 4) and highlights the importance of our unified representation approach in transferring knowledge learned from tables to KG triples. (3) Additionally, on the task of WebNLG, the improvement on unseen categories is particularly notable, further reinforcing our zero-shot findings. Table −→ **Meaning Representations** Based on Figure 2d, similar observations can be made for the E2E dataset. The improvement in terms of CIDEr is particularly significant when using fewer than 64 samples, indicating that the unified model generates more informative text compared to the varied and vanilla models. Table Description −→ **Table Insights** The LogicNLG task is distinct from the ToTTo task in that it requires the model to generate insights by analyzing the contents of a table, rather than generating surface-form descriptions based on highlighted cells. As shown in Figure 2e, when using only 8 samples, the *Src to tgt, varied* model performs better than the *Src to tgt, unified* model. This may be due to the fact that both tasks involve generating text from tables, and that the unified model is more proficient at transferring knowledge learned on the source task to the target task, which may lead to the generation of table descriptions rather than insights when provided with a limited number of samples. However, as the number of samples increases, the performance of the unified model improves, and it surpasses the varied model when k=128. A concrete example is provided in the case study section | Setting | DART (KG) | WebNLG (KG) | E2E clean (MR) | LogicNLG (Table) | | | | | | | | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|---------------|------------------|--------------------|------|------|------|-------|--------|-----|------| | BLEU | MET | TER ↓ | S | U | A | BLEU | NIST | CIDEr | BLEU-3 | NLI | | | GPT2-XL | 13.3 | 0.24 | 0.65 | - | - | - | - | - | - | - | - | | KGPT | - | - | - | - | - | 13.9 | - | - | - | - | - | | JointGT (0.5%)a | - | - | - | - | 37.2 | - | - | - | - | - | - | | HTLM (1-shot)a | 22.1 | 0.12 | 0.91 | 28.1 | 18.5 | 22.8 | - | - | - | - | - | | Only on tgtb | 0.3 | 0.01 | 2.82 | 0.36 | 0.08 | 0.23 | 0.0 | 0.0 | 0.0 | 0.2 | 85.1 | | Src to tgt, varied | 18.9 | 0.21 | 1.00 | 34.1 | 28.5 | 31.3 | 12.1 | 2.8 | 0.3 | 7.8 | 70.9 | | Src to tgt, unified | 31.5 | 0.32 | 0.56 | 35.9 | 39.8 | 37.7 | 22.6 | 4.4 | 0.9 | 8.9 | 81.3 | | a We compare our results to their few-shot performance, as zero-shot results are not reported in their papers. b Under zero-shot, this means directly testing T5-base model on target test set without any training. | | | | | | | | | | | | ![5_image_1.png](5_image_1.png) ![5_image_0.png](5_image_0.png) 4.3 to further illustrate our observation. KG Triples −→ **Table** The benefits of utilizing unified representation are particularly substantial when transferring models that have been trained on knowledge graphs to table inputs. In Figure 2f, the PARENT gap between unified and varied models is consistently greater than 2 points. In fact, the performance of "varied" and "only on tgt" models converge when utilizing 128 samples, and is only slightly superior to that of the "unified" model when provided with only 8 samples. This suggests that the use of unified representation is highly efficient in terms of sample utilization. ## 4.2 Full-Set Finetuning Results In this section, we train the models on full training sets, in either single-task or multi-task settings. Additional experimental results are presented in the appendix. | Model | Linear | ToTTo | DART | WebNLG | LogicNLG | E2E | | | | | |----------------------------------|----------|---------|--------|----------|------------|--------|------|-------|------|------| | BLEU | PARENT | BLEU | S | U | A | BLEU-3 | BLEU | CIDEr | | | | Single-Task Training | | | | | | | | | | | | LATTICE(Wang et al., 2022) | Tab | 48.6 | - | - | - | - | - | 20.1 | - | - | | UnifiedSKG (base) | O | 48.3 | - | 46.2 | - | - | - | - | - | - | | UnifiedSKG (3B) | O | 49.0 | - | 46.7 | - | - | - | - | - | - | | DCVED(Chen et al., 2021) | O | - | - | - | - | - | - | 15.3 | - | - | | HTLM(Aghajanyan et al., 2022) | O | - | - | 47.2 | 65.4 | 48.4 | 55.6 | - | - | - | | T5-base | Uni | 49.3 | 58.9 | 48.6 | 65.4 | 50.1 | 58.5 | 24.7 | 41.8 | 1.90 | | O | 49.2 | 58.9 | 49.0 | 65.9 | 49.5 | 58.2 | 25.2 | 42.1 | 1.91 | | | T5-3B | Uni | 49.4 | 58.9 | 49.6 | 65.1 | 52.7 | 59.5 | 25.1 | 42.8 | 1.92 | | O | 49.6 | 59.0 | 49.3 | 65.3 | 53.5 | 60.0 | 25.3 | 42.5 | 1.94 | | | Multi-Task Training | | | | | | | | | | | | UnifiedSKG (base) | O | 45.3 | - | - | - | - | - | - | - | - | | C-P (large) (Clive et al., 2021) | O | - | - | 52.0 | 67.0 | 55.6 | 61.8 | - | 44.2 | - | | T5-base | Uni | 49.7 | 59.2 | 49.8 | 64.9 | 50.3 | 58.3 | 25.2 | 42.9 | 1.94 | | O | 48.5 | 58.7 | 48.1 | 64.1 | 50.2 | 57.9 | 24.7 | 41.7 | 1.89 | | | T5-3B | Uni | 50.8 | 60.4 | 50.2 | 65.4 | 53.4 | 60.0 | 25.4 | 43.2 | 1.99 | | O | 50.2 | 59.5 | 49.8 | 65.3 | 51.9 | 59.4 | 25.3 | 41.8 | 1.89 | | Single-Task Training From the "single-task training" results in Table 5, a key finding is that the proposed unified representation method results in performance comparable to other linearization techniques studied in previous research. This is particularly evident on the DART, WebNLG, and E2E tasks, where the data was first converted into virtual tables, and the results from both methods are similar, indicating that this conversion does not result in a significant loss of information. Multi-Task Training The performance of multitask models is summarized in Table 5 under the "multi-task training" section, revealing several key findings: (1) *Overall, multi-task training using different linearizations for each dataset results in a* worse *performance compared to single-task training.* BLEU scores for T5-base models decrease from 49.2 to 48.5 on ToTTo, from 49.0 to 48.1 on DART, and from 65.9 to 64.1 on seen categories of WebNLG. This confirms the findings of UnifiedSKG (Xie et al., 2022), which found that singletask model performance was higher than multitask performance on ToTTo dataset. However, it is unclear if this drop in performance was due to task differences, as their study included other tasks. Our results provide further insight into data-to-text tasks alone and show that multi-task performance can still be inferior if input formats are not unified. (2) In contrast, *multi-task trained "unified"* models consistently outperform single-task models, with the only exception of the base model on the WebNLG dataset. This demonstrates that utilizing a unified representation approach helps models learn common knowledge across various tasks without negatively impacting performance. (3) The "unified" models consistently demonstrate superior performance compared to "varied" models in multitask training, with a larger margin of improvement observed in base-sized models. ## 4.3 Qualitative Study We conduct a qualitative case study to compare the texts generated by the *Src to tgt, unified* and Src to tgt, varied models. The results are illustrated in Table 6, which displays the model's generations for different sample sizes. For the WebNLG example, the input contains 5 KG triples. When k = 8, the "varied" model only covers one KG triple fact, while the "unified" model includes many more nodes and relations from the input. As the sample size increases to 128, the "unified" model's generation covers all facts accurately, while the "varied" model's generation still misses the "funk and disco" origin of pop music. In the E2E example, the "unified" model output is consistent and accurate with both 8 and 128 samples. In contrast, the "varied" model produces "Sorrento" twice. This serves as additional evidence that using a unified representation enhances the transfer of the generation style learned on table input to meaning representations. | k-shot = | Src to tgt, unified | Src to tgt, varied | |------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------| | ToTTo (table) −→ WebNLG (KG) example | | | | 8 | Hip hop music is influenced by Disco by Allen Forrest | Allen Forrest was born in Fort Campbell. | | (born in Fort Campbell) and Funk with drum and bass. | | | | 128 | Allen Forrest, born in Fort Campbell, is known for his | Allen Forrest was born in Fort Campbell and is known | | roots in hip hop music. Disco and Funk are stylistic | for hip hop music. Hip hop music is a derivative of drum | | | origins, while drum and bass are derivatives. | and bass. | | | Groundtruth | Allen Forrest was born in Fort Campbell and is a hip hop musician. Hip hop originates from funk and disco and was derived into drum and bass music. | | | KG triples | (Hip hop music, stylistic origin, Disco) (Allen Forrest, birth place, Fort Campbell) (Allen Forrest, genre, Hip hop music) (Hip hop music, stylistic origin, Funk) (Hip hop music, derivative, Drum and bass) ToTTo (table) −→ E2E (MR) example | | | 8 | Zizzi is a pub near The Sorrento. | Zizzi is a gastropub in Sorrento, near The Sorrento. | | 128 | Zizzi is a pub near The Sorrento. | Zizzi is a pub near The Sorrento. | | Groundtruth | There is a pub called Zizzi located near The Sorrento. | | | MRs | name[Zizzi], eatType[pub], near[The Sorrento] ToTTo (table) −→ LogicNLG (table) example | | | 8 | In the world golf championships, the United States has 12 individual winners, Australia has 3 individual winners, England has 3 individual winners, South Africa has 1 individual winner, Canada has 1 individual winner, Fiji has 1 individual winner, Italy has 1 individual winner, Japan has 0 individual winner, and Wales has no individual winner. | The United States has the highest number of individual winners of any country in the world. | | 128 | The United States is the only nation to have won 12 | The United States has the highest number of individual | | World Golf Championship. | winners. | | | Input table | | | The results of the LogicNLG input generation offer validation for our hypothesis that the "unified" model performs less effectively than the "varied" model when the sample size is small, due to its persistent focus on generating descriptions of the table input, as it has been trained to do on the ToTTo data. Indeed, the descriptions generated by the "unified" model when sample size is 8, are accurate reflections of the table's content. When the sample size is increased to 128, both models generate sentences that are more akin to insights. It is noteworthy that the "unified" model generates "world golf championship" even though it is not present in the table, which pertains to the golf championship. We posit that this information is carried over from the ToTTo data, and the "unified" model is able to retain this information while the "varied" model does not. ## 5 Conclusion And Future Work We have introduced a unified representation approach for data-to-text tasks, which effectively converts table contents, knowledge graph triples, and meaning representations into a single representation. Our experiments demonstrate that this unified representation significantly improves generalization across different structured forms, especially in zero-shot or few-shot settings. Our method is particularly beneficial in situations where data is scarce. Additionally, by using the unified representation, our multi-task-trained models consistently outperform single-task models, which is in contrast to previous findings that mixing different data types can negatively impact overall performance. One future direction is to apply our method to other tasks that involve heterogeneous inputs, such as question answering over knowledge bases, where knowledge can be stored in both tables and knowledge graphs. It would also be interesting to investigate whether a model pre-trained on large knowledge graphs can more effectively transfer learned commonsense knowledge to table QA tasks, when using our unified representation approach. ## Limitations It is important to note that the unified representation proposed in our study is just one option among many. Other linearization methods may potentially yield better results. For example, research by Yin et al. (2022) and Aghajanyan et al. (2022) has explored using code generation with Jupyter notebooks and a hyper-text language model with structured prompting, respectively. Further research in these areas, such as converting all structured forms to markdown language or hyper-texts, may yield alternative unified representations. ## Ethics Statement We acknowledge the importance of the ACL Ethics Policy and agree with it. This study addresses the problem of data-to-text generation and explores whether a unified representation can enhance cross-task performance on various structured forms. Since our input comes from knowledge bases, a potential concern is that biases or fairness issues may be present in the KB, which could also be reflected in the generated text. Therefore, it is crucial to use the model with caution in practice. We believe this work can contribute to the field of data-to-text generation, particularly in situations where data is scarce. ## References Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, and Luke Zettlemoyer. 2022. HTLM: Hyper-text pre-training and prompting of language models. In *International Conference on Learning Representations*. Ewa Andrejczuk, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, and Yasemin Altun. 2022. Table-to-text generation and pre-training with tabt5. arXiv preprint arXiv:2210.09162. Sercan Ö. Arik and Tomas Pfister. 2019. Tabnet: Attentive interpretable tabular learning. *ArXiv*, abs/1908.07442. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Junwei Bao, Duyu Tang, Nan Duan, Zhao Yan, Yuanhua Lv, M. Zhou, and Tiejun Zhao. 2018. Table-to-text: Describing table region with natural language. In AAAI Conference on Artificial Intelligence. Deng Cai and Wai Lam. 2020. Graph transformer for graph-to-sequence learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7464– 7471. Juan Cao. 2020. Generating natural language descriptions from tables. *IEEE Access*, 8:46206–46216. Ernie Chang, Xiaoyu Shen, Dawei Zhu, Vera Demberg, and Hui Su. 2021. Neural data-to-text generation with LM-based text augmentation. In *Proceedings of* the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 758–768, Online. Association for Computational Linguistics. Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020a. Logical natural language generation from open-domain tables. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7929– 7942, Online. Association for Computational Linguistics. Wenhu Chen, Yu Su, Xifeng Yan, and William Yang Wang. 2020b. KGPT: Knowledge-grounded pretraining for data-to-text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8635– 8648, Online. Association for Computational Linguistics. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020c. Tabfact: A large-scale dataset for table-based fact verification. In *International Conference on Learning Representations*. Wenqing Chen, Jidong Tian, Yitian Li, Hao He, and Yaohui Jin. 2021. De-confounded variational encoderdecoder for logical table-to-text generation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 5532–5542, Online. Association for Computational Linguistics. Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020d. Few-shot NLG with pre-trained language model. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 183–190, Online. Association for Computational Linguistics. Jordan Clive, Kris Cao, and Marek Rei. 2021. Control prefixes for text generation. *CoRR*, abs/2110.08329. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, MingWei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 4884–4895, Florence, Italy. Association for Computational Linguistics. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In *Proceedings of the Second International Conference on Human Language Technology* Research, HLT '02, page 138–145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Ondˇrej Dušek, David M. Howcroft, and Verena Rieser. 2019. Semantic noise matters for neural natural language generation. In *Proc. of the 12th International* Conference on Natural Language Generation, pages 421–426, Tokyo, Japan. Association for Computational Linguistics. Ondˇrej Dušek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge. *Computer Speech & Language*, 59:123–156. Julian Eisenschlos, Maharshi Gor, Thomas Müller, and William Cohen. 2021. MATE: Multi-view attention for table transformer efficiency. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7606–7619, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Thiago Castro Ferreira, Chris van der Lee, Emiel Van Miltenburg, and Emiel Krahmer. 2019. Neural data-to-text generation: A comparison between pipeline and end-to-end architectures. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 552–562. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124–133, Santiago de Compostela, Spain. Association for Computational Linguistics. Sebastian Gehrmann, Falcon Dai, Henry Elder, and Alexander M Rush. 2018. End-to-end content and plan selection for data-to-text generation. In Proceedings of the 11th International Conference on Natural Language Generation, pages 46–56. Qipeng Guo, Zhijing Jin, Xipeng Qiu, Weinan Zhang, David Wipf, and Zheng Zhang. 2020. CycleGT: Unsupervised graph-to-text and text-to-graph generation via cycle training. In *Proceedings of the 3rd International Workshop on Natural Language Generation* from the Semantic Web (WebNLG+), pages 77–88, Dublin, Ireland (Virtual). Association for Computational Linguistics. Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4320–4333, Online. Association for Computational Linguistics. Xin Huang, Ashish Khetan, Milan Cvitkovic, and Zohar S. Karnin. 2020. Tabtransformer: Tabular data modeling using contextual embeddings. *ArXiv*, abs/2012.06678. Hiroshi Iida, Dung Thai, Varun Manjunatha, and Mohit Iyyer. 2021. TABBIE: Pretrained representations of tabular data. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3446–3456, Online. Association for Computational Linguistics. Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Preksha Nema, Mitesh M. Khapra, and Shreyas Shetty. 2018. A mixed hierarchical attention based encoder-decoder approach for standard table summarization. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 622–627, New Orleans, Louisiana. Association for Computational Linguistics. Mihir Kale and Abhinav Rastogi. 2020. Text-to-text pre-training for data-to-text tasks. In Proceedings of the 13th International Conference on Natural Language Generation, pages 97–102, Dublin, Ireland. Association for Computational Linguistics. Zdenek Kasner and Ond ˇ ˇrej Dušek. 2020. Train hard, finetune easy: Multilingual denoising for RDF-totext generation. In *Proceedings of the 3rd International Workshop on Natural Language Generation* from the Semantic Web (WebNLG+), pages 171–176, Dublin, Ireland (Virtual). Association for Computational Linguistics. Zdenek Kasner and Ondrej Dusek. 2022. ˇ Neural pipeline for zero-shot data-to-text generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3914–3932, Dublin, Ireland. Association for Computational Linguistics. Pei Ke, Haozhe Ji, Yu Ran, Xin Cui, Liwei Wang, Linfeng Song, Xiaoyan Zhu, and Minlie Huang. 2021. JointGT: Graph-text joint representation learning for text generation from knowledge graphs. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2526–2538, Online. Association for Computational Linguistics. Moniba Keymanesh, Adrian Benton, and Mark Dredze. 2022. What makes data-to-text generation hard for pretrained language models? *ArXiv*, abs/2205.11505. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Junyi Li, Tianyi Tang, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan, and Ji-Rong Wen. 2021. Fewshot knowledge graph-to-text generation with pretrained language models. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1558–1568, Online. Association for Computational Linguistics. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2017. Table-to-text generation by structure-aware seq2seq learning. *arXiv preprint* arXiv:1711.09724. Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. DART: Opendomain structured data record to text generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 432–447, Online. Association for Computational Linguistics. Preksha Nema, Shreyas Shetty, Parag Jain, Anirban Laha, Karthik Sankaranarayanan, and Mitesh M. Khapra. 2018. Generating descriptions from structured data using a bifocal attention mechanism and gated orthogonalization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1539–1550, New Orleans, Louisiana. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Ankur P Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. Totto: A controlled table-to-text generation dataset. *arXiv preprint arXiv:2004.14373*. Ratish Puduppully, Li Dong, and Mirella Lapata. 2018. Data-to-text generation with content selection and planning. *arXiv e-prints*, pages arXiv–1809. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with entity modeling. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 2023– 2035. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Leonardo F. R. Ribeiro, Claire Gardent, and Iryna Gurevych. 2019. Enhancing AMR-to-text generation with dual graph representations. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3183–3194, Hong Kong, China. Association for Computational Linguistics. Leonardo F. R. Ribeiro, Martin Schmitt, Hinrich Schütze, and Iryna Gurevych. 2021a. Investigating pretrained language models for graph-to-text generation. In *Proceedings of the 3rd Workshop on Natural* Language Processing for Conversational AI, pages 211–227, Online. Association for Computational Linguistics. Leonardo F. R. Ribeiro, Yue Zhang, Claire Gardent, and Iryna Gurevych. 2020. Modeling global and local node contexts for text generation from knowledge graphs. *Transactions of the Association for Computational Linguistics*, 8:589–604. Leonardo F. R. Ribeiro, Yue Zhang, and Iryna Gurevych. 2021b. Structural adapters in pretrained language models for AMR-to-Text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4269–4282, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Yi Luan Mirella Lapata Rik Koncel-Kedziorski, Dhanush Bekal and Hannaneh Hajishirzi. 2019. Text Generation from Knowledge Graphs with Graph Transformers. In *NAACL*. Martin Schmitt, Leonardo F. R. Ribeiro, Philipp Dufter, Iryna Gurevych, and Hinrich Schütze. 2021. Modeling graph structure via relative position for text generation from knowledge graphs. In *Proceedings* of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15), pages 10–21, Mexico City, Mexico. Association for Computational Linguistics. Peng Shi, Patrick Ng, Feng Nan, Henghui Zhu, J. Wang, Jia-Jian Jiang, Alexander Hanbo Li, Rishav Chakravarti, Donald Weidner, Bing Xiang, and Zhiguo Wang. 2022. Generation-focused table-based intermediate pre-training for free-form question answering. In *AAAI Conference on Artificial Intelligence*. Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223–231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for AMRto-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616–1626, Melbourne, Australia. Association for Computational Linguistics. Yixuan Su, Zaiqiao Meng, Simon Baker, and Nigel Collier. 2021. Few-shot table-to-text generation with prototype memory. In *Findings of the Association* for Computational Linguistics: EMNLP 2021, pages 910–917, Punta Cana, Dominican Republic. Association for Computational Linguistics. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In *Proceedings of the IEEE* conference on computer vision and pattern recognition, pages 4566–4575. Fei Wang, Zhewei Xu, Pedro Szekely, and Muhao Chen. 2022. Robust (controlled) table-to-text generation with structure-aware equivariance learning. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 5037–5048, Seattle, United States. Association for Computational Linguistics. Zhiruo Wang, Haoyu Dong, Ran Jia, Jia Li, Zhiyi Fu, Shi Han, and Dongmei Zhang. 2021. Tuta: Treebased transformers for generally structured table pretraining. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery amp; Data Mining, KDD '21, page 1780–1790, New York, NY, USA. Association for Computing Machinery. Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. 2022. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. EMNLP. Jingfeng Yang, Aditya Gupta, Shyam Upadhyay, Luheng He, Rahul Goel, and Shachi Paul. 2022. TableFormer: Robust transformer modeling for tabletext encoding. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 528–537, Dublin, Ireland. Association for Computational Linguistics. Pengcheng Yin, Wen-Ding Li, Kefan Xiao, A. Eashaan Rao, Yeming Wen, Kensen Shi, Joshua Howland, Paige Bailey, Michele Catasta, Henryk Michalewski, Alex Polozov, and Charles Sutton. 2022. Natural language to code generation in interactive data science notebooks. *ArXiv*, abs/2212.09248. Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413–8426, Online. Association for Computational Linguistics. Yan Zhang, Zhijiang Guo, Zhiyang Teng, Wei Lu, Shay B. Cohen, Zuozhu Liu, and Lidong Bing. 2020. Lightweight, dynamic graph convolutional networks for AMR-to-text generation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2162–2172, Online. Association for Computational Linguistics. ## A Data Statistics | Dataset | Input | # Examples | | | |-----------|------------|--------------|-------|-------| | Train | Validation | Test | | | | ToTTo | Table | 120,761 | 7,700 | 7,700 | | DART | KG | 30,526 | 2,768 | 6,959 | | LogicNLG | Table | 28,450 | 4,260 | 4,305 | | WebNLG | KG | 18,102 | 872 | 1,862 | | E2E clean | MR | 33,525 | 1,484 | 1,847 | We summarize the input type and number of examples in each dataset. Table 7: Data statistics. ## B Column-Wise Linearization Of (Sub)Table A column-wise linearization of Table 1 is: ![12_image_0.png](12_image_0.png) ## C Other Linearizations Used In Previous Papers Table highlights : Our unified representation is motivated by ToTTo linearization, and hence they are very similar. The only difference is ToTTo uses <page_title> instead of <title> and <section_title> instead of <sub_title>. KG triples : Given a set of triples {(William Wasmund, FIELD_GOALS, 0), (William Wasmund, EXTRA_POINTS, 0)}, an alternative linearization used in UnifiedSKG (Xie et al., 2022) is William Wasmund : field goals : 0 | William Wasmund : extra points : 0 Entire table : The alternative linearization used in LogicNLG (Chen et al., 2020a) for Table 1 is: Given the table title of Alma Jodorowsky, Filmograph. In row 1 , the Year is 2014 , the Title is La ..., the Role is Solana, the Notes is TV ... In row 2 , ... Mearning representation : The alternative linearization we use for the example in Figure 1 is simply concatenating all the MRs: name[Cocum], eatType[coffee shop], food[Italian], priceRange[cheap], familyFriendly[yes]. ## D Implementation Details In the zero- and few-shot experiments, we employ the T5-base model as the base model and train it for 30 epochs for both the source and target tasks. For the source task, we use a learning rate of 5e-5 and a batch size of 32, and for the target task, we use a learning rate of 2e-5 and a batch size of 8. ## E More Multi-Task Results We present more detailed multi-task results on each of the dataset in this section. The results are summarized in Table 8, 9, 10 and 11. ## F More Few-Shot Results We present other few-shot results using more metrics in Figure 3, 4 and 5. ## G Human Evaluation We conducted a human evaluation on the fewshot ToTTo to WebNLG transferring experiment. Specifically, we randomly selected 50 WebNLG test data from the unseen schema and compared the performance of the 8-shot *src to tgt, unified* and src to tgt, varied models. For each of the 50 samples, we generated texts using both models and asked three annotators to choose the better option based on factuality, coverage of the triples, and fluency. We received only two annotations for two of the samples as one of the annotators did not respond. For the remaining 48 samples, all three annotators reached a consensus on 21 of them (43.75%). Out of these 21 samples, the "unified" model received unanimous preference from the annotators in 15 cases (71.43%). If we consider the majority vote among the three annotators, then 75% of the results favored the "unified" model. The Fleiss Kappa value, which measures agreement among the three annotators, is around 0.23 (fair agreement). ## H More Qualitative Study We present additional few-shot predictions for models transferred from ToTTo to WebNLG and LogicNLG in Tables 12 and 13, respectively. We also provide error analysis under each example. | Model | Task | Linearization | METEOR | ROUGE-L | CIDEr | NIST | BLEU | |------------------------|---------|-----------------|----------|-----------|---------|--------|--------| | CONTROL PREFIX (large) | MT | Alt | 39.2 | - | - | - | 44.2 | | ST | Unified | 38.3 | 56.6 | 1.90 | 6.20 | 41.8 | | | ST | Alt | 38.3 | 56.4 | 1.91 | 6.23 | 42.1 | | | T5-base | MT | Unified | 38.6 | 57.0 | 1.94 | 6.31 | 42.9 | | MT | Varied | 38.3 | 56.6 | 1.89 | 6.20 | 41.7 | | | ST | Unified | 38.5 | 56.7 | 1.92 | 6.30 | 42.8 | | | ST | Alt | 38.5 | 56.5 | 1.94 | 6.31 | 42.5 | | | T5-3B | MT | Unified | 38.7 | 57.4 | 1.99 | 6.34 | 43.2 | | MT | Varied | 38.3 | 56.8 | 1.89 | 6.21 | 41.8 | | Table 8: Test set performance on E2E clean. Table 9: Development set performance on ToTTo. | Model | Task | Linearization | Overall | Overlap | Non-overlap | | | | |-------------------|---------|-----------------|-----------|-----------|---------------|------|------|------| | BLEU | PARENT | BLEU | PARENT | BLEU | PARENT | | | | | LATTICE (T5-base) | ST | Table-specific | 48.6 | - | 56.6 | - | 40.8 | - | | UnifiedSKG (base) | ST | Alt | 48.3 | - | - | - | - | - | | UnifiedSKG (base) | MT | Varied | 45.3 | - | - | - | - | - | | UnifiedSKG (3B) | ST | Alt | 49.0 | - | - | - | - | - | | Text2Text (3B) | ST | Alt | 48.4 | 57.8 | - | - | 40.4 | 53.3 | | ST | Unified | 49.3 | 58.9 | 57.1 | 62.7 | 41.9 | 55.3 | | | T5-base | MT | Unified | 49.7 | 59.2 | 57.7 | 63.2 | 41.9 | 55.2 | | MT | Varied | 48.5 | 58.7 | 56.2 | 62.5 | 41.1 | 55.0 | | | ST | Unified | 49.4 | 58.9 | 57.1 | 62.7 | 42.0 | 55.3 | | | T5-3B | MT | Unified | 50.8 | 60.4 | 58.5 | 64.4 | 43.4 | 56.5 | | MT | Varied | 50.2 | 59.5 | 57.5 | 63.2 | 43.2 | 55.9 | | Table 10: Test set performance on DART and WebNLG. | Model | Task | Linearization | DART | WebNLG | | | | | |------------------------|------------|-----------------|--------|----------|------|------|------|------| | BLEU (↑) | METERO (↑) | TER (↓) | Seen | Unseen | All | | | | | UnifiedSKG (base) | ST | Alt | 46.2 | - | - | - | - | - | | UnifiedSKG (3B) | ST | Alt | 46.7 | - | - | - | - | - | | CONTROL PREFIX (large) | MT | Alt | 52.0 | 0.41 | 0.43 | 67.0 | 55.6 | 61.8 | | ST | Unified | 48.6 | 0.40 | 0.45 | 65.4 | 50.1 | 58.5 | | | ST | Alt | 49.0 | 0.40 | 0.45 | 65.9 | 49.5 | 58.2 | | | T5-base | MT | Unified | 49.8 | 0.40 | 0.44 | 64.9 | 50.3 | 58.3 | | MT | Varied | 48.1 | 0.39 | 0.45 | 64.1 | 50.2 | 57.9 | | | ST | Unified | 49.6 | 0.40 | 0.45 | 65.1 | 52.7 | 59.5 | | | ST | Alt | 49.3 | 0.40 | 0.45 | 65.3 | 53.5 | 60.0 | | | T5-3B | MT | Unified | 50.2 | 0.40 | 0.44 | 65.4 | 53.4 | 60.0 | | MT | Varied | 49.8 | 0.40 | 0.44 | 65.3 | 51.9 | 59.4 | | Table 11: Test set performance on LogicNLG. | Model | Task | Linearization | Orientation | Surface-Level Fidelity | Logical Fidelity | | | | |------------|---------|-----------------|---------------|--------------------------|--------------------|------|------|------| | BLEU-1 | BLEU-2 | BLEU-3 | NLI-acc | SP-acc | | | | | | GPT-TabGen | ST | Alt | row | 48.8 | 27.1 | 12.6 | 68.7 | 42.1 | | DCVED | ST | Alt | row | 49.5 | 28.6 | 15.3 | 76.9 | 43.9 | | ST | Unified | column | 52.8 | 34.9 | 24.3 | 79.6 | 45.2 | | | ST | Unified | row | 53.3 | 35.4 | 24.7 | 84.7 | 45.8 | | | ST | Alt | row | 54.6 | 36.1 | 25.2 | 85.5 | 45.9 | | | T5-base | MT | Unified | column | 53.8 | 35.8 | 25.1 | 78.7 | 47.2 | | MT | Unified | row | 54.4 | 36.1 | 25.2 | 80.4 | 46.3 | | | MT | Varied | row | 53.9 | 35.5 | 24.7 | 84.2 | 46.3 | | | ST | Unified | column | 54.9 | 36.4 | 25.4 | 88.4 | 49.8 | | | ST | Unified | row | 54.1 | 35.9 | 25.1 | 87.1 | 47.9 | | | T5-3B | ST | Alt | row | 54.4 | 36.1 | 25.3 | 81.1 | 47.3 | | MT | Unified | column | 54.8 | 36.3 | 25.4 | 87.0 | 49.4 | | | MT | Unified | row | 55.1 | 36.4 | 25.4 | 82.9 | 49.1 | | | MT | Varied | row | 54.4 | 36.0 | 25.3 | 80.7 | 47.4 | | ![14_image_2.png](14_image_2.png) ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) ![14_image_3.png](14_image_3.png) ![14_image_4.png](14_image_4.png) | k-shot = | Src to tgt, unified | Src to tgt, varied | |----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------| | ToTTo (table) −→ WebNLG (KG) example | | | | 8 | Uruguay is led by Ral Fernando Sendic Rodrguez, who | Ral Fernando Sendic Rodrguez died in Montevideo. | | died in Montevideo, and by Daniel Martnez, a Spanish politician. | | | | 128 | The leader of Uruguay is Ral Fernando Sendic Rodrguez | Ral Fernando Sendic Rodrguez was the leader of | | who died in Montevideo, where Alfredo Zitarrosa died. | Uruguay and Alfredo Zitarrosa died in Montevideo. | | | The leader is Daniel Martnez who speaks Spanish. | Daniel Martnez was a politician who led the country in Spanish. | | | Groundtruth | Alfredo Zitarrosa died in Montevideo, Uruguay. Daniel Martinez is a political leader in Montevideo, and Raul Fernando Sendic Rodriguez is a leader in Uruguay, where Spanish is spoken. | | | KG triples | (Uruguay : leader name : Ral Fernando Sendic Rodrguez | Alfredo Zitarrosa : death place : Montevideo | Montevideo : country : Uruguay | Montevideo : leader name : Daniel Martnez (politician) | Uruguay : language : Spanish language | | | Error analysis | When sample size is 8, the "unified" model generation contains almost all nodes except Alfredo Zitarrosa, but the "varied" model output only contains one triple. | | | 8 | Twilight (band) is a black metal band with Aaron Turner, | Twilight is a black metal music fusion genre. | | and Old Man Gloom is a death metal band with electric guitar. | | | | 128 | Twilight (band) is associated with black metal, and Old | Twilight is a genre of black metal music and Aaron | | Man Gloom is associated with death metal, where Aaron | Turner plays the electric guitar in Old Man Gloom. Death | | | Turner played electric guitar. | metal is a genre of black metal music. | | | Groundtruth | Aaron Turner is an electric guitar player who has played with the black metal band Twilight and with Old Man Gloom. Death metal is a musical fusion of black metal. | | | KG triples | (Twilight (band) : genre : Black metal | Aaron Turner : associated band/associated musical artist : Twilight (band) | Aaron Turner : associated band/associated musical artist : Old Man Gloom | Aaron Turner : instrument : Electric guitar | Black metal : music fusion genre : Death metal | | | Error analysis | In "unified" model output, the Twilight is always recognized correctly as a band, but in varied model output, it is wrongly predicted as a genre. | | | 8 | Al Anderson (NRBQ band) played guitar in rock music, | Al Anderson is a guitarist with the NRBQ band. | | country music and blues music, while Bhangra (music) is a genre of music fusion. | | | | 128 | Bhangra is a genre of music fusion, where Al Anderson | Al Anderson is an Indian country musician who plays | | plays guitar, Country music and Blues. | guitar and plays Bhangra (music). Country music is a music fusion genre which is influenced by blues and rock music. | | | Groundtruth | Al Anderson plays the guitar for rock band NRBQ. Rock comes from country music which originates from blues. Bhangra music is a musical fusion of rock music. | | | KG triples | (Rock music : music fusion genre : Bhangra (music) | Al Anderson (NRBQ band) : instrument : Guitar | Al Anderson (NRBQ band) : genre : Rock music | Rock music : stylistic origin : Country music | Country music : stylistic origin : Blues | | | Error analysis | This is an negative example for "unified" model. When sample size is 8, the unified model generation covers more nodes compared to "varied" model generation. However, when sample size is 128, the "unified" model wrongly generates Anderson to play guitar, country music and Blue. This is a wrong fact. But "varied" model generation is able to be more aligned with the input triples. Table 12: Case study: few-shot prediction examples on WebNLG. | | | k-shot = | Src to tgt, unified | Src to tgt, varied | | | | | | | | | |---------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|-----------|---------|------------|-----------|--------|--------|------------|---------------| | ToTTo (highlighted cells) —> LogicNLG (table) example | | | | | | | | | | | | 8 | Eric Turner, Ed King, James Jones, Pio Sagapolutele, | In the 1991 cleveland browns season, Eric Turner, Ed | | | | | | | | | | Michael Jackson, Raymond Irvin, Shawn Wiggins, Brian | King, Pio Sagapolutele and Raymond Irvin were the | | | | | | | | | | | Greenfield, Todd Jones, Todd Jones, and Elijah Austin | Defensive Tackle. | | | | | | | | | | | played in the 1991 cleveland browns season. | | | | | | | | | | | | 128 | Raymond Irvin was the Defensive Back in the 1991 | Brian Greenfield, Todd Jones and Elijah Austin were the | | | | | | | | | | Cleveland Browns season. | Defensive Tackle in the 1991 Cleveland Browns season. | | | | | | | | | | | 18 | Eric Turner, Ed King, James Jones, Pio Sagapolutele, | In the 1991 cleveland browns season, Eric Turner, Ed | | | | | | | | | | Michael Jackson, Raymond Irvin, Shawn Wiggins, Brian | King, Pio Sagapolutele and Raymond Irvin were the | | | | | | | | | | | Greenfield, Todd Jones, Todd Jones, and Elijah Austin. | Defensive Tackle. | | | | | | | | | | | played in the 1991 cleveland browns season. | | | | | | | | | | | | 128 | Raymond Irvin was the Defensive Back in the 1991 | Brian Greenfield, Todd Jones and Elijah Austin were the | | | | | | | | | | Cleveland Browns season. | Defensive Tackle in the 1991 Cleveland Browns season. | | | | | | | | | | | Eric Turner, Ed King, James Jones, Pio Sagapolutele, | In the 1991 cleveland browns season, Eric Turner, Ed | | | | | | | | | | | Michael Jackson, Raymond Irvin, Shawn Wiggins, Brian | King, Pio Sagapolutele and Raymond Irvin were the | | | | | | | | | | | Greenfield, Todd Jones, Todd Jones, and Elijah Austin | Defensive Tackle. | | | | | | | | | | | played in the 1991 cleveland browns season | | | | | | | | | | | | 128 | Raymond Irvin was the Defensive Back in the 1991 | Brian Greenfield, Todd Jones and Elijah Austin were the | | | | | | | | | | Cleveland Browns season. | Defensive Tackle in the 1991 Cleveland Browns season. | | | | | | | | | | | Groundtruths | "Raymond Irvin is the second Defensive Back to get drafted", "Frank Conover is the third Defensive Tackle to | | | | | | | | | | | get drafted", "Elijah Austin is the last Defensive Tackle to get drafted", "Frank Conover has an Overall that is 56 | | | | | | | | | | | | higher than Michael Jackson", "Shawn Wiggins is the second Wide Receiver to get drafted" | | | | | | | | | | | | Erk | Ed King | James | Plo | Michael | Frank | Raymond | Shawn | Brian | Todd Jones | Elijah Austin | | Turne | Sagap | Jackson | Irvin | Wiggins | Greenfield | | | | | | | Jones | | | | | | | | | | | | Defersh | Guard | Defensiv | Defensive | Wide | Defershe | Defensive | Wide | Punter | Offershe | Defensive | | e Back | e Tackle | Tackle | Receiver | Tackle | Back | Receiver | Tackle | Tackle | | | | Input table | | | | | | | | | | | | Error analysis | Similar to our analysis in Section 4.3, the "unified" model generation is more like description when sample size is | | | | | | | | | | | 8. Again this is because the source task is ToTTo, which is a task to generate surface-level description of table | | | | | | | | | | | | contents. The "unified" model transfers this learned knowledge better, and hence generates sentences that are | | | | | | | | | | | | more like descriptions. When sample size is 128, both models generate similar contents. | | | | | | | | | | | | Table 13: Case study: few-shot prediction examples on LogicNLG. | | | | | | | | | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In the section "Limitations" after Section 5. ✓ A2. Did you discuss any potential risks of your work? In the section "Ethics Statement" after Section 5. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Not applicable. Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** In Section 4. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In Section 4 and in Appendix D. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Section 4. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.